Capital structure, business model innovation, and firm performance: Evidence from Chinese listed corporations based on a system GMM model
This paper aims to verify the impact of capital structure on business model innovation and firm performance and the mediating effect of business model innovation. We use data on Chinese growth enterprises market (GEM) listed high-tech firms from 2016 to 2022 in a dynamic panel data model estimated with the system generalized method of moments (sys-GMM), adopting return on assets and earnings per share as firm performance measures. Our results show that capital structure has a lagged effect on firm performance. The total debt ratio in the previous period has a significant non-linear impact on current performance and the current level of business model innovation, presenting a U-shaped relationship. The first-order lag of the short-term debt ratio positively improves firm performance. Business model innovation significantly promotes better firm performance and mediates the relationship between a firm's capital structure and its performance. These results remain robust to different sample sizes and proxy variables. Based on these findings, this paper proposes some suggestions for firm operations and government policies.
Introduction
In the last decades, the Chinese economy has expanded tenfold, an achievement remarkable to the world. As the National Bureau of Statistics shows, in 2022 the gross domestic product (GDP) exceeded 18 trillion US dollars for the first time, firmly ranking second in the world. Some firms perform better because of business model innovation (BMI) and capital investment support [1,2].
The relationship between capital structure (CS) and firm performance (FP) has been a hot research topic in the academic field. Capital structure is an essential factor that a firm can adjust and control by itself, affecting performance [3-8]. Scholars have used many methods, such as multiple regression, structural equation modeling, and data envelopment analysis, taking long-term and short-term leverage ratios and large shareholder ratios as indicators, but have obtained completely different research conclusions. These research results include positive relationships [9-12], negative relationships [4,13-16], and complex relationships [3,17-20]. In general, the relationship between capital structure and firm performance is complex and is affected by various factors.
With the advancement of technology and the arrival of the mobile internet era, business model innovation is also profoundly impacting firm performance. Countless institutional investors can be seen taking advantage of the new rules of competition and the power of the capital market to transform the business models of target enterprises, creating a variety of business models such as sharing models, community models, free models, long-tail models, and platform models. This has changed firms' performance levels and realized investment appreciation by improving firm value in the capital market, while the firms themselves have achieved long-term development.
Since 2001, when Amit and Zott began studying e-commerce platform models [21], scholars have researched business models and business model innovation [22-31]. Overall, research on the relationship between business model innovation and firm performance is still in the early stages. However, most existing research suggests that business model innovation is a promising approach that can help firms improve their performance.
In recent years, scholars have increasingly paid attention to the role of business model innovation as a mediating mechanism to explain the relationship between firm performance and a variety of factors, including technology [32-34], university spinouts [35], value chain activities [36], novel products and services [37], organizational adaptation and resistance [38], and integrative capability [39].
Since capital structure is essentially a matter of firm control, and business model innovation requires sufficient influence to make continuous adjustments and changes at the organizational or strategic level, it is natural to study the transmission mechanism of business model innovation in the relationship between capital structure and firm performance. Therefore, this paper uses data from Chinese growth enterprises market (GEM) listed firms from 2016 to 2022 to explore how capital structure affects business model innovation and how this, in turn, affects firm performance.
The paper proceeds as follows. The literature review section reviews the research on capital structure, business model innovation, and firm performance. The Methodology and Data Source section explains the research methodology, sample, and data sources. The Results section reports the regression results. The Discussion section discusses each result. The Conclusion section provides the conclusion, implications, limitations, and future work.
Capital structure and firm performance
Capital structure can be viewed from two perspectives: narrow and broad. In the narrow sense, capital structure refers to the composition of a firm's debt and equity financing, focusing on the financing structure measured as total debt divided by total assets [7,13,40-42]. Leverage is the most widely used indicator of capital structure [41,43-46]. In a broad sense, capital structure includes the composition of long-term and short-term debt ratios and the equity ratio of the firm's top shareholders [20,47,48]. Over the years, scholars have worked hard to find the optimal capital structure for firms and to identify the factors that influence the choice of capital structure. They hope to provide firms with a unified conclusion that can guide them in adjusting their capital structure and thus enhancing enterprise value. This has led to a wealth of research results.
Regarding the impact of capital structure on firm performance, scholars have proposed theories such as the Modigliani-Miller theory [49,50], agency theory [51], the pecking-order theory [52], the trade-off theory [20,53], market timing theory, signaling theory, the efficient-risk hypothesis, and the franchise value hypothesis [6,54,55]. These theories and hypotheses provide theoretical support for subsequent, more complex capital structure research. Scholars continually use the corresponding theories to explain their research findings, eventually forming different research conclusions.
Some scholars conclude that the relationship is positive [9,12,56-61]. For example, [62] used the generalized method of moments (GMM) with data from 367 firms in growth markets to construct a model using indicators such as the debt ratio and return on investment, obtaining a positive conclusion. [63] used ordinary least squares with data from 493 non-financial firms in different industries to investigate the relationship between the total debt rate and firm ROE/ROA and also found a positive relationship. Other scholars find a negative relationship [64-69]; studies reaching this conclusion tend to use a broader range of indicators and scope [15,70]. [42] used WarpPLS analysis to study the relationship of the debt-asset ratio and debt-equity ratio with ROA, ROE, and Tobin's Q for 182 publicly listed manufacturing firms and obtained negative findings. Meanwhile, some other scholars believe that capital structure and firm performance have a more complex relationship [71-76]: not linear but possibly U-shaped, inverted U-shaped, or different for different indicators [71,77-80], or even unrelated [81-84]. Table 1 summarizes the research findings on the relationship between capital structure and firm performance.

Table 1. Capital structure and firm performance literature.
Positive: [12, 56-59, 61, 85]
Negative: [4, 64-70]
U-shaped: [75, 76]
Inverted U-shaped: [17, 71-74, 78]
No links: [43, 81-84]
Different for different indicators: [8, 18, 80, 86]
https://doi.org/10.1371/journal.pone.0306054.t001
Business model innovation and firm performance
Scholars have gradually paid attention to the concept of the business model since 2001, distinguishing it from existing concepts such as strategy and profit model and considering the business model as the overall operation of a firm, which relies on operations to surpass competitors and provide value to customers [21,87,88]. Since 2010, scholars have gradually shifted their research focus from business models to business model innovation, because innovation usually leads to better performance; scholars are therefore more interested in the process and manner of business model innovation [33,89,90]. In practice, firms hope to improve business performance through innovation in the business model. Whether efficiency-based or novelty-based, business model innovation often involves the adjustment and change of elements, thus achieving improved performance [24,91,92]. Research on business model innovation has evolved from a linear to a complex analysis, from an internal to a holistic perspective, and from an independent to a multidimensional approach.
The research on the relationship between business model innovation and firm performance has become more complex and specific as the elements considered and the methods used have become more sophisticated. Scholars have conducted case studies, hierarchical regression, partial least squares, structural equation modeling, and fuzzy set qualitative comparative analysis to study firms in the manufacturing, technology, insurance, and fashion and apparel industries in China, Sweden, Italy, and Southeast Europe [32,34,93-97]. Most findings show that business model innovation has a significant positive relationship with firm performance. [39] used structural equation modeling to analyze data from 165 Chinese firms and found that the relationship between business model innovation and firm performance is not simply linear but influenced by many factors. [98] used two-step cluster analysis to analyze data from 72 international construction contracts and found similar results. [95] conducted a case study of a mobile technology provider in the technology industry and found that the relationship between business model innovation and firm performance was complex, with the impact of business model innovation depending on many factors, such as the industry, the firm's competitive position, and the strategic fit of the business model innovation. [94] conducted a case study of an insurance firm and found similar results. Table 2 summarizes the research findings on the relationship between business model innovation and firm performance.
Mediating role of business model innovation
Business model innovation has a relatively large scope and degree of impact. Scholars have used case studies [32,36,37], regression analysis [39], structural equation modeling [34,38], and literature review methods [33] to analyze the mediation effect of business model innovation. The research subjects include the Xerox firm [32], 150 peer-reviewed scholarly articles [33], 165 Chinese firms [39], an agricultural information service provider in India [37], and 104 organizations from different industries [38].
Scholars have studied the role of business model innovation as a mediator between technology [32,33], value chain activities [36], and other variables and firm performance. Business model innovation was found to mediate improvements in firm performance. Table 3 summarizes the research findings on the mediation effect of business model innovation on firm performance.
Our study complements the literature by examining the impact of capital structure on firm performance using a dynamic panel data model and sys-GMM. For further analysis, we examine the impact of capital structure on business model innovation and the influence of business model innovation on firm performance in China. The literature supports a common finding on the role of capital structure in firm performance and business model innovation.

Table 3. Mediation effect of business model innovation literature.
IV | DV | Research method | Literature
Technology | Economic value | Case study | [32]
University spinouts | Firm performance | Case study | [35]
Value chain activities | Firm performance | Case study | [36]
Technology | Economic value | Literature review | [33]
Novel products and services | Firm performance | Case study | [37]
Organizational adaptation/resistance | Organizational performance | SEM | [38]
Integrative capability | Firm performance | Regression | [39]
Technological innovation | Success performances | SEM | [34]
https://doi.org/10.1371/journal.pone.0306054.t003
Methodology
Firm performance is not only affected by external factors but may also be closely related to the firm's past performance, exhibiting a certain stickiness. The specific model is as follows:

$$y_{it} = \alpha_1 y_{i,t-1} + \alpha_2 y_{i,t-2} + \beta(L)x_{it} + \gamma C_{it} + \lambda_t + \eta_i + \varepsilon_{it}$$

where t = 1, ..., T and i = 1, ..., N index time and firms, respectively. $y_{it}$ is the dependent variable, and $y_{i,t-1}$ and $y_{i,t-2}$ are its lags. $(L)x_{it}$ represents all independent variables and their lagged terms, and $C_{it}$ are control variables. $\lambda_t$ is the unobserved time effect, $\eta_i$ is the unobserved firm-specific effect, and $\varepsilon_{it}$ is the error term.
In different models, $y_{it}$ represents the level of firm performance (FP) or business model innovation (BMI). Firm performance is captured by the return on assets (ROA) [70,72,99-101] and earnings per share (EPS) [20,44,54,77,85]. ROA and EPS are the most widely used indicators of firm performance, mainly reflecting the efficiency of a company's assets in generating income for its shareholders.
Business model innovation is calculated by weighting six different indexes using principal component analysis and the entropy weight method [102,103]. It is a composite of six firm-level variables (R&D expense ratio, fixed and intangible assets ratio, customer concentration, main income revenue share, total asset turnover, and efficiency of labor), following the structure of value creation, value proposition, and value capture innovation [104]. Since business model innovation can increase firm performance, we expect a positive impact.
In this paper, the independent variables mainly capture capital structure (CS), including the total debt rate (TDTA), short-term debt rate (STDTA), and ownership concentration (OC10). The total debt rate, which indicates the level of debt, is obtained by comparing total debt to total assets [20,40]. A company's total debt level reflects its financing ability and the current level of risk it is taking, and it has different impacts on the company's operating decisions. We believe there is an optimal capital structure resulting from an equilibrium where the benefits of control equal the costs of bankruptcy [20,53,61], so the relationship between the debt rate and firm performance or business model innovation should be U-shaped or inverted U-shaped.
The short-term debt rate represents the ratio of short-term debt to total assets [20,40,41,54,105]. The level of a company's short-term debt reflects the proportion of debt that needs to be repaid within one year. It is an essential reflection of the company's risk exposure and promotes the flexibility and richness of operations. So, we propose that short-term debt will promote the firm's performance and business model innovation.
Ownership concentration (OC10), measured by the sum of the shareholding rates of the top ten shareholders [20,47,48], reflects the dispersion of shareholding ratios in a company and embodies the control of the major shareholders over the company and the mutual checks and balances between them. So, we expect a positive sign.
$C_{it}$ comprises the following control variables. Firm size (SIZE) is calculated as the natural logarithm of total assets at the end of each year; firm size is correlated with both capital structure and firm performance [4,8,68,92,105].
Firm age (AGE) indicates the operational maturity of a firm, i.e., the time it has been in operation. To reduce dispersion, we take the logarithm of the difference between the observation year (2023) and the establishment year (the year the company was founded), which makes the data more reliable [10,22,23].
The non-debt tax shield (DEP) is the depreciation of a company's fixed assets, which, like interest on debt, is tax-deductible; it thus provides a tax shield without debt. It is measured as the depreciation of fixed assets divided by total assets [4,105].
Board size (BSIZE) is generally measured as the log of the number of directors on the board [106,107]. Board size is often related to the level of corporate governance and impacts the company's operating decisions and efficiency; therefore, it is included as a control variable in this paper.
The independent directors ratio (INDDIR) is the ratio of the number of independent directors to the total number of directors on the board [4]. According to the Chinese Company Law, independent directors must make up at least one-third of the board of a listed company. Independent directors provide independent opinions that help the board make sound decisions, so the proportion of independent directors usually reflects a firm's corporate governance level.
The quick ratio (QR) represents the firm's solvency and is defined as current assets minus inventories, divided by current liabilities [68,105]. A higher quick ratio indicates a stronger short-term debt repayment ability; generally, a quick ratio of 1.5 or more is considered reasonable. An excessively low quick ratio usually indicates weak short-term repayment ability and a possible liquidity crisis, which can affect corporate decision-making and operating performance.
All the variables are listed in Table 4.
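For concreteness, the variable definitions above can be assembled in a few lines of pandas. This is a minimal sketch; the raw column names (net_income, total_debt, etc.) are hypothetical placeholders, not field names from the Wind database.

```python
import numpy as np
import pandas as pd

def build_variables(df: pd.DataFrame, obs_year: int = 2023) -> pd.DataFrame:
    """Construct the study's variables from hypothetical raw columns."""
    out = pd.DataFrame(index=df.index)
    out["ROA"] = df["net_income"] / df["total_assets"]            # firm performance
    out["EPS"] = df["net_income"] / df["shares_outstanding"]      # firm performance
    out["TDTA"] = df["total_debt"] / df["total_assets"]           # total debt rate
    out["STDTA"] = df["short_term_debt"] / df["total_assets"]     # short-term debt rate
    out["OC10"] = df["top10_share_rate"]                          # ownership concentration
    out["SIZE"] = np.log(df["total_assets"])                      # firm size
    out["AGE"] = np.log(obs_year - df["establish_year"])          # firm age
    out["DEP"] = df["depreciation"] / df["total_assets"]          # non-debt tax shield
    out["BSIZE"] = np.log(df["n_directors"])                      # board size
    out["INDDIR"] = df["n_indep_directors"] / df["n_directors"]   # independent director ratio
    out["QR"] = (df["current_assets"] - df["inventories"]) / df["current_liabilities"]
    return out
```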
Estimation technique
This paper adopts a dynamic panel design and applies the generalized method of moments (GMM) to estimate the model. If the model is formulated dynamically, GMM is a proper estimation technique [110], and it helps when there is suspicion of unobservable data [111]. [112] used all possible lagged variables as instrumental variables (there may be more instruments than endogenous variables) for estimation, a method known as difference GMM. To overcome problems such as the inability of difference GMM to estimate time-invariant variables and the strong persistence of sequences, [113] returned to the level equation before differencing and proposed level GMM estimation. [114] combined difference GMM and level GMM and proposed system GMM. The advantage of system GMM over difference GMM is that it improves estimation efficiency and can estimate time-invariant variables [114]. Sys-GMM estimation also corrects the simultaneity bias between the variables of interest and control variables [110,111].
Proper instrumental variables help us obtain unbiased results; without valid instruments, reverse causality undoubtedly leads to biased estimates [112,114]. Since sys-GMM uses more moment conditions and more information, and is therefore more efficient, this paper prioritizes system GMM when selecting estimation strategies.
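As an illustrative sketch of this estimation step, the open-source pydynpd package estimates dynamic panels by difference or system GMM with a Stata-like command string. The command below is an assumption for exposition: the variable names, lag ranges, and instrument blocks are illustrative rather than the paper's exact specification, and the syntax should be checked against the package documentation.

```python
import pandas as pd
from pydynpd import regression  # assumed open-source dynamic panel GMM package

# Long panel with columns id, year, roa, tdta, stdta, oc10, size (illustrative names).
df = pd.read_csv("gem_panel.csv")

# 'gmm(roa, 2:4)' instruments the lagged dependent variable with deeper lags;
# leaving out '| nolevel' keeps the level equation, i.e., system rather than
# difference GMM; 'timedumm' adds time dummies for the lambda_t effects.
cmd = ("roa L(1:2).roa tdta L1.tdta stdta L1.stdta oc10 size "
       "| gmm(roa, 2:4) gmm(tdta, 1:3) iv(size) | timedumm")

model = regression.abond(cmd, df, ["id", "year"])  # reports coefficients, AR(1)/AR(2), Hansen/Sargan
```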
The validity of the instruments is essential in the GMM estimator, so two different but necessary tests are used to guarantee there are no such issues. First, difference GMM and sys-GMM are valid on the premise that the error term ε has no serial correlation; otherwise, endogeneity problems would arise. Therefore, it is necessary to test for second-order serial correlation of the error terms [112]. The second is an over-identification test of whether the instruments are related to the error term, i.e., whether the instruments are exogenous. The Sargan test is therefore used after GMM regression [112-114].
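To make the over-identification logic concrete, the Sargan statistic can be computed by hand from the residuals and the instrument matrix. This is a minimal sketch of the textbook formula under a homoskedastic weighting; GMM software uses the estimator's own weight matrix, so it is illustrative only.

```python
import numpy as np
from scipy import stats

def sargan_test(resid: np.ndarray, Z: np.ndarray, n_params: int):
    """Sargan over-identification test: J = u'Z (Z'Z)^{-1} Z'u / (u'u / n).

    Under the null that all instruments are exogenous,
    J ~ chi2(number of instruments - number of estimated parameters).
    """
    n = resid.shape[0]
    Zu = Z.T @ resid                                    # instrument-residual moments
    J = float(Zu @ np.linalg.solve(Z.T @ Z, Zu) / (resid @ resid / n))
    dof = Z.shape[1] - n_params
    return J, dof, 1.0 - stats.chi2.cdf(J, dof)         # large p-value: instruments look valid
```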
Sample and data
We use data on Chinese GEM high-tech listed companies from the Wind database. The sample period is 2016-2022. Of the 1273 companies listed on the Chinese GEM (as of 19/06/2023), 374 enterprises belong to high-tech industries. First, we delete enterprises that are not high-tech, as well as those established after 2015. Second, observations with serious missing data in the observation period (2016-2022) are removed. Third, ST enterprises are excluded. Fourth, a few missing values were supplemented manually by referring to the data in annual reports. The number of enterprises involved in the model estimation is 374, each with seven years of data from 2016 to 2022, for a total of 2583 observations. Finally, to remove the influence of outliers on the estimation results, the original data are winsorized at the 1% and 99% quantiles.
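The 1%/99% shrinkage (winsorization) step maps directly onto pandas clipping; the column list below is illustrative.

```python
import pandas as pd

def winsorize(df: pd.DataFrame, cols, lower=0.01, upper=0.99) -> pd.DataFrame:
    """Clip each column at its 1st and 99th percentiles to dampen outliers."""
    out = df.copy()
    for c in cols:
        lo, hi = out[c].quantile([lower, upper])
        out[c] = out[c].clip(lo, hi)
    return out

# e.g., panel = winsorize(panel, ["ROA", "EPS", "TDTA", "STDTA", "QR"])
```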
We use six variables (RD/FINTAN/CC/MIR/TAT/EOL) to represent business model innovation. This indicator follows the structure in [104], which gives us a good framework and reference for measuring business model innovation. We adopt this research framework and divide business model innovation into three dimensions: value creation innovation (RD and FINTAN), value proposition innovation (CC and MIR), and value capture innovation (TAT and EOL). Then, we use principal component analysis and the entropy weight method to reduce the above six indicators into one indicator. BMI and BMIe are the resulting indicators used to measure the business model innovation level of different firms.
First, principal component analysis is used to extract the common factors; the factor extraction criterion is eigenvalue ≥ 1, and the orthogonal varimax method is used to rotate the factors to obtain the rotated factor loadings matrix. The composite evaluation value of business model innovation is then obtained from the factor score coefficient matrix and factor analysis table. The steps of the entropy weight method are: (1) standardization of all indicators, (2) calculation of the weight of indicator j in year i, (3) calculation of the information entropy and redundancy of the indicators, (4) calculation of indicator weights, and (5) calculation of the comprehensive indicator.
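The five entropy-weight steps enumerated above map one-to-one onto a short numpy routine. A minimal sketch, assuming X is the firm-year matrix of the six BMI indicators and treating all indicators as positively oriented (negative indicators would need a reversed standardization).

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Entropy weight method for an (n_obs, n_indicators) matrix."""
    eps = 1e-12
    Xs = (X - X.min(0)) / (X.max(0) - X.min(0) + eps)        # (1) min-max standardization
    P = Xs / (Xs.sum(0) + eps)                               # (2) share of each observation in indicator j
    e = -(P * np.log(P + eps)).sum(0) / np.log(X.shape[0])   # (3) information entropy
    d = 1.0 - e                                              # (3) redundancy
    return d / d.sum()                                       # (4) indicator weights

def bmi_score(X: np.ndarray) -> np.ndarray:
    """(5) Comprehensive indicator: weighted sum of standardized indicators."""
    eps = 1e-12
    Xs = (X - X.min(0)) / (X.max(0) - X.min(0) + eps)
    return Xs @ entropy_weights(X)
```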
Table 5 reports descriptive statistics for the variables used in this study. The mean and median of ROA are positive; although the minimum value is -31.26, the profitability of most sample enterprises is still strong, and the overall state is profitable. The mean of TDTA is 36%, meaning that for Chinese GEM high-tech listed enterprises from 2016 to 2022, the average debt ratio is a relatively modest 36%, similar to the median. The minimum values of TDTA and STDTA are both 0.044, indicating that some enterprises hold only short-term debt and no long-term debt. The average value of OC10 is 51.5%, similar to the median. However, the range of ownership concentration from 22.2% to 77% shows that differences in ownership control in China's capital market are still huge.
Table 6 presents the correlation matrix of the variables. Most correlations are not high. However, the correlation between TDTA and STDTA is 0.93, suggesting that the debts of many firms consist mainly of short-term debt, with little or no long-term debt. The correlation between QR and TDTA is -0.67, and with STDTA it is -0.65. The correlation between ROA and EPS is 0.87, but this is not a concern because they are never included in the same model.
To avoid serious multicollinearity problems, a collinearity test is carried out. The variance inflation factor (VIF) is the most widely used indicator of multicollinearity: if the VIF of an independent or control variable exceeds 10, a multicollinearity problem tends to exist in the model [110,115]. Table 7 shows the results; the highest VIF is 8.57, still less than 10. Therefore, although some variables are highly correlated, the multicollinearity issue is not severe and can be disregarded.
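The VIF screen can be reproduced with statsmodels; a sketch, assuming X is the matrix of independent and control variables (a constant is added internally and skipped in the report).

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(X: np.ndarray, names) -> dict:
    """VIF of each regressor; values above 10 flag serious multicollinearity."""
    exog = sm.add_constant(X)
    return {name: variance_inflation_factor(exog, i + 1)  # i + 1 skips the constant
            for i, name in enumerate(names)}
```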
Results
This section reports and discusses the sys-GMM regression results using the basic model and the robustness analyses.
Results between capital structure and firm performance
Table 8 reports the estimated results using return on assets (ROA) as the dependent variable and capital structure (CS) as the independent variables, including the total debt rate (TDTA), short-term debt rate (STDTA), and ownership concentration (OC10). Because we believe a complex relationship exists between the debt ratio and firm performance, we include the square term of TDTA and the first-order lag terms in the equation.
In Table 8, column (1) shows the results of pooled OLS regression with panel-clustered standard errors, ignoring the dynamic panel properties, and column (2) shows the fixed-effect model with panel-clustered standard errors. Considering potential heteroscedasticity or correlation issues, column (3) estimates the fixed-effect model with Driscoll-Kraay robust standard errors, which are robust to very general forms of cross-sectional ("spatial") and temporal dependence when the time dimension becomes large [116]. These three columns are used for comparison. Column (4) is the basic model using the sys-GMM estimator, and column (5) is a robustness analysis with EPS as the dependent variable. In column (6), we still use ROA as the dependent variable but drop industries with fewer than five firms. Cluster-robust standard errors are used in columns (4) to (6).
Column (1) does not consider individual fixed effects and may lose individual characteristics. The estimated coefficients in columns (2) and (3) are identical since they differ only in the standard errors used. However, as mentioned, because subtracting the within mean from each variable induces a correlation between the explanatory variables and the error term in a dynamic panel, several coefficient signs are opposite to the theoretical expectation, and the fixed-effect estimator is inconsistent. This "dynamic panel bias" means other estimation methods must be introduced.
In column (4), the sys-GMM estimate shows a positive and statistically significant memory effect from last year's firm performance to the current year, with a coefficient of 0.242. The firm's total debt ratio in the current year does not significantly impact the current year's operating performance. However, the previous year's debt ratio does affect current performance, showing that the mechanism through which the debt rate affects operating results has a certain "time lag." This is also logical because all the data in this paper are year-end values, so it is reasonable that the state at the end of last year affects this year's result. The regression results show that the total debt ratio at the end of the previous year has a complex, U-shaped impact on firm performance: as the previous year's debt ratio increases, business performance first declines, and beyond a certain level, further increases in the debt ratio significantly increase firm performance [75,76]. The coefficient of short-term debt is positive but insignificant for ROA, while the first-order lag of the short-term debt rate has a positive and statistically significant effect on ROA. This confirms that a high short-term debt rate last year contributes to this year's firm performance, again reflecting a time lag, and that short-term debt can promote improvements in firm performance [59,86]. Ownership concentration and its lags have no significant effect on firm performance.
Column (5) reports the robustness test using EPS as the dependent variable. The square term of the total debt ratio is still not significant, while its first-order lag remains significantly positive, and the first-order lag of the short-term debt ratio also has a significantly positive impact on firm performance at the 5% level. Ownership concentration and its lags remain insignificant. In column (6), we narrow the sample to more typical industries.
Moreover, the result is not significantly different from the previous two columns. The coefficient of the first-order lag of ROA increases from 0.249 in column (4) to 0.307. The coefficient of TDTA2 is negative and significant for firm performance, and the first-order lags of TDTA2 and STDTA remain positive and significant, consistent with columns (4) and (5). In columns (4) to (6), the AR and Sargan tests are all passed. So, from the results in Table 8, we can see that last year's total debt rate has a U-shaped correlation with this year's firm performance, and last year's short-term debt rate increases current firm performance.
Results between capital structure and business model innovation
Table 9 reports the estimated results between business model innovation and capital structure. We use BMI as the dependent variable for business model innovation and BMIe in the robustness test. The independent variables capture capital structure (CS), still including the square of the total debt rate (TDTA2), the total debt rate (TDTA), the short-term debt rate (STDTA), and ownership concentration (OC10). In columns (1) to (3), we again use ID-clustered OLS regression, the fixed-effect model with cluster-robust standard errors, and the fixed-effect model with Driscoll-Kraay standard errors; these columns are used for comparison. In columns (4) and (5), we use BMI and BMIe as dependent variables and use sys-GMM to test the correlation between capital structure and business model innovation. In the last two columns, we again limit the sample by excluding industries with fewer than five enterprises and then use BMI and BMIe, respectively, with sys-GMM.
The coefficient estimates in the first three columns are unstable, and some coefficient signs are even opposite. In columns (4) to (7), the coefficient of the first-order lag of BMI or BMIe is positive and significant; in columns (4) and (6), the coefficient values are 0.252 and 0.258, which differ little, and in columns (5) and (7), the coefficients are both around 0.5 (0.509 and 0.488). Therefore, we believe enterprise business model innovation has some "inertia": the degree of business model innovation in the previous year significantly and positively affects the level of business model innovation in the current year, forming a "business model innovation chain." The first-order lag of TDTA2 is positive and significantly affects business model innovation in all seven columns, regardless of the estimation strategy used. In columns (4) and (6), although samples of different sizes are used, there is little difference between the two regression coefficients (1.439 and 1.498); in columns (5) and (7), the coefficients are 0.156 and 0.16, which also differ little. Moreover, the AR and Sargan tests are both passed. Therefore, we believe that the enterprise's total debt ratio in the past period presents a stable U-shaped relationship with business model innovation in the current period, because changes in capital structure need some time to affect enterprise decisions and business model innovation; the regression results show that this relationship is stable and sustained. The short-term debt rate and ownership concentration do not influence business model innovation.
Results between business model innovation and firm performance
Table 10 demonstrates the estimated results between business model innovation and firm performance. We use BMI and BMIe as the independent variables and ROA and EPS as the dependent variables. Column (1) is the basic model using the sys-GMM estimator, and columns (2) to (6) are robustness analyses. In column (2), we use EPS instead of ROA; in column (3), we replace BMI with BMIe. In columns (4) to (6), we use the same methods and variables but reduce the sample following the screening principles mentioned earlier.
As we can see, the first-order lag of firm performance is positive and significant with the dependent variable in most columns, once again confirming that past firm performance positively affects current performance. Focusing on the core variables of these models, the business model innovation level significantly improves firm performance in all models, while in most models the first-order lag of BMI or BMIe is insignificant for firm performance. In columns (1) and (4), whatever the sample size, the coefficients are around 5.7 (5.797 and 5.747), which shows strong robustness. When we use EPS as the proxy variable for firm performance, as columns (2) and (4) show, the coefficients are 0.293 and 0.359; although not as robust as the results obtained using ROA, they are both around 0.3. In columns (3) and (5), when we use BMIe as the independent variable, the coefficients are 23.37 and 19.79 (both around 20). Furthermore, the AR and Sargan tests are both passed. Based on our findings, we can conclude with a high degree of confidence that business model innovation has a significant positive impact on firm performance. This conclusion is consistent with our original hypothesis and the findings of previous studies [22-24, 32, 34, 96].
Mediation effect of business model innovation
According to resource-based view (RBV) theory, different capital structures lead to different resource bases, leading to different decision-making, strategy changes, and new knowledge. These changes can be reflected in the innovation of business model elements [117,118]. Therefore, any change in business model elements (such as technological innovation, product innovation, and team management innovation) can lead to business model innovation [32,34,39], ultimately improving firm performance.
We have already confirmed that the first-order lag of capital structure influences the current business model innovation level and that the current business model innovation level positively impacts firm performance. Therefore, business model innovation may mediate the relationship between capital structure and firm performance. To test this hypothesis, we conducted a bootstrap Sobel test, which has higher statistical power [119-123].
We still use ROA and EPS as proxies for firm performance and BMI and BMIe as measures of business model innovation. As shown in Table 8, ownership concentration does not affect firm performance, so we only test whether business model innovation mediates the relationship between the total debt ratio (TDTA) and firm performance and between short-term debt (STDTA) and firm performance. Table 11 shows the results of the Sobel test using 1000 bootstrap samples. When testing the mediating effect of business model innovation between TDTA and firm performance, whether ROA or EPS represents firm performance and whether BMI or BMIe represents business model innovation, the 95% confidence interval does not include 0. When testing the mediating effect between STDTA and firm performance, the 95% confidence interval includes 0. Therefore, business model innovation mediates the relationship between the total debt ratio and firm performance but not between short-term debt and firm performance. This is also reflected in Table 9, because STDTA and its first-order lag are insignificantly correlated with business model innovation.
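A minimal bootstrap mediation sketch: the a-path regresses the mediator (BMI) on the capital structure variable, the b-path regresses performance on the mediator controlling for the treatment, and the indirect effect a*b is resampled 1000 times for a percentile confidence interval. The simple OLS paths below stand in for the paper's dynamic-panel setting and are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

def bootstrap_indirect(x, m, y, n_boot=1000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b along x -> m -> y."""
    rng = np.random.default_rng(seed)
    n, effects = len(x), np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, n, n)                                   # resample firm-years
        a = sm.OLS(m[i], sm.add_constant(x[i])).fit().params[1]     # a-path: x -> m
        Xb = sm.add_constant(np.column_stack([m[i], x[i]]))
        bc = sm.OLS(y[i], Xb).fit().params[1]                       # b-path: m -> y given x
        effects[b] = a * bc
    lo, hi = np.percentile(effects, [2.5, 97.5])
    return effects.mean(), (lo, hi)   # mediation is supported if the CI excludes 0
```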
Discussion
This study investigates the impact of corporate capital structure and business model innovation on firm performance, utilizing a system GMM approach with data from Chinese listed enterprises. It was found that the previous year's performance positively influences the current year's performance level. This is attributed to the fact that better performance often signifies higher levels of innovation investment [124,125], human capital [126], and risk management practices [127], among others, all of which further enhance firm performance. The study also reveals that a company's total debt ratio at the end of the previous year has a complex, U-shaped impact on firm performance in the current year. A possible reason is that when a firm has low debt, the tax shield effect of debt is minimal and the costs associated with debt outweigh the tax benefits, hindering performance improvement; as the firm's debt levels increase, the tax shield effect becomes more pronounced, enhancing the firm's value and net income. This aligns with agency cost theory [51] and is supported by the findings of some scholars [75,76]. As the short-term debt ratio increases, firm performance also increases correspondingly, consistent with some research [18,20,59,86,128]. Short-term debt is the primary source of debt; as [20] found, listed companies in China prefer short-term debt financing. A higher short-term debt ratio usually brings more resources within a short period, which leads to better performance.
Furthermore, this study did not reveal a significant relationship between equity structure and firm performance: neither the current year's nor previous years' ownership concentration showed a significant impact on current performance. This could be attributed to the use of data from Chinese Growth Enterprise Market (GEM) companies, whose top ten shareholders exhibit high concentration and limited diversity in equity ownership. Additionally, the influence of equity structure is partially reflected in the size of the board of directors. As a result, the positive relationships found by other scholars are not confirmed in this study [20,47,48].
The study reveals that business model innovation in the previous year significantly and positively influences both the level of business model innovation in the current year and firm performance. [129,130] argue that the only way to enhance organizational performance is through a bricolage of resources: according to resource-based view theory and dynamic capabilities, the input and uniqueness of organizational resources determine a firm's competitive advantage and performance. Business model innovation necessitates resource input, and these innovations and inputs often cannot be completed in a short timeframe; rather, they constitute a continuous dynamic process, giving rise to the "inertia" of business model innovation. Business model innovation improves the efficiency of decision-making in the value system, breaks down traditional barriers, changes the decision path, and achieves greater value addition in the marketplace than previous models [23]. Excellent business model innovation is a terrific way to improve competitive advantage and create benefits, which can lead to better firm performance [131,132]. Therefore, unique and sustainable business model innovation is a crucial way for enterprises to develop dynamic capabilities and competitive advantages. Once a company enters the "lane" of business model innovation, its advantages will lead to sustained improvements.
An important innovation of this study is the exploration of the relationship between capital structure and business model innovation, as no prior studies have investigated the impact of capital structure on business model innovation. The study finds that the total debt ratio of the previous period exhibits a significant U-shaped relationship with business model innovation in the current period. This is attributed to the increasing share of debt in the previous year's capital structure gradually raising the company's bankruptcy risk: at that point, decision-makers may lean toward conservative operational and managerial measures, suppressing the level of business model innovation. However, as the level of debt increases to a certain level, the benefits of the debt tax shield become more pronounced, offsetting debt expenses and other costs [51,76]. This provides companies with greater autonomy and potential for innovation, leading to higher levels of business model innovation. Moreover, as debt repayment obligations increase, companies have a stronger drive and necessity for model innovation, further enhancing the level of business model innovation.
Furthermore, as shown earlier in this study, business model innovation positively enhances firm performance, while capital structure has a U-shaped impact on business model innovation; therefore, capital structure can influence firm performance by affecting business model innovation. Theoretically, according to resource-based view theory, firms must acquire and control certain resources and capabilities [133,134] to achieve a competitive advantage, and how many resources can be invested is primarily determined by capital structure. Firms must continuously develop the variety and adaptability of their resources to gain competitive advantage. Capital structure influences the resources available and thereby undoubtedly shapes business model innovation. Business model innovation also requires resource-based innovation to gain a sustainable competitive advantage, which leads to better firm performance.
Conclusion
This study examines the impact of capital structure and business model innovation on firm performance in China. The main motivation is that there are more and more high-tech enterprises in China, attracting the attention of capital markets. By clarifying the impact of different financing channels on operating performance, firms can continuously adjust their capital structure and improve their performance. At the same time, with the increasing number of business model innovation cases in China, it is essential to clarify the transmission mechanism of business model innovation between capital structure and firm performance and to supplement the theoretical research on this issue, providing Chinese high-tech enterprises with reference opinions. Therefore, this paper uses data from GEM-listed high-tech enterprises from 2016 to 2022 and adopts the sys-GMM method.
The study found that capital structure has a noticeable "time lag" effect on enterprise performance. The total debt ratio in the previous period significantly and nonlinearly impacts the current period's firm performance and business model innovation level, presenting a U-shaped relationship. The first-order lag of the short-term debt ratio effectively improves current firm performance. Ownership concentration has an insignificant effect on firm performance and business model innovation. The higher the level of business model innovation in the current period, the better the firm performance, and the extent to which a company innovated its business model in the previous year has a significant positive impact on the current year's level of business model innovation. This paper also verifies that business model innovation mediates the relationship between enterprise capital structure and its performance.
The practical implications of this study lie in the following points. (1) Firms can adjust their financing structure based on these findings: control the pace of debt financing, balance financing risk against the tax shield, quickly move past the lowest point of the capital structure effect to the rising half of the U-shaped curve, and exploit the positive effect of the debt ratio on firm performance. (2) Firms can also take advantage of the positive impact of short-term debt on performance and enhance their capacity for short-term debt financing; the government should provide enterprises with smoother, barrier-free short-term financing channels and basic guarantees to help them continually improve their operating level. (3) Firms should be encouraged to innovate their business models, taking advantage of both the direct effect of business model innovation on performance and the indirect effect of capital structure adjustment released through business model innovation; policy should encourage and support enterprises to break through existing business models and improve tolerance for enterprise innovation.
This paper also has some limitations, mainly the short sample period, the restriction of the sample to Chinese listed companies, the lack of heterogeneity analysis of enterprises with different characteristics, and the limited depth of research on business model innovation.
Future research can be further explored in the following fields: (1) expand the sample size and extend the research period, striving to include a variety of types of enterprises in China and other regions of the world to verify the universality of the above conclusions; (2) classify and group enterprises by maturity, region, industry, or scale to verify the robustness or difference of the relationships between capital structure, business model innovation, and firm performance across groups; (3) conduct in-depth discussions of the mediating mechanism of business model innovation and of what kind of capital structure makes business model innovation more feasible and efficient. These are all worth further research and analysis.
"Economics",
"Business"
] |
Prussian Blue Nanoparticle-Labeled Mesenchymal Stem Cells: Evaluation of Cell Viability, Proliferation, Migration, Differentiation, Cytoskeleton, and Protein Expression In Vitro
Mesenchymal stem cells (MSCs) have been used for the treatment of various human diseases. To better understand the mechanism of this action and the fate of these cells, magnetic resonance imaging (MRI) has been used for tracking transplanted stem cells. Prussian blue nanoparticles (PBNPs) have been demonstrated to be able to label cells for visualization as an effective MRI contrast agent. In this study, we aimed to investigate the efficiency and biological effects of labeling MSCs with PBNPs. We first synthesized and characterized the PBNPs. The iCELLigence real-time cell analysis system then revealed that PBNPs did not significantly alter cell viability, proliferation, or migration activity in PBNP-labeled MSCs. Oil Red O and Alizarin Red staining revealed that labeled MSCs also have normal differentiation capacity. Phalloidin staining showed no negative effect of PBNPs on the cytoskeleton. Western blot analysis indicated that PBNPs did not change the expression of β-catenin and vimentin in MSCs. In in vitro MRI, pellets of MSCs incubated with PBNPs showed a clear MRI signal darkening effect. In conclusion, PBNPs can be effectively used for labeling MSCs without influencing their biological characteristics.
Background
Mesenchymal stem cells, a type of adult stem cell, have anti-inflammatory capacity and regenerative potential and can migrate into injured tissues to aid the recovery of damaged function [1]. They can differentiate into multiple cell types under specific microenvironments and are easily collected from adult and fetal tissue [2]. Thus, mesenchymal stem cells (MSCs) have been used as a promising tool in regenerative medicine and oncology therapy due to these excellent properties [3,4]. However, the fate of MSCs after transplantation into the body remains unclear, and non-invasive MSC tracking in vivo is necessary for evaluating the efficiency of transplantation and the cells' fate, properties, and localization [5]. Recently, magnetic resonance imaging (MRI) has been widely used as an effective technology to obtain structural and functional information on mesenchymal stem cells in vitro and in vivo [6].
In past years, multiple nanoparticles have been used to label MSCs as a promising tool for non-invasive imaging of cells, recording their distribution and fate in vivo and in vitro, and even for the treatment of tumors [7]. For example, superparamagnetic iron oxide (SPIO) nanoparticles and quantum dots (QDs) have been used for labeling cells for many years [8,9], and fluorescent magnetic nanoparticles (FMNPs) have been used to label MSCs for targeted imaging and synergistic therapy of gastric cancer cells in vivo [10]. For these novel labels, a careful and complete analysis of cell toxicity is needed, because everything has a toxic dose and may perturb downstream cell function [11]. For example, the toxicity of iron oxide MRI contrast nanoparticles has been attributed to the generation of ROS, which may cause cell death [12].
Recently, Prussian blue nanoparticles (PBNPs) have been demonstrated to have potential as an MRI contrast agent [13-15]. Prussian blue, considered a practical, economical, safe, and environmentally friendly drug, has been approved by the US Food and Drug Administration (FDA) for the clinical treatment of radioactive exposure. Importantly, PBNPs are highly dispersible and stable in both water and biological mimic environments such as blood serum, without aggregation within 1 week [16], and have good photothermal stability, so PBNPs can be reused in practical applications [17]. For example, Liang et al. [13] first demonstrated that PBNPs, with strong absorption in the NIR region, can be used as an excellent contrast agent to enhance photoacoustic imaging. PBNPs with uniform size and good colloidal stability can be fabricated easily from low-cost chemical agents.
PBNPs have been used as an MRI agent for labeling some tumor cells in research [18], but few studies have reported on the application of PBNPs to MSCs. Here, we report that PBNP-labeled mesenchymal stem cells exhibit normal cell viability, proliferation, migration, cytoskeleton, differentiation, and protein expression in vitro. Further work is needed to confirm whether this holds in vivo and to determine whether directed intralesional delivery of PBNP-labeled MSCs is as practical as cell tracking suggests.
Cell Culture
Mouse MSCs (C3H10T1/2) were obtained from Nanjing KeyGen Biotech. Inc. Cells were cultured in Dulbecco's modified Eagle's medium (DMEM; Hyclone, USA) supplemented with 10% fetal bovine serum (FBS; Israel) and 1% penicillin-streptomycin (Hyclone, USA) at 37°C with 5% CO2, and the medium was changed every 3 days. After four passages, the cells were used for experiments.
Preparation of PBNPs
In a typical synthesis, 2.5 mmol of citric acid (490 mg) was first added to 20 mL of a 1.0 mM aqueous FeCl3 solution under stirring at 60°C. To this solution, 20 mL of a 1.0 mM aqueous K4[Fe(CN)6] solution containing 0.5 mmol of citric acid (98 mg) was added dropwise at 60°C. A clear, bright blue dispersion formed immediately. After 30 min, the solution was allowed to cool to room temperature, with stirring continued for another 5 min at room temperature. Then, an equal volume of ethyl alcohol was added to the dispersion, and the mixture was centrifuged at 10,000 rpm for 20 min, resulting in a pellet of nanoparticles. The pellet was separated again by the addition of an equal volume of ethyl alcohol and centrifugation.
Characterization of PBNPs
The infrared spectra of the synthesized PBNPs were measured using an infrared spectrophotometer (IR; Thermo Fisher Nicolet IS10). The morphology of the synthesized PBNPs was examined by transmission electron microscopy (TEM; JEM 2100F). The field-dependent magnetization of the PBNPs was measured using a vibrating sample magnetometer (VSM; Lakeshore 7307). X-ray diffraction (XRD) analysis was performed using a Bruker D8 ADVANCE A25X. The polydispersity index of the PBNPs was determined with a Zetasizer Nano ZS.
Intracellular Distributions of PBNPs and Ultrastructure of Labeled C3H10T1/2 Cells
Transmission electron microscopy (TEM) was performed to assess the intracellular distribution of PBNPs. After the medium was removed, cells were rinsed with PBS and then digested with 0.25% trypsin. The cells were transferred to a 1.5-mL EP tube and centrifuged (2000 rpm, 5 min). The supernatant was removed, and the cells were fixed with 0.25% glutaraldehyde and 1% osmium acid. After rinsing, the cells were dehydrated in 50% ethanol, 70% ethanol, 90% ethanol, 90% acetone, and 100% acetone for 20 min each, then embedded at 4°C overnight. Sections were then double-stained with 3% uranyl acetate-citrate. Finally, images were collected by TEM.
Scanning electron microscopy (SEM) was performed to assess the ultrastructure of labeled C3H10T1/2 cells. After the medium was removed, cells were rinsed with PBS and fixed with precooled 3% glutaraldehyde at 4°C overnight. The cells were then rinsed twice with PBS and fixed with 1% osmic acid at 4°C for 1 h. After the C3H10T1/2 cells were rinsed again, they were dehydrated in ascending graded alcohols (30%, 50%, 70%, 80%, 90%, 95%, and 100% ethanol) for 2 × 10 min each. The cells were then immersed in 70%, 80%, 90%, 95%, and 100% acetonitrile solution for 15 min each, followed by vacuum drying and sputter coating with gold. Finally, images were collected by SEM.
Cell Viability Analyses of the PBNPs
Cell viability was evaluated using the MTT assay (Sigma, USA). Cells were seeded into 96-well plates at 1 × 10³ cells per well at 37°C in a 5% CO2 atmosphere. After overnight incubation, the culture medium was replaced with 100 μL of fresh medium containing different concentrations of PBNPs (0, 5, 10, 20, 40, and 80 μg/mL), and the cells were cultured for another 1 to 3 days. The culture medium was then removed, and the cells were incubated with 20 μL of MTT (5 mg/mL) at 37°C for 4 h. The precipitated violet dye crystals were dissolved in 150 μL of dimethyl sulfoxide (DMSO; Sigma-Aldrich, USA) for 10 min with gentle shaking. The optical density (OD) was measured at a wavelength of 490 nm using a microplate reader. Results were expressed as the percentage of viable cells.
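The OD-to-viability conversion is a simple ratio. A minimal sketch, assuming blank-corrected absorbance readings; the variable names are illustrative.

```python
import numpy as np

def percent_viability(od_treated, od_control, od_blank=0.0):
    """Percent viable cells = (OD_treated - OD_blank) / (mean OD_control - OD_blank) * 100."""
    od_treated = np.asarray(od_treated, dtype=float)
    return (od_treated - od_blank) / (np.mean(od_control) - od_blank) * 100.0

# e.g., viability of a 40 ug/mL PBNP group relative to untreated controls:
# percent_viability(od490_pbnp40, od490_untreated)
```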
Proliferation Assay
Compared with MTT, MTS, WST-1, and XTT assays, real-time cell analysis (RTCA) allows analysis over the whole period of the experiment and does not require labeling that negatively affects cell culture experiments [19]. The xCELLigence system (Roche/ACEA Biosciences) was therefore used to measure cell proliferation in real time. Briefly, cells were seeded into an E-Plate-16 (ACEA Biosciences, Inc., San Diego, USA) at 5 × 10³ cells per well in 150 μL of complete medium. After growing for 24 h, the medium was replaced with fresh medium containing various concentrations of PBNPs and incubated for another 96 h. The system measures the electrical impedance created by cell attachment on microelectrode-integrated cell culture plates [20] to provide quantitative information about cell number and viability in real time via the RTCA-DP instrument [21]. Cellular proliferation was measured every 15 min for the following 4 days.
Real-Time Monitoring of Cellular Migration
C3H10T1/2 cell migration was measured using a real-time cell invasion and migration (RT-CIM) assay system (ACEA Biosciences, Inc., San Diego, USA). As cells migrate from the upper chamber into the bottom chamber through the membrane and adhere to the sensors, they increase the impedance and hence the "Cell Index" read-outs. Briefly, cells were seeded in the upper chamber at a density of 4 × 10⁴ per well in serum-free medium in the presence of various concentrations of PBNPs. The lower chambers of the CIM plates were filled with 165 μL of complete medium containing 10% FBS. Cell migration was monitored by the RTCA DP instrument every 10 min for a period of 100 h. The cell index (CI), derived from the change in electrical impedance as living cells interact with the biocompatible microelectrode surface in the microplate well, effectively measures cell number, shape, and adherence: the more cells that migrate, the larger the cell index.
Cellular Migration Investigation via Transwell Assay
After being cultured with different concentrations of PBNPs for 48 h, cells at a density of 2 × 10⁶ cells/cm² were cultured in a Transwell chamber (8 μm pore size; BD Falcon™, USA) for 24 h at 37°C and 5% CO₂. After culturing, the inner chamber was cleaned, and the migrated cells on the bottom of the chamber were fixed and stained with 0.1% crystal violet. Each step was followed by washing with PBS for 5 min, three times. The migrated cells were photographed in different fields of view using an inverted phase-contrast microscope (CK2, Olympus, Japan).
In Vitro Cell Differentiation
C3H10T1/2 cells, labeled or unlabeled with PBNPs, were induced to differentiate into two downstream cell lineages, adipocytes or osteocytes. After the cells reached confluence in a six-well plate, they were cultured in osteogenic induction medium (10% FBS/DMEM containing 10 nM dexamethasone, 50 μM ascorbic acid, and 10 mM β-glycerophosphate; Sigma) or adipogenic induction medium (10% FBS/DMEM containing 1 μM dexamethasone, 0.5 mM isobutylmethylxanthine, and 10 μM insulin; Sigma). After 3 weeks, the cells were washed with PBS, fixed with 4% paraformaldehyde, and stained with Alizarin Red or Oil Red O (Sigma). The induced cells were photographed in different fields of view using an inverted phase-contrast microscope (CK2, Olympus, Japan).
Immunofluorescence Assay for F-actin Visualization
MSCs were cultured with various concentrations of PBNPs in a 24-well plate for 48 h. The cells were fixed with 4% paraformaldehyde for 10 min, permeabilized with 0.2% Triton X-100 for 5 min, blocked with 1% BSA in PBS for 30 min at room temperature, and then incubated with phalloidin (1:100, Thermo Fisher Scientific, USA) and DAPI (1:800, Thermo Fisher Scientific, USA) for 30 min at room temperature. Fluorescence microscopy was performed on a Nikon Eclipse Ti-S microscope with NIS-Elements software.
Protein Expression by Western Blot Analysis
Protein expression was evaluated via Western blot analysis. The MSCs were cultured in medium containing different concentrations of PBNPs (0, 25, 50 μg/mL) in six-well plates for 24 h, washed twice with ice-cold PBS, and scraped into 100 μL of RIPA buffer (Beyotime) containing protease inhibitors and sodium orthovanadate (Beyotime, China). After 30 min, the samples were centrifuged at 14,000 rpm for 10 min at 4°C, and the protein concentrations of the samples were determined using a BCA kit (Beyotime, China). Equal amounts of protein were electrophoresed on 10% SDS-PAGE gels (Beyotime, China) and transferred to PVDF membranes (GE Healthcare). The membranes were blocked with 5% milk in Tris-buffered saline with Tween 20 (TBST) at room temperature for 2 h and then incubated with anti-β-catenin (1:1000, CST, USA), anti-vimentin (1:1000, Abiocode), and anti-β-actin (1:1000, CST, USA) overnight at 4°C. The membranes were washed three times for 5 min each and then incubated with the appropriate secondary antibodies for 2 h at room temperature. Signals were detected with ECL and ECL-plus reagents (Beyotime, China) and imaged on a Molecular Imager® ChemiDoc™ XRS+ system (Bio-Rad Inc., USA) with Image Lab™ software using enhanced chemiluminescence.
Cellular Imaging Investigation of Cellular Labeling Efficiency via MRI
MSCs were treated with different concentrations (25 and 50 μg/mL) of PBNPs for 48 h, while control cells were cultured in complete medium without PBNPs; the cells were then washed three times with PBS buffer, trypsinized, collected, and embedded in 1 mL of 1% (w/v) agarose for imaging studies. Additionally, MSCs labeled with 50 μg/mL PBNPs were induced towards osteogenic differentiation for 14 days, after which the MRI signal effect was examined. T2-weighted imaging was performed using an inversion recovery gradient echo sequence with TE = 23 ms, TR = 400 ms, NEX = 2.0, a slice thickness of 2 cm, a FOV of 20 × 20 cm, and a matrix size of 384 × 256.
Statistical Analysis
The results were expressed as mean ± SD of at least three independent experiments performed in triplicate. Treatment groups were compared using one-way analysis of variance (ANOVA) followed by Student's t test. p < 0.05 was accepted as indicating a significant difference.
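A minimal sketch of this statistical workflow, assuming triplicate measurements per group (the group values below are hypothetical stand-ins), could look as follows in Python with SciPy:

```python
# Illustrative sketch of the reported statistics; all group values are invented.
import numpy as np
from scipy import stats

control = np.array([100.1, 98.7, 101.4])   # % viability, triplicate
treated = np.array([97.9, 99.2, 96.8])     # e.g. 80 ug/mL PBNPs (hypothetical)
other   = np.array([99.0, 100.5, 98.1])

f_stat, p_anova = stats.f_oneway(control, treated, other)  # one-way ANOVA
t_stat, p_ttest = stats.ttest_ind(control, treated)        # Student's t test
print(f"ANOVA p={p_anova:.3f}, t-test p={p_ttest:.3f}; significant if p < 0.05")
```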
PBNP Characterization
Transmission electron microscopy (TEM) was performed to characterize the PBNPs (Fig. 1a), which have a diameter of 20-25 nm and a cuboidal morphology. Figure 1b shows the infrared spectrum of the synthesized PBNPs, which exhibited the typical Fe³⁺-CN absorption peak around 2085.23 cm⁻¹, in agreement with that of PBNPs. Field-dependent magnetization measurement was further used to study the magnetic properties of the PBNPs. Figure 1c shows the magnetization curves of the PBNPs at room temperature, which demonstrate their superparamagnetism. Figure 1d shows the diffraction peaks at (200), (220), (400), and (420), which corroborate the XRD pattern of PBNPs. Additionally, the polydispersity index of the PBNPs was 0.16, indicating a uniform particle size distribution.
Cellular Uptake and Cytotoxicity of PBNPs
To further confirm the cellular uptake of the PBNPs by MSCs, the cellular micromorphology of C3H10T1/2 cells treated with and without the PBNPs was studied. Figure 2 shows SEM and TEM images of C3H10T1/2 cells after 48 h of incubation with and without the PBNPs. In the SEM images, the ultrastructure of the labeled C3H10T1/2 cells showed no obvious changes compared with the control cells. In the TEM images, the control C3H10T1/2 cells not incubated with the PBNPs exhibited a typical cellular micromorphology with clear cellular microstructures. After incubation with the PBNPs, however, a random distribution of the PBNPs was clearly observed in the cytoplasm of the C3H10T1/2 cells, and some PBNPs appeared to be localized in vesicles within the cytoplasm. Although this random cytoplasmic distribution was observed, the exact mechanism of intracellular uptake remains unclear. We propose that the internalization of the PBNPs in C3H10T1/2 cells may occur via a mechanism similar to those reported previously, in which various inorganic nanoparticles, including Prussian blue-poly(L-lysine), gold, silver, and metal oxides, were readily taken up by cells via endocytosis [15, 22, 23].
To evaluate cytotoxicity and cell viability in MSCs, the MTT method was performed. The cells were incubated for 1 to 3 days at 37°C under 5% CO₂ with various concentrations of PBNPs suspended in DMEM. Three independent trials were conducted, and the averages and standard deviations are reported. Figure 3 shows the viability of MSCs treated with PBNPs (5, 10, 20, 40, and 80 μg/mL) relative to the control cells at 24 to 72 h. The results indicated that the PBNPs were non-toxic to the cells over the tested concentration range, as measured by MTT. Furthermore, a real-time proliferation assay using the xCELLigence instrument was used to investigate the growth curves of MSCs. The results showed that the growth curves of MSCs were not significantly influenced by these concentrations of PBNPs (Fig. 4a), and the corresponding cell viabilities after 24, 48, 72, and 96 h of treatment are shown in Fig. 4b. These results suggest that the PBNPs have no effect on the proliferation of MSCs.
Cell Migration Capability
Migration of MSCs treated with various concentrations of PBNPs was tested using the Transwell assay and a newer technique, the RT-CIM assay system. In the Transwell assay, the labeled cells showed no obvious changes in migration. With the RT-CIM assay system, cell migration was monitored in real time, which provides more accurate data and allows migration capability to be assessed more precisely. In the RT-CIM data, the labeled cells initially migrated more slowly than the unlabeled cells, but at 72 and 96 h there was no significant difference in migration between labeled and unlabeled cells, indicating that high concentrations of PBNPs did not affect MSC motility (Fig. 5).
In Vitro Cell Differentiation
The pluripotency of labeled and unlabeled MSCs was investigated by Alizarin Red and Oil Red O staining. Figure 6 shows that labeled MSCs differentiated into adipocytes and osteocytes as successfully as the unlabeled MSCs. These results suggest that the PBNPs did not interfere with the cells' differentiation capacity, preserving the pluripotency of the labeled MSCs.
Influence of the Labeling PBNPs on the Cytoskeleton
To investigate the effect of PBNPs on the cytoskeleton of MSCs, immunofluorescence staining of F-actin was used. Phalloidin staining showed no alteration of the red actin filaments of the cytoskeleton after 48 h of labeling compared with unlabeled MSCs; a comparison of the integrity and distribution of actin filaments in the labeled and unlabeled cells revealed no alterations (Fig. 7).
Western Blot Analysis
Wnt signaling pathways play an important role in the regulation of cell proliferation, differentiation, apoptosis, tissue formation, and stem cell fate [24]; β-catenin is therefore a functional protein of MSCs. In addition, vimentin is a mesenchymal biomarker and also a functional protein of MSCs [25]. Both proteins relate to the biological function of MSCs, and their expression was evaluated by Western blot analysis. Figure 8 shows that the expression of β-catenin and vimentin in MSCs treated with various concentrations of PBNPs for 48 h showed no significant changes compared with untreated MSCs. These results indicate that PBNPs do not change the expression of β-catenin and vimentin in MSCs, demonstrating the stability of the biological function of MSCs after treatment with PBNPs.

The potential of PBNPs as an MRI contrast agent has been demonstrated [14], and other studies have shown that surface modification of PBNPs enhances their performance in MRI [17, 26]. Currently, PBNP labeling has been used in a variety of cells. Dumont et al. described PBNPs as agents for MRI and fluorescence-based imaging of pediatric brain tumors [27]; Perera et al. developed gadolinium-incorporated PBNPs for the early detection of tumors in the gastrointestinal tract [28]; and Cano-Mejia et al. combined Prussian blue nanoparticle (PBNP)-based photothermal therapy (PTT) with anti-CTLA-4 checkpoint inhibition to treat neuroblastoma [29]. However, there are few reports on PBNP-labeled MSCs, and whether labeling with PBNPs negatively influences the function and viability of MSCs has remained unclear.

To investigate whether the PBNPs can enhance the T2-weighted MRI contrast of cells, we incubated MSCs with or without PBNPs and examined the MRI signal effect. To monitor the temporal stability of labeling and to investigate whether the PBNPs would lose their imaging capability when the MSCs differentiated, we incubated the MSCs with PBNPs, induced them towards osteogenic differentiation for 14 days, and then examined the MRI signal effect. As shown in Fig. 9, the pellets of the MSCs incubated with PBNPs showed a clear MRI signal darkening effect, and the SI value of the labeled MSCs differed markedly from that of the unlabeled MSCs. Notably, the labeled MSCs also showed a clear MRI signal darkening effect after differentiation was induced. These results demonstrate that PBNPs have the potential to be used as an effective T2 contrast agent for cellular imaging of MSCs and can offer long-term retention of the contrast agent even after cell differentiation. Many data have been published on MSCs labeled with magnetic nanoparticles (MNPs), but the application of MNPs is limited by their cytotoxicity: when MNPs are delivered to target tissue, the majority often distribute to the liver and spleen, so their toxicity cannot be neglected [30]. For example, Costa et al. found that SPIONs could be cytotoxic to neuronal and glial cells [31]. As mentioned above, the PBNPs showed no detectable cytotoxicity and had no effects on the cell characteristics of MSCs, including the cytoskeleton, cellular morphology, and functional proteins.
Thus, the low cytotoxicity of PBNPs reinforces their strength as an effective T2-weighted cellular MRI contrast agent.
Conclusions
In summary, we introduced PBNPs for the tracking of mesenchymal stem cells and studied the survival, migration potential, and cell characteristics of MSCs after labeling with the PBNPs. Furthermore, we demonstrated the potential of PBNPs as an effective T2-weighted MRI contrast agent for the cellular MRI of MSCs. PBNPs can be used effectively for the labeling of MSCs without influencing their biological characteristics. This conclusion paves a new road for the labeling of MSCs. | 5,199.6 | 2018-10-22T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Research on Key Technologies of Power System Automation Application under Smart Grid
With the continuous growth of the power grid and the advancement of the integrated regulation-and-control mode, the amount of real-time monitoring information in the power grid system has grown rapidly, imposing a great burden on dispatch monitoring at all levels. The old alarm function of the power dispatching automation system can no longer meet the needs of efficient monitoring. It is therefore necessary to sort, analyse, process, and summarize the alarm information of each link in the monitoring business, so as to improve monitors' awareness of the overall operation status of the power grid and the processing speed for abnormal faults. Based on this, this paper discusses and studies the smart alarms of the smart grid, analyses the overall architecture, and on this basis analyses several smart alarm technologies used in the smart grid system, with a view to providing a theoretical reference for subsequent research on smart grids and alarm technology.
Introduction
The dispatch plan application of the smart grid dispatch control system is built on an integrated basic platform. It can realize the unified coordination of the three-level dispatch plans of the country, the grid, and the province, giving full play to the optimal allocation of extra-large power grid resources; it realizes the organic connection and continuous dynamic optimization of dispatch plans from annual, monthly, and daily scales down to real time, improving the lean management of the entire dispatch planning process. Dispatch plan applications can provide a variety of intelligent decision-making tools and flexible adjustment methods according to different dispatch mode requirements, support the visual display of dispatch plan panoramic information and the quantitative analysis and evaluation of relevant factors, and realize the automatic compilation and safety check of multi-objective, multi-constraint, and multi-period dispatch plans, achieving the coordination and unification of the safety and economy of power grid operation [1].
Dispatch plan applications mainly include declaration and release, forecasting, maintenance planning, short-term transaction management, hydropower dispatch, power generation planning, assessment and settlement, and plan analysis and evaluation. In the design and development of the smart grid dispatch control system's dispatch planning applications, different models have been fully considered in the development of core technologies such as safety-constrained unit commitment and safety-constrained economic dispatch for different dispatch modes such as San Gong dispatch, energy-saving dispatch, and the power market. Taking advantage of the introduction of energy-saving power generation dispatching methods, the National Power Dispatching Control Centre organized a number of domestic R&D units to focus on key technologies such as safety-constrained unit commitment, safety-constrained economic dispatch, and safety verification, achieving a breakthrough in power generation planning optimization dispatching software and filling the domestic gap. At present, dispatch planning suitable for San Gong dispatching and energy-saving dispatching has been widely used in power grids at the provincial level and above across the country.
The Third Plenary Session of the Eighteenth Central Committee put forward major issues for comprehensively deepening reforms, clearly deepening economic system reforms and letting the market play a decisive role in the allocation of resources. It is expected that the pace of power market reform will further accelerate, and the power market is likely to become the target model for future power system development. This article analyses the development of the smart grid dispatch control system's dispatch plan application in the market mode from the aspects of the day-ahead market, the intraday real-time market, auxiliary services, safety verification, new energy consumption, and tie-line optimization, and proposes specific application models and development ideas for the new situation. The development model of the electricity market varies according to local conditions, and this paper takes a typical market model as an example to illustrate the adaptability of the smart grid dispatch control system's dispatch planning application functions to the market model.
Introduction
Power grid real-time monitoring and intelligent alarms are the core functions of the real-time monitoring and early-warning applications of smart grid dispatching control systems. They include steady-state monitoring of power grid operation, dynamic monitoring and analysis of power grid operation, online monitoring and analysis of relay protection equipment, online monitoring and control of safety control management, and comprehensive intelligent analysis and alarm functions. Using power grid operation information, secondary equipment status information, and meteorological and hydrological conditions, they realize comprehensive monitoring of the steady-state, dynamic, and transient processes of power system operation. In this process, the monitoring of the power grid operation status is rendered panoramic and integrated with regulation and control, and through comprehensive analysis, online fault analysis and intelligent alarm functions are provided [2]. The composition of the real-time monitoring and intelligent alarm functions and the logical relationships of the data are shown in Figure 1.
Parallel computing of dynamically allocated tasks
The safety check calls the parallel computing service of the smart grid dispatching control system and interacts with the cluster computing resources through a standard interface. The parallel computing service supports two methods: pre-allocation and dynamic allocation. Because the amount of calculation for safety verification changes dynamically according to the needs of the application, the dynamic allocation method is adopted. After receiving a calculation request, the safety verification server estimates the calculation amount according to the calculation content and then determines the number of servers allocated to the calculation by combining the calculation priority and the resources of the parallel computer group, so as to support multi-task parallel calculation and fully utilize the computing power of the computer cluster [3].
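As a rough illustration of the dynamic allocation idea, the following sketch estimates the number of servers to grant a verification request from the estimated workload, the calculation priority, and the idle capacity of the cluster; the sizing rule, parameter names, and numbers are assumptions for illustration only, not details from the system described above.

```python
# Hedged sketch of dynamic server allocation for a safety verification request.
import math

def allocate_servers(n_sections, faults_per_section, priority, idle_servers,
                     tasks_per_server=200):
    total_tasks = n_sections * faults_per_section      # estimated calculation amount
    want = math.ceil(total_tasks / tasks_per_server)   # servers needed at nominal load
    want = math.ceil(want * priority)                  # higher priority -> more servers
    return max(1, min(want, idle_servers))             # never exceed idle capacity

# Example: 96 check sections, 500 scanned faults each, high-priority request
print(allocate_servers(n_sections=96, faults_per_section=500,
                       priority=1.5, idle_servers=32))
```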
The amount of calculation for safety verification is relatively large. First, the number of verification sections is determined by the calculation coverage period; then, for each verification section, a large number of safety analyses based on the set fault set are performed. As shown in Fig. 2, the safety verification adopts instance-level parallelism, which distributes the calculation tasks of the cross-section planned power flow calculation and the safety analysis fault scanning to each central processing unit (CPU) core of the parallel computer group.
Difficulties in calculating the planned power flow
According to the dispatch plan and dispatch operation data, the planned power flow calculation generates a convergent and reasonable cross-sectional power flow for subsequent static safety analysis and stability checking. Input data for planned power flow calculations include: the grid model, system load forecast and bus load forecast, equipment status change plans (including equipment outage plans and operation mode change plans), the power generation plan, the tie-line plan, the provincial total exchange plan, and dispatch operation information [4]. The essence of planned power flow calculation is power flow calculation, whose equations in polar form are

$$P_i = V_i \sum_{j \in i} V_j \left( G_{ij} \cos\theta_{ij} + B_{ij} \sin\theta_{ij} \right), \qquad Q_i = V_i \sum_{j \in i} V_j \left( G_{ij} \sin\theta_{ij} - B_{ij} \cos\theta_{ij} \right), \qquad (1)$$

where $P_i$ and $Q_i$ are the active and reactive power of node i; $j \in i$ denotes the nodes connected to node i; $V_i$ and $V_j$ are the voltages of nodes i and j, respectively; $G_{ij}$ and $B_{ij}$ are the elements of the node admittance matrix; and $\theta_{ij}$ is the phase angle difference between nodes i and j. First, the topology of the check section is determined from the power grid model, the equipment status change plan, and the dispatch operation information, i.e., $G_{ij}$ and $B_{ij}$ are determined; the node power injections $P_i$ and $Q_i$ are determined from the other data; the node voltages and phase angles are then solved from the power flow equations; and finally the power flow distribution of the grid is determined.
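A minimal sketch of evaluating the nodal injections in equation (1) for a toy two-bus network is given below; the admittance values are invented for the example, and a production planned-power-flow solver would iterate on these equations (e.g. by Newton-Raphson) rather than merely evaluate them.

```python
# Sketch: evaluate P_i, Q_i from the polar power flow equations for a toy network.
import numpy as np

def injections(V, theta, G, B):
    """Active/reactive injections from equation (1): P_i, Q_i given V, theta, G, B."""
    dth = theta[:, None] - theta[None, :]                  # theta_ij matrix
    P = V * ((G * np.cos(dth) + B * np.sin(dth)) @ V)
    Q = V * ((G * np.sin(dth) - B * np.cos(dth)) @ V)
    return P, Q

# Toy 2-bus line with series impedance 0.01 + j0.1 p.u. (hypothetical values)
y = 1.0 / complex(0.01, 0.1)
Y = np.array([[y, -y], [-y, y]])                           # node admittance matrix
P, Q = injections(np.array([1.0, 0.98]), np.array([0.0, -0.05]), Y.real, Y.imag)
print(P, Q)
```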
Market Dispatch Electricity Cost Optimization Goal
When minimum electricity purchase cost is the goal, a typical day-ahead market bidding optimization model is established. The objective function is

$$\min F = \sum_{t=1}^{T} \sum_{i=1}^{N} \left[ f_{i,t}(P_{i,t}) + S_{i,t}\, u_{i,t} \right], \qquad (2)$$

where N is the number of quotation units; T is the number of day-ahead market periods; $f_{i,t}(\cdot)$ is the quotation curve of the power generation of the i-th unit in period t; $P_{i,t}$ is the power generation of unit i in period t; $S_{i,t}$ is the start-up cost of unit i in period t; and $u_{i,t}$ is the start-up state of unit i in period t (1 on start-up, otherwise 0). The system power balance constraint is

$$\sum_{i=1}^{N} P_{i,t} = D_t, \qquad (3)$$

where $D_t$ is the system load in period t. Unit start and stop time constraints are also imposed, where $T_i^{\mathrm{on}}$ is the minimum continuous up time when unit i changes from the shutdown state to the start-up state; $T_i^{\mathrm{off}}$ is the minimum continuous down time when unit i changes from the start-up state to the shutdown state; $y_{i,t}$ is the start-up change state of unit i in period t (0 or 1); and $z_{i,t}$ is the shutdown change state of unit i in period t (0 or 1).
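The following sketch evaluates the objective (2) and checks the power balance (3) for a toy two-unit, four-period schedule; the quadratic bid curves, loads, and commitment states are hypothetical, and a real day-ahead clearing model would be solved as a mixed-integer program subject to the start/stop time constraints as well.

```python
# Minimal sketch: evaluate objective (2) and power balance (3) for a toy schedule.
import numpy as np

T, N = 4, 2                                  # periods, quotation units
P = np.array([[100, 120, 150, 130],          # P[i, t]: generation of unit i in period t
              [ 80,  80, 100,  90]], float)
u = np.array([[1, 1, 1, 1],
              [1, 1, 1, 1]])                 # commitment state (0/1), hypothetical
S = np.array([500.0, 300.0])                 # start-up cost per unit (invented)
bid = [lambda p: 0.02 * p**2 + 20 * p,       # bid curve f_1(P), invented quadratic
       lambda p: 0.03 * p**2 + 18 * p]       # bid curve f_2(P)
load = np.array([180, 200, 250, 220], float)

startup = np.maximum(np.diff(u, axis=1, prepend=0), 0)   # 1 only when a unit turns on
cost = sum(bid[i](P[i, t]) * u[i, t] + S[i] * startup[i, t]
           for i in range(N) for t in range(T))
balanced = np.allclose(P.sum(axis=0), load)              # constraint (3)
print(f"total cost = {cost:.0f}, power balance satisfied: {balanced}")
```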
Automated inspection technology
Taking the safety check calculation for a certain day as an example, four sets of calculation tests were carried out, as shown in Table 1. Each test selects whether to control the provincial cross-sectional power, whether to perform automatic reactive voltage adjustment, and whether to perform intelligent power flow non-convergence adjustment. The provincial power control algorithm plays a key role in improving the rationality of the planned power flow results. Figure 3 compares the planned power flow results with provincial power control against the actual power flow; it can be seen that controlling the provincial power improves the accuracy of the planned power flow.
Multi-source alarm technology
Compared with hierarchical alarm technology, multi-source alarm technology is developed mainly on the basis of the overall architecture of horizontal intelligent alarms. It collects alarm information from multiple sources, such as power grid operation monitoring information, total accident signals, and the corresponding secondary equipment usage signals. On this basis, the multi-source alarm information must first be strictly verified, producing the corresponding verification results. Secondly, based on the verification results, an effective online analysis of possible related faults is carried out, and the fault results are obtained through study and analysis. Thirdly, all the fault information is effectively integrated based on the fault analysis to finally produce a fault report. This alarm technology can collect alarm information from multiple sources; as long as the relevant alarm information conforms to the alarm rules, it can be effectively collected, thereby truly ensuring the reliability, validity, and timeliness of the alarm information. In addition, through multi-source alarm technology, the alarm information can be analysed and summarized layer by layer, which increases the rigor and effectiveness of the final fault report and lays a foundation for improving the level of fault handling and the operational skills of grid system staff [6].
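The three-step pipeline described above (verification, online fault analysis, integration into a fault report) can be sketched conceptually as follows; the alarm fields, rule set, and device names are invented for illustration and do not come from the paper.

```python
# Conceptual sketch of multi-source alarm fusion: verify -> analyse -> integrate.
from collections import defaultdict

alarms = [  # hypothetical alarms from different sources
    {"source": "SCADA",      "device": "Line-12", "signal": "overcurrent", "valid": True},
    {"source": "protection", "device": "Line-12", "signal": "trip",        "valid": True},
    {"source": "SCADA",      "device": "Bus-03",  "signal": "flicker",     "valid": False},
]

# Step 1: verification - keep only alarms that pass the alarm rules.
verified = [a for a in alarms if a["valid"]]

# Step 2: online analysis - group verified alarms by device to hypothesise a fault.
by_device = defaultdict(list)
for a in verified:
    by_device[a["device"]].append(a["signal"])

# Step 3: integration - emit one consolidated fault report entry per device.
for device, signals in by_device.items():
    confirmed = {"overcurrent", "trip"} <= set(signals)   # corroborated by two sources
    print(f"{device}: signals={signals}, confirmed fault={confirmed}")
```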
Conclusion
The real-time monitoring and intelligent alarm technology of the power grid meets the requirements of the smart grid dispatching control system to monitor the steady-state, dynamic, and transient processes of power grid operation and to alarm intelligently, and it supports the operational requirements of integrated dispatching and control by power grid control agencies. Based on the service bus of the smart grid dispatching control system, this paper designs a service-oriented safety check function architecture, uses interface functions to implement safety check function customization, and uses a subscription-publishing service model to provide safety check services for each application. Parallel computing technology based on dynamically allocated tasks implements simultaneous multi-task calculations and provides safety verification services for multiple users. Planned power flow generation is the foundation and key link of the safety check. This paper analyses the difficulties of planned power flow calculation, proposes a planned power flow algorithm based on multi-section power flow control, and uses automatic reactive voltage adjustment and intelligent power flow non-convergence adjustment technology to ensure power flow convergence and the rationality of the results. At the same time, it puts forward practical technology for day-ahead multi-level dispatching plan safety checks and improves the practical level of the application through plan data verification and statistical technical indicators. | 2,749.6 | 2020-07-03T00:00:00.000 | [
"Engineering"
] |
Psychological Stress Detection in Speech Using Return-to-opening Phase Ratios in Glottis
This paper investigates psychological stress in the speech signal using the shapes of normalised glottal pulses. The pulses were estimated by two algorithms: Direct Inverse Filtering and Iterative and Adaptive Inverse Filtering. Normalised glottal pulses are divided into an opening and a return phase, and a feature vector characterizing each glottal pulse is calculated for a series of n-percentage intervals in the time domain. Each feature vector is built from parameters describing the return-to-opening phase ratio, namely chosen intervals, kurtosis, skewness, and area. Psychological stress is then detected from the feature vector using four different classifiers. Experimental results show that the best accuracy, approaching 95 %, is reached with the Gaussian Mixture Models classifier. All the best results were obtained for the interval of only 5 % of both phase durations, i.e. just before and after the pulse peak, where the most significant differences between normal and stressed speech occur in the feature vector. The presented experiments were performed on our own speech database containing both real stressed speech and normal speech. DOI: http://dx.doi.org/10.5755/j01.eee.21.5.13336
I. INTRODUCTION
The first application of glottal pulses can be found in speech synthesis, where a precise understanding of glottal pulses and their estimation leads to high-quality synthetic speech. For instance, the novel method called Glottal Spectral Separation (GSS) was recently published by Cabral et al. [1]; it can produce high-quality speech by suitably combining a mixed excitation model with a noise component. Another speech synthesis method was introduced by Raitio et al. [2], where the synthetic voice is built using Hidden Markov Models (HMM) and Iterative and Adaptive Inverse Filtering (IAIF), leading to subjectively highly natural synthetic speech. A similar HMM-based speech synthesizer based on the Liljencrants-Fant (LF) model of the glottal flow was published by Cabral et al. [3]. Glottal pulses can also be used in music, for instance in speech (singing) resynthesis [4].
The next application field of glottal pulses is so-called expressive speech processing, used for expressing emotions and dynamically varying voice quality and articulation during phonation. In 1980, the dynamic changes depending on phonation type, specifically on the glottal source signal, were described by Laver [5]. Differences between prosodic and glottal features were statistically processed and published in [6], where glottal features showed significant differences for all 30 emotion pairs, in contrast to prosodic features. A suitable combination of prosodic and glottal features for emotion recognition is also described in [7], where Support Vector Machine (SVM), Artificial Neural Network, and Gaussian Mixture Models (GMM) classifiers were applied to the Berlin emotional speech database. The symmetry of the glottal pulse shape has been used to recognize six spoken emotional states [8], reaching an average efficiency of 66.5 % for well-recorded speech and 47 % for noisy speech, respectively. A number of observed parameters, including glottal features, are described in [9] as varying with the type of psychological stress influence. A set of chosen speech features was tested and observed with different classifiers for gender and emotion recognition [10].
Glottal flow analysis can also be applied in speaker recognition [11]. The efficiency of the glottal source component derived from the Linear Prediction (LP) residual was preliminarily tested for speaker recognition using Auto-Associative Neural Network models on a total of 20 speakers [12]. Other approaches, using for instance Glottal Flow Cepstrum Coefficients [13] and a vocal source model [14], have been experimentally tested for speaker recognition.
Glottal pulse analysis can also be applied in the biomedical field. Recently, the detection of Parkinson's disease from dysphonia measurements has been described as a promising intermediate step towards a non-invasive diagnostic method [15]. Glottal pulses can also be utilized for the analysis of vocal disorders [16] and alcohol intoxication [17], as well as for Alzheimer's disease detection [18]. Other possible disease detection based on voice analysis can be found in the review by Saloni et al. [19]. In general, a survey of glottal source processing and its applications was written by Drugman et al. [20].
II. MINING THE GLOTTAL PULSES
Despite years of research on recovering the true glottal waveform from the speech signal, the best results to date are still achieved only on the basis of glottal flow estimation. The glottal flow can be characterized by a set of glottal pulses repeated with fundamental period T.
An example of glottal flow is illustrated in Fig. 1. Briefly, the whole glottal pulse is composed of two phases: the primary opening phase To and the return phase Tr. The space between particular pulses is called the closed phase Tc, during which the glottis is closed and air does not flow through the gap.
A detailed description of each part of the glottal flow, including the physical changes and processes involved, can be found in [21]. The most widely used methods for estimating the glottal flow are DIF (Direct Inverse Filtering) and IAIF. Both methods are based on all-pole modelling of the speech signal using LP analysis, treating the transfer function of the vocal tract as driven by an impulsive or periodic source substituting for the glottis. The topic of inverse filtering and its impact on voice research and therapy was discussed by Lofqvist [22] and Nwachuku [23]. Other methods of glottal pulse estimation are described in [24].
Basically, the DIF method can be classified as a traditional autoregressive-modelling-based inverse filtering method [25]. The IAIF estimation method can be described as a suitably connected serial-parallel combination of a DIF pair; it is described in detail in [26] and is characterized by better glottal flow estimates, although it is more computationally demanding. All analysed glottal pulses were extracted from speech by a modified version of the software Aparat [21], where pulse estimation is based on the four-parameter LF glottal model [27]. Thus, the beginning, maximum position, and end of each analysed glottal pulse (its absolute height and width) are defined by the fitted LF model and used in further processing.
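A much-simplified sketch of the DIF idea is shown below: estimate an all-pole vocal tract model by LP analysis, inverse-filter the speech frame, and integrate to compensate the lip radiation. The frame length, LP order, leaky-integrator constant, and the synthetic test signal are all assumptions for the demo; Aparat and IAIF involve considerably more processing steps.

```python
# Simplified direct inverse filtering (DIF) sketch on a synthetic voiced frame.
import numpy as np
from scipy.signal import lfilter
import librosa

sr = 16000
# Synthetic "voiced" frame: impulse train (F0 = 100 Hz) through a toy all-pole tract
excitation = np.zeros(480)
excitation[::sr // 100] = 1.0
tract = [1.0, -1.3, 0.8]                         # assumed stable AR(2) "vocal tract"
frame = lfilter([1.0], tract, excitation)

a = librosa.lpc(frame, order=8)                  # estimate A(z) by LP analysis
residual = lfilter(a, [1.0], frame)              # inverse filtering -> flow derivative
glottal = lfilter([1.0], [1.0, -0.99], residual) # leaky integration ~ radiation removal
print(glottal[:5])
```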
Under psychological stress, people tend to shorten syllables, and indeed entire words; therefore, the stability of the glottal pulse shape was also observed in dependence on word duration. The first row of Fig. 2 illustrates the signal (light grey) of the Czech syllable "ču", containing mainly the vowel /u/, and the estimated glottal pulses. The glottal flow of the shortly spoken word is black (left column); the longer version of the same word is dark grey (right column). In the next subfigure (second row), both estimated glottal flow waveforms are shown on the same time scale to illustrate the shape differences and the fundamental periods Tshort and Tlong of the short and long versions of the same word, where the fundamental frequency of the shortly spoken word is slightly higher (82 Hz versus 78 Hz). Evidently, the glottal pulses vary with neither the duration of the spoken word nor the fundamental period, which was verified by the two-dimensional normalisation of the extracted glottal pulses shown in the last row, where, for example, five normalised pulses are overlaid for each speech tempo (see Fig. 2). For this reason, only individual glottal pulses are used in the further experiments, rather than whole glottal flow periods, since the time interval of the closed glottis does not seem representative for characterizing the actual state of the speaker.
III. GLOTTAL FEATURE EXTRACTION
This section describes the method used for extracting the chosen parameters, or more precisely their ratios. The method exploits only glottal pulses, composed of the return and primary opening phases Tr and To (see Fig. 1). Each extracted glottal pulse is normalised to a value of 1 in both the time and amplitude domains, leading to dimensionally uniform glottal pulses that keep their original shape. In these two-dimensionally normalised pulses, the primary opening and return phases are processed separately. Both phases are transferred onto a relative time scale, reaching the zero level at the position of the current pulse's peak and the maximum (100 %) in both directions, i.e. at the end and at the start of the respective phases.
The extraction method is based on observing both phases only for a selected relative division n. Figure 3 shows the main idea of the n-percentage glottal pulse processing of the particular primary opening phase To(n) and return phase Tr(n), leading to the equation

$$RTO(n) = \frac{p\left(T_r(n)\right)}{p\left(T_o(n)\right)}, \qquad (1)$$

where RTO is the Return-To-Opening phase ratio of the current n-percentage interval, always symmetric for both phases on the relative scale, and p denotes the parameter evaluated on that interval. The area, skewness, and kurtosis (the third and fourth standardized Pearson's moments) are calculated for both n-percentage intervals. Finally, for each extracted parameter value, the RTO phase ratio is calculated to indicate the degree to which one n-percentage interval dominates for the current parameter. Each part of the pulse curve (thick line segment in Fig. 3) corresponding to the n-percentage division is thus characterized by three different RTOs (kurtosis, skewness, and area). These feature values are further used for processing and for observing differences between normal and stressed speech. As an example, real values of the investigated RTOs are listed in Table I, where all values are based on the 5 % interval and averaged over all speakers.
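A sketch of this feature extraction, under the assumption that each RTO is the plain ratio of a quantity measured on the n-percentage return-phase interval to the same quantity on the opening-phase interval, is given below; the pulse used is synthetic and the segment-selection details are simplified relative to the paper.

```python
# Sketch: RTO feature vector (area, skewness, kurtosis ratios) near the pulse peak.
import numpy as np
from scipy.stats import kurtosis, skew

def rto_features(pulse, peak_idx, n=0.05):
    """RTOs of area, skewness and kurtosis for the n-fraction adjacent to the peak."""
    opening, ret = pulse[:peak_idx + 1], pulse[peak_idx:]
    o_seg = opening[-max(2, int(n * len(opening))):]   # last n% of opening phase
    r_seg = ret[:max(2, int(n * len(ret)))]            # first n% of return phase
    return np.array([np.sum(r_seg) / np.sum(o_seg),    # area ratio (discrete)
                     skew(r_seg) / skew(o_seg),        # skewness ratio
                     kurtosis(r_seg) / kurtosis(o_seg)])  # kurtosis ratio

t = np.linspace(0.0, 1.0, 200)
pulse = np.sin(np.pi * t) ** 2                         # stand-in normalised pulse
print(rto_features(pulse, peak_idx=int(np.argmax(pulse)), n=0.05))
```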
IV. REAL STRESS DATABASE
The research presented in this paper was performed on a purpose-built database containing speech under real psychological stress as well as normal speech. The first part of the database is formed by 18 different Czech-speaking male speakers from the ExamStress database [28], previously used to observe vowel polygon differences depending on the speaker's state [29]. The second part of the database is formed by another 6 Czech male speakers recorded with a PCB 378B02 microphone, suitable for infrasonic applications, and a USB-9234 sound interface produced by National Instruments. All Czech speakers in both parts of the database were recorded during their thesis defence in the frame of the final exam, in order to capture the influence of real psychological stress. A few days later, each speaker repeated the same text under more comfortable conditions, recording the speaker's normal mood.
V. EXPERIMENTAL RESULTS
This section describes the results achieved in the realized experiments. In the fluent speech of the second part of the database (six speakers), Czech vowels were automatically detected and separated for further processing [30]. The separated vowels were then manually divided into beginning and centre vowel parts, from which glottal pulses were estimated by the DIF and IAIF methods. In the first part of the database, vowels were separated manually in fluent speech and then processed similarly, to obtain the purest possible training data for the designed classifiers.
For naturally dynamic speech, the efficiency of emotional state (stress versus normal mood) recognition was evaluated for the two glottal flow estimation methods (DIF and IAIF), for the beginning and centre vowel parts, and for 20 different n-percentage intervals (5 % to 100 % in steps of 5 %). The same efficiency tests were also applied to each 10 ms segment of sound-normalised speech, to observe the impact of limiting the dynamic range on glottal pulse uniformity. The efficiency, i.e. the uniformity of glottal pulses under normal and stress conditions, was tested with four different classifiers available in the standard MATLAB distribution, appropriately trained, validated, and applied.
In the following text, the evaluated glottal pulse processing variants, Method 1 to Method 8, correspond to the eight combinations of estimation algorithm (DIF or IAIF), vowel part (beginning or centre), and sound normalisation (raw or normalised; cf. Table II). The k-Nearest Neighbour (kNN) classifier was chosen first. Its best results are reached for the 5 % observed interval of the glottal pulses, where the most significant differences between normal and stressed speech occur. An efficiency of almost 95 % is reached by Method 2 on the 5 % selected interval. Further, accuracy over 90 % is reached by Method 1 and Method 4 for the 5 % and 10 % selected intervals. Method 4 is the most successful on higher n-percentage intervals, where its recognition efficiency lies between 70 % and 80 %. The worst kNN efficiency was obtained by Method 8, with values below 40 % for higher n-percentage intervals. Overall, the average kNN recognition efficiency is approximately 60 % across all methods and intervals.
The efficiency of stress detection for a chosen classifier, n-percentage interval, and method is calculated as

$$E = \frac{N_{cdn} + N_{cds}}{N_n + N_s} \times 100, \qquad (3)$$

where Nn is the total number of normal-state glottal pulses used, Ns is the total number of glottal pulses under psychological stress, Ncdn is the number of correctly detected normal-mood glottal pulses, and Ncds is the number of correctly classified stressed glottal pulses. A significant efficiency increase was obtained with the SVM classifier, whose average efficiency approaches 70 % over all methods and intervals. In general, the efficiency reached by SVM can be regarded as more satisfactory, with more n-percentage glottal pulse intervals usable for correct psychological stress detection. The best results, approaching 95 % accuracy, are obtained by Method 4 for the 5 % interval as well as in the selected interval range 75 %-95 %. The most significant differences between the observed feature ratios of normal and stressed speech can also be found in the 5 % (Method 6 and Method 7) and 65 % (Method 3) selected intervals, where the accuracy likewise approaches 95 %.
The third classifier used was GMM. High psychological stress detection efficiencies were reached over all possible n-percentage intervals of the glottal pulses. On the other hand, the lowest accuracy values overall were also produced by GMM, specifically for the 10 % (Method 5) and 35 % (Method 1) selected intervals, where the stress detection accuracy drops to only about 10 %. In some selected intervals, each method reaches an efficiency of almost 95 %, which indicates the highest uniformity of the observed features with respect to the actual state of the speaker and marks GMM as a suitable classifier for stress detection. The average efficiency over all methods and intervals approaches 82 %, with the best results achieved by Method 4.
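The GMM-based detector and the efficiency measure of equation (3) can be sketched as follows; one mixture is fitted per speaker state and a pulse is assigned to the state with the higher log-likelihood. The RTO feature vectors below are random stand-ins for real data, and the number of mixture components is an assumption.

```python
# Sketch: GMM stress detector over RTO feature vectors, scored by equation (3).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_norm = rng.normal(1.0, 0.2, size=(300, 3))   # RTO vectors, normal speech (synthetic)
X_str  = rng.normal(1.6, 0.3, size=(300, 3))   # RTO vectors, stressed speech (synthetic)

gmm_n = GaussianMixture(n_components=2, random_state=0).fit(X_norm[:200])
gmm_s = GaussianMixture(n_components=2, random_state=0).fit(X_str[:200])

def detect(X):
    """1 = stressed, 0 = normal, by higher per-class log-likelihood."""
    return (gmm_s.score_samples(X) > gmm_n.score_samples(X)).astype(int)

N_cdn = np.sum(detect(X_norm[200:]) == 0)      # correctly detected normal pulses
N_cds = np.sum(detect(X_str[200:]) == 1)       # correctly detected stressed pulses
eff = (N_cdn + N_cds) / (100 + 100) * 100      # efficiency, equation (3)
print(f"efficiency = {eff:.1f} %")
```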
Figure 4 illustrates the efficiency reached by Method 3 and its sound-normalised equivalent, Method 4, showing that sound normalisation leads to more stable efficiency results in stress detection. The best and most constant results are found in the range of n-percentage intervals 50 %-90 % for Method 4. The particular choice of method, classifier, and selected interval is thus not critical for stress recognition, but the best results are generally reached by the GMM classifier.
The final ranking of the evaluated stress detection configurations is listed in Table II; because there are 640 configurations in total, only the first fifteen (best) and the last five (worst) positions are listed. All configurations are written in abbreviated form, e.g. GMM_5_D_C_N represents the GMM classifier applied to the 5 % selected interval, the DIF estimation method, the vowels' centre, and normalised sound (Method 6). The totals of analysed normal (Nn) and stressed (Ns) glottal pulses, as well as the falsely detected normal (N'cdn) and stressed (N'cds) glottal pulses, are listed in Table II. The best results are clearly reached by Method 4 and the GMM classifier in general. The best single n-percentage interval is the 5 % sector, while the most stable range for stress recognition is from 50 % to 80 %.
VI. CONCLUSIONS
Comparing eight different glottal pulse processing methods and four classifiers, the GMM classifier can be marked as the best for stress recognition, together with the method estimating glottal pulses by the IAIF algorithm from the normalised-sound vowel beginning. From the presented RTOs it follows that: IAIF estimation is more suitable than the DIF algorithm; the influence of stress is better detectable at the vowels' beginning; sound normalisation leads to more stable efficiency results; and the biggest differences in RTOs between normal and stressed speech lie in the 5 % interval as well as in the 65 % interval.
In general, the presented approach is consistent with a similar method detecting stress by means of the glottal pulse distribution [31]. However, the presented experiments show higher accuracy (95 %) than the 88 % published in [31] or the results in [32], where the Glottal Spectral Slope reached stress detection ratios in the range 18 %-36 %.
The combination of automatic vowel detection, e.g. [30], with the findings presented in this paper can lead to the development of new systems for recognizing psychological stress in speech, which can negatively influence human behaviour. Such systems can be practically applied in many fields, e.g. machine control, medical applications, etc.
Further, it is necessary to expand the real psychological stress database to verify the presented experimental results. In future, the described method will also be expanded and adapted for use on all estimated glottal pulses in all voiced parts of speech, i.e. not only on detected vowels. This modification can yield a larger number of estimated glottal pulses and allow us to observe whether the described method is phoneme-independent for psychological stress detection in speech.
Fig. 2.
Fig. 2. Differences of glottal pulses depending on the speech tempo. Black marks the fast speech, i.e. the shorter version of the same spoken word.
Fig. 3.
Fig. 3. Division of a two-dimensionally normalised glottal pulse into particular n-percentage intervals of the opening and return phases.
Fig. 4.
Fig. 4. Efficiency of stress detection for Method 3 (dashed grey line) and Method 4 (solid black line) depending on the selected n-percentage interval, using the GMM classifier.

The Probabilistic Neural Network (PNN) was used as the fourth classifier. Compared with the previous results, similar observations were made. The highest uniformity of RTOs with respect to the speaker's state is found for the 5 % selected interval (Method 2, Method 4, and Method 7) and for the higher intervals 75 %-100 % only with Method 4. The absolute highest accuracy in stress detection (almost 94 %) is achieved by Method 2. The worst efficiency results were obtained in intervals above 70 % by Method 3 and Method 8; these two methods are not suitable for psychological stress detection with the PNN classifier. Overall, the average efficiency of the PNN classifier using RTOs is approximately 62 %.
TABLE I.
AVERAGED REAL VALUES OF THREE RETURN-TO-OPENING PHASE RATIOS IN 5 % SELECTED INTERVAL, IAIF METHOD, NORMALISED SOUND VOWEL'S BEGINNING.
TABLE II.
FINAL SORTING OF USED TYPES. | 4,296.6 | 2015-05-10T00:00:00.000 | [
"Physics"
] |
Stratification and temporal evolution of mixing regimes in diurnally heated river flows
Direct numerical simulations of stratified open channel flows subject to a varying surface heat flux are performed. The influence of the diurnal heating time on the spatial and temporal variation of mixing in the flow and the characteristics of the mean flow state are examined. The control parameters are the bulk stability parameter $\lambda_{B}$, defined through the ratio of the channel height $\delta$ and a bulk Obukhov length scale $\mathscr{L}_{B}$, and the diurnal time scale $\hat{t}$, defined as the ratio of the heating time to an eddy turnover time. The Prandtl number $Pr$ and Reynolds number $Re_{\tau}$ have values of 1 and 400. Simulations are performed over $\hat{t} = 1$ to 24 and $\lambda_{B} = 0.6$ to 26. Two key flow features are used to classify the flow regimes observed, namely the laminar layer depth (LLD) and stratified layer depth (SLD), where the LLD is defined as the depth from the free surface at which the buoyancy Reynolds number $Re_{B} \approx 7$ and the SLD is the depth from the free surface at which the turbulent Froude number $Fr \approx 1$. This study attempts to characterise how these length scales vary across the diel cycle.
The LLD is a viscous length scale, and a regime map of a viscous parameter, the bulk Obukhov Reynolds number $Re_{\mathscr{L}}$, and $\hat{t}$ is presented to classify the LLD behaviour. A regime map of $\lambda_{B}$ and $\hat{t}$ is presented to classify the behaviour of the SLD. Three classifications of each layer depth behaviour within a diel cycle form the basis of the regime maps in this paper: a neutral flow where the LLD or SLD does not exist (denoted NL and NS), a stratified flow where the LLD or SLD is diurnally varying (denoted DL and DS), and a persistent layer of the LLD or SLD (denoted PL and PS). The transition from NL to DL is $\hat{t} \propto Re_{\mathscr{L}}^{4.5}$, from DL to PL is $\hat{t} \propto Re_{\mathscr{L}}^{-0.5}$, from NS to DS is $\hat{t} \propto \lambda_{B}^{0}$, and from DS to PS is $\hat{t} \propto \lambda_{B}^{1}$. The regime maps may be used as a predictive tool to determine when suppressed mixing regimes occur in rivers. At each flow depth, the flow sweeps through a range of mixing states across the diel cycle. The local mixing efficiency is briefly assessed and found to scale well with the instantaneous $Fr$ number according to the regimes proposed by Garanaik and Venayagamoorthy (J. Fluid Mech., vol. 867, 2019, pp. 323-333).
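As an illustration of these layer-depth definitions, the following sketch locates the LLD and SLD in vertical profiles of $Re_B$ and $Fr$ using the quoted thresholds ($Re_B \approx 7$ and $Fr \approx 1$); the profiles themselves are synthetic stand-ins, not simulation data.

```python
# Sketch: extract LLD and SLD from vertical profiles using the quoted thresholds.
import numpy as np

def layer_depth(z, quantity, threshold):
    """Depth below the free surface (z = 0) where `quantity` first exceeds `threshold`."""
    exceeds = quantity > threshold
    return z[np.argmax(exceeds)] if exceeds.any() else None

z = np.linspace(0.0, 1.0, 100)       # depth coordinate, free surface at z = 0
Re_B = 0.5 * np.exp(5.0 * z)         # toy buoyancy Reynolds number profile
Fr   = 0.1 * np.exp(4.0 * z)         # toy turbulent Froude number profile

print("LLD =", layer_depth(z, Re_B, 7.0))   # laminar layer depth, Re_B ~ 7
print("SLD =", layer_depth(z, Fr, 1.0))     # stratified layer depth, Fr ~ 1
```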
This paper reports on direct numerical simulations of stratified open channel flows subject to a varying surface heat flux. The results have found that: (i) increasing the diurnal time scale allows the flow to sweep through a wider range of flow states, from turbulent to strongly stratified; (ii) simulation data of the mixing efficiency and turbulent Froude number from this temporally varying and spatially inhomogeneous flow, which undergoes strong temporal forcing, collapse well onto the parameterisation scheme of Garanaik & Venayagamoorthy (J. Fluid Mech., vol. 867, 2019, pp. 323-333) found for homogeneous stratified flows; and (iii) three distinct classifications (a persistent layer, a diurnal layer, and one where the layer does not exist) of the laminar layer depth (LLD) and stratified layer depth (SLD) behaviour persist throughout a diel cycle and form a regime map given values of $\lambda_{B}$, $Re_{\tau}$ and $\hat{t}$. The transitions are: from no LLD (NL) to a diurnal LLD (DL), $\hat{t} \propto Re_{\mathscr{L}}^{4.5}$; from DL to a persistent LLD (PL), $\hat{t} \propto Re_{\mathscr{L}}^{-0.5}$; from no SLD (NS) to a diurnal SLD (DS), $\hat{t} \propto \lambda_{B}^{0}$; and from DS to a persistent SLD (PS), $\hat{t} \propto \lambda_{B}^{1}$. The regime maps may be used as a predictive model to calculate when suppressed mixing transpires in rivers.
Introduction
Water bodies such as rivers, estuaries and canals are subject to diurnal solar radiation. This unsteady heating causes these channels to experience time-varying flow states within their diel cycle. During the day, short-wave radiation at the free surface is transmitted and progressively absorbed through the water column, leading to a non-uniform stable thermocline. Stable stratification is therefore strongest near the free surface and gradually weakens towards the channel bed. This stratification acts to damp the turbulent mixing within these regions. Accounting for the diurnal variation of solar fluxes, solar radiation tends to strengthen stratification, while the absence of radiation allows the temperature field to relax, increasing turbulent mixing [1]. This mixing determines the overall ecological health of limnological ecosystems [2]. As turbulence is suppressed by stratification due to daytime heating, in extreme cases this reduction in mixing limits the transport of scalars such as dissolved oxygen, sediments and nutrients, causing the benthic region of the water column to experience conditions of eutrophication or hypoxia [3-5].
Among Australian inland rivers, during long periods of drought and high radiative forcing, flow rates are significantly reduced, decreasing the turbulence production from shear at the bottom of the channel. These low flows and persistently stratified states contributed to the extensive algal outbreaks that resulted in mass fish mortality in the Menindee region of NSW over the summers of 2018 and 2019 [6]. Cyanobacterial blooms contribute to oxygen depletion in the near-wall, or bottom, region of the river channel as their substantial biological matter sinks to the bed and consumes the oxygen reserves during decomposition. If a rapid breakdown of stratification occurs, the sudden overturning of the oxygenated epilimnion with the anoxic hypolimnion dilutes the oxygen in the water column and suffocates the wildlife concentrated in the previously oxygen-rich region [7]. Understanding the processes that affect the mixing and turbulence of diurnally heated stratified flows can help predict and mitigate these detrimental events.
Using global energy balances, Simpson and Hunter [8] developed a criterion for the onset of stratified conditions on continental shelves that included the mechanical work done by tidal stresses and the solar heat input. The mixing criterion is therefore simply the ratio of the rate of stratifying thermal energy input to the rate of destratifying turbulent kinetic energy production. The work was later extended by Holloway [9] to estuaries, incorporating a wind-mixed system and a depth-dependent heating function with no tidal forces, and by Bormans and Webster [1] to turbid rivers. These works determined that the degree of stratification depends on the magnitude of the tidal or river discharge, the wind speed, and the attenuation of the radiative heat flux through the water column, which depends on the water body's turbidity and channel depth. Bormans and Condie [10], in their study of the stratification dynamics of rivers, modelled the diurnal variation of solar heating as a depth-varying volumetric heat source following the Beer-Lambert law, $Q(Y,T) = \alpha I_s(T)\, e^{\alpha (Y-\delta)}$, where T is time, Y is the vertical position, α is the absorption coefficient, δ is the channel depth and $I_s$ is the radiant heat flux through the surface. Williamson et al. [11] characterised stratification in radiatively heated open channels using DNS. The bulk stability parameter λ in that case was the ratio of a confinement length scale related to the domain height to the Obukhov length scale $\mathscr{L} = \rho_o C_p U_\tau^3/(g \beta I_s)$, where $\rho_o$ is the reference density, $C_p$ is the specific heat of the fluid, $U_\tau$ is the friction velocity, g is the gravitational acceleration and β is the coefficient of thermal expansion [11, 12]. Flows with λ = 1 were found to be strongly stratified, with the flow in local energetic equilibrium over much of the channel, and at $Re_\mathscr{L} = \mathscr{L} U_\tau/\nu \simeq 400$, where ν is the kinematic viscosity, laminarisation occurs in the upper layer y ≈ 0.8 [11]. Because descriptions of the state of turbulence in stratified flows can be characterised by the stratified outer layer, the Reynolds number can be defined using the bulk stability parameter $\lambda_B$, resulting in the bulk parameter $Re_\mathscr{L} = Re_\tau/\lambda_B$, where $Re_\tau = U_\tau \delta/\nu$ is the friction Reynolds number. Stably stratified open channel flows are used to investigate many of the environmental flows mentioned above [13, 14]. For open channel flows where stratification is spatially inhomogeneous, the vertical domain can acquire a wide range of gradient Richardson numbers $Ri_g = N^2/S^2$, Froude numbers, specifically the turbulent Froude number $Fr = \varepsilon/(NK)$, and buoyancy Reynolds numbers $Re_B = \varepsilon/(\nu N^2)$ [15-19].
Here, ε is the turbulent dissipation rate, N is the buoyancy frequency, S is the mean vertical shear and K is the turbulent kinetic energy. These parameters are often considered important in governing the effects of buoyancy in stratified flows. Though $Ri_g$ has been considered impractical for quantifying mixing in moderately to strongly stratified flows [13, 20], $Re_B$ and Fr remain parameters of interest for parameterising the effects of stratification on mixing in turbulent flows. $Re_B$ indicates the range of length scales over which motions are considered largely unaffected by stratification [21, 22], while Fr is the ratio of the characteristic timescales of turbulence and stratification and is suggested to be a good indicator of the local state of turbulence in a stably stratified flow [16, 22]. In terms of local states of turbulence, $Re_B < 7$ is thought to lie within a molecular regime [15] and $Fr \ll 1$ within a strongly stratified regime. Understanding where these local mixing regimes exist within the water column and how they vary with bulk parameters is useful when aiming to predict the behaviour of the flow.
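To make these diagnostics concrete, the short sketch below (not from the original paper; the variable names and sample values are assumptions) evaluates $Ri_g$, $Fr$ and $Re_B$ from local values of the dissipation rate, buoyancy frequency, mean shear and turbulent kinetic energy:

    def local_mixing_diagnostics(eps, N, S, K, nu):
        # Gradient Richardson number: ratio of buoyancy to shear
        Ri_g = N**2 / S**2
        # Turbulent Froude number: ratio of turbulence to buoyancy timescales
        Fr = eps / (N * K)
        # Buoyancy Reynolds number: range of scales unaffected by stratification
        Re_B = eps / (nu * N**2)
        return Ri_g, Fr, Re_B

    # Illustrative near-surface values (synthetic, not from the paper)
    print(local_mixing_diagnostics(eps=1e-7, N=1e-2, S=5e-3, K=1e-5, nu=1e-6))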
While Williamson et al. [11] and Issaev et al. [23] studied the transition to statically stably stratified states and the parameterisation of mixing in stratified flows with a constant heating force, and Kirkpatrick et al. [24] investigated the removal of surface radiation from a thermally stratified open channel, only a handful of studies have examined a flow that undergoes diurnal variation of a radiant heat source. Lei and Patterson [25] numerically investigated the natural convection induced by diurnal heating and cooling in reservoirs of varying topography. Their studies involved heating and cooling in shallow and deep reservoirs to observe temperature profiles and flow rates. Their findings show a distinct time lag in the response to the transient thermal forcing, beyond one quarter of the heating-cooling cycle, and a dependence on their control parameter, the Grashof number. Lei and Patterson [25] further observed thermal instabilities to be the dominant mechanism driving vertical mixing in the reservoirs. While this paper does not study reservoirs of varying topography, it does simulate the diurnal solar radiation experienced by river flows. The current paper expands on the work of Williamson et al. [11] by introducing an unsteady radiant heat flux through the surface of an open channel flow, and aims to uncover the local flow states, where these states exist within the water column, and the extent of their changes across the diel cycle as the flow goes through its cyclic heating (stratifying) and adiabatic (destratifying) stages.
This paper is divided into seven sections. Section 2 gives the problem formulation, describing the mathematical set-up of the flow along with the governing equations and the subsequent non-dimensionalisation of important parameters. Section 3 details the numerical simulations, stating all simulation parameters tested for each case. Sections 4 to 6 contain the analysis of the flow's transient response, the layer depths and their regimes, and the relationship between the flow's local flux Richardson number and the turbulent Froude number, respectively. Finally, Section 7 concludes the paper.
Problem formulation
The schematic of the open channel flow, illustrated in Fig. 1, has a free-slip, adiabatic upper surface and a no-slip, adiabatic lower wall. The model is periodic in both horizontal directions, and the flow is driven in the stream-wise direction by a constant uniform pressure gradient. The water column experiences progressive absorption of the surface solar radiation, which gives rise to the thermally stratified state of the flow. This internal heat source is characterised by a volumetric depth-varying heat source Q(Y, T) following the Beer-Lambert law [1], where δ is the height of the channel, α is the absorption coefficient and $I_s$ is the short-wave heat flux, which varies with time. Here, the diurnality of $I_s$ is based on a piecewise function [26] in which the flux is zero at night and peaks at the maximum irradiance $I_m$ at noon, with D signifying the diel cycle, a 24-hour period where D = 86,400 s; this can be used to determine the sunrise time T = 0.25D, the length of the light period, 0.5D, and the sunset time, T = 0.75D.
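The exact piecewise expression for $I_s(T)$ is not reproduced in this extraction; a minimal sketch consistent with the stated properties (zero flux at night, a peak of $I_m$ at noon, sunrise at 0.25D and sunset at 0.75D; the half-sinusoidal shape within the light period is an assumption) is:

    import math

    D = 86400.0    # diel cycle [s]
    I_m = 300.0    # assumed maximum irradiance at noon [W m^-2]

    def surface_flux(T):
        """Diurnal surface heat flux: zero at night, peak I_m at noon (T = 0.5D)."""
        t = T % D
        if 0.25 * D <= t <= 0.75 * D:
            # half-sinusoid over the light period (shape assumed; endpoints match the text)
            return I_m * math.sin(math.pi * (t - 0.25 * D) / (0.5 * D))
        return 0.0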
The diurnal timescale $\hat{t}$ is non-dimensionalised by the channel depth, diel cycle and friction velocity, $\hat{t} = D U_\tau/\delta$, with $U_\tau$ defined as the friction velocity related to the shear stress at the lower surface of the channel and δ as the channel height. The dimensionless new day, sunrise, length of the light period and sunset can therefore be defined as $t = 0$, $0.25\hat{t}$, $0.5\hat{t}$ and $0.75\hat{t}$, respectively. The temperature field is decomposed as $\Theta(X,T) = \theta'(X,T) + \varphi(T)$, where $\theta'(X,T)$ is the fluctuating component and φ(T) is the domain-averaged temperature at time T.
Non-dimensionalising the depth-dependent heat forcing $q_h$ and the fluctuating temperature field by the normalising components $Q_b$ and $\theta_b$ for one diurnal period gives $q_h = Q/Q_b$ and $\theta = \theta'/\theta_b$, where $\bar{Q}(T)$ is the domain-averaged radiant heat flux for one diurnal cycle, $\tilde{\theta}(T)$ is the domain-averaged temperature field, $C_p$ is the specific heat of the fluid and $\rho_o$ is the reference density. Throughout this paper, capitalisation of a variable signifies a dimensional quantity, whereas a lower-case variable indicates a non-dimensional one. Here, $Q_b$ is built from $\bar{I_s}$, the average radiant heat flux through the surface over one diel cycle. Since stratified layer depths are predominately above y = 0.6, the normalisation of $q_h$ and θ, along with this paper's modified definition of the bulk stability parameter $\lambda_B$, is defined over the distance from y = 0.6 to δ. As previous work by Williamson et al. [11], Issaev et al. [23] and Kirkpatrick et al. [24] has shown that local conditions are well aligned with mean flux profiles, the definitions of the normalising components and the bulk stability parameter used here are more directly connected with the local dynamics than a definition based on the entire channel height, as used in those studies.
As mentioned above, the stratification of the channel is characterised by the bulk stability parameter $\lambda_B$, which is the ratio of the confinement scale to a modified bulk Obukhov length scale $L_B$. Here g is the gravitational acceleration and β is the coefficient of thermal expansion, which relates the fluid density to the temperature by $d\rho/\rho_o = -\beta\, d\theta$. The velocity scale $u_b$ entering these definitions is built from $\tau_s$, the surface shear from wind, and $\tau_w$, the wall shear.
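A small sketch of how these bulk parameters might be evaluated, assuming the Obukhov-style scaling given in the introduction (variable names and prefactors are assumptions, not the paper's exact definitions):

    def bulk_parameters(rho0, Cp, u_tau, g, beta, Q_b, delta_c, nu):
        """Obukhov-style bulk length scale and derived parameters.
        The exact prefactors used in the paper are not reproduced here."""
        L_B = rho0 * Cp * u_tau**3 / (g * beta * Q_b)   # modified bulk Obukhov length
        lam_B = delta_c / L_B                           # bulk stability parameter
        Re_L = L_B * u_tau / nu                         # bulk Obukhov Reynolds number
        return L_B, lam_B, Re_L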
Direct numerical simulation (DNS) is applied to solve the Navier-Stokes equations for the stratified open channel flow. Using the Oberbeck-Boussinesq approximation for buoyancy [11], the governing equations for the non-dimensional conservation of mass, momentum and energy are solved, with $u_i$ the Cartesian components of the velocity vector u = (u, v, w), $x_i$ the components of the position vector x = (x, y, z), p the pressure, and $e_x$ and $e_y$ the unit vectors in the x and y directions. The non-dimensional variables are formed from their dimensional counterparts X, U, T and P using δ, $U_\tau$ and $\rho_o$ (e.g. $x_i = X_i/\delta$, $u_i = U_i/U_\tau$, $t = T U_\tau/\delta$, $p = P/\rho_o U_\tau^2$). The friction Reynolds number is $Re_\tau = 400$ and the Prandtl number is $Pr = \nu/\kappa = 1$, where κ is the scalar diffusivity. The problem can therefore be fully defined by specifying $Re_\tau$, Pr, $\lambda_B$, $\hat{t}$ and the non-dimensional turbidity parameter α found in Eq. 1. This paper only considers cases where α = 8, so that the solar radiation does not fully penetrate to the channel bed and stratification is confined to the upper layer of the domain.
Referring back to the boundary conditions: the adiabatic, no-slip bottom surface (y = 0) has u = 0 and ∂θ/∂y = 0, while the adiabatic, stress-free top surface (y = 1) has ∂u/∂y = ∂w/∂y = 0, v = 0 and ∂θ/∂y = 0.
Numerical simulation
Simulations were solved using the three-dimensional, Cartesian-structured, fractional-step finite-volume method described in Armfield et al. [27]. Cell-face velocities were calculated using a Rhie-Chow interpolation, with the spatial discretisation using fourth-order central differencing for the advective terms and second-order central differencing for the diffusive terms for both velocity and scalars. A second-order-accurate Adams-Bashforth scheme was employed for the nonlinear terms, while a Crank-Nicolson scheme was used to advance the diffusive terms. A stabilised bi-conjugate gradient solver was used to solve the pressure-correction equation, while a Jacobi solver was employed for the momentum and temperature equations. The time step was adjusted to keep the Courant number between 0.16 and 0.17.
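As an illustration of this mixed time advance, a minimal sketch of one step combining second-order Adams-Bashforth for a nonlinear tendency with Crank-Nicolson for a linear diffusion operator (a model problem, not the paper's solver; names are assumptions):

    import numpy as np

    def ab2_cn_step(u, N_u, N_u_old, L, dt):
        """One time step: Adams-Bashforth 2 for the nonlinear tendency N(u),
        Crank-Nicolson for the linear diffusion operator L (dense matrix here).
        Solves (I - dt/2 L) u_new = (I + dt/2 L) u + dt (1.5 N_u - 0.5 N_u_old)."""
        n = u.size
        A = np.eye(n) - 0.5 * dt * L
        b = (np.eye(n) + 0.5 * dt * L) @ u + dt * (1.5 * N_u - 0.5 * N_u_old)
        return np.linalg.solve(A, b)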
Table 1 shows the simulation parameters and the grid and domain sizes that were tested. The horizontal axes, x and z, are uniform in grid size, with cell sizes in viscous wall units of Δx+ = 5 and Δz+ = 2.5. The vertical axis is non-uniform, stretched on a log mesh that grows from both ends: between y = 0 and 0.5, the cell size in viscous wall units ranges from Δy+ = 0.5 to 3.3, and the mesh is symmetric over the rest of the domain height. A grid of 512 × 165 × 512 for the constant-forcing case on a domain of 2π × 1 × π was employed and compared with Williamson et al. [11]; the results yielded indiscernible differences, allowing this grid resolution to be used. Comparisons between a half-span domain of 2π × 1 × 0.5π and the full-span domain indicate negligible differences in results when run with a constant heat forcing and identical parameters for each simulation. The vertical profiles of Fig. 2 are averaged over horizontal planes and time, with perturbations from the mean denoted by a prime; ⟨⋅⟩ indicates averaging over horizontal planes and an overbar averaging in time. Figure 2 shows profiles of the mean temperature, velocity, scalar and turbulent shear fluxes, buoyancy Reynolds number $Re_B$ and turbulent Froude number Fr. The similarity of the results in Fig. 2 for both the fluxes and important bulk parameters such as $Re_B$ and Fr demonstrates that the half-span domain can be applied, which in effect lowers the computational cost and run time of each simulation. Simulations were initialised from a realisation of a stably stratified constant-forcing flow. Preliminary tests showed the flow to be independent of the initial conditions: once a quasi-steady state is achieved, statistics are comparable between simulations with identical parameters and grid and domain size regardless of the initialisation state. The bulk Richardson number is defined as $Ri_B = g\beta\langle\Delta\Theta\rangle h/u_b^2$, where h is the simulation domain height, ⟨ΔΘ⟩ is the horizontally averaged temperature difference between the top and bottom of the channel and $u_b$ is the bulk velocity; the bulk Froude number $Fr_B$ is defined through the domain-averaged dissipation, buoyancy frequency and turbulent kinetic energy K, where the overbar here indicates averaging over the entire domain. The turbulent dissipation rate $\varepsilon = 2\nu\,\overline{S_{ij}S_{ij}}$ and buoyancy frequency N, with $N^2 = g\beta\,\partial\langle\Theta\rangle/\partial Y$, are computed from $S_{ij} = \tfrac{1}{2}\left(\partial U_i'/\partial X_j + \partial U_j'/\partial X_i\right)$, the strain rate due to velocity fluctuations, where U′ indicates the fluctuating velocity.
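For illustration, a sketch of a vertical mesh clustered at both boundaries, in the spirit of the stretched grid described above (the tanh stretching function is an assumption; the paper's exact log-mesh formula is not reproduced):

    import numpy as np

    def stretched_grid(ny, beta=2.0):
        """Vertical grid on [0, 1] clustered at both boundaries via tanh stretching."""
        s = np.linspace(-1.0, 1.0, ny)
        return 0.5 * (1.0 + np.tanh(beta * s) / np.tanh(beta))

    y = stretched_grid(165)
    print(y[1] - y[0], y[82] - y[81])   # fine near the wall, coarser at mid-channel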
Figure 3 shows the results of the initial-condition independence tests. There is an observable difference at the start-up non-dimensional time t = 0; however, as time progresses, this difference decreases until the two simulation results collapse onto one another. Convergence here is defined as when the average of each parameter over a diel cycle differs by less than five percent from the previous cycle. Once achieved, the oscillatory flow is fully developed and a quasi-steady state is established. Therefore, for this study, quasi-steady state is defined as when the cycle averages, as well as the maximum and minimum amplitudes, of essential parameters such as the bulk Richardson number $Ri_B$ (Fig. 3a), the bulk Froude number $Fr_B$ (Fig. 3b) and the friction Richardson number $Ri_\tau = g\beta\langle\Delta\Theta\rangle h/u_\tau^2$ (Fig. 5) differ by less than five percent from the previous cycle. Under these conditions, the time taken for the flow to converge and reach a quasi-steady state is $t \approx 12$. Figure 4 illustrates the horizontally averaged and time-averaged scalar flux profiles at each new day, rise time, mid-day and set time for Case INF and Case ISF. It is evident in all figures that the profiles are similar when statistics are taken after $t \approx 12$ from the initialisation of the flow.
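The five-percent cycle-to-cycle criterion can be expressed compactly; a sketch (array names are assumptions):

    import numpy as np

    def cycle_converged(t, series, t_hat, tol=0.05):
        """Quasi-steady-state check: the diel-cycle average of a bulk parameter
        differs by less than tol from the previous cycle's average."""
        last = series[t >= t[-1] - t_hat]
        prev = series[(t >= t[-1] - 2 * t_hat) & (t < t[-1] - t_hat)]
        return abs(last.mean() - prev.mean()) <= tol * abs(prev.mean())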
Provided that the mean $Re_\tau$ and $\lambda_B$ of the initialised flow field are similar to the simulation conditions, the flow will reach full development by $t \approx 12$. That is, regardless of the diurnal timescale $\hat{t}$ of the simulation, when the flow is initialised with similar $\lambda_B$ and $Re_\tau$ values to the current simulation, the development time is insensitive to $\hat{t}$. This is shown in Fig. 5, a plot of the friction Richardson number $Ri_\tau$ time series for a constant-forcing case and varying $\hat{t}$. For all cases in Fig. 5, initial conditions are started from a flow with parameters $\lambda_B = 5.8$, $Re_\tau = 400$, Pr = 1 and α = 8. As Fig. 5b, c and d have an unsteady thermal force applied to the surface of the open channel, the flow cycles through different flow states in space and time. $Ri_\tau$ captures the buoyancy effects on the channel; for each case aside from Fig. 5a, buoyancy is at its peak between $t = 0.5\hat{t}$ and $t = 0.75\hat{t}$, while the lowest $Ri_\tau$ lies between $t = 0.25\hat{t}$ and $t = 0.5\hat{t}$. Throughout the diurnal cycle, the flow experiences a repeating cyclic trend in which it becomes progressively stratified as the thermal forcing increases and destratifies with the reduction or absence of the forcing.
Figure 5 shows a fully developed, quasi-steady-state flow established from t = 0 for different $\hat{t}$ values but initialised from the same flow field. That the flow is fully developed right at t = 0 suggests that flow fields from one $\hat{t}$ result can be used to initialise simulations at other $\hat{t}$ values efficiently, provided $\lambda_B$ and $Re_\tau$ are kept constant. As shown in Section 6, diurnally averaged flow quantities such as Fr are relatively insensitive to $\hat{t}$, which supports this conclusion. For the remainder of this paper's simulations, flow fields were initialised with parameters $\lambda_B = 5.8$, $Re_\tau = 400$, Pr = 1 and α = 8 for cases 2 to 5, with statistics taken after one diurnal cycle. The remaining cases are initialised from the aforementioned parameters, though statistics are taken after four diurnal cycles.
Temperature evolution
The temperature evolution under an unsteady thermal forcing is discussed in this section. Figure 6 shows visualisations of the transient temperature field for Case 3 with simulation parameters $\lambda_B = 5.8$, $\hat{t} = 6$, $Re_\tau = 400$, Pr = 1 and α = 8. The colour bar of Fig. 6 is scaled between 0 and 1, normalised by $(\theta - \theta_{min})/(\theta_{max} - \theta_{min})$, to highlight the turbulent features in each image. Initially, as shown in Fig. 6a, the flow is unstratified in the turbulent region of the lower half of the channel and progressively stratified in the upper half of the domain, as shown by the gradual colour changes near the surface of the channel, y = 0.8 to y = 1. As the flow evolves, turbulence becomes noticeably more energetic as the temperature field mixes through the channel and stratification is broken down, as presented in Fig. 6c. Between t = 0 and $t = 0.25\hat{t}$ (Fig. 6a and b), the flow experiences zero surface heat forcing and turbulence begins to extend into the upper half of the channel, reducing the thickness of the near-surface laminar layer and the strength of the stratification. This behaviour continues past the introduction of the heat forcing at $t = 0.25\hat{t}$ until re-stratification occurs between $t = 0.5\hat{t}$ and $t = 0.75\hat{t}$ (Fig. 6c and d), where the near-surface region of the channel, y = 0.8 to y = 1, settles into an almost-laminar state and exhibits distinct shear instabilities.
Figure 7 shows the vertical temperature profiles of cases 2 to 5 (refer to Table 1) to highlight the effects of varying diurnal timescales on the flow, comparing the diurnal flows with their constant-forcing counterpart. As the vertical temperature profiles vary throughout a diel cycle for the diurnal cases, profiles at t = 0, $0.25\hat{t}$, $0.5\hat{t}$ and $0.75\hat{t}$ are shown. The constant-forcing case in Fig. 7 shows a well-mixed turbulent region near the bottom wall transitioning to a thermocline region extending from y = 0.7 to 1. Unsurprisingly, for all diurnal cases the flow sweeps through a wider range of temperature profiles, from an almost neutral state, to a weakly stratified flow, and to a strongly stratified flow. This range becomes greater as $\hat{t}$ increases: Fig. 7b, with $\hat{t} = 6$, experiences weakly to strongly stratified flow, compared to Fig. 7d, with $\hat{t} = 24$, which transitions through a completely isothermal state (the $t = 0.25\hat{t}$ line), to a weakly stratified flow (the $t = 0.5\hat{t}$ line) and finally to a strongly stratified flow (the $t = 0.75\hat{t}$ line). This sweep is attributed to the length of the diurnal cycle: shorter periods do not allow the flow to respond fully to the external heat forcing, compared to a longer diurnal cycle. Observing the near-wall to mid-height region of the channel, y = 0 to 0.6, this region remains relatively unstratified throughout the diel cycle; however, as $\hat{t}$ increases, the flow begins to experience changes within this area. This behaviour can be seen when comparing the stratified regions of Case 3, where $\hat{t} = 6$ (Fig. 7b), and Case 5, where $\hat{t} = 24$ (Fig. 7d). For Case 3, as it progresses through the diurnal cycle, each profile line in Fig. 7b collapses onto the others in the y = 0 to 0.6 region, whereas in Fig. 7d the varying time profiles of Case 5 do not collapse in this region.
Comparing profiles where $\hat{t}$ remains constant and $\lambda_B$ varies, Fig. 8 shows that, unlike the temperature profiles of Fig. 7, the differences between the temporal temperature profiles of each case decrease as $\lambda_B$ increases. Comparing the temperature profiles across the $\lambda_B$ values in Fig. 8, the temperature difference at the thermocline rises, with the thermocline extending deeper through the channel, as $\lambda_B$ increases. For changes in $\lambda_B$ beyond the value at which the flow can be considered strongly stratified throughout the diel cycle, the lower mixed region of the channel remains insensitive to changes in the two parameters, as shown by Case 3 (Fig. 7b) and Case 7 (Fig. 8b).
Removal of diurnal heat source
Considering the period of the flow where the heat source is suspended, the destratification rate for a thermally stratified flow after the removal of the heat source can be described following Kirkpatrick et al. [28], in terms of a friction Richardson number $Ri_\tau$ built from the temperature difference ΔΘ between the top and bottom of the channel and the timescales $t_\tau = h/u_\tau$ and $t_N = (g\beta\,\Delta\Theta/h)^{-1/2}$, together with a bulk stability parameter λ. The destratification process can be thought of as the time over which the initial state of a flow moves from one strongly affected by buoyancy to a final state in which buoyancy effects are minimal [28]. The relationship between this destratification rate $D_s$ and $Ri_\tau$ when $Ri_\tau > 15$ follows the scaling given in [28]. Equations 22 to 26 are used to model a destratification time $t_d$ for a flow with no surface cooling [28], where the subscripts i and f represent the initial and final parameter values and $u_{\tau,avg}^2$ is the average of $u_\tau^2$ between the initial time and the final time at which $\Delta\Theta_f = 0$. As these simulations do not experience a time when $\Delta\Theta_f = 0$, $u_{\tau,avg}^2$ is taken over the period when the radiant heat flux is removed from the flow. The modified bulk stability parameter $\lambda_B$ (given in Eq. 21) and the Kirkpatrick et al. [28] bulk stability parameter λ are related by $\lambda = 0.061\lambda_B$ for the constant-forcing case and $\lambda = 0.286\lambda_B$ for the diurnal cases.
In Table 2, the results for the destratification time defined through Eq. 27 are taken from when the flow experiences zero radiant heat flux, between t = 0 and $0.25\hat{t}$ and between $0.75\hat{t}$ and $\hat{t}$. According to Table 2, all diurnal cases would be expected to experience a period where ΔΘ = 0, as the predicted $t_d$ from Eq. 27 is approximately less than or equal to the phase over which the surface heat source is discontinued, $0.5\hat{t}$, for each case. This expected destratification, however, does not occur for cases 3, 4 and 18, as shown in Fig. 9; and while Case 5 does destratify during its diurnal cycle, $t_{sim}$, the time taken for Case 5 to reach $\Delta\Theta_f = 0$ from $t = 0.75\hat{t}$ (when heating is completely removed from the flow), is $t \approx 11$, a difference of 6.4 from $t_d$. These discrepancies may be due to the difference between this paper's flow and that of Kirkpatrick et al. [28], who investigated thermally stratified turbulent channel flows after the complete removal of a radiant heat source rather than under a diurnally varying one. As Eq. 27 depends on the temperature difference, diurnal heating affects ΔΘ, as shown in Fig. 9, with the maximum ΔΘ in the constant-forcing simulation being much greater than for a flow subject to diurnal heating. Results from Table 2 show that $t_{sim}$ compares well with $2.1\,t_d$.

Layer depths and flow regimes
Laminar and stratified layer depths
This section investigates the changing laminar and stratified layer depths as the flow evolves through time subject to a transient radiant heat source. In this analysis, a laminar layer indicates minimal to no turbulent mixing within the region; given the right conditions, a flow can exhibit a persistent, diurnal or absent laminar layer close to the free surface throughout its diel cycle. These three behaviours are also exhibited by stratified layer depths, which are depths where the variations of temperature are highest and which tend to act as a thermal barrier to vertical mixing [29].
The laminar layer depth (LLD) in this paper is defined as the depth from the free surface at which the buoyancy Reynolds number $Re_B$, a buoyancy parameter used to indicate the separation between the smallest eddy affected by buoyancy and the smallest scale of turbulence, is equal to 7 [15]. Within the diffusive range $Re_B < 7$ for stably stratified shear flows, turbulence was found to be strongly damped and minimal lateral mixing occurs [15].
The stratified layer depth (SLD) in this paper is defined as the distance from the free surface to where the turbulent Froude number Fr, defined as the ratio of the buoyancy to turbulence timescales, is equal to 1, since $Fr \ll 1$ lies within the transition to a strongly stratified regime [16]. Since Fr is defined through local turbulent quantities, it can be considered a reasonable measure of the local state of turbulence in a stably stratified flow, with $Fr \sim 1$ shown to be a good definition for the onset of strong stratification in many studies of turbulent flow [16, 30]. Furthermore, the Froude number is a significant parameter for inferring the state of turbulence and parameterises mixing efficiency well in stably stratified turbulent flows [16, 31-33].
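A sketch of how the LLD and SLD might be extracted from horizontally averaged profiles of $Re_B$ and Fr (the linear interpolation at the crossing and the function names are assumptions):

    import numpy as np

    def layer_depth(y, q, q_crit):
        """Depth from the free surface (y = 1) to where profile q first reaches
        q_crit when scanning downward; 0 if no layer, 1 if it spans the channel."""
        idx = np.argsort(y)[::-1]          # order points from surface to bed
        ys, qs = y[idx], q[idx]
        cross = np.where(qs >= q_crit)[0]
        if cross.size == 0:
            return 1.0                     # layer spans the whole depth
        i = cross[0]
        if i == 0:
            return 0.0                     # layer absent at this instant
        f = (q_crit - qs[i - 1]) / (qs[i] - qs[i - 1])
        return 1.0 - (ys[i - 1] + f * (ys[i] - ys[i - 1]))

    # LLD: layer_depth(y, Re_B_profile, 7.0); SLD: layer_depth(y, Fr_profile, 1.0)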
Figure 10 shows the time series of the LLD and SLD for simulation cases 2 to 5. For the constant-forcing case, Fig. 10a shows an almost constant layer depth, with an average LLD = 0.05 and SLD = 0.57, while oscillating layer depths occur for the diurnal cases 3 to 5. All diurnal simulations have a time-averaged LLD of 0.10. The average SLD for the diurnal cases varies as $\hat{t}$ increases: Case 3 ($\hat{t} = 6$) has an average SLD = 0.56 (much like the constant-forcing case, where SLD = 0.57), whereas Case 5 ($\hat{t} = 24$) has an average SLD = 0.46. Regarding the minimum depths of the two layers, as $\hat{t}$ increases, Fig. 10b to d show the minimum depths of the layers decreasing, and in some instances these layer depths go to zero. The maximum LLD and SLD, in all cases, increase with $\hat{t}$.
Figure 10d reveals a rapid decline of the SLD at the end of each diurnal cycle, indicating that the channel becomes weakly stratified over its entire depth; this is the only case that exhibits such a complete breakdown before recovering to an almost constant SLD. This also highlights that in extreme cases where $\hat{t}$ is very small, the averaged behaviour mirrors that of the constant-forcing counterpart.
When changing $\lambda_B$ and keeping $\hat{t}$ constant, the flow progressively increases in stratification and its LLD and SLD increase in size, as shown in Fig. 11a, where $\lambda_B = 1.5$ has no LLD throughout its diel cycle, Fig. 10b, where $\lambda_B = 5.8$ has a diurnal LLD, and finally Fig. 11b, where the LLD is persistent throughout the diel cycle. Furthermore, the LLD grows and penetrates further down the water column as $\lambda_B$ increases. Similar behaviour is observed for the SLD. These behaviours of both the LLD and SLD are much like those seen when $\hat{t}$ increases at the same $\lambda_B$, as shown above.
For all diel simulations there is a distinct lag between the thermal forcing and the flow response for both the LLD and SLD, as shown in Figs. 10 and 11, where the LLD and SLD are slightly out of phase with the black dashed lines representing the unsteady heat forcing. Furthermore, for all cases the SLD is greater than the LLD, with the maximum SLD ranging between 0.55 and 0.75.
The results in Fig. 10 demonstrate that, with increasing $\hat{t}$, the flow transitions from persistently laminar in the near-surface region to diurnally laminar. The SLD also demonstrates this transition, although a flow that is diurnally laminar is not necessarily diurnally stratified, as Fig. 10c shows. In certain cases (refer to Table 1), the flow may also attain a regime where it never acquires an LLD or an SLD (Case 8), a regime where it has no LLD but a diurnal SLD (Case 6, Fig. 11a), and a regime where the LLD and SLD are persistent (Case 7, Fig. 11b). In the following section, a regime map of these behaviours for the LLD and SLD is presented, where each simulation falls into one region of the LLD regime map and another of the SLD regime map. The LLD map denotes three flow regimes: never laminar (NL), diurnally laminar (DL) or persistently laminar (PL), while the SLD map denotes never stratified (NS), diurnally stratified (DS) or persistently stratified (PS). It can therefore be said that, with increasing $\lambda_B$, the flow transitions into higher stratification regimes (NL → DL → PL or NS → DS → PS).
Flow regimes
The locations of the LLD and SLD depend on the governing parameters, $\lambda_B$ (or $Re_\mathscr{L} = L_B U_\tau/\nu = Re_\tau/\lambda_B$) and $\hat{t}$. Simulations of varying $\lambda_B$ and $\hat{t}$ have been used to identify the flow regimes defined through these governing parameters and to locate these regimes on a regime map. The regime map in Fig. 12 plots the flow's response against $Re_\mathscr{L}$ for the LLD and against $\lambda_B$ for the SLD, each versus $\hat{t}$. As the LLD is a viscous length scale defined by $Re_B \approx 7$, $Re_\mathscr{L}$ is used as the relative viscous parameter to map the regime of the LLD. $Re_B$ is employed to define the LLD because it is often used as an indicator of the collapse of turbulence, or transient relaminarisation, in stratified flows. It is defined through the ratio of characteristic length scales, the Ozmidov length scale $l_O$ to the Kolmogorov length scale, in stratified turbulent flow [18, 21, 34]. The Ozmidov length scale $l_O$ is the scale of the smallest (vertical) eddy influenced by buoyancy, whereas the Kolmogorov length scale characterises the smallest scales of motion [21]; $Re_B$ therefore indicates the dynamic range of scales over which motions remain unaffected by both the buoyancy force that damps the larger scales and the viscous dissipation that affects the small [21]. Because $Re_\mathscr{L}$ is a bulk scale analogous to $Re_B$ in equilibrium flows [11], it is on this basis that it is used to construct a regime map for laminarisation in the $\hat{t}$-$Re_\mathscr{L}$ space.
While $Re_\mathscr{L}$ is used to characterise the regime map for the LLD, $\lambda_B$ is the characterising parameter for the regime map based on the stratified layer depth. The SLD is defined as the location from the free surface where $Fr \approx 1$, since the strongly stratified regime is found for $Fr \ll O(1)$ [16, 22, 32]. In the limit of strong stratification, buoyancy effects are dominant, and hence it is reasonable to assume that this paper's bulk stability parameter $\lambda_B$ is appropriate for mapping the SLD regime.
The LLD and SLD may break down and re-establish throughout the diel cycle; however, at sufficiently low $\lambda_B$ or high $Re_\mathscr{L}$, these layers may be completely non-existent and the flow fully turbulent over the entire channel depth. At very high $\lambda_B$ or low $Re_\mathscr{L}$, the layer depths may persist through the diel cycle, as shown in Fig. 10. The first classification, a non-existent LLD or SLD throughout the diurnal cycle, is denoted NL and NS, respectively; the second, an LLD or SLD that breaks down and re-establishes throughout the diurnal cycle, is denoted DL and DS; and the last, a persistent LLD or SLD, is denoted PL and PS.
The transition lines between these regimes are found by identifying the region of the transition and fitting a trend line on the regime map. For Fig. 12a, the transition from NL to DL is given by $\hat{t} = 2.7 \times 10^{-10} Re_\mathscr{L}^{4.5} - 0.1$ and the transition from DL to PL by $\hat{t} = 1.56 \times 10^{2} Re_\mathscr{L}^{-0.5} - 18$. The boundary between NS and DS, i.e. a neutral flow, exists at $\lambda_B = 1.01$, with the transition from DS to PS governed by $\hat{t} = 3\lambda_B$. NS is not labelled in Fig. 12b but exists within the region where $\lambda_B \le 1.01$.
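These fitted transition lines can be wrapped into a simple classifier. In the sketch below, the side of each curve assigned to each regime is inferred from the simulation cases and the worked example in the next subsection rather than stated explicitly in the text:

    def lld_regime(t_hat, Re_L):
        """Laminar-layer regime from the fitted transition curves."""
        if t_hat < 2.7e-10 * Re_L**4.5 - 0.1:
            return "NL"                        # never laminar
        if t_hat < 1.56e2 * Re_L**-0.5 - 18.0:
            return "PL"                        # persistently laminar
        return "DL"                            # diurnally laminar

    def sld_regime(t_hat, lam_B):
        """Stratified-layer regime from the fitted transition curves."""
        if lam_B <= 1.01:
            return "NS"                        # never stratified
        if t_hat < 3.0 * lam_B:
            return "PS"                        # persistently stratified
        return "DS"                            # diurnally stratified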
The regime maps indicate a direct relationship between $\hat{t}$ and $\lambda_B$ (or, for the LLD, $Re_\mathscr{L}$, which is inversely related to $\lambda_B$) as the regimes transition through NL/NS, DL/DS and PL/PS. This behaviour is as expected: $\lambda_B$ is a determining value for the stratification level, and as $\lambda_B$ increases the flow moves from NL to DL to PL. The same principle applies to the SLD behaviour.
Regime map application
To apply the regime maps, values of $\lambda_B$, $Re_\tau$ and $\hat{t}$ are required, and Eqs. 9 and 12 must be calculated. Here we give an example of the application of the regime maps. Data taken from a site at the Bourke Weir pool, located on the Darling River, in 1997 give flow rates during late September and early October of around 3.5 m³ s⁻¹ [35]. Though the surface heat flux over the Bourke Weir is difficult to find, the average surface heat flux over Maude Weir can be approximated as ≈ 170 W m⁻² [36]. Given a cross-sectional area of around 97 m² for the Bourke Weir and a depth of δ = 4 m, the flow velocity can be calculated to be around $U_b$ = 0.04 m s⁻¹ [35]. The shear stress at the wall of the river can be determined from the roughness of the bed channel, $r_p$, which is equal to 0.05 [35]. Here, the Reynolds number is $Re = U_b \delta/\nu = 1.4 \times 10^5$. Analysis of average wind velocities at the recording station in Bourke indicates minimal variation through the year, with average speeds between 1.2 and 2.1 m s⁻¹ over the period 1991-1995 [37], measured at an altitude of 107 m. Using a logarithmic wind profile [38], $U_{10}$ can be interpolated to be around 1.7 m s⁻¹. From here, a surface shear stress can be found.
Turbidity at Bourke can range between 9 and 1740 NTU [39], though there is a strong correlation between declining turbidity and decreased flow rates [37], with turbidity at Bourke in August 1995 dropping to around 20 NTU at flow rates of 3.5 m³ s⁻¹ [37]. A value of 20 NTU has been taken to calculate the attenuation coefficient α.
With all the parameters listed above and summarised in Table 3, $\lambda_B$, $Re_\tau$ and $\hat{t}$ can be solved once Eqs. 9 and 12 are calculated. With these parameters, $\lambda_B = 17.6$, $Re_\tau = 8.4 \times 10^3$, $\hat{t} = 45$ and $Re_\mathscr{L} = 478$ for this example flow. From Mitrovic et al. [35], during the period over which $U_b$ was calculated (September-October 1997) the flow was persistently stratified, with persistent stratification defined as a temperature difference between the top and bottom of the water column greater than 0.5 °C for more than five days. Though temperatures at the surface of the water column drop during the night, the difference between surface and bottom temperatures does not fall below 0.5 °C during this time. For this flow, from the regime maps of Fig. 12, the flow lies in the NL regime on the LLD map and the PS regime on the SLD map. Overall, the flow in this instance has no laminar layer depth within its diurnal timescale of $\hat{t} = 45$ but has a stratified layer depth that persists throughout the entire diurnal period. Since the maps indicate a persistent SLD, this is in agreement with the data of Mitrovic et al. [35], who observe persistent stratification within this period.
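Running the classifier sketched in the previous subsection on these field-derived values reproduces the stated regimes:

    # Bourke Weir example: lam_B = 17.6, t_hat = 45, Re_L = 478
    print(lld_regime(45.0, 478.0))   # "NL": 45 < 2.7e-10 * 478**4.5 - 0.1 (~310)
    print(sld_regime(45.0, 17.6))    # "PS": 45 < 3 * 17.6 = 52.8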
Parameterisation of $R_f^*$ as a function of Fr
This section aims to determine whether previous parameterisations of the mixing efficiency can be applied to a diurnally heated, stratified channel flow; it is distinct from the previous sections on the layer depths and their regimes. In many studies of stratified flows, much emphasis is placed on the parameterisation of mixing, one notable relation being the turbulent Froude number-flux Richardson number ($Fr$-$R_f^*$) framework. Previous simulation data have found the turbulent Froude number Fr to be a strong parameter for parameterising the mixing efficiency and inferring the localised state of turbulence in stably stratified flow [16, 23, 32], though the relation has not yet been tested on diurnally heated open channel flows. In this paper, the irreversible flux Richardson number $R_f^*$ is used as the mixing efficiency parameter. This definition of the flux Richardson number quantifies the mixing efficiency in stably stratified flows while removing stirring, a large-scale, non-diffusive, reversible flux, from the equation [40]. $R_f^*$ can be defined as $R_f^* = \varepsilon_{pe}/(\varepsilon + \varepsilon_{pe})$, where $\varepsilon_{pe}$ is the dissipation rate of the turbulent potential energy and is approximated by the density (scalar) variance dissipation rate $\varepsilon_p$ [41]. Here, ρ is the density (temperature) and $\rho_i'$ is the fluctuating density. For strongly stratified flows, $Fr \ll 1$, the irreversible flux Richardson number is approximately constant, $R_f^* \propto Fr^0$; moderately stratified flows, $Fr \sim O(1)$, exhibit the relation $R_f^* \propto Fr^{-1}$; and for weakly stratified conditions, $Fr \gg 1$, $R_f^* \propto Fr^{-2}$ [16, 23, 32]. The Fr-based framework defined by Garanaik & Venayagamoorthy [16] differs from our definition of the SLD in that the SLD is based on the distance from the free surface at which $Fr \approx 1$, placing the location of the SLD within the transition region between weakly and strongly stratified. Due to the nature of open channel flows and the depth- and time-dependent heating function, this paper's flow is expected to exhibit a wide range of Fr and $R_f^*$ both spatially (in the vertical direction) and temporally. The temporal variation is most notable for long diurnal timescales, where the change in regime over one diurnal cycle at a specific location is most evident (Fig. 13d).
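A sketch of this piecewise parameterisation (the prefactor $c_0$ and the branch boundary $Fr_{low}$ are illustrative assumptions chosen only to make the branches join continuously; they are not the fitted constants of [16]):

    def Rf_star_param(Fr, c0=0.17, Fr_low=0.3):
        """Piecewise Fr scaling of the irreversible flux Richardson number [16]."""
        if Fr <= Fr_low:
            return c0                      # strongly stratified: Rf* ~ Fr^0
        if Fr <= 1.0:
            return c0 * Fr_low / Fr        # moderately stratified: Rf* ~ Fr^-1
        return c0 * Fr_low / Fr**2         # weakly stratified: Rf* ~ Fr^-2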
Figure 13 illustrates the irreversible flux Richardson number as a function of the turbulent Froude number for cases 2 to 5. Each point represents a specific time, while each marker shape indicates where the data lie along the channel height. The vertical dotted black line denotes Fr = 1, while the solid black lines take the form $Fr^{-1}$, the closely spaced dashed line the form $Fr^{-2}$, and the horizontal, widely spaced dashed line $Fr^0$. The dot-dashed horizontal line indicates $R_f^* = 0.17$. Across Fig. 13a to d, the simulation data collapse reasonably well onto the parameterisation of Garanaik & Venayagamoorthy [16], with all cases supporting the theoretical critical value $R_f^* = 0.15$-$0.21$, above which turbulence cannot be sustained at steady state [15, 42, 43].
With regard to the constant-forcing case, the flow exists mostly within a moderately to strongly stratified regime, as shown in Fig. 13a. For most of the constant-forcing case, the flow stays constant in its stratification regime along the vertical positions of the channel column, except at y = 0.3. This can be noted in Fig. 13a, where the y-position points stay relatively fixed in Fr and $R_f^*$, as opposed to the scattering observed in Fig. 13d, where the y = 0.3 red-circle points show differing Fr and $R_f^*$ values. This scattering behaviour is seen in all diurnal cases in Fig. 13: each region of the water column experiences a range of Fr throughout its diel cycle, and this range broadens as $\hat{t}$ increases. Furthermore, the free surface undergoes greater shifts in stratification level than the near-wall regions between the diurnal periods. Figure 13d exhibits all stratification regimes, with the strongly stratified regime following the critical flux Richardson number value, $R_f^* \propto Fr^0$. Where the case is moderately stratified, to the left of the vertical Fr = 1 line on the plot, the points follow $R_f^* \propto Fr^{-1}$ well, and this applies equally to the weakly stratified regime on the right of the Fr = 1 line, which follows $R_f^* \propto Fr^{-2}$ adequately. This behaviour is similar to the time series and vertical profiles at $\hat{t} = 24$, revealing that at large $\hat{t}$ the flow exhibits periods of stratification and abrupt collapses of it at specific locations within the vertical water column.
Conclusion
This paper seeks to determine the effects of varying diurnal timescales on stably stratified channels as a model for radiatively heated river flow. The surface heating irradiance acts as potential energy and suppresses the turbulence, in competition with the turbulent kinetic energy production through shearing at the bottom of the channel. In this flow, the near-wall region is turbulent while the mid-channel and near-surface regions are stratified. As the thermal source is removed and re-introduced, the stratification alternately decreases and then increases.
This paper provides a description of the local flow states of the open channel, showing where they exist within the water column and how they change across the diel cycle as the flow goes through its cyclic heating period. Diurnal timescales between $\hat{t} = 6$ and 24 with $\lambda_B = 5.8$, along with $\hat{t} = 6$ with $\lambda_B = 1.5$ and $\lambda_B = 17.45$, for parameter values $Re_\tau = 400$, Pr = 1 and α = 8, are examined; results for the temperature profiles and the laminar and stratified layers are shown, and the flow regimes are presented. The flow is shown to exhibit either a fully neutral state, or a diurnally or persistently stratified state. It is essential to understand the conditions for the transition between these regimes: they determine whether the flow, within its diel cycle, will experience a breakdown of stratification or persist in its stratified state with minimal mixing.
A relationship was found between the diurnal timescale $\hat{t}$ and the vertical extent of the LLD and SLD. A regime map was produced with the transition from NL to DL given by $\hat{t} = 2.7 \times 10^{-10} Re_\mathscr{L}^{4.5} - 0.1$, DL to PL by $\hat{t} = 1.56 \times 10^{2} Re_\mathscr{L}^{-0.5} - 18$, NS to DS by $\lambda_B = 1.01$, and DS to PS by $\hat{t} = 3\lambda_B$. This regime map may be of direct use in identifying flow conditions that can lead to adverse phenomena such as cyanobacterial blooms, caused by the rapid breakdown of a persistently stratified and stagnant flow [6, 35].
For these flow conditions, the irreversible flux Richardson number $R_f^*$ and its relationship with the turbulent Froude number Fr collapse well onto previous parameterisations of turbulent mixing in stratified flows [16]. Increasing the diurnal timescale broadens the range of stratification levels throughout the flow's diurnal cycle, and the thermal buoyancy influence extends further down the depth of the channel. At very high $\hat{t}$ the flow moves between a fully neutral state and a steady-state equilibrium within its diurnal cycle.
Fig. 1 Schematic of radiatively heated open channel flow
Fig. 2 Comparison profiles of a full- and half-span channel for a constant-forcing case with $\lambda_B = 5.8$, $Re_\tau = 400$, Pr = 1 and α = 8: a stream-wise velocity, b instantaneous temperature difference relative to the wall temperature $\theta_w$, c turbulent shear stress, d scalar flux, e buoyancy Reynolds number and f turbulent Froude number profile. Red solid lines = full-span and blue dashed lines = half-span domain
Fig. 3 Time series of a $Ri_B$ and b $Fr_B$ for Case INF and Case ISF with simulation parameters $\lambda_B = 2.9$, $\hat{t} = 3$, $Re_\tau = 400$, Pr = 1 and α = 8. Red solid lines = neutral initial condition and blue dashed lines = stratified initial condition
Fig. 9 Temperature difference ΔΘ plotted against time t: a Case 2, constant forcing; b Case 4, diurnal with $\hat{t} = 12$; c Case 18, diurnal with $\hat{t} = 18$; and d Case 5, diurnal with $\hat{t} = 24$
Fig. 11 Laminar and stratified layer depths for Cases 6 and 7 with simulation parameters $\hat{t} = 6$, $Re_\tau = 400$, Pr = 1 and α = 8: a Case 6, $\lambda_B = 1.5$, and b Case 7, $\lambda_B = 17.45$. The vertical axis is flipped to show clearly where the layer depths lie within the water column. The dashed black lines signify the surface radiant heat flux $I_s$, scaled to a secondary y-axis defined by Eq. 2 on the right of the plots. Red dotted lines indicate the LLD and blue solid lines the SLD
Fig. 12 Regime map for an absent, diurnal or persistent a LLD and b SLD for all cases given in Table 1. Panel a is plotted against $Re_\mathscr{L}$ with the horizontal axis reversed, and panel b against $\lambda_B$. Blue squares indicate an absence of either the LLD or SLD, red circles indicate diurnal behaviour, and black crosses persistent laminarisation or stratification
Fig. 13 $R_f^*$ as a function of Fr for cases 2 to 5 with parameters $\lambda_B = 5.8$, $Re_\tau = 400$, Pr = 1 and α = 8. Each marker represents a vertical position on the water column at a point in time, with red circles at y = 0.3, blue diamonds at y = 0.5, green squares at y = 0.6, purple crosses at y = 0.7 and orange triangles at y = 0.9
Table 1
Simulation cases and parameters
Table 2
Destratification time for each simulation, where the superscript * represents the average parameter value at $t = 0.75\hat{t}$ for a given diurnal cycle, $t_d$ the estimate of the destratification time defined by the scaling relation in Eq. 27 and $t_{sim}$ the destratification time taken from the simulation results
Table 3
Summary of variables used to determine $\lambda_B$, $\hat{t}$ and $Re_\tau$ at Bourke Weir pool
| 13,165.6 | 2023-08-04T00:00:00.000 | ["Environmental Science", "Physics"] |
Speckle reduction using deformable mirrors with diffusers in a laser pico-projector
We propose a design for speckle reduction in a laser pico-projector adopting diffusers and deformable mirrors. This research focuses on speckle noise suppression by changing the angle of divergence of the diffuser. Moreover, the speckle contrast value can be further reduced by the addition of a deformable mirror. The speckle reduction ability obtained using diffusers with different divergence angles is compared. Three types of diffuser designs are compared in the experiments. For Type 1 which uses a circular symmetric diffuser the speckle contrast value can be decreased to 0.0264. For Type 2, the speckle contrast value can be reduced to 0.0267 because of the inclusion of an elliptical distribution diffuser. With Type 3 which includes a combination of the circular distribution diffuser and elliptical distribution diffuser, the speckle contrast value can be reduced to 0.0236. For all three types, the speckle contrast value is lower than 0.05. Under this speckle value, the speckle phenomenon is invisible to the human eye. ©2017 Optical Society of America OCIS codes: (030.6140) Speckle; (290.0290) Scattering; (140.0140) Lasers and laser optics; (110.6150) Speckle imaging References and links 1. K. V. Chellappan, E. Erden, and H. Urey, “Laser-based displays: a review,” Appl. Opt. 49(25), F79–F98 (2010). 2. O. Svelto, Principles of Lasers, 4th ed. (Springer, 2009). 3. H. J. Rabal and R. A. Braga, Dynamic Laser Speckle and Applications (CRC Press, 2008). 4. M. N. Akram, Z. Tong, G. Ouyang, X. Chen, and V. Kartashov, “Laser speckle reduction due to spatial and angular diversity introduced by fast scanning micromirror,” Appl. Opt. 49(17), 3297–3304 (2010). 5. N. E. Yu, J. W. Choi, H. Kang, D. K. Ko, S. H. Fu, J. W. Liou, A. H. Kung, H. J. Choi, B. J. Kim, M. Cha, and L. H. Peng, “Speckle noise reduction on a laser projection display via a broadband green light source,” Opt. Express 22(3), 3547–3556 (2014). 6. T. T. Tran, Ø. Svensen, X. Chen, and M. N. Akram, “Speckle reduction in laser projection displays through angle and wavelength diversity,” Appl. Opt. 55(6), 1267–1274 (2016). 7. J. W. Pan and C. H. Shih, “Speckle reduction and maintaining contrast in a LASER pico-projector using a vibrating symmetric diffuser,” Opt. Express 22(6), 6464–6477 (2014). 8. T.-K.-T. Tran, X. Chen, Ø. Svensen, and M. N. Akram, “Speckle reduction in laser projection using a dynamic deformable mirror,” Opt. Express 22(9), 11152–11166 (2014). 9. F. Shevlin, “Optically Efficient Homogenization of Laser Illumination,” IDW, PRJ3 3 (2015). 10. M. Blum, M. Büeler, C. Grätzel, J. Giger, and M. Aschwanden, “Optotune focus tunable lenses and laser speckle reduction based on electroactive polymers,” Proc. SPIE 8252, 825207 (2012). 11. Z. Cui, A. T. Wang, Z. Wang, S. L. Wang, C. Gu, H. Ming, and C. Q. Xu, “Speckle suppression by controlling the coherence in laser based projection systems,” J. Disp. Technol. 11(4), 330–335 (2015). 12. Q. Ma, C. Q. Xu, A. Kitai, and D. Stadler, “Speckle reduction by optimized multimode fiber combined with dielectric elastomer actuator and lightpipe homogenizer,” J. Disp. Technol. 12(10), 1162–1167 (2016). 13. E. G. Rawson, A. B. Nafarrate, R. E. Norton, and J. W. Goodman, “Speckle-free rear-projection screen using two close screens in slow relative motion,” J. Opt. Soc. Am. 66(11), 1290–1294 (1976). 14. F. Shevlin, “Speckle reduction for illumination with lasers and stationary, heat sinked, phosphors,” IDW, PRJ4 4 (2013). 15. W. J. Smith, Modern Optical Engineering, 4th ed. (McGraw Hill, 2007). 
16. B. Redding, G. Allen, E. R. Dufresne, and H. Cao, “Low-loss high-speed speckle reduction using a colloidal dispersion,” Appl. Opt. 52(6), 1168–1172 (2013). 17. F. Riechert, G. Bastian, and U. Lemmer, “Laser speckle reduction via colloidal-dispersion-filled projection screens,” Appl. Opt. 48(19), 3742–3749 (2009). 18. DYOPTYKA miniaturized phase-randomizing deformable mirror, http://www.dyoptyka.com/. 19. F. Shevlin, “Optically efficient directional illumination with homogenization of laser incidence on remote phosphor,” in LDC ’16 (2016). 20. J. W. Pan and C. H. Shih, “Speckle noise reduction in the laser mini-projector by vibrating diffuser,” J. Opt. 19(4), 045606 (2017). 21. D. S. Mehta, D. N. Naik, R. K. Singh, and M. Takeda, “Laser speckle reduction by multimode optical fiber bundle with combined temporal, spatial, and angular diversity,” Appl. Opt. 51(12), 1894–1904 (2012).
Introduction
In recent years, laser projection display technology has developed significantly [1]. The laser projector has a wide color gamut, long lifetime and high optical efficiency compared with traditional projectors. There are also advantages arising from using a laser as the light source in the projector design, such as the monochromaticity, directionality, brightness and coherence of the light [2]. However, the high coherence of the laser light can lead to a speckle effect caused by interference [3]. This speckle phenomenon degrades the image quality of the projection. Therefore, speckle suppression is very important to consider in laser projection displays, and an additional element is needed in the projector design to reduce the speckle effect. Many techniques have been developed for speckle suppression in laser projection displays in recent years [4]. The speckle suppression technologies can be divided into three methods. The first method uses wavelength diversity [5], angle diversity and polarization diversity of the laser for coherence reduction, which can further reduce interference [6]. The second method is to reduce the degree of temporal coherence of the laser by the inclusion of a vibrating diffuser [7], a dynamic deformable mirror [8,9] or electroactive polymers [10]. The third method generates spatially varying independent speckle patterns, for example by including dielectric elastomer actuators (DEA) [11,12] or by moving the screen [13]. The above methods can reduce the speckle contrast value to between 0.03 and 0.05. However, all of these technologies have disadvantages, such as requiring a large system volume and high power consumption, making them unsuitable for laser pico-projector designs, and none of these methods can effectively reduce the speckle contrast in a laser projector display to a level low enough that speckle particles are invisible to the human eye.
In this study, we combine two methods for the design of a speckle reduction element which has commercial applications. In the first method, diffusers (a circular distribution diffuser and an elliptical distribution diffuser) are used to increase the étendue of the laser. This method has already been used in laser projector displays. In order to further reduce the speckle contrast to a level invisible to the human eye, we add a dynamic deformable mirror [9,14]. This method can generate many uncorrelated speckle patterns, which further reduce the speckle contrast value. Both methods are based on the principle of angle diversity.
Definition of speckle contrast
The speckle phenomenon is very important for image quality in laser projection displays. The speckle contrast value is the quantity used to describe the amount of speckle. The speckle contrast is given by [3]

C = σ_I / <I>,

where the speckle contrast C is defined as the ratio of the standard deviation σ_I to the mean intensity <I>. The speckle contrast value is usually between 0 and 1. When the value of the speckle contrast C approaches one, the pattern is called a fully developed speckle pattern [3].
When the image is not affected by the speckle phenomenon, the value of the speckle contrast is said to be 0. When the value of the speckle contrast is less than 0.05, the speckle phenomenon becomes imperceptible to the human eye [4,7].
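As a minimal numerical sketch (an illustration, not the authors' measurement code), the definition above can be evaluated directly on a captured camera frame, assuming the frame covers a uniformly illuminated test region:

```python
import numpy as np

def speckle_contrast(frame):
    """Speckle contrast C = sigma_I / <I> over a uniformly illuminated region."""
    intensity = np.asarray(frame, dtype=np.float64)
    return intensity.std() / intensity.mean()

# Sanity check: fully developed speckle has exponentially distributed
# intensity, for which C is close to 1.
rng = np.random.default_rng(0)
print(speckle_contrast(rng.exponential(size=(1024, 1280))))  # ~1.0
```

In practice, the camera parameters described below (pixel size, integration time, F-number) determine how much spatial and temporal averaging the detector itself contributes to the measured value.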
Pico-LASER projection layout
The layout of the pico-LASER projection system is shown in Fig. 1. Three laser sources are used in the projector display, and an X-prism is used to combine them into white light. For evaluating the speckle phenomenon, a laser source with a wavelength of 532 nm is used, because the human eye is more sensitive to green light than to other colors. First, the laser light passes through a neutral density filter. The neutral density filter is used to maintain the intensity of the laser light at the same power level for production of the speckle phenomenon, and to avoid saturation at the detector. The optical light path transfers the laser beam through a deformable mirror. When the deformable mirror is in operation, the randomly distributed surface deformation creates many uncorrelated speckle patterns. Furthermore, the deformable mirror can also prevent localized temperature increases from becoming too high [14].
The laser beam is reflected from the deformable mirror, passes through the first diffuser, and then through the second diffuser at the end of the light pipe. The multiple reflections of the laser light within the light pipe generate uniform homogenization at the end of the light pipe [15]. The first diffuser and the second diffuser are used to increase the étendue of the laser, and the passage of the laser light through the diffusers produces various speckle patterns. A relay lens system is used to build a conjugation relationship between the exit port of the light pipe and the active area of the digital micromirror device (DMD). This relationship allows the relay lens system to superpose the various speckle patterns, further reducing the speckle contrast value. The typical projector elements such as the DMD, total internal reflection (TIR) prism and projection lens are placed after the relay lens system. The light pipe cross-section is 4.5 mm × 5.8 mm and its length is 30 mm.
Experimental setup and deformable mirror function
The measurement setup is shown in Fig. 2. The pico-LASER projector and camera lens are located 50 cm from the screen. For the speckle contrast measurement we use a low-image-magnification arrangement, with both the projection screen and the camera at 50 cm; the speckle contrast would be higher if the screen were farther away. The CCD camera pixel size is 5.2 µm × 5.2 µm with a resolution of 1280 × 1024 pixels. The F/# of the camera lens is 1.3 [16]. The integration time of the CCD camera is set to 20 ms, which is close to the integration time of the human eye [17]. In this experimental setup, the deformable mirror takes the place of the moving diffuser device typically used in anti-speckle technology. The deformable mirror allows a more compact system size than the moving diffuser voice coil motor (VCM) device typically used in projector systems, because the deformable mirror can bend the optical path. Compared with previous designs, the deformable mirror reduces the volume and the complexity of the system [7]. Moreover, the deformable mirror produces uncorrelated speckle patterns, thereby reducing the speckle phenomenon. The working mechanism comprises an actuated phase-randomized deformable mirror capable of reaching hundreds of kHz. Figure 3 shows the DYOPTYKA miniaturized phase-randomizing deformable mirror [18,19]. Figure 3(a) shows an inactive deformable mirror with dimensions of 4.5 mm × 6 mm. As can be seen in Fig. 3(b), the active deformable mirror has an elliptical working area of 3 mm × 4.5 mm for the generation of angle divergence [19]. The relation between the vibration frequency of the deformable mirror and the divergence angle of the laser obtained in this study is shown in Fig. 4. The driving frequency of the deformable mirror is in the range of 0 to 350 kHz. When the deformable mirror is operating at high frequency, there is an approximately 2 degree increase in the divergence of the laser beam compared to the inactive state. Moreover, the rate of change in the divergence angle is independent in the X and Y directions. The profile of the laser beam after reflection by the deformable mirror is elliptical. As can be seen in Fig. 4, the rate of change in the angle of divergence is not constant over the frequency range. We use curve fitting to study the changes in the divergence angles in the X and Y directions. The results show that the divergence angle increases with the working frequency of the deformable mirror. The rates of increase of the divergence angle in the X and Y directions are 0.079 deg./kHz and 0.041 deg./kHz for the 0 to 10 kHz range, respectively, and 0.005 deg./kHz and 0.0027 deg./kHz for the 10 to 350 kHz range, respectively. The initial divergence angles in the X and Y directions are 0.968 and 1.181 degrees, respectively. The divergence angle of the laser is based on the intrinsic property of the vibration of the deformable mirror. Moreover, the elliptical divergence angle profile oriented along the Y direction changes to an elliptical profile oriented along the X direction with increasing vibration frequency. Around a vibration frequency of 75 kHz, the elliptical divergence profile becomes circular. This angle of divergence is more stable for a laser projector design based on symmetrical principles with this relay lens design and light pipe arrangement.
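The two-segment fit just described can be written compactly. The sketch below is an illustration, not the authors' fitting code; it assumes the slope change occurs at a knee of 10 kHz, as the quoted frequency ranges suggest:

```python
import numpy as np

def divergence_angle(freq_khz, theta0, slope_low, slope_high, knee_khz=10.0):
    """Piecewise-linear model of divergence angle (deg) vs drive frequency (kHz)."""
    f = np.asarray(freq_khz, dtype=float)
    return (theta0
            + slope_low * np.minimum(f, knee_khz)
            + slope_high * np.maximum(f - knee_khz, 0.0))

# Fitted values quoted in the text for the X and Y directions
theta_x = divergence_angle(350.0, theta0=0.968, slope_low=0.079, slope_high=0.005)
theta_y = divergence_angle(350.0, theta0=1.181, slope_low=0.041, slope_high=0.0027)
print(theta_x, theta_y)  # roughly 3.46 and 2.51 degrees at 350 kHz
```

Evaluated at 350 kHz, this model gives an increase of roughly 2.5 degrees in the X direction over the inactive state, consistent with the approximately 2 degree increase reported above.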
Speckle reduction by a deformable mirror with different diffusers
Different diffusers are used with the deformable mirror for speckle reduction in the experiments. The setup of the optical system for speckle reduction comparison can be divided into three types based on the type of diffuser. Type 1 uses a diffuser with a circular distribution, Type 2 uses a diffuser with an elliptical distribution, and Type 3 uses two diffusers, one with a circular distribution and one with an elliptical distribution. Only one diffuser is used in Type 1 and Type 2; the second diffuser is removed from the pico-LASER projector design used in the speckle testing experiments to make it more compact. Two diffusers are used in Type 3 for comparison with previous studies [7,20]. The divergence angles of the circular distribution diffusers are 5 degrees (5X5), 10 degrees (10X10) and 30 degrees (30X30), and the divergence angles of the elliptical distribution diffusers are 5 and 30 degrees (5X30), 10 and 50 degrees (10X50), and 20 and 80 degrees (20X80), described as the angle of the full width at half maximum (FWHM) of the bidirectional transmittance distribution function (BTDF) in the X and Y directions.
Type 1: Circular distribution diffuser
The circular distribution diffuser is used as the first diffuser, located at the entrance of the light pipe. The passage of laser light through the deformable mirror produces divergence resulting in an elliptical profile with the long axis along the X direction. Moreover, the light pipe element has a long side and a short side, so the light pipe arrangement can be divided into two modes. The light pipe arrangement for an optical system with a first diffuser of "30°X30°" is shown in Fig. 5. In Fig. 5(a), "LPL X_30°X30°" indicates the case where the long side of the light pipe corresponds to the X direction and the first diffuser is "30°X30°"; "LPL Y_30°X30°" indicates that the long side of the light pipe corresponds to the Y direction with a diffuser of "30°X30°", as shown in Fig. 5(b). The speckle contrast value measurement results are shown in Fig. 6. When the deformable mirror is inactive (frequency of 0 Hz) and the long side of the light pipe corresponds to the X direction (LPL X), the speckle contrast values are 0.1141, 0.2364 and 0.3579 for first diffusers of 30X30, 10X10 and 5X5, respectively. When the deformable mirror is inactive and the long side of the light pipe corresponds to the Y direction (LPL Y), the speckle contrast values are 0.1299, 0.265 and 0.3879 for first diffusers of 30X30, 10X10 and 5X5, respectively. We can see that the speckle value is larger when the deformable mirror is inactive. After activation of the deformable mirror, the speckle value becomes smaller under all conditions. When the first diffuser is 5X5 and the driving frequency of the deformable mirror is 350 kHz, the speckle contrast value can be reduced from 0.3579 to 0.0839 for "LPL X" and from 0.387 to 0.079 for "LPL Y". When the first diffuser is 10X10 and the driving frequency is 350 kHz, the speckle contrast value can be reduced from 0.2364 to 0.0546 for "LPL X" and from 0.265 to 0.0538 for "LPL Y". In addition, when the first diffuser is 30X30 and the driving frequency is 350 kHz, the speckle contrast value can be reduced from 0.1141 to 0.0273 for "LPL X" and from 0.1299 to 0.0264 for "LPL Y". The lowest speckle value of 0.0264 is obtained under the condition "LPL Y_30°X30°" with a deformable mirror frequency of 350 kHz. When the speckle contrast value is lower than 0.05, the speckle phenomenon becomes invisible to the human eye. Figure 7(a) shows the speckle image produced with an inactive mirror and Fig.
7(b) shows that produced with an active mirror. Based on the above results, we can see that the speckle contrast value decreases towards a constant value as the driving frequency gradually increases. In addition, we also find that the rate of decrease is larger at low driving frequencies (0 Hz to 50 kHz) than at high driving frequencies (50 kHz to 350 kHz). The speckle reduction ability differs among the diffusers. The speckle reduction ability of a first diffuser with a large divergence angle is higher, because the large divergence angle in the light pipe leads to the creation of more speckle patterns [7,21]. The speckle patterns are superposed on the image plane by the relay lens system, thereby reducing the speckle contrast value. Furthermore, the speckle contrast value is lower in the "LPL Y" mode than in the "LPL X" mode. The reason for this is the reflection of the elliptically distributed laser beam by the active deformable mirror: the deformable mirror changes the circular distribution of the laser beam into an elliptical distribution, thus causing differences in the number of reflections for the different light pipe modes.
Type 2: Elliptical distribution diffuser
In the second type of design, an elliptical distribution diffuser is placed at the entrance of the light pipe. In contrast to Type 1, the arrangement of the elliptical distribution diffuser and the light pipe can be divided into four modes. An example of an optical system with a first diffuser of "80°X20°" is shown in Fig. 8. Figures 8(a)-8(d) show the arrangements for "LPL X_80°X20°", "LPL X_20°X80°", "LPL Y_80°X20°" and "LPL Y_20°X80°", respectively. The measurement results for the speckle contrast value are shown in Fig. 9. Using a "30°X5°" diffuser as the first diffuser, the speckle contrast values for the four arrangement modes can be reduced from 0.286 to 0.048 for "LPL X_30°X5°", 0.280 to 0.0425 for "LPL X_5°X30°", 0.286 to 0.0492 for "LPL Y_5°X30°" and 0.286 to 0.0412 for "LPL Y_30°X5°". Using a "50°X10°" diffuser as the first diffuser, the speckle contrast values for the four arrangement modes can be reduced from 0.263 to 0.0401 for "LPL X_50°X10°", 0.240 to 0.0396 for "LPL X_10°X50°", 0.287 to 0.0377 for "LPL Y_10°X50°" and 0.250 to 0.0310 for "LPL Y_50°X10°". Using an "80°X20°" diffuser as the first diffuser, the speckle contrast values for the four arrangement modes can be reduced from 0.171 to 0.0310 for "LPL X_80°X20°", 0.173 to 0.0271 for "LPL X_20°X80°", 0.171 to 0.0288 for "LPL Y_20°X80°" and 0.170 to 0.0267 for "LPL Y_80°X20°". The lowest speckle contrast value is obtained for the "LPL Y_80°X20°" arrangement, as shown in the speckle image in Fig. 10. Figure 10(a) shows the speckle image obtained with an inactive mirror and Fig. 10(b) shows the image obtained with an active mirror.

Fig. 10. System image quality obtained with a first diffuser of "80X20" with a speckle contrast value of (a) 0.170 for an inactive mirror; (b) 0.0267 for a deformable mirror and a driving frequency of 350 kHz.
Comparison of the test results in Figs. 6 and 9 shows that the speckle contrast value is lower for Type 2 than for Type 1. The difference in speckle reduction ability occurs because the long axis of the elliptical laser beam corresponds to the short side of the light pipe and the large divergence angle of the elliptical distribution diffuser also corresponds to the short side of the light pipe. This increases the number of reflections within the light pipe, which can further reduce the speckle contrast value by the superposition of the speckle patterns. This result has been shown in our previous research [7]. Therefore, the "LPL Y_80°X20°" mode has a lower speckle contrast value than the other modes. The overall trend for Type 2 is similar to that for Type 1.
Type 3: Circular distribution diffuser and elliptical distribution diffuser
In Type 3, the experimental setup includes two diffusers. The first diffuser is placed at the entrance of the light pipe and the second diffuser is placed at the exit of the light pipe. From the above discussion, we find that for Type 1 and Type 2, the "30°X30°" and "80°X20°" diffusers, respectively, have the highest speckle reduction ability. In the experimental setup, we therefore discuss the speckle contrast reduction for the "30°X30°" and "80°X20°" diffusers. For example, using a first diffuser of "80°X20°" and a second diffuser of "30°X30°", we examine four modes, as shown in Fig. 11. Figures 11(a)-11(d) show the arrangement modes "LPL X_80°X20°, 30°X30°", "LPL X_20°X80°, 30°X30°", "LPL Y_80°X20°, 30°X30°" and "LPL Y_20°X80°, 30°X30°", respectively. The measurement results for the speckle contrast value are shown in Fig. 12. When the deformable mirror is inactive (frequency of 0 Hz), the speckle contrast value is lower for Type 3 than for Type 2 or Type 1. The main reason is the two diffusers used in the experimental setup for Type 3, and the speckle contrast reduction ability is accordingly higher for Type 3 than for Type 2 or Type 1. Moreover, the tendency of the speckle change is the same for Types 1, 2 and 3. According to the results, the lowest speckle contrast value of 0.0236 is obtained for mode "LPL Y_80X20, 30X30", but it is difficult to distinguish differences in the speckle contrast value among the Type 3 arrangements. The main reason is that the speckle spots are smaller than the CCD camera pixel size; under this condition, the CCD camera cannot detect the changes in irradiance caused by the speckle spots. In short, the speckle contrast value measurement is limited by the CCD pixel size, which is why the trends of the speckle contrast values for the different Type 3 arrangements are the same. The speckle image for the arrangement mode "LPL Y_80X20, 30X30" is shown in Fig. 13. The speckle image obtained with an inactive mirror is shown in Fig. 13(a) and that obtained with an active mirror is shown in Fig. 13(b).
In other words, although the speckle contrast value of Type 3 is lower than for Type 1 or Type 2, in terms of cost, compactness of size and relay lens cone angle matching [7], the "LPL Y_30X30" mode of Type 1 is the most suitable setup for a laser pico-projector.
For all three diffuser arrangement types, the speckle contrast value decreases as the vibration frequency of the deformable mirror increases. The lowest speckle contrast values for the three types are shown in Table 1.
Fig. 13. System image quality of the first diffuser "80X20" and the second diffuser "30X30" with a speckle contrast value of (a) 0.170 for an inactive mirror; and (b) 0.0267 for a deformable mirror with a driving frequency of 350 kHz.
Conclusion
In this paper, we discuss speckle suppression for designs using deformable mirrors and different diffuser arrangements. The use of diffusers with different angles of divergence affects the speckle reduction ability. The main reason is the larger number of reflections within the light pipe for large divergence angle diffusers than for small divergence angle diffusers. The use of a deformable mirror can efficiently decrease the speckle contrast by generating many uncorrelated speckle patterns. The measurement results for Type 1 clearly show that the "LPL Y" mode produces a smaller speckle value than the "LPL X" mode. The main reason is that the deformable mirror produces an elliptical laser beam, so that the speckle contrast value is smaller when the long axis of the elliptical laser beam corresponds to the short side of the light pipe. With an active deformable mirror, the speckle contrast values for "LPL Y_30°X30°" in Type 1, "LPL Y_80°X20°" in Type 2 and "LPL Y_80°X20°, 30°X30°" in Type 3 are 0.0264, 0.0267 and 0.0236, respectively. For all three types, the lowest speckle contrast values are less than 0.05, at which point the speckle phenomenon becomes invisible to the human eye. The above arrangement modes are thus all effective for speckle reduction in a laser pico-projector; however, for mass production, the issue of cost is most important. Thus, the "LPL Y_80°X20°, 30°X30°" arrangement is not suitable, owing to its use of two diffusers for speckle reduction, even though its speckle reduction ability is the same as for the one-diffuser setup in Type 1.
There is no advantage to using two diffusers as in the Type 3 designs.
Fig. 4. Relation between the vibration frequency of the deformable mirror and the divergence angle of the laser.
Fig. 6. Dependence of the speckle contrast value on the applied frequency for a circular distribution diffuser.
Fig. 7. Image quality of the first diffuser "30X30" with a speckle contrast value of (a) 0.1299 for an inactive mirror; (b) 0.0264 using a deformable mirror and a driving frequency of 350 kHz.
Fig. 12. Dependence of the speckle contrast on the applied frequency using both a circular distribution diffuser "30X30" and an elliptical distribution diffuser "80X20".
| 5,766.6 | 2017-07-24T00:00:00.000 | ["Engineering", "Physics"] |
Complex Permittivity and Permeability Studies Viewing Antenna Applications of NBR-Based Composites Comprising Conductive Fillers
The work presents studies on the complex permittivity and permeability of composites based on acrylonitrile butadiene rubber containing combinations of conductive fillers, namely carbon black and nickel powder. The properties of those composites, containing each of the fillers at the same amount, were compared. The permittivity and permeability values of the composites are influenced remarkably by their morphology and structure as well as by the morphological and structural specifics of both fillers. As scanning electron microscopy studies confirm, those parameters are predetermined by the nature of the composites studied: particle size, particle arrangement in the matrix and their tendency to clustering. Last but not least, matrix-filler interface phenomena also impact the characteristics in question. The possibilities for applications of the composites in antennae have been studied, in particular as substrates and insulating layers in flexible antennae for body centric communications (BCCs). The research results allow the conclusion that these materials can indeed find such applications. Composites of higher conductivity can be used where surface waves are generated to provide on-body communications, while composites of lower conductivity may be used for antennae that will be on the body of a person and will transmit to and receive from other antennas that are not on the body of the same person (off-body communications). It is clear that one can engineer the properties of an antenna substrate at microwave frequencies by adjusting the content and type of the filler, and thus control the antenna performance.
Introduction
The current expansion of high-frequency wireless devices and appliances has augmented the need for versatile dielectric antennae and hence urged the search for materials able to meet producers' requirements, higher dielectric constant and lower tangent of dielectric loss angle values being amongst the most important. In the former case a wave is transmitted faster in the dielectric material [1], while the latter factor leads to suppressed heat generation and to a better performance, respectively. Being hard and brittle, conventional antenna materials lack flexibility and are vulnerable to impact [1], so recently elastomer composites have emerged as promising substitutes for the traditional ceramics used to make antenna substrates. A rubber substrate antenna has a number of advantages [1] [2]:

• One can choose the relative dielectric constant (4 - 20), typically established in the 2.2 - 12.0 range for flexible substrates [3]. At lower dielectric constant values the surface wave losses related to guided wave propagation within the substrate are lower; thus the impedance bandwidth of the antenna increases with adequate efficiency and high gain [4].
• Low tangent of dielectric loss angle values (0.01 or lower). As a dissipation factor, the tangent of the dielectric loss angle (tanδ) describes the amount of power turned into heat in the substrate material. It is defined as the ratio between the imaginary part and the real part of the relative permittivity. High tanδ values result in additional losses in the dielectric substrate and, subsequently, in reduced radiation efficiency [5].
• Thickness of the dielectric substrate
The dielectric constant and the substrate thickness determine the bandwidth and efficiency of a rubber-based flexible antenna. The substrate thickness h is normally in the range 0.003λ ≤ h ≤ 0.005λ, where λ represents the operating wavelength (a numerical example is given in the sketch below). The bandwidth of the flexible antenna at a fixed relative permittivity may be maximized by selecting an appropriate substrate. We have assumed that, besides by the layers of the elastomer matrix, the electroconductive aggregates and agglomerates could be insulated by the introduction of a second, less conductive phase to fill the space between the aggregates and agglomerates. The second phase would have its own contribution to the formation of the dielectric and magnetic losses of the composite. For the purpose we have chosen two conductive phases differing completely in their chemical, crystallochemical and crystallographic properties, namely conductive carbon black (CCB) and nickel powder.
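To make the thickness window quoted above concrete, the following minimal sketch evaluates 0.003λ ≤ h ≤ 0.005λ, assuming λ is the free-space wavelength and taking 2.45 GHz (a representative ISM-band frequency, our assumption) as the operating point:

```python
C0 = 299_792_458.0   # speed of light in vacuum, m/s
f = 2.45e9           # assumed operating frequency, Hz (ISM band)

wavelength = C0 / f  # free-space wavelength, ~0.1224 m
h_min, h_max = 0.003 * wavelength, 0.005 * wavelength
print(f"substrate thickness between {h_min*1e3:.2f} mm and {h_max*1e3:.2f} mm")
# -> between ~0.37 mm and ~0.61 mm
```

If λ is instead taken as the guided wavelength in the dielectric, the window shrinks by roughly a factor of the square root of the relative permittivity.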
The properties of dual systems (rubber-metallic powder) are well studied, but there are few papers dealing with the properties of tertiary ones (rubber-carbon black-metallic powder) or comparing the two types of systems at the same amount of filler. Such research also deserves interest in view of the large differences in the chemical and structural properties of the fillers used. The influence of the components ratio on the microwave properties of the composite material and the possibilities for its antenna applications also require special attention.
The present study reports on investigations on the complex permittivity and permeability of acrylonitrile-butadiene based composites containing a combination of conductive fillers including carbon black and nickel powder. It also presents and discusses the comparison between the properties of the dual and tertiary composites containing each of the fillers at the same loading degree, as well as an assessment of the possibilities for antenna applications of the developed composites.
Nickel powder with an apparent density of 1.8 - 2.7 g/cm³ and an average particle size of 3 - 7 microns, produced by Alfa Aesar, was also used as a filler.
Sample Preparation
The formulations of the NBR-based compounds (in phr) were as follows: acrylonitrile butadiene rubber (NBR)-100, zinc oxide-
Experimental Techniques
The electromagnetic (EM) parameters of the composite materials were measured by the resonant perturbation method described in a previous publication [6].
According to the resonant perturbation method, the tested sample was introduced into a cavity resonator with dimensions 61.2 mm × 10.0 mm × 610 mm. The EM parameters of the sample were deduced from the change in the resonant frequency and quality factor of the resonator [6]. For permittivity measurements the sample was placed at the spot of maximum intensity of the electric field, where the TE103 mode was adopted. For permeability measurements the sample was placed at the spot of maximum magnetic field, where the even TE104 mode was adopted.
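For orientation, the standard small-perturbation formulas for a thin sample placed at the electric field maximum of a rectangular cavity are sketched below. This is a generic textbook form, not necessarily the exact calibration used in [6]; v_cavity and v_sample denote the cavity and sample volumes:

```python
def cavity_perturbation(f0, q0, fs, qs, v_cavity, v_sample):
    """Small-perturbation estimates for a dielectric sample at the E-field maximum.

    f0, q0 : resonant frequency (Hz) and quality factor of the empty cavity
    fs, qs : the same quantities with the sample inserted
    Returns (eps_real, eps_imag, tan_delta).
    """
    ratio = v_cavity / v_sample
    eps_real = 1.0 + ratio * (f0 - fs) / (2.0 * fs)
    eps_imag = ratio / 4.0 * (1.0 / qs - 1.0 / q0)
    return eps_real, eps_imag, eps_imag / eps_real
```

The analogous permeability estimate replaces the E-field maximum with the H-field maximum and uses mode-dependent geometric factors.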
Scanning electron microscopy (SEM)
The nickel powder, CCB and NBR-based composites were subjected to electron microscopy studies on a JEOL JSM 5510 microscope. The SEM images of the nickel powder sample were taken directly and those of the carbon black from an aqueous dispersion. The prepared samples were put into a JEOL JFC-1200 cathode-sputtering chamber and covered with 24 carat gold.
Results and Discussion
Figure 1 and Figure 2 present the real (ε′r) and imaginary (ε″r) parts of the relative dielectric permittivity of NBR-based vulcanizates at different filler concentrations and 2.56 GHz. As seen, the values for all vulcanizates increase with increasing filler concentration. The smallest variation is observed for the composites containing only nickel powder, while the highest ε′r and ε″r values are observed for composite NBR-6, containing CCB at 60 phr. The results also show that at filler concentrations of 30 phr and 50 phr the ε′r and ε″r values are higher for the tertiary systems compared to those of the composites containing only CCB or nickel powder. The opposite is observed at a 60 phr filler loading, where the composites with only conductive carbon black have better results. That might be due to the combination of fillers, namely to their different particle shape and size, which yield non-uniform interface phenomena. Hotta et al. [7], discussing the complex permittivity, claim it to be dependent on the specific surface area of the fillers. In our case CCB has a specific surface area much higher than that of the nickel powder. The difference in particle size is also huge.

Table 1. Formulations of the studied NBR based compounds (in phr).
The magnitude of the dielectric loss angle is greatest for the NBR-6 composite containing 60 phr carbon black as a filler, whereas at concentrations of 30 phr and 50 phr the values of ε′r and ε″r are higher in the triple systems relative to those composites which contain only carbon black or only nickel powder as a filler (Figure 3). These results are due to the fact that the dielectric losses in a multiple composite are the result of complex phenomena like natural resonance, dipole relaxation, electronic polarization as well as their relaxation, and interfacial polarizations. Interfacial polarizations occur in heterogeneous media due to the accumulation of charges at the interfaces and the formation of large dipoles [8].
The important factor is also the ratio between the particles of the fillers and the host materials [9]. The morphology and structure of the conductive filler, its size and shape, as well as the morphology and structure of the composite affect the changes in the real and imaginary parts of the relative permittivity. The electric conductivity (σAC) of the composites at 2.66 GHz is presented in Figure 4. The results reveal that the increasing filler concentration leads to significant changes in µ′r and µ″r for NBR-1, NBR-2 and NBR-3 comprising a hybrid filler and for those with carbon black (NBR-4, NBR-5, NBR-6), while for those loaded with nickel powder the change is negligible. The magnetic losses for magnetic materials originate mainly from domain resonance, hysteresis loss, eddy current loss, and natural resonance. The domain wall resonance normally occurs at a frequency lower than 100 MHz, so it can be neglected in the microwave range. Having measured the relative permeability at a low microwave power (≤5 mW), we claim that complex phenomena like eddy current loss and natural resonance are the main cause of the magnetic loss in the present study [10], instead of the hysteresis and domain resonance observed in the conventional case. Undoubtedly, the morphology and structure of the composites studied influence the mentioned phenomena.
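As a cross-check on the quoted conductivities, one common convention relates the AC conductivity to the imaginary relative permittivity as σAC = 2πf·ε0·ε″r. This relation is an assumption on our part, not a statement of the authors' exact extraction procedure [6]:

```python
import math

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def sigma_ac(freq_hz, eps_imag_rel):
    """AC conductivity (S/m) from the imaginary relative permittivity."""
    return 2.0 * math.pi * freq_hz * EPS0 * eps_imag_rel

# e.g., eps''_r = 0.02 at 2.56 GHz gives ~2.8e-3 S/m, the same order as the
# value of 0.0021 quoted later for the NRK composite.
print(sigma_ac(2.56e9, 0.02))
```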
The explanations made are supported by the results obtained by scanning electron microscopy. Figure 8 shows different crystalline structures and the formation of layered structures in the investigated nickel powder.
The aggregates of the nickel particles form crystalline structures of the "spiral rosette" type. The rosettes aggregate according to the forces of interaction between them and strive towards a pseudospherical shape. The spheres can also form tiles that make up twin nano-domains. The spiral growth of nano-twins forms the crystal "comb" structure (Figure 8(b)).
Figure 9 shows a SEM image of the conductive carbon black Printex XE-2B used. The picture shows the high structure of the carbon black, expressed via the formation of chain aggregates, as well as the tendency of the latter to form agglomerates. The elementary particles are nano-sized.
Figure 10 shows SEM images of the investigated vulcanizates filled at 30 phr.
It is clear from Figure 10 that in some places the aggregates and agglomerates of the nickel particles, which stand out due to their larger dimensions, can be seen, but the chain-like structures of CCB particles, which act as conducting paths, predominate in the elastomeric matrix. The similarity in the morphology of the two composites also explains their close values of specific volume resistivity and electrical conductivity, respectively. The images obtained by SEM demonstrate clearly that the particles and aggregates are well dispersed in the elastomeric matrix in all composites filled with CCB, unlike the nickel powder particles. However, in the presence of carbon black, the dispersion of nickel particles is better. The presented results from the SEM studies on the composites confirm categorically that the differences observed in the values of their dielectric permittivity and magnetic permeability, and of the tangents of the dielectric and magnetic loss angles, are due to significant differences in the morphology and structure of the fillers themselves and of the composites filled with them. Obviously, the morphology and structure have a significant impact on the phenomena forming the dielectric permittivity and magnetic permeability values described above. It is also obvious that the morphology and structure of the fillers used are very different. As seen, the particle size and the occurrence of particle clustering, the arrangement of the particles in the matrix and the phenomena in the interface region between the matrix and the filler have a significant effect on the permittivity and permeability of the composite materials investigated.
The different types of vulcanizates studied were built into a microwave dipole antenna with a reflector, as substrates or insulating layers in a flexible antenna for body centric communications (BCCs). The constructed model antenna had three metallic layers (Figure 12(a), layers 1, 3 and 5) and three polymer layers (Figure 12(a), layers 2, 4 and 6), and can operate in the industrial, scientific, and medical (ISM) band in the range of 2.40 - 2.48 GHz. The composites (formulations in Table 1) were used in layer 6. The structure of the antenna as well as its dimensions are shown in Figure 12. The electromagnetic parameters of the NRK composite measured at 2.56 GHz are as follows: ε′r = 2.48, ε″r = 0.02, tanδε = 0.006, σAC = 0.0021.
The first series of studies we conducted aimed at showing the absorption of electromagnetic energy by each composite with a conductive filler, while the subsequent studies aimed at demonstrating the applicability of the composites to reduce the specific absorption rate (SAR), a value that measures how much power is absorbed in a biological tissue when the body is exposed to electromagnetic radiation [11]. The SAR is determined as follows:

SAR = σE²/ρ,

where E is the electric field (V/m), σ is the conductivity (S/m), and ρ is the density (kg/m³) [12].
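A direct point evaluation of this definition can be sketched as follows; the tissue values are illustrative assumptions, not taken from the study, and whether E is an RMS or a peak value (a factor of two in SAR) is not specified here:

```python
def sar_point(e_rms, sigma, rho):
    """Point SAR (W/kg) = sigma * |E|^2 / rho, with E taken as an RMS field."""
    return sigma * e_rms ** 2 / rho

# Illustrative tissue-like numbers (assumptions): E = 30 V/m, sigma = 1.8 S/m,
# rho = 1000 kg/m^3 -> 1.62 W/kg
print(sar_point(e_rms=30.0, sigma=1.8, rho=1000.0))
```

In practice, regulatory limits apply to SAR averaged over 1 g or 10 g of tissue, which is the quantity reported below.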
The SAR was computed using the FDTD numerical technique by placing a homogeneous human body model (Figure 12(c)) near the antenna. In the course of the study, numerical models of nine antennae were constructed, in each of which the last antenna layer (Layer 6, located behind the reflector) consisted of one of the composites from NBR-1 to NBR-9. Table 2 presents the materials from which the individual layers of the investigated antennae were made. Each antenna was placed directly on the surface of the flat phantom to study how the insulating layer consisting of one of the composites with conductive fillers (NBR-1 to NBR-9) reduces SAR. Figure 13 shows results for the maximum 10 g average SAR for each of the composites with conductive fillers (NBR-1 to NBR-9), computed with FDTD. All results were normalized to a net input power level of 100 mW. As seen, the lowest electromagnetic power was absorbed by the sixth layer of the antenna when it was made of composite NBR-7. This effect can be related to several factors, namely the lower dielectric permittivity of the composite, lower dielectric losses, and lower conductivity relative to the other composites with a conductive filler. Figure 14 shows results for the distribution of SAR over the flat phantom surface caused by an antenna whose last layer is made of one of the composites with the conductive filler (NBR-1 to NBR-9).
According to the above figure, the highest SAR values on the surface of the phantom are observed when the last layer of the antenna is made of composites NBR-2, NBR-3 and NBR-6. These results can be related to the conductivities of the three composites, which are higher than those of the rest (Figure 4). High conductivity leads to a current flow both on the surface of the reflector and on the surface of the last composite layer, which in turn leads to the generation of surface waves on the surface of the flat phantom and hence to higher SAR values. It should be noted that the highest SAR value induced at an input power of 100 mW is lower than the maximum one set in international standards and recommendations [13] [14]. On the other hand, the generation of surface waves can be used when the two antennas (transmitting and receiving) are located on the surface of the human body, for example in cases of on-body communications in body area networks. Analyzing the results in Figure 14, one also sees that antennae with NBR-7, NBR-8 and NBR-9 composites in the last layer cause the lowest SAR values on the flat phantom surface. This result may be associated with several factors, namely the lower dielectric permittivity, lower dielectric losses and lower conductivity of these composites. As a result of the research we can conclude that the studied composites can find application as insulating layers and substrates in flexible antennas, depending on the antenna application. For example, composites of higher conductivity can be used in applications where surface waves are generated to realize on-body communications, whereas composites with lower conductivity can be used for antennas that will be on the body of a person and will transmit to and receive from other antennas that are not on the same person's body (off-body communications). It is clear that we can engineer the properties of an antenna substrate at microwave frequencies by adjusting the content and type of the filler and thus control the antenna performance. This will allow us to tailor an antenna performance specific to a particular application.
Conclusions
The complex permittivity and permeability of composites containing conductive fillers very different in their type and chemical nature (conductive carbon black, nickel powder, and combinations thereof) were investigated. The properties of the dual and tertiary composites were evaluated at the same degree of filling. A noticeable influence of the morphology and structure of both the fillers and the composites on their permittivity and permeability values was established. It can be argued that particle size and the occurrence of particle clustering, the arrangement of the particles in the matrix and the phenomena in the interface region between the matrix and the filler are the crucial factors for the permittivity and permeability formation of the composite materials investigated.
The investigations covered the possibilities to use the composites in antenna making, in particular as substrates and insulating layers in flexible antennas for body centric communications (BCCs). The research allows the conclusion that the studied composites can be used for such purposes depending on the application of the antenna. Higher conductivity composites can be used in applications where surface waves are generated to provide on-body communications, while lower conductivity composites may be used for antennas that will be on the body of a person and will transmit to and receive from other antennas that are not on the body of the same person (off-body communications). It is clear that one can engineer the antenna substrate properties at microwave frequencies by adjusting the content and type of filler and thus control and tailor the antenna performance specific to particular applications.
Figure 1. Real part of the relative dielectric permittivity of NBR-based vulcanizates.
Figure 2. Imaginary part of the relative dielectric permittivity of NBR-based vulcanizates.
Figure 4. As seen, composites NBR-2, NBR-3, NBR-5 and NBR-6 are highly electroconductive, and NBR-6 possesses the highest σAC values. Composite conductivity is a function of a number of factors, but filler-filler and matrix-filler interactions are the determining ones [6]. The higher the filler loading in the matrix, the more conductive paths there are in it. Another factor is the so-called "percolation threshold", a certain concentration of the filler that causes a drastic conductivity increase. The number of conductive paths continues to increase at filler amounts higher than the percolation threshold. That leads to the formation of a complete conducting network, which can explain the high σAC value for NBR-6, as the percolation threshold for Printex is at about 25 phr. The results in Figure 4 allow the conclusion that the tertiary systems have σAC values higher than those of the composites loaded with only one filler at the same amount (30 phr and 50 phr). As Figure 4 shows, σAC(NBR-1) > σAC(NBR-4) > σAC(NBR-7).
Figure 5. Real part of the relative magnetic permeability of NBR-based vulcanizates.
Figure 6. Imaginary part of the relative magnetic permeability of NBR-based vulcanizates.
Figure 11. Figure 11(a) shows clearly the nickel aggregates made up of a large number of particles of different size which are isolated from each other with a coating layer cutting the contact between them. In addition, the aggregates themselves are encapsulated by an elastomeric layer and are located in the matrix itself on separate islands remote from one another. That impedes greatly the charge transfer and blocks the high electrical conductivity of the nickel powder. In the composite filled with CCB at 60 phr (Figure 11(b)), the location of the carbon black particles in the elastomeric matrix is identical to that of the composite filled with
Figure 12. Structure of the antenna (a), photograph of the antenna (b) and photograph of the homogeneous flat phantom (c).
Figure 13. Maximum averaged 10 g SAR values caused by the investigated antennae in composite layers and flat phantom.
Figure 14. SAR distribution over the flat phantom and antenna when Layer 6 is made of composites NBR-1 to NBR-9.
Table 2. Materials used to build the layers of the antennae studied.
The 12-field components approach was used to calculate SAR in the voxel using xFDTD (Remcom Inc., State College, PA, USA) simulation software, based on the finite-difference time domain method.
| 4,835.6 | 2018-10-26T00:00:00.000 | ["Engineering", "Materials Science", "Physics"] |
Quaternary Ammonium Chitosans: The Importance of the Positive Fixed Charge of the Drug Delivery Systems
As a natural polysaccharide, chitosan has good biocompatibility, biodegradability and biosafety. The hydroxyl and amino groups present in its structure make it an extremely versatile and chemically modifiable material. In recent years, various synthetic strategies have been used to modify chitosan, mainly to solve the problem of its insolubility in neutral physiological fluids. Thus, derivatives with a negative or positive fixed charge were synthesized and used to prepare innovative drug delivery systems. Positively charged conjugates showed improved properties compared to unmodified chitosan. In this review the main quaternary ammonium derivatives of chitosan will be considered, and their preparation and applications will be described, to evaluate the impact of the positive fixed charge on the improvement of the properties of the drug delivery systems based on these polymers. Furthermore, the performances of the proposed systems resulting from in vitro and ex vivo experiments will be taken into consideration, with particular attention to the cytotoxicity of the systems and their ability to promote drug absorption.
Introduction
The current research in the field of controlled drug delivery systems has been focused on the use of polymeric materials. Due to their unique properties, these can be easily modified and hence utilized in the pharmaceutical, food or cosmetic industries [1].
Since synthetic polymeric materials often cause side effects, natural polymeric materials, extracted from starch, inulin, cellulose, chitin or alginates, are preferred for use as excipients (e.g., binders, viscosity enhancers etc.) for the preparation of controlled drug delivery systems [2]. Among the natural polymers, chitosan is one of the most used and its safe application in the pharmaceutical and food industry has been approved by the U.S. FDA.
Chitosan is a cationic polysaccharide of d-glucosamine with some N-acetyl-d-glucosamine residues linked by β (1-4) bonds. It has excellent biological properties such as biocompatibility, biodegradability and mucoadhesivity; moreover, it has antimicrobial, antiviral and immunoadjuvant activities. Chitosan occurs rarely in nature and is obtained by incomplete deacetylation of chitin, which is a homopolymer of β (1-4) linked units of N-acetyl-d-glucosamine present in the shells of crustaceans and molluscs, the cuticle of insects and the cellular walls of fungi. Chitosan is able to enhance drug penetration not only through cell monolayer epithelia such as intestinal [3,4] and nasal [5], but also through stratified epithelia such as buccal [6,7], vaginal [7] and corneal tissue [8][9][10][11].
For all these characteristics, chitosan has been used for the preparation of conventional pharmaceutical systems (e.g., solutions, suspensions, emulsions, etc.) and for the development of innovative drug delivery systems such as colloidal systems and hydrogels.
However, the pharmacologic and therapeutic application of chitosan is limited by its insolubility in water and in most organic solvents. For this reason, various chemical modifications of the chitosan molecular structure have been made in order to increase polymer solubility and, hence, its applications. Indeed, chitosan (Figure 1) has active groups on its backbone, such as -OH and -NH2, that can be modified to generate different derivatives. The resulting chitosan derivatives retain the properties of the parent polymer, with enhanced biocompatibility and non-toxicity [12][13][14][15][16].
A great number of different chitosan derivatives, including acylated, alkylated, carboxylated and quaternary ammonium chitosan conjugates, have been described. It is known that even slight physical-chemical differences in the polymer backbone are reflected in significant biological differences concerning, for example, cellular absorption, the biological processes that regulate this absorption and the interaction with the mucous lining of epithelia. Indeed, numerous studies show that drug release systems based on chitosan and its derivatives promote the absorption and biodistribution of drugs in a manner strictly dependent on the properties of the polymer. These properties could then be modulated to make cellular absorption selective, thus targeting the transported drug to the site of action, which would result in stronger pharmacological activity with less systemic exposure. In this review we will focus on the quaternary ammonium derivatives of chitosan in order to evaluate the impact that the positive fixed charge has on the biopharmaceutical characteristics of the relevant drug delivery systems.
Quaternary Ammonium Chitosan Derivatives: Principal Characteristics and Applications
A large number of chitosan derivatives have been prepared via alkylation, quaternization, carboxylation, phosphorylation and sulfation, in order to increase chitosan solubility and extend its application in drug delivery systems. Among chitosan derivatives, quaternary ammonium salts have been the most used in tissue engineering, drug and gene delivery and wound care [17].
Chitosan derivatives are generally obtained by modifications that do not involve the basic chemical structure of chitosan, yet they lead to compounds with changed, improved properties. Quaternary ammonium chitosan derivatives are readily water soluble irrespective of pH; therefore they enhance the release and the permeation of drugs across biological barriers in neutral/alkaline environments [18]. In particular, these derivatives have a permanent positive charge, enhanced mucoadhesivity and a high drug loading capacity, in addition to biocompatibility, low toxicity and biodegradability. For these reasons, they are optimal candidates for the development of conventional and innovative systems delivering drugs through different routes of administration, as shown in Table 1 and as will be discussed in the following sections [1].
By virtue of their antibacterial activity, chitosan derivatives can also be used as anti-inflammatory agents or as fibre fillers for wound dressings [19][20][21][22][23].
HACC is a quaternary ammonium salt widely used in recent years as an excipient for controlled drug or gene release. HACC can be obtained through various synthetic routes. Peng et al. [68] obtained HACC by reaction of chitosan with glycidyl trimethylammonium chloride (GTMAC).
GTMAC is a small quaternary ammonium molecule with an epoxy group that reacts easily with the amino groups on the chitosan backbone.
Another synthetic route, reported by Jin et al. [69], involves the reaction of chitosan with 2,3-epoxypropyl trimethyl ammonium chloride (EPTAC). This synthesis was subsequently improved with the introduction of the concept of "green chemistry", through the use of an ionic liquid, 1-allyl-3-methylimidazolium chloride, as the reaction solvent [70].
In particular, Ao et al. [24], exploited the antibacterial activity of HACC and used it to prepare wound dressing systems. The authors demonstrated that wound dressings with good antibacterial properties and biocompatibility could be obtained by optimizing the concentration and the degree of substitution (DS) of HACC in bacterial cellulose culture medium.
Fan et al. [25], prepared HACC hydrogel using gamma radiation and demonstrated its potential application as a scaffold in wound healing. Indeed, the in vitro studies revealed a strong inhibitory effect of HACC hydrogel against Staphylococcus aureus and Escherichia coli, thus proving its antibacterial activity.
The ability of HACC to provide a thermosensitive and reversible sol-gel transition is also very interesting. It was studied by Wang et al. [26], who prepared liposome-containing thermosensitive hydrogels based on HACC and glycerophosphate, medicated with doxorubicin. The release of doxorubicin from the liposome-containing hydrogels over nine days was about 22% of the drug load. In vivo tests showed that tumour growth in the doxorubicin group was significantly inhibited. However, serious side effects were observed, and the weight of the mice in the doxorubicin group decreased significantly. The side effects were reduced by encapsulating doxorubicin in liposomes, but anticancer activity was also slightly reduced. Introduction of the medicated liposomes into the HACC-based gel significantly improved the antitumor activity of doxorubicin.
This polymer was found to be suitable for preparing nanosystems such as polymeric NPs. In particular, Jin et al. [27], prepared HACC-based NPs by ionotropic crosslinking with carboxymethyl chitosan (CMC) as a carrier of Newcastle disease virus (NDV) [69]. HACC has shown good antimicrobial and antifungal activity in itself, coupled with NDV activity. Moreover, the application of HACC derivatives as nanocarriers able to enhance the immune response and the efficacy of vaccines has been demonstrated.
Lu et al. [28], successfully prepared HACC NPs loaded with paclitaxel (PTX) for the oral administration of this anticancer drug. These authors used in vitro, ex vivo and in vivo experiments to compare the behaviour of two NPs types, one based on HACC the other based on unmodified chitosan. The results showed a better intestinal permeability and cellular absorption as well as a more effective inhibition of tumour growth and induction of apoptosis in cancer cells with the former NP type. These results, which were ascribed to the presence of fixed positive charges on these NPs, highlight the importance of positive fixed charges on the absorption and internalization of polymer nanosystems.
Furthermore, HACC NPs are currently considered useful carriers for the oral delivery of hydrophilic molecules, such as proteins and peptides. Indeed, it was demonstrated that NPs based on HACC and coated with thiolated hyaluronic acid were able to promote the oral delivery of insulin, thanks to the high mucus-penetration ability of the coating [29].
Recently, Li et al. [30] proposed the synthesis of new fatty acid-modified HACC, prepared by the conjugation of lauric acid or oleic acid to the quaternized polymer. These two derivatives were used to prepare NPs to deliver insulin to the liver. The data showed that the NPs obtained from the more aliphatic derivative, that is, the oleic acid-modified quaternized chitosan, were internalized by liver cells more than all the other NPs tested, and this resulted in a higher relative pharmacological availability.
The most common method for the preparation of N,N,N-trimethyl chitosan (TMC) involves methylation by methyl iodide. However, the resulting polymer, TMC iodide, is unsuitable for pharmaceutical or cosmetic purposes, because it is very toxic upon ingestion or inhalation. For this reason, this salt has to be converted into TMC chloride in the final purification step, by means of the dialysis technique. In addition, a number of alternative reaction types have been developed, as well reported in a previous review [84]. Wu et al. [85] synthesized TMC by applying the concept of "green chemistry", i.e., by reacting chitosan with dimethyl carbonate in the presence of the ionic liquid 1-butyl-3-methylimidazolium chloride as catalyst. Very recently, Rathinam et al. attempted a selective methylation of chitosan from tert-butyldimethylsilyl-chitosan in a multi-step process, which involved protection with Boc. A derivative was obtained in which part of the primary amino groups were trimethylated and the remaining amino groups were not modified [86].
TMC, like chitosan, has mucoadhesive properties that depend on the charge density and have been attributed to an interaction between the cationic groups of TMC and the anionic sialic and sulfonic acid residues of mucin. TMC promotes the transport of hydrophilic and peptide molecules through the paracellular route, as it is able to open the tight junctions between epithelial cells [84,[86][87][88][89].
Thanks to its amphiphilic nature, TMC can be assembled into vesicles that can be used in nanomedicine for the preparation of several innovative drug delivery systems for pharmaceutical, biomedical and biotechnological applications [91]. Numerous TMC-based colloidal systems have been reported in the literature, such as TMC-based polyelectrolyte nanocomplexes, TMC-based nanoparticles and TMC-based liposomes, as efficient drug delivery systems for the treatment of various forms of cancer, hypertension and rheumatoid arthritis, as well as for peptide, gene and vaccine delivery.
In particular, Sayin et al. [31] synthesized TMC-based NPs loaded with tetanus toxoid. When compared with NPs based on unmodified chitosan or on carboxymethyl chitosan (MCC), this NP type demonstrated, both in vitro and in vivo, its safety, cellular uptake and ability to induce immune responses. Very interestingly, the chitosan and TMC NPs, which have positively charged surfaces, induced higher serum IgG titres than those prepared with MCC, which are negatively charged.
Sandri et al. [32], evaluated TMC-based NPs as carriers for the oral administration of insulin and demonstrated that, thanks to their high mucoadhesivity, they were internalized by duodenum and jejunum cells much more than chitosan-based NPs.
Similar results were obtained by other authors [33], who showed that TMC-PLGA NPs, thanks to their positively charged surfaces, improve mucus penetration as well as the uptake and permeation of insulin through the intestinal epithelium much more than uncoated PLGA NPs. Furthermore, it was demonstrated that mucoadhesive, targeted PLGA NPs surface-modified with lactoferrin-conjugated TMC promote intranasal drug delivery to the brain and could be used for the treatment of Alzheimer's disease [34]. Rassu et al. [35], on their part, used TMC alone for the preparation of particulate systems for nose-to-brain drug delivery and the treatment of central nervous system diseases.
TMC-based polyelectrolyte complexes for DNA delivery were prepared by electrostatic complexation between pDNA and TMC. Zheng et al. [37] demonstrated that the cellular uptake of the folate-TMC/pDNA nanocomplex was higher than that of the TMC/pDNA nanocomplex, thanks to folate receptor-mediated endocytosis. TMC has also been used for the preparation of several liposomal systems [38]. TMC-coated liposomes for the oral delivery of drugs and natural compounds have been developed, and their efficacy in promoting the absorption and controlling the release of molecules such as harmine, calcitonin and curcumin was demonstrated by both in vitro and in vivo studies [39][40][41][42].
TMC finds another interesting application in wound healing, in the form of wound dressing materials such as films, fibres and hydrogels. Among the polymers used for wound dressings, such as collagen, alginate and chitosan, chitosan represents the safest material; however, chitosan fibres have poor liquid-absorbing properties and antibacterial activity. To overcome these limits, Zhou et al. [66] prepared TMC fibres with various quaternization degrees, which exhibited higher absorption ability and antibacterial properties than those based on chitosan, as shown by in vitro and in vivo studies. Similarly, Rúnarsson et al. synthesized a series of methylated chitosan derivatives with different methylation degrees and found them active against S. aureus even at pH 7.2, at which unmodified chitosan was inactive, thus demonstrating the importance of the fixed positive charge for the antibacterial effect of these polymers [43].
The enhanced antibacterial and antimicrobial activity of TMC has also been exploited to prepare hydrogels and blends. For example, Mohamed et al. [92] synthesized hydrogels based on TMC cross-linked with poly(vinyl alcohol) (PVA) to increase the antimicrobial and antibacterial activity of the latter. Notably, Boles et al. [93] prepared injectable local delivery systems based on a combination of TMC and poly(ethylene glycol) diacrylate chitosan, medicated with vancomycin and amikacin. The resulting blend was non-cytotoxic and improved the antimicrobial activity of the drugs.
Other administration routes involving TMC-based delivery systems have been explored. For example, TMC and its derivatives have been used in nasal delivery systems, especially as a base material for forming micro- and nanoparticles, for therapeutic applications [36].
The preparation of TMC derivatives containing cysteine (TMC-Cys) was proposed as a strategy to increase TMC mucoadhesivity [94]. TMC-Cys, synthesized through the formation of amide bonds between the non-substituted primary amino groups of TMC and the carboxylic groups of cysteine, combines the mucoadhesive characteristics of thiomers with the advantages of the fixed positive charges of TMC. In fact, Zhao et al. [45] demonstrated that the positive groups present on TMC and the mucoadhesive characteristics of the thiol groups favour the interaction of TMC-Cys with the cell membrane, thus enhancing cellular uptake.
Thanks to their mucoadhesion and permeation-enhancing properties, the TMC-Cys derivatives have been used for the preparation of safe and efficient oral delivery systems such as NPs. For example, Yin et al. [94] synthesized TMC-Cys NPs loaded with insulin. Oral and ileal administration of the TMC-Cys NPs led to more significant hypoglycaemic effects than the insulin solution. In addition, a stronger and longer-lasting hypoglycaemic effect was observed with this NP type than with TMC NPs, in agreement with the mucoadhesion and permeation improvement results. TMC-Cys was also able to form nanocomplexes for gene delivery [45,46]. The authors demonstrated that the positive charges of TMC-Cys interact with the negative charges of pDNA, thus stabilizing the complexes and protecting the pDNA from the degradation that occurs during extracellular transit. Nanocomplexes based on TMC-Cys condensed with DNA were also prepared by Rahmani et al. [47]. The thiol groups present on the TMC-Cys conjugate enhance the transfection efficiency, making these complexes efficient gene delivery systems.
A gene silencing activity of siRNA polyplexes based on TMC-Cys was also demonstrated, which was much higher than that of complexes based on non-thiolated TMC [48]. These authors showed that the introduction of thiol groups into the TMC chain enhances the extracellular stability of the TMC-Cys complexes, due to the formation of reducible disulfide bonds, and promotes the intracellular release of siRNA. Hence, TMC-Cys could represent a promising vector for gene delivery, thanks to the fixed positive charges and the thiol groups, which co-operate to improve the properties of unmodified chitosan.
Quaternary Carboxymethyl Chitosan Derivative (QCMC)
QCMC are water-soluble chitosan derivatives containing both anionic and cationic groups. They possess antimicrobial activity [95][96][97], flocculating properties [98] and antioxidant activity [99]. By virtue of these properties, QCMC are used in various fields, such as tissue engineering and the food and cosmetic industries, as well as in antioxidant, antimicrobial and drug delivery systems.
Sun et al. [100] synthesized QCMC by reacting CMC with 2,3-epoxypropyl trimethylammonium. The resulting QCMC derivatives possess a strong antimicrobial activity due to a synergistic effect of the carboxylic and quaternary ammonium groups present on the chitosan backbone. Li et al. [101] prepared QCMC with antioxidant activity through a reaction between chitosan and chloroacetic acid, using 2,3-epoxypropyltrimethyl ammonium chloride as a modifying agent under microwave irradiation. These authors demonstrated that quaternary ammonium and carboxylic groups have different impacts on the antioxidant activity of the polymer: the former enhance the OH scavenging activity but exert a negative effect on the metal chelating ability of the polymer. Both properties depend on the degree of substitution (DS) of the quaternary ammonium and carboxyl groups on the polymer chain. Indeed, the authors found that a high DS of quaternary ammonium groups lowers the reducing power, whereas a moderate DS of carboxyl groups enhances it.
Concerning the use of QCMC as antimicrobial and antibacterial agents, Yin et al. [49] prepared blend films based on QCMC and PVA loaded with Cu2+ as potential biomaterials for biomedical applications; these films showed good antibacterial activity, inactivating 98.3% of S. aureus and 99.9% of E. coli. For the same application, Huang et al. [50] synthesized silver nanoparticles in an aqueous solution of QCMC, which served as chemical reducing and stabilizing agent. These nanoparticles had better antibacterial properties and a lower toxicity than a solution of QCMC alone. Liang et al. [51] prepared liposomes from QCMC derivatives containing cholesterol, medicated with vincristine. These liposomes had a structure similar to conventional liposomes prepared from phosphatidylcholine/cholesterol, but better thermal stability, water solubility and drug loading efficiency. Moreover, they exhibited a steady drug release over 2 weeks at pH 7.4.
Dimethyl Ethyl Chitosan (DMEC) and Diethyl Methyl Chitosan (DEMC)
DMEC is an N-alkyl chitosan derivative prepared by a Schiff condensation reaction between the amino groups of chitosan and carbonyl compounds, with elimination of water. In particular, Bayat et al. [102] synthesized DMEC in two reaction steps, as reported by Kim et al. [103], in order to enhance the oral bioavailability of peptides. In the first step, the authors introduced an ethyl group on the amino group of chitosan, and in the second one, methyl iodide was added to produce DMEC. Like all the other N-alkyl chitosan derivatives, DMEC has antimicrobial, anticancer and antioxidant activity and could be applied in tissue engineering [104].
The synthesis of DEMC, another N-alkyl chitosan derivative, is similar to that of DMEC [103], except that in the second step ethyl iodide is added instead of methyl iodide. Avadi et al. [105] synthesized a DEMC with a quaternization degree of 79%, which is responsible for the complete solubility of the polymer in water at room temperature. They found that DEMC has higher antibacterial activity against E. coli than chitosan, thanks to its high charge density, which allows it to interact with bacteria more strongly than chitosan. The antimicrobial activity was found to be pH-dependent, and an increase in the concentration of acetic acid in the medium led to a decrease in the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC). Like TMC, DEMC was able to enhance the absorption of hydrophilic drugs through the tight junctions, both ex vivo and in vivo [106].
Sadeghi et al. [52] compared free soluble forms and nanoparticulate systems based on TMC, DMEC, DEMC and triethyl chitosan (TEC) for their ability to enhance insulin intestinal absorption. The absorption-promoting effect was due to the ability of the polymers to open the tight junctions between cells and depended on the positive charge density of the polymer. For this reason, the nanoparticles based on these polymers were found unable to promote absorption. On the basis of these results, it could be erroneously concluded that the encapsulation of proteins in NPs is useless for their absorption. However, it should be borne in mind that orally administered proteins not only have a low permeability but also a poor stability, which could instead be increased precisely by their encapsulation in nanosystems.
DMEC can be modified to enhance its potential in oral delivery. In fact, nanoparticles based on thiolated DMEC (DMEC-Cys) were successfully prepared to obtain buccal films for the delivery of insulin [53]. The paper reports ex vivo studies demonstrating that DMEC-Cys nanoparticles enhanced insulin permeation (up to 97.18%) through rabbit buccal mucosa much more than unmodified chitosan and DMEC.
Quaternary Ammonium Chitosan (QA-Ch) Derivatives
The QA-Ch derivatives were prepared by our group by an aminoalkylation reaction between chitosan and 2-diethylaminoethyl chloride. This reaction resulted in the formation of derivatives having small pendant chains containing a number, n, of adjacent quaternary ammonium groups [107]. These derivatives were studied for their ability to enhance the permeation of hydrophilic and lipophilic drugs through different membranes, such as the buccal, intestinal and corneal ones. For this purpose, the polymer structural parameters, i.e., the degree of substitution, DS, and the number of quaternary ammonium groups in the pendant chains, n, were modulated [108,109]. It was found that all the QA-Ch derivatives were able to promote the absorption of drugs through both the paracellular and transcellular pathways of mono- and multilayered epithelia and that the most effective ones had a higher positive charge density.
It is known that thiolated chitosan derivatives have a high mucoadhesivity thanks to their ability to undergo an exchange reaction with the disulphide groups of mucus, or an oxidation reaction with the cysteine residues of mucus glycoproteins, both resulting in the formation of disulphide bonds between the derivatives and mucus [110][111][112][113]. For this reason, since non-substituted amino groups remained on the chitosan backbone, a multifunctional derivative of chitosan containing both quaternary ammonium and thiol groups (QA-Ch-SH) was obtained [114]. QA-Ch-SH derivatives were synthesised by covalent attachment of thiol groups to the free primary amino groups of QA-Ch, via formation of 3-mercaptopropionamide moieties. These derivatives enhance the permeability of hydrophilic drugs through the intestinal epithelium and of lipophilic drugs, such as dexamethasone, through the corneal epithelium, which enhanced the intraocular bioavailability of these drugs [115]. The interaction between QA-Ch-SH derivatives and the intestinal and corneal epithelia is due to a synergism between the quaternary ammonium and the thiol groups of QA-Ch-SH.
In the light of the above information, it was thought intriguing to compare the value of the drug mean residence time in the rabbit pre-corneal area, as determined after instillation of a drug solution containing the positively charged mucoadhesive polymer QA-Ch, or QA-Ch-SH, with that for drug loaded nanoparticles prepared from the same polymer [54]. This comparison would clarify the actual advantages of formulating nano-structured aggregates of specific chitosan derivatives, rather than simple solutions of these non-aggregated polymers, to enhance the contact time of drugs administered by eye-drops, and hence their ocular bioavailability. The corticosteroid dexamethasone phosphate (DP), and the peptide met-encephalin acetate (ME) were chosen as model drugs [54]. The data obtained showed that the nanoparticles are considerably more effective than the parent mucoadhesive polymer when they concurrently adhere to the ocular surface and strongly interact with DP molecules in solution. In such cases, it may be worth developing the often complicated preparation of a stable nanoparticle dispersion. On the contrary, nanoparticles made from the mucoadhesive thiolated QA-Ch-SH, which interacted weakly with the non-entrapped DP, were approximately as effective as the non-aggregated parent polymer. In these cases, the preparation of the medicated polymer solution is a simpler, hence, more convenient way to prolong the drug corneal contact time. Quite different is the situation with the peptide ME, the poor mean residence time value of which is mainly due to enzymatic hydrolysis. Only ME entrapment in the supramolecular systems was able to shield the peptide from aminopeptidase activity to a significant extent, whereas the non-aggregated parent polymers were ineffective [54]. These results were confirmed by an NMR investigation, which showed that the DP entrapped in nanoparticles was involved in strong interactions inside them [116].
The presence of thiol groups on the QA-Ch chain seems to improve the wound healing properties of polymer, thus accelerating the healing process. Felice et al. [23], explored the efficiency of QA-Ch and QA-Ch-SH conjugates with high and low molecular weight (MW) in the regeneration of wounds and demonstrated that high MW QA-Ch-SH promoted fibroblast cell migration and accelerated wound healing.
Since thiomers undergo oxidative degradation in aqueous environment at pH higher than 5, it is important to protect thiol groups, in order to increase polymer interaction with mucosal epithelia. For this reason, protected thiolated quaternary ammonium chitosan derivatives (QA-Ch-S-pro) were synthesized by forming disulphide bonds between the thiol groups of the ligand 6-mercaptonicotinamide and the free thiol groups of the polymer QA-Ch-SH according to a procedure previously reported [55,117]. These derivatives demonstrated stronger mucoadhesivity properties than the corresponding thiolated non protected parent polymers [118].
Regarding ocular delivery, QA-Ch and QA-Ch-S-pro were used to prepare a thermosensitive ophthalmic hydrogel (TSOH) for the transcorneal administration of 5-fluorouracil [66]. The introduction of 5-fluorouracil-medicated NPs based on Ch into the TSOH increased the transcorneal penetration of the drug when administered in rabbit eyes, leading to a constant 5-fluorouracil concentration in the aqueous humour for 7 h after instillation. Subsequently, TSOH containing Ch nanoparticles was compared with TSOH containing QA-Ch nanoparticles in order to study the impact of the nanoparticle surface characteristics on 5-fluorouracil bioavailability [67]. The instillation in rabbit eyes of TSOH containing NPs based on QA-Ch or sulfobutyl chitosan (SB-Ch) led in both cases to a plateau of the drug concentration in the aqueous humour for 10 h. The negative charges on the surface of the SB-Ch-based NPs slowed down the release of 5-fluorouracil from the TSOH, while the positive charges of the QA-Ch-based NPs increased NP contact with the negatively charged ocular surface; both effects resulted in a higher ocular bioavailability [67].
Recently, QA-Ch- and QA-Ch-S-pro-based NPs were prepared to study the effect of mucoadhesion on the oral bioavailability of a model protein drug (FD4) [55]. In vivo data obtained in rats for the FD4 plasma concentration vs. time profile showed that the bioavailability of FD4 was higher when it was administered via QA-Ch-S-pro-based NPs than via QA-Ch-based NPs. Moreover, the peak time moved from 1 to 2 h. The bioavailability increase was ascribed to an increased NP residence time at the absorption site, associated with an increase in NP adhesion to the mucus lining of the intestinal epithelium [55].
It is known that the poor solubility of drugs in physiological fluids may be responsible for their poor absorption and, therefore, for an insufficient drug bioavailability. For this reason, the solubilization ability of methyl-β-cyclodextrin (MCD) and the mucoadhesive properties of QA-Ch were merged into a QA-Ch-MCD derivative. The macromolecular product (QA-Ch-MCD) and the relevant nanoparticulate carrier (NPs) were thoroughly characterized and compared in terms of their ability to promote the absorption of the poorly soluble model drug dexamethasone (DEX) [65]. In vitro and ex vivo studies revealed a stronger mucoadhesivity of the macromolecular complex, resulting in a more difficult transport through mucus with respect to the NPs. Drug permeation through excised rat intestine was faster when the macromolecular complex was used as the carrier, while the permeation rates of the fluorescein isothiocyanate (FITC)-labelled carriers were comparable. Thus, the use of NPs did not seem to provide any decisive advantage over the simpler macromolecular complex [56]. The ability of the QA-Ch-MCD conjugate to bind the peptide dalargin (DAL), in comparison with that of the physical mixture of QA-Ch and MCD, was also investigated. The data showed a greater ability of QA-Ch-MCD to protect DAL from degradation by α-chymotrypsin compared to the physical mixture of the precursors. This ability can be attributed to a synergistic cooperation of the cyclodextrin and the polymer, which occurs only when the former is covalently linked to the latter [57].
Many agri-food extracts are important sources of polyphenols, molecules of high interest thanks to their wide variety of biological activities [119]. However, low bioavailability is the major problem in the therapeutic use of antioxidants from agri-food extracts. Their poor intestinal absorption, along with oxidation in the gastrointestinal tract and marked hepatic metabolism, makes it unlikely that high concentrations of these antioxidants persist in the organism after ingestion and reach the blood, their site of action [56]. For this reason, QA-Ch and QA-Ch-SH were used to prepare nanoparticles that were tested for their ability to enhance the oral absorption of grape seed extract [59,60].
The uptake by endothelial progenitor cells (EPC) of grape seed extract loaded in FITC-labelled NPs based on QA-Ch or QA-Ch-SH was studied upon incubation. Both NP types were partially internalized by the cells, with QA-Ch-based NPs seemingly taken up to a higher extent than QA-Ch-SH-based NPs. This difference can reasonably be correlated with the stronger positive surface charge of the former, as indicated by zeta-potential measurements. Moreover, it was found, as shown in Figure 2I, that following incubation of the NP dispersions with the mucosa of excised rat intestine, the NPs migrated from the donor to the acceptor compartment and penetrated across the intestinal wall in an integral state [61]. Recently, an analogous study was carried out with autochthonous cherry extracts from the Tuscany region. In this case, the extracts were encapsulated in NPs based on two different polymer types, namely QA-Ch and QA-Ch-S-pro. The data obtained showed that the cherry extracts encapsulated in nanoparticles are much more stable than the non-encapsulated ones and are not degraded in the gastric environment [62]. NPs from both types of chitosan derivatives promoted the absorption of the cherry extracts, with no significant difference between the two nanoparticle types. The same nanoparticles were also compared to those prepared from poly(lactic-co-glycolic acid) (PLGA) [63]. All nanoparticle types were able to promote the permeability of the encapsulated extract and showed good anti-inflammatory activity. However, as shown in Figure 2II, the NPs prepared from chitosan derivatives were internalized by endothelial cells to a greater extent than the PLGA nanoparticles, due to the positive charges present on the surface of the former. Interestingly, NPs prepared from QA-Ch-S-pro were more effective in improving the ability of the cherry extract to protect endothelial cells from oxidative stress, thanks to the intrinsic antioxidant properties of the protected thiol groups present on the polymer [64].
In Vitro Studies to Characterize Drug Delivery Systems Based on Quaternary Ammonium Chitosan Derivatives
In vitro studies are commonly used to establish the efficacy of a drug delivery system and predict its in vivo behaviour. Jug et al. [120] reported that the future development of novel mucosal drug delivery systems can be facilitated by using in vitro-in vivo correlation (IVIVC) mathematical models, which can predict the in vivo performance of a drug starting from the in vitro model used to study drug release. Different in vitro models have been developed to study the effectiveness of a specific delivery system; these include cellular studies, drug release studies and the evaluation of positive biological effects. The choice of one in vitro model rather than another mainly depends on the absorption route of the drug contained in the system, and hence on the absorption site. In the next sections, the in vitro studies that have been used to characterize drug delivery systems based on quaternary ammonium chitosan derivatives are reviewed.
Release Studies
Although in vitro drug release tests were initially developed for solid oral dosage forms and are reported by all the Pharmacopoeias, many modifications have been proposed in order to study drug release from innovative delivery systems [120]. To evaluate the in vitro release of insulin from HACC microparticles coated with Eudragit L100-55, Sonia et al. [121] suspended the formulations in buffer solutions (pH 1.2 or 7.4) for 6 h, after which the insulin content was quantified by the Lowry protein assay. Due to the hydrophilic nature of HACC, at pH 7.4 the microparticles showed a burst release of insulin followed by a slow and sustained release.
On the other hand, the amount of insulin released at gastric pH was rather low, thanks to the gastro-retentive coating. Similarly, Xu et al. [122] studied the release profile of the model protein drug bovine serum albumin (BSA) from HACC NPs. BSA-loaded HACC NPs were placed in test tubes with 6 mL of 0.9% (w/v) sodium chloride saline and incubated at 37 °C under stirring. After collection of samples at different times, the amount of BSA released from the nanoparticles was evaluated by the Coomassie Blue protein assay. The in vitro release profile showed four distinct phases: (1) an initial burst desorption of BSA from the surface, (2) a 12-h BSA re-adsorption onto the nanoparticle surface, (3) a plateau phase over the subsequent 3 days, resulting from diffusion of the drug dispersed in the polymer matrix, and finally (4) a constant sustained release of the drug, resulting from both protein diffusion through the polymer and polymer erosion. This profile described a slow and continuous release of BSA and confirmed the potential of HACC NPs to control protein release.
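Release profiles such as these are often summarized by fitting semi-empirical models. The following minimal R sketch, using toy values (not the data of [122]), fits the early portion of a release curve (below about 60% released) to the Korsmeyer-Peppas power law; it is illustrative only and not the analysis performed by the cited authors.

```r
# Korsmeyer-Peppas power-law model: Mt/Minf = k * t^n (toy data, illustrative)
t    <- c(0.5, 1, 2, 4, 8, 12)                  # sampling times, h
frac <- c(0.18, 0.24, 0.31, 0.40, 0.52, 0.60)   # cumulative fraction released

# Non-linear least-squares fit of k and n
fit <- nls(frac ~ k * t^n, start = list(k = 0.2, n = 0.4))
summary(fit)
# For spherical particles, an exponent n below ~0.45 is commonly read as
# Fickian diffusion-controlled release; higher n suggests anomalous transport.
```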
In the perspective of a potential mucosal vaccine, Zao et al. [123] designed HACC and N,O-carboxymethyl chitosan (CMC) nanoparticles loaded with Newcastle disease virus fusion gene plasmid DNA with the C3d6 molecular adjuvant. In vitro release studies of the plasmid DNA showed an initial burst effect followed by a prolonged and sustained release, indicating that the HACC-CMC nanoparticulate system is a suitable carrier for the delivery of plasmid DNA via the nasal route [123].
A noteworthy study reported the release of diclofenac sodium (DC) from TMC nanoparticles for ocular delivery. The effect of the pH of the NP reconstitution buffer was the focus of this in vitro experiment. Three phosphate buffer solutions at different pH values (5.5, 6.5 and 7.4) were used separately to reconstitute the NP dispersions from the lyophilized products. The receptor compartment, containing phosphate buffer solution at pH 7.4, was separated from the donor compartment by a cellulose dialysis membrane. Eight hours after the start of the experiment, the NP dispersion reconstituted with the pH 5.5 phosphate buffer showed a constant release pattern, avoiding the rapid DC dissolution that occurred with reconstitution at pH 6.5 or 7.4. Because the pKa of DC is close to 5.5, the pH 5.5 phosphate buffer could be considered preferable as the reconstitution solution for TMC nanoparticles [124].
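The pH argument can be made quantitative with the Henderson-Hasselbalch relation. The following R sketch, taking pKa = 5.5 as stated above (an assumption adopted from the text, not a measured value), estimates the ionized, more readily dissolving fraction of a weak acid at the three buffer pH values; it is an illustration, not a calculation from the cited study.

```r
# Fraction of a weak acid in the ionized form at a given pH
# (Henderson-Hasselbalch; pKa = 5.5 taken from the text, illustrative)
ionized_fraction <- function(pH, pKa) 1 / (1 + 10^(pKa - pH))
sapply(c(5.5, 6.5, 7.4), ionized_fraction, pKa = 5.5)
# ~0.50, ~0.91, ~0.99: the lower the buffer pH, the smaller the ionized
# fraction, consistent with the slower dissolution observed at pH 5.5
```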
Meng et al. [34] designed targeted PLGA NPs coated with lactoferrin (Lf)-conjugated TMC and medicated with huperzine A (HupA). These NPs were studied as a nose-to-brain delivery system for Alzheimer's disease therapy. The in vitro study of HupA release was carried out using an NP dispersion in PBS pH 7.4 placed in a cellulose membrane dialysis bag as the donor compartment, immersed in PBS pH 7.4 at 37 °C for 96 h. A prolonged and sustained release of HupA was observed, suggesting that these NPs can represent a model for further investigation in nose-to-brain delivery.
However, it is important to underline that, in the case of NPs, dialysis experiments might not be descriptive of drug release, because the process could be mainly controlled by drug permeation across the dialysis membrane [125]. This view was supported by the results obtained in a subsequent work by our research group, where the release of dexamethasone sodium phosphate or met-enkephalin from two NP types, based on QA-Ch or QA-Ch-SH, was investigated. A dialysis method was used in which equal samples of the NP dispersion under study were dialyzed for different time intervals, at the end of which each sample was ultra-centrifuged and the relative supernatant analyzed for the drug. This procedure allowed the construction of a graph of drug released from the NPs vs. time [54]. All nanoparticle types showed an initial burst release followed by no further drug release from the nanoparticle matrix over 24 h. These results were in agreement with those found by Uccello-Barretta et al. [116]. Similar results were obtained using nanoparticles based on QA-Ch-S-pro loaded with FD4 [104], or based on QA-Ch-MCD [56].
Evaluation of Antibacterial, Antifungal, Antimicrobial and Antioxidant Activity
Quaternary ammonium chitosan derivatives have antioxidant effects due to the presence of the fixed positive charge. The method most frequently used to assess the antioxidant activity of quaternary ammonium derivatives is the 2,2-diphenyl-1-picrylhydrazyl hydrate (DPPH) radical scavenging assay, which is based on the measurement of the scavenging capacity of antioxidants [126]. The DPPH method has undergone various modifications, but the basic approach remains the same: the compound under analysis is mixed with a DPPH solution, and the decrease in absorbance, measured colorimetrically, is directly related to the antioxidant activity. To study the antibacterial and antifungal activity of polymers, in vitro bacterial or fungal cultures are prepared in Petri dishes or 96-well plates, and the results are generally reported as the minimum inhibitory concentration (MIC), minimum bactericidal concentration (MBC) and minimum fungicidal concentration (MFC), respectively. Different HACC derivatives were compared for their antibacterial and antifungal activity through a microdilution broth method and evaluation of the MIC [72]. Since both bacterial and fungal cell membranes are anionic due to the presence of phospholipids, cationic polymers can interact with the negative charges and cause membrane damage, leading to cell death. In vitro studies confirmed this mechanism of action of HACC derivatives against bacteria and fungi.
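As an illustration of how such absorbance readings are reduced to a scavenging percentage, the following R sketch applies the standard DPPH formula, scavenging (%) = (A_control - A_sample)/A_control × 100, to invented absorbance values (not data from the cited works).

```r
# DPPH radical scavenging from absorbance readings (toy values, illustrative)
dpph_scavenging <- function(a_control, a_sample, a_blank = 0) {
  (1 - (a_sample - a_blank) / (a_control - a_blank)) * 100
}
# One control reading and three polymer concentrations
dpph_scavenging(a_control = 0.82, a_sample = c(0.61, 0.44, 0.27))
# Higher % = stronger scavenging; plotting % vs. concentration yields an IC50
```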
Similarly, to evaluate the antimicrobial and antibacterial activity of DEMC, Avadi et al. [105] determined the MIC by a turbidimetric method, which consists of mixing DEMC with E. coli suspensions in test tubes and inspecting them for visible signs of growth or turbidity. Subsequently, the MBC was evaluated by inoculating the organisms that had shown no growth in the MIC test on Eosin-Methylene Blue agar and looking again for signs of growth. The results showed that DEMC has a higher inhibitory effect against E. coli and a higher antibacterial activity than chitosan, thanks to the high charge density through which it interacts with bacteria.
To investigate the antibacterial activity of a TMC NP/chitosan composite sponge against E. coli and S. aureus, Xia et al. [127] employed the transwell method. A bacterial suspension was added to the basolateral chamber of 12-well plates, whereas the materials under test were placed in the apical chamber of a transwell plate for 24 h. Serial dilutions of the bacterial cultures were plated on Lysogeny broth (LB) agar plates and counted. The results showed a more intense antibacterial activity of TMC NP/chitosan with respect to chitosan alone. This was ascribed to the presence of several quaternary ammonium groups on the TMC NP surface, which enhanced the electrostatic interaction with the anionic charges of bacterial components.
TMC/poly(vinyl alcohol) (PVA) hydrogels cross-linked with glutaraldehyde were studied in vitro for biodegradation, uptake and swelling ability and antimicrobial activity. The hydrogel samples were immersed in a simulated body fluid solution at pH 7.4, in order to study biodegradation and swelling ability at different times, up to 192 h. The antimicrobial test for Gram-positive and Gram-negative bacteria and fungi was carried out using the agar well diffusion method by measuring the inhibition zones against the test organisms. Moreover, MIC was determined by micro-dilution method in 96-well plates. The results showed that PVA increased hydrogel swelling ability and that the cross-linking with the highest percentage of glutaraldehyde reduced hydrogel biodegradation and enhanced hydrogel antimicrobial activity [92].
Li et al. [128] compared different quaternary ammonium derivatives containing pyridine or amino-pyridine and demonstrated that the position of the amino group on the pyridine ring can influence their antioxidant properties. In fact, an amino group at the C-3 position of the pyridine ring plays an important role in the scavenging activity against hydroxyl and DPPH radicals. This group acts as an electron donor able to quench and stabilize reactive free radicals, thus strongly influencing the antioxidant activity.
Antimicrobial and bactericidal activities of N-quaternary ammonium-O-sulfobetaine-chitosan cotton fabrics against the Gram-negative bacterium E. coli, the Gram-positive bacterium S. aureus and the fungus C. albicans were evaluated by Zhang et al. [129] using the viable cell count quantitative method. The results show that the cotton fabrics have a stronger antimicrobial activity against S. aureus than against E. coli. In fact, the synergistic effect of the quaternary ammonium chitosan and sulfobetaine can generate reactive oxygen species, which are more destructive for S. aureus.
Wei et al. [21] evaluated the antioxidant activity of 6-O-imidazole-based quaternary ammonium chitosan derivatives through three in vitro radical scavenging assays, targeting DPPH, hydroxyl and superoxide radicals. The authors also performed antifungal assays to determine the minimum inhibitory concentration and the mycelium growth inhibition. In all cases, the derivatives showed higher antioxidant activity than chitosan, due to the higher density of positive charge and the strong electron-donor ability of the substituent, which contribute to their bioactivity. All these studies show how important the fixed positive charge is for the antimicrobial activity of the polymers; however, it must also be considered that chitosan is insoluble at the neutral pH of the culture broths and that its lower activity can also depend on this aspect.
Evaluation of the Ability of Nanosystems to Diffuse through Mucus
Different methods have been developed to evaluate nanoparticle transport through mucus [130][131][132][133][134]. The most widely used is multiple particle tracking [135], based on the analysis of the diffusion of fluorescent NPs in mucus, as observed with a microscope equipped with a camera. However, this method does not take into account the water movement through the mucus, which could influence NP diffusion. Inside the intestine there are two different water movements: the first is longitudinal and is responsible for transporting NPs away from the absorptive epithelium; the second is transverse, from the gastrointestinal lumen to the absorptive epithelium and vice versa. Indeed, the fundamental task of the intestine is the absorption of nutrients and salts, which implies water absorption to balance blood osmolarity. The transverse water movement in intestinal mucus is advective, that is, the water and dissolved substances are transported by bulk motion.
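For reference, multiple particle tracking data are typically reduced to time-averaged mean squared displacements (MSD). The following R sketch shows the basic computation on a simulated toy trajectory; it is not code or data from the cited studies.

```r
# Time-averaged MSD of one tracked particle; x, y are positions (µm) in
# consecutive frames (toy trajectory, illustrative)
msd <- function(x, y, max_lag = length(x) - 1) {
  sapply(seq_len(max_lag), function(lag) {
    mean((x[-(1:lag)] - head(x, -lag))^2 + (y[-(1:lag)] - head(y, -lag))^2)
  })
}

set.seed(1)
x <- cumsum(rnorm(100, sd = 0.05))   # simulated random walk, µm
y <- cumsum(rnorm(100, sd = 0.05))
plot(msd(x, y, max_lag = 25), xlab = "lag (frames)", ylab = "MSD (µm^2)")
# Free diffusion gives MSD growing linearly with lag; sub-linear growth
# indicates hindered transport, e.g., particles trapped in the mucus mesh.
```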
Recently, we developed a simple method for studying the ability of nanosystems to penetrate through mucus [136]. Using this method, we succeeded in simulating the advective water movement from the intestinal lumen to the epithelium across the mucus lining. This method has proved more predictive than, for example, the multiple particle tracking method. Indeed, with the latter, the transport of nanoparticles in mucus is observed in the absence of the water movement which, in fact, can be decisive in pushing particles through the mucus to the epithelium, that is, the absorption site. Two types of nanoparticles loaded with fluorescein isothiocyanate-dextran 4 kDa (FD4) were tested with this method: QA-Ch-based NPs and QA-Ch-S-pro-based NPs. The QA-Ch-S-pro NPs proved more mucoadhesive than those prepared from QA-Ch, and hence less able to diffuse in the mucus.
The FD4 plasma concentration-time profiles and the corresponding pharmacokinetic parameters found in vivo in rats showed that QA-Ch-S-pro-based NPs had a significantly higher bioavailability than QA-Ch-based NPs [55]. The results of this work indicate that drugs trapped in the more mucoadhesive NP type have a higher oral bioavailability than those trapped in less mucoadhesive ones. Indeed, mucoadhesivity tends to keep the formulation at the absorption site, while water movement facilitates NP transport across the mucus layer from the lumen to the epithelium, where the NPs can be internalized by cells.
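As a reminder of how such bioavailability comparisons are computed, the following R sketch estimates the area under the plasma concentration-time curve (AUC) by the linear trapezoidal rule and derives the relative bioavailability of two formulations; the values are invented for illustration and are not the data of [55].

```r
# Linear trapezoidal AUC from concentration-time data (toy values)
auc_trap <- function(t, conc) sum(diff(t) * (head(conc, -1) + tail(conc, -1)) / 2)

t      <- c(0, 0.5, 1, 2, 4, 6, 8)        # sampling times, h
c_spro <- c(0, 45, 80, 95, 60, 30, 12)    # hypothetical QA-Ch-S-pro NPs, ng/mL
c_qach <- c(0, 60, 70, 55, 32, 15, 6)     # hypothetical QA-Ch NPs, ng/mL

auc_trap(t, c_spro) / auc_trap(t, c_qach) # relative bioavailability > 1
```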
Cell Studies
The biocompatibility assessment of biomedical polymers is essential to ensure the safety of the systems intended for use. The biological evaluation of medical devices, in terms of the procedures to identify and quantify the biological risks associated with the use of biomedical materials, is governed by the International Organization for Standardization standard ISO 10993 in Europe and by the Food and Drug Administration (FDA) blue book memorandum G95-1 in the United States (#95-1, US FDA). The biocompatibility of polymers remains one of the key aspects to be investigated, because it is the major factor limiting their use in the biomedical field. In vitro cell cultures represent a powerful tool for the preliminary screening of biomaterial cytotoxicity. Cell-based assays can provide essential information on the potential effects of chemicals on specific cell properties and a sound basis for further molecular studies [137]. Although chitosan is considered non-toxic and biocompatible, its modifications could make it more or less toxic; hence, the biological effects of chitosan derivatives should be investigated.
Oral/Intestinal
Usually, the Caco-2 cell monolayer is the model most used for studying the ability of mucosal delivery systems to improve intestinal absorption [138]. However, research is increasingly moving toward cell models that better mimic the epithelial/endothelial barriers characterizing a specific mucosal surface. Indeed, innovative cell models, including co-cultures and triple co-cultures (e.g., Caco-2/HT29, Caco-2/Raji-B and Caco-2/HT29/Raji-B), have recently been developed for studying the intestinal permeation and cell uptake of nanoparticles with different physical-chemical characteristics [139].
The triple co-culture model Caco-2/HT29-MTX/Raji B was used by Beconcini et al. [63] to study the oral permeability of QA-Ch and QA-Ch-S-pro nanoparticles loaded with cherry extracts. The model was prepared by a previously described and validated method [140]. Briefly, Caco-2 and HT29-MTX cells were seeded together into a transwell insert, Raji B cells were subsequently added to the basolateral compartment, and the transepithelial electrical resistance (TEER) was monitored starting from the second week of co-culture. It was observed that only the NPs based on QA-Ch-S-pro significantly promoted the permeation of the encapsulated cherry extracts, thanks to a stronger adhesion to the mucus layer with respect to the less mucoadhesive QA-Ch-based NPs.
The in vitro cytotoxicity of TMC- and CMC/TMC-liposomes was investigated through the CCK8 assay using mouse fibroblast cells (L929) and human colorectal adenocarcinoma cells (Caco-2) [42]. After 24 h of liposome incubation with both cell types, a concentration-dependent cell viability was observed. TMC liposomes showed a higher cytotoxicity than CMC/TMC liposomes and the control, due to an electrostatic interaction between the positively charged TMC and the negatively charged cell membranes, which can cause cell rupture. The CMC/TMC liposomes therefore appeared to be the most cytocompatible.
Pulmonary
The new bronchial epithelial cell line VA10 was used for the evaluation of different quaternary ammonium chitosan derivatives, including TMC, as drug permeation enhancers for possible pulmonary applications. In particular, the authors studied the ability of these derivatives to promote the paracellular transport of the macromolecular marker FD4 [88]. Since all the derivatives, and TMC in particular, caused a dose-dependent decrease in TEER and an increase in the paracellular permeability of FD4, they could be used to increase the paracellular permeation of hydrophilic macromolecules such as peptide- and protein-based drugs.
In a study carried out by Felice et al. [23], primary fibroblast viability was tested by the WST-1 assay in the presence of quaternary ammonium-chitosan conjugates and their thiolated derivatives with high or low molecular weight. All chitosan derivatives showed complete biocompatibility at a concentration of 10 µg/mL for 24 h. Wei et al. [21] assessed HaCaT keratinocyte viability by testing 100 µg/mL of the 6-O-imidazole-based quaternary ammonium chitosan derivatives in the MTT assay. The test exhibited cell viability values up to 80%, suggesting a good biocompatibility of the developed system.
HACC modified with nisin was studied for cytotoxicity on mouse NIH-3T3 fibroblasts using the MTT assay [141]. After 24 h and 48 h of cell incubation, the nisin-modified HACC showed a low cytotoxicity, but a reduced cell viability with respect to unmodified HACC, because of the introduction of nisin, which may damage the structure of the glycoproteins on the NIH-3T3 cell membrane. Nevertheless, the results demonstrated that HACC modified with nisin could be a promising candidate for wound dressing applications if used within a certain concentration range.
Ocular
To assess the eye irritation potential of TMC NPs loaded with diclofenac sodium, confluent rabbit corneal cells (SIRC line) were incubated with a solution prepared from the lyophilized TMC NPs (0.5-5%) following the short time exposure (STE) method [124] validated by Takahashi et al. [142]. The treated cells were analysed by the MTT assay, and the resulting score revealed that the TMC NPs were safe.
Meng et al. [34] studied the in vitro viability of 16HBE cells, taken as a model of the nasal mucosa, for a nose-to-brain delivery system for Alzheimer's disease therapy. The MTT assay was carried out for 24 h, and no toxicity of the TMC-conjugated PLGA nanoparticles was observed up to a TMC concentration of 10 mg/mL.
Ex-Vivo Studies
Ex vivo tests, like in vitro tests, are important for the development of a controlled-release bioadhesive system, because they contribute to the study of permeation, compatibility, mechanical and physical stability, the superficial interaction between formulation and mucous membrane, and the strength of the bioadhesive bond [144]. Therefore, different types of ex vivo models have been used to evaluate the mucoadhesive properties of oral, buccal, periodontal, nasal, gastrointestinal, vaginal or rectal delivery systems [145]. However, the most reliable ex vivo methods combine the study of mucoadhesive and permeation properties to gain useful information about the behaviour of drug delivery systems.
In order to evaluate the potential of NPs to facilitate insulin transport, Yin et al. [94] monitored the transport of insulin, either free or contained in TMC or TMC-Cys nanoparticles, through rat ileum. The formulations were incubated with an ileal loop; the loop was subsequently washed with saline, and the remaining insulin concentration was quantified. To evaluate the ability of the nanoparticles to facilitate insulin transport, the formulations were syringed into a rat ileal sac, and the insulin concentration in the incubation buffer (Kreb's-Ringer buffer) was quantified at different time intervals. It was found that TMC-Cys nanoparticles, thanks to the presence of mucoadhesive thiol groups, enhanced insulin permeation more than TMC nanoparticles. This result was confirmed by an in vivo mucoadhesion study.
Another reported method, based on the use of excised rat intestine, was used to study the permeation characteristics of drug delivery systems. Briefly, excised rat intestine, accurately washed, was mounted in an Ussing-type chamber, and the transport of the drug from the apical to the basolateral side was measured at different time intervals [114]. Excised rat intestine was also used to study the mucoadhesive properties of drug delivery systems: it was cut into segments and incubated for 3 h with the formulations to be tested. Using this method, the authors demonstrated that nanoparticles based on QA-Ch-SH were more mucoadhesive than nanoparticles based on QA-Ch, due to the presence of thiol groups on the former [59].
Both these methods were used to demonstrate the ability of mucoadhesive QA-Ch and QA-Ch-S-pro derivatives to improve the intestinal permeation of cherry extracts (CE). The results showed that, although QA-Ch-S-pro nanoparticles were more mucoadhesive than QA-Ch nanoparticles, both nanoparticle types were able to promote the intestinal absorption of CE and protect CE polyphenols from degradation in the stomach, thus increasing their oral bioavailability [62,63].
Concluding Remarks
In this review, the relevance of the fixed positive charge present on quaternary ammonium chitosan derivatives to the preparation of drug release systems was highlighted.
Many innovative systems have been prepared with these polymers, such as thermosensitive hydrogels, polymeric nanoparticles, liposomes, nanocomplexes for the administration of drugs, nutraceutical products or genes through different routes of administration or for local application.
These systems have shown excellent characteristics thanks to the positive fixed charge on the polymer backbone, which resulted in improved mucoadhesiveness, antibacterial and antimicrobial properties, enhancement of drug absorption through both the transcellular and paracellular pathways, and promotion of wound healing compared to unmodified chitosan-based systems. The nanosystems prepared with these polymers have also shown an improved ability to be internalized by a wide variety of cells, thus promoting the absorption of the encapsulated drugs. This effect was related to the positive charge density present on the surface of the nanosystems. Certainly, it must be taken into account that cytotoxicity tests have shown a greater cytotoxicity of the quaternary ammonium derivatives compared to unmodified chitosan; however, these polymers can be used safely within a fairly wide range of concentrations.
"Biology",
"Chemistry"
] |
Catalpol Protects Against Pulmonary Fibrosis Through Inhibiting TGF-β1/Smad3 and Wnt/β-Catenin Signaling Pathways
Idiopathic pulmonary fibrosis (IPF) is a fatal lung disease characterized by fibroblast proliferation and extracellular matrix remodeling; however, the molecular mechanisms underlying its occurrence and development are not yet fully understood. Despite it having a variety of beneficial pharmacological activities, the effects of catalpol (CAT), which is extracted from Rehmannia glutinosa, in IPF are not known. In this study, the differentially expressed genes, proteins, and pathways of IPF in the Gene Expression Omnibus database were analyzed, and CAT was molecularly docked with the corresponding key proteins to screen its pharmacological targets, which were then verified using an animal model. The results show that collagen metabolism imbalance, inflammatory response, and epithelial-mesenchymal transition (EMT) are the core processes in IPF, and the TGF-β1/Smad3 and Wnt/β-catenin pathways are the key signaling pathways for the development of pulmonary fibrosis. Our results also suggest that CAT binds to TGF-βR1, Smad3, Wnt3a, and GSK-3β through hydrogen bonds, van der Waals bonds, and other interactions to downregulate the expression and phosphorylation of Smad3, Wnt3a, GSK-3β, and β-catenin, inhibit the expression of cytokines, and reduce the degree of oxidative stress in lung tissue. Furthermore, CAT can inhibit the EMT process and collagen remodeling by downregulating fibrotic biomarkers and promoting the expression of epithelial cadherin. This study elucidates several key processes and signaling pathways involved in the development of IPF, and suggests the potential value of CAT in the treatment of IPF.
INTRODUCTION
Pulmonary fibrosis (PF), which usually manifests at the end stages of various interstitial lung diseases, is characterized by alveolar epithelial cell damage and abnormal deposition of extracellular matrix (ECM) (Herrera et al., 2018). There are a variety of causes of PF; idiopathic pulmonary fibrosis (IPF) is a form of unexplained and severe PF with a 5-year survival rate of less than 30% (Nalysnyk et al., 2012). Currently, the only drugs recommended for the treatment of mild-to-moderate IPF are pirfenidone (PFD) and nintedanib, both of which, however, fail to prolong the survival of patients (Ogura et al., 2015; Jo et al., 2016). Moreover, PFD has several side effects, such as gastrointestinal reactions, rash, and photosensitivity (Cottin and Maher, 2015). Thus, there is an urgent need for the development of new drugs for IPF.
Bioinformatics utilizes sequence comparison and cluster analysis methods to extract biological information using technologies such as GeneChip, which enables more comprehensive and systematic study of disease pathology. With the development of high-throughput microarray and sequencing technologies in recent years, it has become possible to investigate the gene expression profiles of IPF and the corresponding changes in PF tissue and key genes. Moreover, the differentially expressed gene (DEG) data obtained in this manner has in many cases enabled the successful screening of potential drugs by the docking of small molecular compounds with the corresponding proteins.
Rehmannia glutinosa is a traditional Chinese herb that has been widely used for the treatment of circulatory diseases for thousands of years (Li and Kan, 2017; Xu et al., 2017). Catalpol (CAT), a compound extracted from R. glutinosa, is known to have anti-inflammatory, anti-epithelial-mesenchymal transition (EMT), anti-oxidative, anti-apoptotic, and anti-angiogenic properties, and to have favorable pharmacological effects in patients with asthma (Chen et al., 2017), lung cancer, glomerulonephritis, and colon cancer (Zhu et al., 2017b; Yang et al., 2020). In recent years, studies have focused on the protective effects of R. glutinosa extract, represented by CAT, on nerve cells and kidney cells. Its mechanism of action is to target the SIRT1 and Wnt/β-catenin signaling pathways, stabilize the cytoskeleton, and enhance autophagy (Liu et al., 2019; Zhang et al., 2019; Zhou et al., 2019; Cheng et al., 2020; Wang et al., 2020). However, its effects on IPF remain unknown. In order to investigate the effect of CAT on IPF, we screened the Gene Expression Omnibus (GEO) database for DEGs in IPF and investigated the important signaling pathways and proteins involved in the development of IPF. We also used molecular docking to virtually screen proteins expressed by the DEGs and estimate the theoretical stability of their binding to CAT. Furthermore, we established a rat IPF model to verify the efficacy of CAT against IPF and explore its mechanisms of action.
Data Processing
GEO2R allows users to compare different sample groups in a GEO series in order to screen genes that are differentially expressed under different experimental conditions. GEO2R compares the original submitter-supplied processed data tables using the GEOquery and limma (Linear Models for Microarray Analysis) R packages from the Bioconductor project. The GEOquery R package parses GEO data into R data structures that can be used by other R packages, while the limma R package has emerged as one of the most widely used statistical tools for identifying DEGs. We used the GEO2R online software to analyze the microarray data provided by the original submitters and identify DEGs, with recognition thresholds set to a false discovery rate (FDR) < 0.05 and |log2 fold change (FC)| > 1. The upregulated and downregulated genes were analyzed, and volcano maps of the three datasets were drawn. We then selected the DEGs present in two or three datasets as the total differential genes, which exceeded 600 in number. Venn diagrams were drawn for the upregulated and downregulated genes. Finally, hierarchical cluster analysis was performed on the DEGs, with heat maps drawn for the three chips using the heatmap package.
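A minimal R sketch of this GEO2R-style screen, using GEOquery and limma with the thresholds stated above, is given below. The group assignment is hypothetical and must be replaced with the actual sample annotations of each series; this is an illustration of the approach, not the exact script behind the reported results.

```r
library(GEOquery)  # also attaches Biobase (exprs)
library(limma)

# Download one series as an ExpressionSet and extract the expression matrix
gset <- getGEO("GSE24206", GSEMatrix = TRUE)[[1]]
ex   <- exprs(gset)

# Hypothetical IPF/normal assignment; take the real labels from pData(gset)
group  <- factor(c(rep("IPF", 17), rep("normal", 6)))
design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)

# Linear model, IPF-vs-normal contrast, empirical Bayes moderation
fit  <- lmFit(ex, design)
fit2 <- eBayes(contrasts.fit(fit, makeContrasts(IPF - normal, levels = design)))

# DEGs at FDR < 0.05 and |log2FC| > 1, as in the text
tab  <- topTable(fit2, number = Inf, adjust.method = "fdr")
degs <- subset(tab, adj.P.Val < 0.05 & abs(logFC) > 1)
```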
Gene Ontology and Pathway Enrichment Analysis
DAVID (http://david.abcc.ncifcrf.gov/) is an online gene function annotation tool that provides information regarding the biological significance of a large number of genes (Huang et al., 2007). The GO analysis provided by DAVID for researchers includes the cellular components (CC), molecular functions (MF), and biological processes (BP) categories (Gene Ontology, 2006). We used this database for annotation, data analysis, and visualization. In addition, we performed a Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway function enrichment analysis on the DEGs described above (Kanehisa and Goto, 2000). p < 0.05 was considered statistically significant.
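DAVID is a web-based tool; as an illustrative programmatic equivalent (a swap-in for sketching purposes, not the pipeline actually used here), GO and KEGG enrichment of the DEG list can be performed in R with the Bioconductor package clusterProfiler. `deg_entrez` is assumed to be a character vector of Entrez gene IDs derived from the DEGs.

```r
library(clusterProfiler)
library(org.Hs.eg.db)

# GO enrichment over biological processes (BP); CC/MF analogously
ego <- enrichGO(gene = deg_entrez, OrgDb = org.Hs.eg.db,
                ont = "BP", pvalueCutoff = 0.05, readable = TRUE)

# KEGG pathway enrichment for human ("hsa")
ekegg <- enrichKEGG(gene = deg_entrez, organism = "hsa",
                    pvalueCutoff = 0.05)

head(ego); head(ekegg)   # top enriched terms/pathways at p < 0.05
```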
Molecular Docking
Based on the differential expression results of the above genes and proteins and on the pathway enrichment analysis, core proteins were selected for molecular docking with CAT. CAT molecular structure data were obtained from the PubChem website (https://pubchem.ncbi.nlm.nih.gov/) (Kim et al., 2019) and protein crystal structure data from the RCSB website (http://www.rcsb.org/) (Berman et al., 2003). We used Discovery Studio 2016 (DS) for creating the 2D and 3D effect pictures and for the molecular docking calculations. DS was used to extract the ligands from the protein crystal structures. After exposing the active sites and excluding crystallized water, hydrogens, and side-chain residues, the CHARMm force field and Momany-Rone charge were added. The docking file of the active region was then obtained using default parameters. Docking files for crystal structures without a ligand were automatically generated using default parameters. Using the DS CDOCKER module, the possible conformations, interaction energies, and main action sites of CAT docking to the target proteins were obtained. The molecular docking results can indicate the mechanism of action of CAT on IPF and guide the selection of related detection indicators in subsequent animal experiments.
Reagents
Rabbit anti-Wnt3a antibody (BS-1700r), rabbit anti-phosphorylated-Smad3 antibody (bs-19452r), and rabbit anti-Smad3 antibody (BS-3484r) were purchased from Biosynthesis Biotechnology Inc., Beijing, China. Rabbit anti-α-SMA antibody (GB11044), rabbit anti-matrix metalloprotease (MMP)-7 antibody (A0695), rabbit anti-COL1A1 antibody (GB11022-3), rabbit anti-COL3A1 antibody (GB13023-2), rabbit anti-β-catenin antibody (GB12015), rabbit anti-GSK-3β antibody (GB11099), rabbit anti-E-cadherin (E-cad) antibody (GB13083), and HRP-labeled goat anti-rabbit IgG antibodies (GB23303) were purchased from Wuhan Servicebio Technology Co., Ltd. The rat TGF-β1 enzyme-linked immunosorbent assay (ELISA) kit (E04019240) was purchased from Cusabio Biotech Co., Ltd., Wuhan. Rabbit anti-β-actin antibody (AC026), rabbit anti-phosphorylated-GSK3β antibody (AP0039), and rabbit anti-phosphorylated-β-catenin antibody (AP0979) were obtained from ABclonal Biotechnology Co., Ltd., Wuhan. Rat IL-6 (CSB-E04640r), IL-1β (CSB-E08055r), and TNF-α (CSB-E11987r) ELISA kits were purchased from Wuhan Huamei Bioengineering Co., Ltd.
Animal Grouping and Modeling
The study protocol was approved by the Research Ethics Committee of the Affiliated Hospital of Shandong University of Traditional Chinese Medicine (Approval No. AWE-2019-046) and followed the National Institutes of Health Guide for the Care and Use of Laboratory Animals (NIH Publications No. 8023, revised 1978). Male Sprague Dawley rats (180-220 g, SPF grade) purchased from Jinan Pengyue Experimental Animal Breeding Co., Ltd. (Certificate No. SCXK [Lu]2014-0007, Jinan, China) were maintained under 12-h light/12-h darkness conditions with free access to feed and water. After 7 days of adaptive breeding, the rats were randomly divided into six groups (6 rats in each group): 1) saline (NS) group; 2) BLM + NS group; 3) BLM + CAT (10 mg/kg/d) group; 4) BLM + CAT (20 mg/kg/d) group; 5) BLM + CAT (40 mg/kg/d) group; and 6) BLM + PFD (150 mg/kg/d) group. A single intratracheal instillation of bleomycin (BLM, 5 mg/kg) was used to induce PF in the rats. After the modeling, rats in the CAT groups were injected intraperitoneally with the corresponding doses of the drug, while rats in the PFD group were administered PFD intragastrically; all animals were sacrificed 28 days later. Rat blood from the abdominal aorta was centrifuged at 5,000 rpm for 10 min at 4°C, and the serum was stored at -80°C. Lung tissues were also collected and weighed. The lung index was calculated as lung weight (g)/body weight (g) × 100%. The whole lung was lavaged three times using 2 ml of physiological saline, and the bronchoalveolar lavage fluid (BALF) was then collected. Some of the lung tissue was placed in 4% paraformaldehyde, with the rest frozen in liquid nitrogen and stored at −80°C for further examination.
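As a trivial worked example of the lung index formula defined above (weights are invented, not measured data):

```r
# Lung index = lung weight (g) / body weight (g) * 100%
lung_index <- function(lung_g, body_g) lung_g / body_g * 100

lung_index(lung_g = c(1.42, 1.98), body_g = c(305, 287))
# Higher values indicate oedema or fibrotic thickening relative to body weight
```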
Morphological and Histological Analysis
Lung tissues fixed using 4% paraformaldehyde for 48 h were embedded in paraffin and sectioned (thickness, 5 μm). The slices were stained with HE and Masson trichrome for the evaluation of pathological changes in the lung tissue and then imaged at a magnification of ×200 using an optical microscope. The degrees of alveolitis and PF were scored according to the Szapiel and Ashcroft scoring standards, respectively (Szapiel et al., 1979; Ashcroft et al., 1988).
Measurement of HYP, MDA, and ROS Levels and SOD, ALT, and AST Activity
Lung tissues were ground in cold physiological saline to obtain a 10% homogenate, which was centrifuged at 3,500 rpm for 10 min at 4°C, and the supernatant was retained for assessing HYP, MDA, and ROS levels and SOD activity. Serum samples were used to detect ALT and AST activity according to the corresponding kit instructions.
Statistical Analysis
Data are represented as mean ± standard deviation (SD). Differences between the groups were evaluated using one-way analysis of variance followed by the least significant difference (LSD) post hoc test. A value of p < 0.05 was considered statistically significant. IBM SPSS Statistics 19.0 (IBM SPSS Software, NY, United States) and GraphPad Prism Version 8.0 (GraphPad Software, San Diego, CA, United States) were used for statistical analyses and figure preparation.

FIGURE 1 | In this study, we screened the GEO database for DEGs in IPF to find the key signaling pathways and proteins in the occurrence and development of this disease. Next, molecular docking was used to virtually screen the DEG proteins and estimate the theoretical stability of their binding to CAT. Furthermore, a rat IPF model was established to verify the effect of CAT on the key signaling pathways in IPF and to explore its mechanism of action (A). The chemical structure of CAT (B). Upregulated genes in the datasets (C). Downregulated genes in the datasets (D). (E-G) Volcano maps and heat maps of the highly and lowly expressed genes in the datasets GSE24206, GSE10667, and GSE53845. In the volcano maps, green represents downregulated and red upregulated genes; in the heat maps, blue represents downregulated and red upregulated genes between IPF and normal samples. p < 0.01 and FC > 1 were considered as cut-off values.
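The statistical procedure described in the Statistical Analysis section (one-way ANOVA followed by an LSD post hoc test) can be reproduced in a few lines. The sketch below uses SciPy, with hypothetical group measurements standing in for the experimental data, and implements LSD as pairwise t-tests against the pooled within-group variance:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements (e.g., HYP levels) for the six groups; n = 6 each.
rng = np.random.default_rng(0)
groups = {name: rng.normal(loc, 1.0, 6).tolist()
          for name, loc in [("NS", 5), ("BLM", 9), ("CAT10", 8),
                            ("CAT20", 7), ("CAT40", 6), ("PFD", 6)]}

# Omnibus one-way ANOVA.
f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

# Pooled within-group variance (mean square error) for the LSD test.
data = list(groups.values())
df_error = sum(len(g) for g in data) - len(data)
mse = sum((len(g) - 1) * np.var(g, ddof=1) for g in data) / df_error

# LSD post hoc: pairwise t-tests using the pooled error term.
names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        a, b = groups[names[i]], groups[names[j]]
        t = (np.mean(a) - np.mean(b)) / np.sqrt(mse * (1/len(a) + 1/len(b)))
        p = 2 * stats.t.sf(abs(t), df_error)
        print(f"{names[i]} vs {names[j]}: p = {p:.4f}{' *' if p < 0.05 else ''}")
```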
Analysis of DEGs in IPF
The study flow chart is shown in Figure 1A. A total of 517 genes were upregulated in the three data sets during the IPF process, while 179 genes were downregulated. The numbers of upregulated and downregulated genes are shown in Figures 1C,D. The volcano map and heat map of the 141 highly expressed and 25 lowly expressed genes in GSE24206 are shown in Figure 1E. Among all up- or downregulated genes, the top five with the most significant changes in differential expression were ZMAT, CRIP, PSD, and FNDC1 (upregulated) and BTNL8 (downregulated). Their functions are mainly related to cell cycle regulation and EMT. For example, ZMAT is a p53 target gene that regulates the cell cycle and apoptosis (Bersani et al., 2014). FNDC1 is ubiquitous in the cell matrix, and related membrane receptors and enzymes can mediate Cx43 phosphorylation and G protein signal transduction to regulate cell permeability and apoptosis (Sato et al., 2009). GSK-3β is an important signal transduction molecule in the Wnt/β-catenin signaling pathway, and CRIP1 promotes EMT through zinc-induced p-GSK-3β in colorectal cancer. The downregulation of BTNL8 expression is related to excessive inflammation and destruction of epithelial tissue integrity (Mayassi et al., 2019). The DEGs in GSE10667 and GSE53845 are shown in Figures 1F,G and Supplementary Figures S1-S6.
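A minimal sketch of the DEG-screening logic follows, assuming each GEO dataset has already been reduced to a per-gene table with hypothetical columns "gene", "log2FC", and "pval"; the paper's cut-offs (p < 0.01 and FC > 1) are applied on the log2 scale here as an assumption:

```python
import pandas as pd
from functools import reduce

# Hypothetical per-dataset differential-expression tables (e.g., limma output
# for GSE24206, GSE10667, and GSE53845), reduced to gene / log2FC / p-value.
def demo(rows):
    return pd.DataFrame(rows, columns=["gene", "log2FC", "pval"])

tables = [
    demo([("FNDC1", 2.8, 1e-5), ("CRIP1", 2.1, 4e-4), ("BTNL8", -1.9, 2e-4),
          ("ACTB", 0.1, 0.7)]),
    demo([("FNDC1", 2.2, 3e-4), ("CRIP1", 1.7, 8e-4), ("BTNL8", -2.3, 5e-5),
          ("GAPDH", -0.2, 0.5)]),
    demo([("FNDC1", 3.0, 2e-6), ("CRIP1", 1.5, 6e-3), ("BTNL8", -1.4, 9e-3)]),
]

def significant(df, p_cut=0.01, fc_cut=1.0):
    """Apply the paper's cut-offs: p < 0.01 and |FC| > 1 (log2 scale assumed)."""
    hit = df[(df["pval"] < p_cut) & (df["log2FC"].abs() > fc_cut)]
    return (set(hit.loc[hit["log2FC"] > 0, "gene"]),
            set(hit.loc[hit["log2FC"] < 0, "gene"]))

ups, downs = zip(*(significant(t) for t in tables))
# Genes consistently up- or downregulated across all three datasets.
common_up = reduce(set.intersection, ups)
common_down = reduce(set.intersection, downs)
print(sorted(common_up), sorted(common_down))  # ['CRIP1', 'FNDC1'] ['BTNL8']
```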
GO Function Enrichment Analysis, KEGG Pathway Analysis, and PPI Network Analysis
GO analysis confirmed that the DEGs in IPF mainly code for proteins involved in the ECM environment and the collagen metabolism process (Figures 2A,B), which was also evidenced by the CC and MF enrichment analyses. The BP analysis suggested that the humoral immune response of IPF patients is unbalanced and is characterized by high expression of inflammatory mediators. Correspondingly, KEGG analysis (Figures 2C,D) showed high confidence in the protein degradation and synthesis processes and the interaction between cytokines and receptors. It has been reported that the Wnt signaling pathway is activated in IPF, and its regulation of EMT is an important biological process in this disease. The TGF-β signaling pathway is a key pathway through which M2 macrophages induce EMT (Zhu et al., 2017a; Ko et al., 2019) and can also promote the development of IPF by altering the 3′-UTR of target mRNAs.
The key signal transduction molecules in the two signaling pathways are shown in Figures 2E,F. An IPF-related PPI network (Figure 3A) was constructed based on the STRING database. The ranking of the top 30 key proteins is shown in Figure 3E and Supplementary Table S1. The top five hub proteins were IL-6, COL1A1, CXCL12, COL1A2, and IGF1 (PPI enrichment p-value < 1.0e-16). MCODE module analysis (Figures 3B-D) showed that chemokine signal transduction, cell cycle and proteasome, and collagen and vascular remodeling were the main functions of the three important core modules, and also indicated that the expression of the MMP family was upregulated.
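The hub-protein ranking behind Figure 3E can be approximated from a STRING-style edge list; in this sketch the toy edges are assumptions, and simple degree centrality stands in for the connectivity ranking used in the paper:

```python
import networkx as nx

# Toy PPI edge list (protein pairs with STRING-style combined scores, 0-1000);
# a real analysis would load the STRING export for the DEGs instead.
edges = [
    ("IL6", "CXCL12", 900), ("IL6", "IGF1", 870), ("IL6", "COL1A1", 820),
    ("COL1A1", "COL1A2", 990), ("COL1A1", "MMP7", 760), ("CXCL12", "IGF1", 700),
    ("COL1A2", "MMP7", 650), ("IL6", "MMP7", 640),
]

G = nx.Graph()
for a, b, score in edges:
    if score >= 400:  # keep medium-confidence interactions and above
        G.add_edge(a, b, weight=score)

# Rank hubs by degree, a simple stand-in for the connectivity ranking behind
# Figure 3E (the paper's top five: IL-6, COL1A1, CXCL12, COL1A2, IGF1).
for protein, degree in sorted(G.degree, key=lambda kv: kv[1], reverse=True):
    print(protein, degree)
```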
CAT Improves BLM-Induced PF
The 2D structure of CAT (PubChem CID: 91520, CAS No. 2415-24-9) is shown in Figure 1B. PF was successfully induced in rats by intratracheal instillation of BLM (5 mg/kg). All rats survived until the samples were collected. In the three days after modeling, the weight of all rats decreased, after which it gradually increased. Compared with the NS group, all rats administered BLM showed different degrees of wheezing, coughing, and bradykinesia. HE staining confirmed that the structure of rat lung tissues in the BLM group was disordered, with alveolar wall thickening, infiltration of a large number of inflammatory cells into the alveolar and interstitial spaces, and the disappearance of some alveoli, and Masson trichrome staining revealed extensive collagen deposition (Figures 4A-D). However, CAT significantly improved the lung tissue structural damage caused by BLM. The protective effect of the 40 mg/kg dose was better than that of 150 mg/kg PFD, and the lung index was significantly reduced in a dose-dependent manner (Figure 4E). HYP is the main component of collagen, and TGF-β1 can induce fibroblasts to synthesize large amounts of collagen. After treatment with CAT at doses of 10-40 mg/kg, HYP levels in the lung tissues of rats with lung fibrosis and TGF-β1 levels in the serum were reduced compared to those in the BLM group (Figures 4F,G). There was no significant difference between the 40 mg/kg CAT and 150 mg/kg PFD groups, and no impairment of liver function was observed (Figures 4H,I). E-cad and α-SMA are considered biomarkers of epithelial cells and myofibroblasts, respectively. During EMT, the expression of E-cad decreases, while the expression of α-SMA increases significantly. In addition, the EMT process in lung tissues is accompanied by collagen deposition and the activation of MMPs in the ECM. As shown in Figures 5A-D, the WB results demonstrated increased expression of α-SMA, COL1A1, and COL3A1 in the BLM group, which was also evidenced by immunohistochemistry (Figure 5E). However, CAT dramatically reversed the upregulation of these proteins and attenuated both the upregulation of MMP-7 and the downregulation of E-cad (Supplementary Figure S7), with a better effect than that of 150 mg/kg PFD. These results suggest that CAT may alleviate PF by inhibiting the EMT process to reduce ECM deposition.
CAT Reduces Oxidative Stress and Inflammation in Lung Tissue of Rats With PF
TNF-α induces activation of the NF-κB signaling pathway and exacerbates PF caused by BLM (Hou et al., 2018). To verify the anti-inflammatory and antioxidant effects of CAT, we used ELISA to detect the relevant biomarkers. As expected, 28 days after successful modeling, the levels of inflammatory mediators and oxidative stress markers in the lung tissue of rats in the BLM group were increased, while CAT reduced the levels of IL-1β (Figure 7A), TNF-α (Figure 7B), IL-6 (Figure 7C), MDA (Figure 7D), and ROS (Figure 7E) and increased the activity of SOD (Figure 7F) in the lung tissues of IPF rats. Both the 40 mg/kg CAT group and the PFD group differed significantly from the BLM group. In summary, CAT can downregulate the expression of cytokines in lung tissues and reduce the level of oxidative stress, thus alleviating BLM-induced PF in rats.
DISCUSSION
IPF is an irreversible, progressive, and fatal lung disease with a poor prognosis that often worsens during acute exacerbations (Border and Noble, 1994; Lopez et al., 2009). Since the molecular mechanisms underlying the occurrence and development of PF are not fully understood, the treatments approved so far are limited to those effective against mild-to-moderate IPF (Mora et al., 2017). The pathogenesis of IPF involves a variety of processes, such as inflammation, oxidative stress, and fibrosis. In this study, we discovered that 141 upregulated genes and 25 downregulated genes were common between the three IPF data sets analyzed. GO analysis of the DEGs revealed that they were mainly involved in humoral immunity, collagen metabolism, epithelial cell proliferation, mesenchymal development, and cell matrix adhesion, which is consistent with the strong inflammatory response and the imbalance of collagen metabolism observed in IPF (Xin et al., 2019). KEGG pathway enrichment analysis showed significant enrichment in protein degradation and absorption, cytokine-cytokine receptor interaction, and ECM-receptor interaction, all of which are areas most current studies on IPF are focused on (Du et al., 2019). Although hypoxia is a significant pathological feature of PF, and the nuclear HIF-1α protein is involved in hypoxia-induced EMT (Senavirathna et al., 2018), several key protein targets for cellular hypoxia as well as oxidative stress did not appear in the PPI results, and these processes received correspondingly little support in the GO and KEGG enrichment analyses. Although the reasons for these differences are not clear, we suggest that researchers re-examine the importance of the known key signaling pathways and biological processes in the development of IPF.

FIGURE 5 | Immunohistochemical staining of α-SMA, COL1A1, COL3A1, MMP-7, and E-cad positive cells in the lungs. Data are presented as means ± SD (n = 3); # represents a comparison with the control group, * represents a comparison with the BLM group. #p < 0.01; *p < 0.05; **p < 0.01.
Traditional Chinese medicine is a potential source of treatments for PF (Li and Kan, 2017). We performed molecular docking of CAT, the active constituent of Rehmannia glutinosa, with several key protein targets suggested by the PPI and KEGG enrichment analyses and found that CAT binds strongly to several components of the TGF-β1/Smad3 and Wnt/β-catenin pathways. TGF-β1 is a key protein among the many factors and cytokines that regulate PF (Border and Noble, 1994). It can induce alveolar epithelial cells to acquire the phenotype of mesenchymal cells, which become the main source of fibroblasts and myofibroblasts and lead to ECM deposition (Song et al., 2013). This process, a typical EMT, is a very important mechanism of PF (Piera-Velazquez et al., 2011) and had a high score in the GO and KEGG pathway enrichment analyses. Specifically, TGF-β1 interacts with fibroblast surface receptors and phosphorylates Smad proteins, which form Smad3/4 complexes. These enter the nucleus and bind to the promoter regions of fibrogenic genes, such as those for type I collagen, fibronectin, and α-SMA, to activate downstream target gene transcription (Santibanez et al., 2011). However, although TGF-βR1-mediated receptor-activated Smad proteins are most closely related with the occurrence of PF (Ask et al., 2008) and can participate in the regulation of EMT through multiple pathways, the Wnt/β-catenin signaling pathway is even more critical for mediating EMT. It is a key pathway regulating cell proliferation and differentiation, with β-catenin as its major signal transduction molecule. Trauma and other stimuli activate the expression and secretion of the Wnt protein, which leads to the inhibition of β-catenin phosphorylation (Li et al., 1999). Once a certain amount of free β-catenin accumulates, it enters the nucleus and activates target gene transcription (Nusse and Clevers, 2017). The epithelial phenotypic marker E-cad is downregulated and the myofibroblast markers α-SMA and type I collagen are upregulated in PF (Qu et al., 2015; Lacy et al., 2018), and high expression of α-SMA is associated with a lower survival rate in patients with PF. As target genes of β-catenin activation, MMPs accelerate the degradation of the ECM; indeed, MMPs are the main rate-limiting enzymes that regulate ECM metabolism. MMP-7, which is able to activate other proteases while degrading ECM components such as cell-associated Fas ligands and E-cad, plays a key role in regulating various cell processes such as matrix remodeling, apoptosis, and EMT (Zhou et al., 2017). The Wnt/β-catenin signaling pathway is a key regulator of MMP-7 in vivo: the activation of β-catenin promotes the expression of MMP-7 (He et al., 2012; Zuo and Liu, 2018). Thus, the Wnt/β-catenin pathway plays a significant role in the process of PF. In recent years, several studies have shown that there is crosstalk between the TGF-β/Smad3 and Wnt/β-catenin signaling pathways. Axin and GSK-3β in the Wnt/β-catenin pathway can affect TGF-β signaling by controlling the stability of Smad3 (Guo et al., 2008), while Smad3-mediated regulation enhances the stability of β-catenin and promotes the activation of downstream target genes (Zhang et al., 2010). In addition, p-β-catenin/p-Smad2 complexes have been found in the lung tissues of patients with PF (Kim et al., 2009), while Wnt3a/β-catenin/GSK-3β were mainly localized in alveolar and bronchial epithelial cells (Konigshoff et al., 2008). Therefore, targeting the TGF-β1/Smad3 and Wnt/β-catenin signaling pathways is an effective strategy for regulating EMT and inhibiting the progression of PF. In vivo experiments showed that the TGF-β/Smad3 and Wnt/β-catenin pathways were activated in a rat model of PF (Henderson et al., 2010; Kim et al., 2011), consistent with previous studies.

FIGURE 6 | CAT alleviates PF by inhibiting the TGF-β1/Smad3 and Wnt/β-catenin signaling pathways. The docking conformation and main active site of CAT and Wnt3a (A), Smad3 (B), GSK-3β (C), and TGF-βR1 (D). (E) WB analysis of the protein levels of Wnt3a, p-β-catenin, β-catenin, p-GSK-3β, GSK-3β, p-Smad3, and Smad3 in lung tissues. (F) Immunohistochemical staining of Wnt3a, Smad3, GSK-3β, and β-catenin positive cells in the lungs. Data are presented as means ± SD (n = 3); # represents a comparison with the control group, * represents a comparison with the BLM group. #p < 0.01; *p < 0.05; **p < 0.01.
Molecular docking, WB, and immunohistochemistry experiments confirmed that CAT can bind to Wnt3a, Smad3, GSK-3β, and TGF-βR1. Furthermore, by inhibiting the activation of the TGF-β/Smad3 and Wnt/β-catenin pathways, CAT reduced the levels of the key signal transduction molecules Smad3, Wnt3a, GSK-3β, and β-catenin, reduced the expression of α-SMA, COL1A1, COL3A1, and MMP-7, effectively inhibited EMT, ECM deposition, and lung structural remodeling, and attenuated the downregulation of the epithelial phenotype marker E-cad. This demonstrates that CAT can maintain the balance of ECM degradation in the local lung microenvironment, which is of great significance for improving outcomes in PF. As IPF progresses, fibroblasts and myofibroblasts proliferate extensively in fibroblast foci. Factors such as CXCL12 are released by damaged alveolar epithelial cells and cause CXC chemokine receptor type 4-positive circulating fibroblasts to enter the lungs, expanding the fibroblast pool (Andersson-Sjoland et al., 2008; Mehrad et al., 2009). At the same time, alveolar epithelial cells entering a senescence-associated phenotypic state produce and release large amounts of cytokines such as TNF-α. Moreover, high inflammation levels also cause local fibroblasts to migrate and hyperproliferate, and oxidative stress can also lead to the release of pro-inflammatory factors such as IL-1β, which plays an important role in the pathogenesis of IPF (Kliment et al., 2009). Oxidative stress is essentially an imbalance between oxidation and reduction caused by excessive generation of ROS in the body (Kala et al., 2017). SOD is an important antioxidant enzyme that maintains the dynamic balance between free radical generation and removal (He et al., 2016). BLM can cause alveolar epithelial cells and macrophages to produce large amounts of ROS, resulting in lipid peroxidation in biomembranes, while MDA levels reflect the degree of cell damage via lipid peroxidation (Teixeira et al., 2008; Yu et al., 2016). This study revealed that oxidative stress and inflammation levels were elevated in rats with PF, which is consistent with previous findings (Xin et al., 2019). With reference to the results of the bioinformatics analysis, in vivo experiments confirmed that CAT could increase the activity of SOD in rat lung tissues and decrease the levels of MDA, IL-6, IL-1β, TNF-α, and ROS, suggesting that CAT may inhibit the development of PF by reducing inflammation and oxidative stress. Although our results demonstrate that CAT has a potential therapeutic effect on BLM-induced pulmonary fibrosis in rats, the more detailed mechanism of action is still unclear. For example, how does CAT inhibit β-catenin activation/phosphorylation in the Wnt signaling pathway? One possible explanation is that CAT blocks the interaction between Wnt3a and β-catenin. In addition, in vitro experiments using primary human embryonic lung fibroblasts would make our findings more meaningful.
CONCLUSION
In summary, the results of this study suggest that collagen metabolism imbalance, inflammatory responses, and EMT activation are the core processes of IPF, and that the TGF-β1/Smad3 and Wnt/β-catenin signaling pathways and related signal transduction molecules are key targets for the treatment of IPF. This study is the first to report the ability of CAT to protect against BLM-induced lung fibrosis in rats. The mechanism is related to the downregulation of Smad3, Wnt3a, GSK-3β, and β-catenin, as well as of their phosphorylated forms p-Smad3, p-GSK-3β, and p-β-catenin. This study also provides new insights into the potential value of CAT for the treatment of IPF. Considering that CAT did not harm the liver during this study and that there have been previous studies on CAT in combination with other compounds to reduce drug-induced hepatitis, further research should focus on clinically evaluating the effectiveness and side effects of CAT in patients with PF.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
ETHICS STATEMENT
The animal study was reviewed and approved by the Research Ethics Committee of the Affiliated Hospital of Shandong University of Traditional Chinese Medicine.
AUTHOR CONTRIBUTIONS
FY, WZ, and WL conceived and designed the experiments; ZH, HZ, and XC performed the experiments; RC and YL contributed reagents/materials/analysis tools; FY and RC analyzed the data and wrote the paper. All authors have read and agreed to the published version of the manuscript. | 6,595.4 | 2021-01-29T00:00:00.000 | [
"Biology"
] |
Broadband nonreciprocal linear acoustics through a non-local active metamaterial
The ability to create linear systems that manifest broadband nonreciprocal wave propagation would provide for exquisite control over acoustic signals for electronic filtering in communication and noise control. Acoustic nonreciprocity has predominantly been achieved by approaches that introduce nonlinear interaction, mean-flow biasing, smart skins, and spatio-temporal parametric modulation into the system. Each approach suffers from at least one of the following drawbacks: the introduction of modulation tones, narrowband filtering, and the interruption of mean flow in fluid acoustics. We now show that an acoustic medium that is non-local and active provides a new means to break reciprocity in a linear fashion without these deleterious effects. We realize this medium using a distributed network of interlaced subwavelength sensor–actuator pairs with unidirectional signal transport. We exploit this new design space to create a stable metamaterial with non-even dispersion relations and electronically tunable nonreciprocal behavior over a broad range of frequencies.
Reciprocity in wave-bearing acoustic media is remarkably robust, especially in linear systems, being maintained in viscoelastic solids [1], fluid-structure systems [2], and structural-piezoelectric-electrical coupled systems [3]. Further, as is well established, anisotropy and inhomogeneity, while generating interesting wave propagation phenomena, do not engender linear nonreciprocity [1]. Acoustic reciprocity, formally introduced by Helmholtz in 1860 (as discussed in [4]) and later generalized by Lyamshev [5] to include fluid-structure interaction and multiple scatterers, dictates that the response to a disturbance is invariant upon interchange of the source and receiver. Fluid and solid acoustic media that break reciprocity over broad frequency ranges would enable new and unexplored forms of control over vibrational and acoustic signals, with enormous implications for spectral filtering and duplexing in the communications industry [6], and for noise control [7,8]. Efforts aimed at achieving nonreciprocity in both linear and nonlinear electromagnetic systems have been particularly successful, primarily because of the effectiveness of a biasing magnetic field in devices such as the Faraday isolator [9]. These successes have spurred research in analogous acoustic systems where, instead of an external magnetic field, introduction of mean flow in the acoustic medium [10] has been used to achieve a high level of narrowband nonreciprocity. Similarly, biasing a solid with a DC electric field can result in asymmetric damping and nonreciprocal wave propagation in piezoelectric semiconductors [11,12] as well as in a two-dimensional electron gas coupled to piezoelectric semiconductors [13]. In magnetoelastic and polar media, a DC magnetic field can lead to nonreciprocal effects, although these effects are often relatively weak (as discussed in [1,14]). Other approaches to acoustic nonreciprocity rely on breaking the spatial or temporal symmetry in the governing equations by introducing nonlinear interactions [15,16] or spatiotemporal modulation of the properties of the medium [17,18]. Theoretical analysis has shown that spatiotemporal modulation of strongly magnetoelastic materials, like terfenol, and piezoelectric materials, like PZT, can lead to impressive nonreciprocity, as shown in [19]. Both nonlinearity and spatiotemporal modulation introduce secondary tones that require later demodulation or signal processing to prevent signal corruption. To circumvent the disadvantages associated with background bias or spatiotemporal parametric modulation, other studies have utilized sensor-actuator pairs to modulate the wave propagation in the medium in a linear fashion [20][21][22][23]. To our knowledge, we are the first to exploit a system with distributed control using non-collocated sensor-actuator pairs to introduce an inherent violation of parity and time symmetry through non-local mechanisms, and thereby achieve linear acoustic nonreciprocity.
In our approach, we use an asymmetric unit cell consisting of a sensor and actuator pair, separated from one another by a subwavelength distance d_ff, as shown in figure 1(a). The pairs are arrayed and interlaced along the length of the waveguide. This arrangement breaks spatial symmetry and creates a preferential direction because information is transmitted nearly instantaneously in a unidirectional fashion from sensor to actuator via a distributed amplifier network, while acoustic disturbances propagate bidirectionally at the much slower group velocity of the waveguide. This non-local active metamaterial (NAM) is similar to the canting of the hair cells and phalangeal processes seen in the mammalian cochlea, a feature hypothesized to play a role in wave amplification and dispersion in the hearing organ [24].
To illustrate this general NAM concept as a tool to engineer nonreciprocal behavior, we use an airborne acoustic system as shown in figure 1(a), although this paradigm could be adapted for other wave-bearing media, like piezoelectric or magnetoelastic materials, with appropriate electronic control. First we consider the system in the limit where the acoustic wavelength is much larger than the spacing between successive sensors or actuators (Δx), so we can treat the active medium as a continuum. The sensed pressure is fed forward to the monopole sources located a distance d_ff downstream. If we assume that the source can be manipulated electronically to precisely match the upstream pressure and that the electronic control is instantaneous, the acoustic source strength can be written as g_p p(x − d_ff), where g_p is the open-loop gain between the sensor and the actuator. This simplifying assumption will be relaxed later to reflect the dynamics of the acoustic source. With these assumptions, the pressure in the waveguide (p) can be modeled using a modified version of the one-dimensional Helmholtz equation with an additional pressure-proportional source term,

$$\frac{\partial^2 p}{\partial x^2} + k^2 p = -g_p\, p(x - d_{ff}), \qquad (1)$$

where c is the acoustic speed, ω is the radian frequency (assuming an e^{−iωt} time dependence), and k = ω/c. The gain g_p is non-zero for x ∈ (0, L) and is zero elsewhere. To show the nonreciprocity of the NAM, let p_I(x) be the solution of equation (1) due to a point source Q_I at x = x_1 and p_II(x) the solution due to a point source Q_II at x = x_2, where x_1 < 0 and x_2 > L. Following standard arguments typically used to prove reciprocity in acoustics [1], we find

$$g_p \int_0^L \left[ p_I(x)\, p_{II}(x - d_{ff}) - p_{II}(x)\, p_I(x - d_{ff}) \right] \mathrm{d}x = p_{II}(x_1)\, Q_I - p_I(x_2)\, Q_{II}, \qquad (2)$$

so that acoustic reciprocity, given by p_II(x_1) Q_I = p_I(x_2) Q_II [25], holds only at exceptional frequencies when the left-hand side integral vanishes. The spatial separation of the sensor and the actuator and the unidirectional sensor-signal transmission are the crucial elements in achieving inherent nonreciprocity in the NAM system. This is fundamentally different from the case where active elements of an acoustic waveguide are coupled via a bidirectional transmission line [26], because such a system is reciprocal. The non-local approach is also different from the case where the sensor and source are collocated and local impedance modification or bianisotropy is utilized to achieve nonreciprocity [20,21], because the non-locality, even though subwavelength, affords additional flexibility in achieving nonreciprocity.
To further investigate the nonreciprocal wave characteristics of the NAM, we assume harmonic waves of the form p_0 e^{iγx} in the active region and obtain from equation (1) the dispersion relation

$$\gamma^2 = k^2 + g_p\, e^{-i\gamma d_{ff}}, \qquad (3)$$

where γ is the wavenumber. Owing to the exponential term on the right-hand side of (3), the dispersion relation is not an even function of γ: waves traveling in the +x and −x directions experience different dispersion, the signature of nonreciprocal propagation.

To determine if the nonreciprocity seen in the continuous system is conveyed to a system composed of discrete sensors and actuators, we consider an array of N = 10 uniformly spaced pairs (Δx = 5 cm) arranged in the active section of an infinite acoustic duct as shown in figure 1(a). We retain the assumption that the electronics can provide the gain necessary to guarantee that the acoustic source strength of each actuator is equal to the discrete gain, g_d, times the measured pressure at a distance d_ff = 10 cm upstream, similar to the source term in equation (1). We modeled this numerically in two ways. First, we used one-dimensional (1D) acoustic theory, with the actuators idealized as point sources. Second, we used a full-wave (FW) solution that consisted of a complete three-dimensional finite element acoustic model in COMSOL Multiphysics that included the finite extent of the sources, treated as boundary velocity forcing, and the three-dimensionality of the fluid domain. Parameters for the 1D and FW models are given in the supplemental material. We define the transmission coefficient T as the ratio of the amplitude of the transmitted and the incident pressure field, and the reflection coefficient R as the ratio of the amplitude of the reflected and the incident field, expressed in dB. For a plane wave incident from port A, the subscript A → B is used, while the subscript B → A represents the opposite situation. Analytical expressions for these coefficients derived from the 1D model are

$$T_{A\to B} = 1 + \frac{1}{2ik}\sum_{m,n} e^{-ik x_{sn}} H_{nm}\, e^{ik x_{pm}}, \qquad R_{A\to B} = \frac{1}{2ik}\sum_{m,n} e^{ik x_{sn}} H_{nm}\, e^{ik x_{pm}},$$
$$T_{B\to A} = 1 + \frac{1}{2ik}\sum_{m,n} e^{ik x_{sn}} H_{nm}\, e^{-ik x_{pm}}, \qquad R_{B\to A} = \frac{1}{2ik}\sum_{m,n} e^{-ik x_{sn}} H_{nm}\, e^{-ik x_{pm}}, \qquad (4)$$

where H = g_d (I − g_d G)^{−1}, I is an N × N identity matrix, and G is a matrix of Green's functions, with G_mn = (1/2ik) e^{ik|x_pm − x_sn|} denoting the Green's function from the nth source located at x = x_sn to the mth sensor probe located at x = x_pm. Equation (4) reveals the complex interaction of g_d, Δx, and d_ff that results in nonreciprocal wave transmission through the system. As shown in figure 2(a), T_{A→B} ≠ T_{B→A} over the frequency range plotted (except at a single frequency), resulting in a non-symmetric scattering matrix and demonstrating the nonreciprocal nature of the system [27]. The reflection coefficients (figure 2(b)) are equal in amplitude, |R_{A→B}| = |R_{B→A}|, but differ in phase by 2k d_ff radians (see proof in the supplemental material) (http://stacks.iop.org/NJP/22/063010/mmedia) for our equispaced sensor-actuator system. This is in stark contrast with PT-symmetric systems, where the transmission coefficients from either direction are the same and the reflection coefficients differ [28]. Further, if the actuator has sufficient authority to deliver pressure at very low frequencies, this system reflects incoming waves from both directions at those frequencies, acting as a subwavelength wall for sound. Finally, near 1600 Hz, T_{A→B} = T_{B→A}, representing a special frequency in the discrete system where reciprocity is restored, a scenario predicted by the reciprocity analysis of the prototypical continuous system (equation (2)). Multiple such special frequencies exist beyond 1600 Hz, between which the transmission coefficients oscillate about 0 dB with diminishing amplitude.
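A minimal numerical sketch of the discrete 1D model in equation (4) is given below. The geometry and gain follow the values quoted in the text, while the sign and scaling conventions follow the Green's function definition above, so absolute levels may differ from the published figures:

```python
import numpy as np

# 1D sketch of the discrete NAM scattering model (equation (4)).
c = 343.0      # speed of sound, m/s
N = 10         # number of sensor-actuator pairs
dx = 0.05      # pair spacing, m
d_ff = 0.10    # sensor-to-actuator feed-forward distance, m
g_d = 0.086    # discrete gain, 1/m (value quoted in the text)

x_p = np.arange(N) * dx   # sensor (probe) positions
x_s = x_p + d_ff          # actuator (source) positions, downstream

def transmission(freq):
    k = 2 * np.pi * freq / c
    # Green's matrix from source n to probe m, G_mn = exp(ik|x_pm - x_sn|)/(2ik).
    G = np.exp(1j * k * np.abs(x_p[:, None] - x_s[None, :])) / (2j * k)
    H = g_d * np.linalg.inv(np.eye(N) - g_d * G)
    # Closed-loop source strengths for incidence from port A (+x) and port B (-x).
    q_A = H @ np.exp(1j * k * x_p)
    q_B = H @ np.exp(-1j * k * x_p)
    T_AB = 1 + np.sum(q_A * np.exp(-1j * k * x_s)) / (2j * k)
    T_BA = 1 + np.sum(q_B * np.exp(+1j * k * x_s)) / (2j * k)
    return 20 * np.log10(abs(T_AB)), 20 * np.log10(abs(T_BA))

for f in (200, 400, 692, 1600):
    t_ab, t_ba = transmission(f)
    print(f"{f} Hz: T_A->B = {t_ab:+.1f} dB, T_B->A = {t_ba:+.1f} dB, "
          f"IF = {t_ba - t_ab:+.1f} dB")
```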
Due to the presence of the wavenumber k in the denominator in equation (4), at very high frequencies the system is effectively transparent to impinging waves from either direction. The FW simulations (symbols) are in good agreement with the 1D acoustic theory (solid lines) in this frequency range. The degree of nonreciprocity, quantified by the isolation factor IF, defined as the difference of T_{B→A} and T_{A→B}, exceeds 40 dB over a broad range of frequencies from DC up to 800 Hz, as shown in figure 2(c), and displays a 20 dB IF bandwidth of more than 1 kHz. As noted, these impressive nonreciprocal effects diminish with increasing frequency.
To further explore the effectiveness of the NAM, the spatial variation of the real part of the total pressure field due to the incidence of a 692 Hz plane wave from port A is shown in figure 3(a) and from port B in figure 3(b). This frequency was chosen to establish the efficacy of the NAM away from the maximum IF. The plane wave incident from port B (figure 3(b)) is amplified by 29 dB, whereas the wave incident from port A (figure 3(a)) is attenuated by 31 dB, leading to a remarkable net acoustic IF of 60 dB. To determine the effectiveness of the distributed active media under transient loading, we simulated the response of the NAM to a cosine-squared-windowed incident pulse 0.2 ms in duration and centered at a frequency of 692 Hz, the envelope of which is shown in figure 3(c). Time-domain calculations show that the transmitted wave packets exhibit minimal distortion: the wave packet traversing from port B to A (red line) is amplified, whereas the transmitted wave packet traveling from port A to B (blue line) is reduced, consistent with the 60 dB IF predicted by the steady-state response. The system was shown to be stable by casting the solution of the 1D model into the canonical closed-loop transfer function form and applying the Nyquist stability criterion [29], as outlined in the supplemental material. The FW solution stability was confirmed by finding the impulse response, consistent with the transient results shown in figure 3(c), which also show stability.
To verify the viability of the spatial feed-forward control with real electromechanical transducers, we relaxed the assumption that the source strength is precisely equal to the sensed pressure, as introduced in equation (1). Instead, we used the voltage output from each microphone (sensor) multiplied by a gain factor g_d as the input voltage to the corresponding electrodynamic speaker (actuator) to simulate a real experiment. Using standard electrodynamic driver theory [25], we modeled each of the 10 sources with the nominal Thiele-Small parameters for a typical minispeaker, as documented in the supplemental materials. Figure 4(a) shows the IF spectrum for the maximum stable discrete gain, g_d^max = 0.086 m^−1, as well as for 0.04 m^−1, 0.01 m^−1, and a passive waveguide (g_d = 0 m^−1), to show the change in the IF spectrum with decreasing gain. Reducing g_d reduces both the peak IF magnitude and the frequency at which the peak occurs. To explore the programmable tunability of the NAM, we added electronic filters in series with the gain g_d. Figure 4(b) shows the effect of setting g_d = 0.07 m^−1 and adding a single-pole low-pass filter with corner frequencies of 2000 Hz (LPF_1) and 600 Hz (LPF_2). The filters introduced a phase shift that resulted in a delay in the signal actuating the speaker, artificially decreasing d_ff and lowering the peak IF frequency by 140 Hz and 290 Hz for the LPF_1 and LPF_2 filters, respectively. Figure 4(c) shows an FW simulation of the spatial variation of the pressure field at 900 Hz for the unfiltered system. For a 1 Pa (94 dB SPL) incident field, the voltage applied to the speakers remained under the maximum voltage rating for this speaker over the entire range of frequencies. We define Δf_IF as the 20 dB IF bandwidth and calculated it to be 456 Hz for this system, equal to 51% of the peak IF frequency. Other studies utilizing linear mechanisms to achieve nonreciprocity have reported peak IF magnitudes of around 40 dB (Δf_IF = 4 Hz) for the acoustic circulator [15] and 25 dB (Δf_IF = 250 Hz) for the Willis metamaterial [21]. Hence, this proposed mechanism has the potential to exceed the maximum level and bandwidth achieved by other approaches [15,21] without disrupting mean fluid flow. Further, the IF spectrum can be manipulated by electronically modulating g_d, either in magnitude or in phase (see figures 4(a) and (b)), providing a highly flexible mechanism for in situ optimization of the NAM system for specific applications.
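The effect of the series low-pass filter can be estimated with a short calculation: the filter's phase lag, read as a time delay, shortens the effective feed-forward distance by roughly c times the delay. The sketch below assumes an ideal single-pole response; the corner frequencies are those quoted in the text:

```python
import numpy as np

# Phase lag of a single-pole low-pass filter, H(f) = 1 / (1 + i f / f_corner),
# interpreted as a delay that shrinks the effective feed-forward distance d_ff.
c = 343.0  # speed of sound, m/s

def effective_dff_reduction(freq, f_corner):
    phase = -np.arctan(freq / f_corner)   # radians, negative = lag
    tau = -phase / (2 * np.pi * freq)     # equivalent time delay, s
    return c * tau                        # equivalent distance, m

for f_c in (2000.0, 600.0):  # LPF1 and LPF2 corner frequencies from the text
    red_cm = effective_dff_reduction(900.0, f_c) * 100
    print(f"corner {f_c:.0f} Hz: d_ff reduced by ~{red_cm:.1f} cm at 900 Hz")
```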
We have shown that it is possible to induce linear broadband nonreciprocity in acoustic systems, essentially creating a new stable medium using the NAM mechanism. This mechanism consists of an array of interlaced subwavelength sensor-actuator unit cells (the total active region can be sub- or supra-wavelength). Although we have demonstrated the approach using a fluid-acoustic medium, this technique can be adapted and applied to many different wave-bearing media and systems. For instance, the locally sensed force or strain in either an interdigitated surface acoustic wave device [30][31][32] or a layered stack of bulk-wave piezoelectric elements [33,34] can be fed forward to actuator elements using the NAM approach, creating a preferred direction and nonreciprocity. The NAM approach expands the design space, holding the potential to enhance the desired capability of the device (e.g., filtering or sound output). An extensively studied prototype for wave propagation and control in dispersive systems is an elastic beam bonded to piezoelectric patches arrayed down the beam. When the piezoelectric elements are electrically interconnected by a transmission line, a coupled elastic-electric waveguide is created [35]. While this coupled waveguide system can be designed to achieve excellent stop-band behavior or high losses, it is still reciprocal. By breaking the bidirectionality of the transmission line using the feed-forward distributed control of the NAM, these reciprocal systems would be converted to nonreciprocal ones. Another popular approach is to use collocated sensor-actuator patches to control wave propagation on beams, as in [36]. These too can be converted to nonreciprocal systems by feeding the control signal forward to the neighboring patch. Finally, one can also envision creating nonreciprocal anisotropy in two-dimensional media, potentially enabling one-way waveguiding. Hence, our theoretical work opens up the possibility of reconfiguring a vast array of well-studied systems, rendering them nonreciprocal.
In addition to showing the effectiveness of the NAM concept in breaking reciprocity, we have also highlighted its flexibility in designing systems with desired isolation factor magnitudes and frequencies (figures 4(a) and (b)). This tunability was accomplished with a controller consisting of a spatially constant gain with simple spectral variations. A vast design space associated with the spatio-spectral variation of the amplitude and phase of the gain associated with each sensor-actuator pair as well as the distance between them remains to be explored. Such new designs hold great potential for noise control as well as for enhancement of the performance of electromechanical filters and amplifiers. | 4,039 | 2020-04-17T00:00:00.000 | [
"Physics"
] |
FEEDBACK ON A PUBLICLY DISTRIBUTED IMAGE DATABASE: THE MESSIDOR DATABASE
The Messidor database, which contains 1200 eye fundus images, has been publicly distributed since 2008. It was created by the Messidor project in order to evaluate automatic lesion segmentation and diabetic retinopathy grading methods. Designing, producing and maintaining such a database entails significant costs. By publicly sharing it, one hopes to bring a valuable resource to the public research community. However, the real interest and benefit for the research community are not easy to quantify. We analyse here the feedback on the Messidor database after more than 6 years of diffusion. This analysis should apply to other similar research databases.
INTRODUCTION
Public databases are precious tools for researchers. They provide the data necessary to develop and test new methods, and allow for quantitative comparisons between different approaches. The Messidor database is one such database. It was created within the Messidor project to evaluate different lesion segmentation methods for color eye fundus images, in the framework of diabetic retinopathy screening and diagnosis. It has been publicly distributed since 2008.
THE MESSIDOR DATABASE
The Messidor download page gives an appropriate description of the database, which we quote here: "The 1200 eye fundus color numerical images of the posterior pole for the MESSIDOR database were acquired by 3 ophthalmologic departments using a color video 3CCD camera on a Topcon TRC NW6 non-mydriatic retinograph with a 45 degree field of view. The images were captured using 8 bits per color plane at 1440*960, 2240*1488 or 2304*1536 pixels. 800 images were acquired with pupil dilation (one drop of Tropicamide at 0.5%) and 400 without dilation.
The 1200 images are packaged in 3 sets, one per ophthalmologic department. Each set is divided into 4 zipped sub sets containing each 100 images in TIFF format and an Excel file with medical diagnoses for each image." Note that, as the description indicates, the database contains a medical diagnosis for each image, but no manual annotations on the images, such as lesion contours or positions. This is an important difference with respect to other databases, such as DIARETDB1 and e-ophtha.
The download procedure asks the user to fill in the following fields: E-mail address; First Name; Last Name; Professional Interests; Country and University/Organization. An e-mail is then sent to a member of the Messidor team, who checks the validity of the request and sends an appropriate link to the submitter. Some requests are not accepted, typically because the fields requested in the download procedure are clearly incorrectly filled. Precise statistics on refused requests are not kept, but we estimate that they represent less than 25% of the total number of requests. They are not taken into account in the statistics below.
It should be noted that Messidor database users are asked to acknowledge the Messidor project partners in their related publications.
EXPERIENCE FEEDBACK ON MESSIDOR
Most of the statistics on the diffusion of the Messidor database presented in this section are summarized in Fig. 1. People tend to underestimate the support and maintenance costs associated with a publicly distributed database. For instance, given the increasing number of download requests for the Messidor database, processing these requests and related questions requires approximately one hour per week. On top of that, users ask general questions about the database, even if most answers are available on the website. Finally, hosting the database and web pages also takes resources. Another measure of the success of the database can be obtained through access statistics for the corresponding web page (see Table 2). Again, one can see a clear increase in web site access since 2008. The number of visitors is approximately two times higher in 2013 than in 2011. This trend clearly appears in Fig. 1. The link between download requests or web access and the actual contribution to the research domain is not necessarily simple to apprehend. Indeed, people might download the database or consult the web site for reasons not related to public research. In order to clarify this point, we have looked into the number of citations of the Messidor database in scientific papers. The results are summarized in Table 3. Interestingly, it can be seen that the Messidor database has been cited three times more often in 2013 than in 2011, the same increase as for the number of download requests (see Fig. 1). Finally, if we pool the results for two of the most cited journals in the field of biomedical image processing, that is, Medical Image Analysis and IEEE Transactions on Medical Imaging, we find that, since 2008, 47 papers deal with "diabetic retinopathy", and among these, 10 papers cite the Messidor database.
Note that other databases used in the same domain follow similar trends. DIARETDB1, which has been distributed since 2007, has been cited 295 times (as of June 19, 2014), while HEI-MED, established in 2012, has been cited 26 times.
CONCLUSION
The Messidor database has been publicly distributed since 2008. It is of interest mainly for researchers in a relatively specialized domain: retinal image processing, and more specifically computer-assisted diagnosis of diabetic retinopathy. In spite of this, it has gathered a large number of citations. We have also shown that the number of web site visitors, as well as the number of download requests, appears to be well correlated with the number of citations, which provides a simple and convenient way to monitor the success of a database.
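This kind of monitoring is straightforward to automate; the sketch below computes Pearson correlations from yearly counts, where the numbers themselves are hypothetical placeholders (the real values are in Tables 1-3) and only the roughly threefold 2011-2013 growth reported in the text is preserved:

```python
import numpy as np

# Hypothetical yearly counts standing in for Tables 1-3; only the ~3x growth
# from 2011 to 2013 reported in the text is preserved.
downloads = np.array([100, 190, 300])    # 2011, 2012, 2013 (hypothetical)
citations = np.array([30, 55, 90])       # hypothetical
visitors = np.array([2000, 3100, 4100])  # hypothetical

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

print("downloads vs citations:", round(pearson(downloads, citations), 3))
print("visitors  vs citations:", round(pearson(visitors, citations), 3))
```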
The experience gathered by our team on the management of the Messidor database allows us to propose some recommendations for the design of future databases:
- Hosting and managing the database takes resources; this point should be taken into account during the database design, in order to reduce this cost as much as possible.
- The database is typically described on a web page. This description has to be clear and complete, in order to limit the number of requests for additional information (and therefore to reduce the management cost).
- The database managers should ask potential users to acknowledge the database or, better, to cite a relevant paper on the database. This simplifies the evaluation of the success of the database.
- Last but not least, we have shown that an automatic validation procedure seems to be enough to handle download requests; a sketch of such a check is given below.
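A possible form of such an automatic validation, checking the same fields the Messidor download form collects (the concrete rules and thresholds here are illustrative, not those actually used by the Messidor team):

```python
import re

# Hypothetical automatic screening of a download request; field names mirror
# the Messidor form, but the validation rules are illustrative.
REQUIRED = ("email", "first_name", "last_name",
            "professional_interests", "country", "organization")
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_request(form: dict) -> bool:
    """Reject requests whose required fields are missing or clearly bogus."""
    if any(not form.get(f, "").strip() for f in REQUIRED):
        return False
    if not EMAIL_RE.match(form["email"]):
        return False
    # Trivial sanity check: names should not be single characters.
    return len(form["first_name"].strip()) > 1 and len(form["last_name"].strip()) > 1

request = {"email": "jane.doe@univ.example", "first_name": "Jane",
           "last_name": "Doe", "professional_interests": "retinal imaging",
           "country": "France", "organization": "Example University"}
print(is_valid_request(request))  # True
```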
Moreover, we believe that this study confirms the important role that databases play in medical image processing. In the case of the Messidor database, this is true in spite of the fact that the images contained in the database are progressively becoming outdated. Indeed, they were acquired before 2007, and modern fundus cameras offer increasing image resolutions and sensitivities. As far as we know, only two databases have been released in this field after 2010: HEI-MED (for exudate-based macula oedema detection) and e-ophtha (microaneurysm and exudate segmentation). This stresses the importance of new databases corresponding to current clinical practice.
Fig. 1. Evolution of the number of citations, web site visitors and download requests.
Table 1 gives the number of download requests per year between 2011 and 2013, broken down by country. It can be seen that download requests clearly increase over time: there have been approximately three times more requests in 2013 than in 2011. This increase comes mainly from less developed countries.
Table 1. Download requests for the Messidor database, per year. Some countries, from which only a few requests originated, are not indicated.
Table 3. Citations per year. Values were obtained through Google Scholar using the keywords "Messidor diabetic retinopathy" on June 19, 2014. | 1,698.2 | 2014-08-26T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Suzuki-Miyaura Reactions Catalyzed by C2-Symmetric Pd-Multi-Dentate N-Heterocyclic Carbene Complexes
Suzuki-Miyaura coupling reactions are promoted by Pd complexes ligated with C2-symmetric multi-dentate N-heterocyclic carbenes derived in situ from Pd(OAc)2 and imidazolium salts. Good to excellent yields were obtained for aryl bromides as substrates. Turnover numbers of up to 10^5 could be achieved with 5 × 10^−4 mol% Pd(OAc)2/1 × 10^−3 mol% NHC precatalyst in 24 h.
Introduction
The formation of C-C bonds catalyzed by transition metal complexes represents one of the most powerful tools in organic synthesis [1][2][3] and has found important applications in the synthesis of organic molecules such as pharmaceuticals [4], natural products [5], and polymers [6]. The Suzuki-Miyaura reaction, the C-C cross-coupling reaction between aryl halides and arylboronic acids, is an example of this kind of reaction [7][8][9]. As is well known, many Pd-phosphine complexes have been employed as catalysts for this transformation [10,11], but most phosphine ligands, especially those displaying good catalytic properties, are expensive, toxic and air-sensitive. Accordingly, Pd complexes overcoming these limitations are highly desirable. N-Heterocyclic carbenes (NHCs) have received increasing attention since the isolation of a free NHC by Arduengo and co-workers in 1991 [12] and Herrmann's seminal work demonstrating the catalysis of coupling reactions by Pd-NHC complexes [13]. The excellent σ-donor and weak π-acceptor characteristics of NHCs, in combination with their good stability towards air and moisture, make them attractive as ligands in catalytic reactions [14][15][16]. Furthermore, the substituents attached to the NHC framework can easily be modulated to tune their electronic as well as steric properties. These results have shown NHCs to be an alternative to conventional phosphine ligands in homogeneous catalysis, including olefin metathesis [17], hydrosilylation [18,19], hydrogenation [20,21], C-C coupling reactions, etc. [22].
Suzuki-Miyaura coupling reactions promoted by Pd-NHC complexes were initiated by Herrmann's report in 1998 [23]. Since then, Pd-NHC complexes have been found to be efficient catalysts for this kind of coupling reaction [24][25][26][27][28][29][30], and new NHCs or their precursors have been synthesized to confer more efficient catalytic properties or ease of operation [31][32][33][34]. Most of these catalysts are derived from monodentate NHCs, and some from bidentate anionic ligands [26,27]. The chelating NHC ligands predominantly consist of two NHC moieties linked by a chain or a ring, e.g., Shi's cis-chelating, bidentate NHC derived from binaphthyl-2,2′-diamine [35], and, more importantly, hybrid NHC ligands. Typical examples of the hybrid NHC ligands used in palladium-catalyzed reactions include NHC,P chelating ligands derived from 1 and 2, and NHC,N chelating ligands in complexes 3, 4 and 5 (Figure 1) [36][37][38][39]. Nonetheless, little attention has so far been paid to hybrid NHC chelating ligands bearing a weakly coordinating O-atom as a potential coordination site [40][41][42]. The fact that Pd complexes containing both NHC and phosphine ligands show higher activity than those with phosphine as the only ligand or than Pd(NHC)2X2 has been attributed to the strong Pd-NHC bond and the relatively weak Pd-P bond, which result in stabilization of the Pd center, easy dissociation of the phosphine ligand, and favorable oxidative addition of aryl halides to Pd. In reactions promoted by metal-NHC complexes, complexes with both NHC and phosphine structural motifs generally exhibit better activity and selectivity than those with only monodentate NHC ligands. Therefore, a second, relatively weakly coordinating atom in a chelating metal-NHC complex might be beneficial for improving the catalytic performance.
We have an interest in the application of multidentate, C2-symmetric NHCs with one chelating carbene and two O-atoms as ligands in catalytic reactions, for which there have been few precedents. We envisioned that the coordinative ability of the heteroatom can be adjusted by changing its bonding. Previously, we synthesized the tridentate C2-symmetric imidazolinium salts 6 (Figure 2), precursors of the corresponding NHCs, which have two oxygen atoms in the arms, and found that their copper complexes could efficiently catalyze the asymmetric conjugate addition of Et2Zn to cycloalkenones [43]. To continue this work, we designed 7, the unsaturated counterpart of 6, to explore its application in Suzuki-Miyaura coupling reactions, as it has been reported that Pd coordinated with unsaturated NHCs shows higher reactivity than Pd with saturated NHCs [44]. Herein, we wish to report the preparation of the novel C2-symmetric tridentate imidazolium salts 7 and their catalytic properties in the Suzuki-Miyaura coupling reaction.
Results and Discussion
The synthesis of the NHC precursor imidazolium salts 7a-f in 22-30% overall yield is shown in Scheme 1. The synthetic route started from (S)-ethyl lactate or (S)-ethyl mandelate, which were etherified with phenol or substituted phenols. The resulting α-aryloxycarboxylates 8a-f were reduced to the β-aryloxy alcohols 9a-f. The alcohols were then converted into the halides 11a-f and reacted with imidazole, giving the imidazolium salts 7a-f, which contain β-aryloxy groups in the side chains. These salts are quite stable towards air and moisture. The imidazolium salts 7c-f synthesized from (S)-ethyl lactate were optically pure, as can be deduced from the fact that only one set of NMR signals was observed in both the 1H-NMR and 13C-NMR spectra of these compounds. Reaction of the racemic bromides 11c-f with imidazole would yield the racemic C2-symmetric (R,R)- and (S,S)-isomers of 7c-f and the Cs-symmetric meso-(R,S)-isomer of 7c-f. Even though the (R,R)- and (S,S)-isomers cannot be distinguished by NMR, the signals of the racemic isomers and the meso-isomer should differ. Therefore, the number of sets of NMR signals of 7 can be used to judge whether the products are optically pure.
On the other hand, use of (S)-ethyl mandelate yielded 7a and 7b as mixtures of diastereomers, since more than one set of 1H-NMR and 13C-NMR signals was observed. This indicates that racemization occurred when (S)-ethyl mandelate, whose α-H is more acidic than that of ethyl lactate, was used as the starting material.
The catalytic activities of the Pd complexes of the NHCs derived from the imidazolium salts 7 in Suzuki-Miyaura coupling reactions were screened, using catalysts generated in situ from a mixture of 7, Pd(OAc)2, and a base in a solvent.
The reaction conditions were optimized on a model reaction between phenyl bromide and phenylboronic acid. First, the effect of solvents was examined using the catalyst generated from imidazolium salt 7a and Pd(OAc)2. The product biphenyl 12a was obtained in 95% yield in either toluene or DMF. Addition of H2O to DMF (V/V = 1:2) accelerated the reaction, but no increase in yield was observed. The reaction was complete in 1.5 h using toluene as the solvent. Moderate to low yields were obtained when polar solvents, including 1,4-dioxane, THF and EtOH, were used, even though 1,4-dioxane and EtOH have been reported in the literature as good solvents for this kind of reaction.
Using toluene as the solvent, bases were then optimized; the results are shown in Table 1. Although Cs2CO3 is generally efficient for Suzuki-Miyaura coupling reactions catalyzed by Pd-NHC or palladium-phosphine complexes, in our case only a 32% yield of 12a was observed, whereas the use of K2CO3 resulted in an excellent yield (entry 2). With other bases like NaOH, KF and K3PO4, the reaction also proceeded well, but somewhat more slowly. Then the temperature was optimized. Phenyl bromide was quantitatively converted to 12a in 0.5 h at 110 °C, much faster than at 90 °C. At lower temperatures, the reaction is very slow: only a 53% yield of 12a was obtained after 21 h at 70 °C. Therefore, subsequent reactions were carried out using K2CO3 as the base and toluene as the solvent at 110 °C. Next, the catalytic abilities of 7a analogues were evaluated; the results are shown in Table 2. The most efficient catalyst was derived from 7a (entry 1). A comparable yield was obtained with the methyl-substituted analogue 7c (entry 3). Introduction of a nitro group into the aryloxy substituent gave a ligand-Pd complex that generated a lower yield of the coupling product (entry 4), showing that reducing the electron density of the phenyl ring, and hence the coordination ability of the O-atom, reduces the catalytic ability of 7. The presence of a 4-methyl group in the aryloxy substituent also led to a lower yield (entry 5), although higher than that obtained using 7d. Yields of 86-87% were obtained when the (substituted) phenoxy substituent was changed to a 2-naphthoxy group (entries 2 and 6), which is more sterically demanding and renders the O-atom less coordinative. Two typical imidazolium salts, IMes·HCl (IMes = 1,3-bis(2,4,6-trimethylphenyl)imidazol-2-ylidene) and IPr·HCl (IPr = 1,3-bis(2,6-diisopropylphenyl)imidazol-2-ylidene), catalyzed the coupling reaction with inferior results (entries 7, 8). In the absence of an imidazolium salt or other ligand, a 72% yield of 12a was obtained in 1 h using Pd(OAc)2 as the catalyst (entry 9). These results demonstrate that the ligands derived from the imidazolium salts 7, which contain two potentially chelating oxygen atoms, are more efficient than simple NHC ligands, e.g., IPr and IMes, in the coupling reactions. This might be due to the coexistence of an NHC moiety and oxygen atoms in the hybrid NHC ligands. The strongly coordinating NHC can prevent dissociation of the ligand from the palladium complex and therefore stabilize the catalytically active palladium species. Meanwhile, the weakly coordinating oxygen atom(s) can easily dissociate from the palladium complex to generate a coordinatively unsaturated palladium species. This favors the oxidative addition of aryl halides to the coordinatively unsaturated palladium species and the catalytic cycle. Indeed, formation of palladium black, which is catalytically inert in the Suzuki reaction and an indication that the ligands have completely dissociated from palladium, was rarely observed in coupling reactions using 7. The coupling reaction of phenylboronic acid with phenyl bromide could be performed in air with a slightly reduced yield (78%). (Table 2 footnotes: a Reaction conditions: 0.5 mmol phenyl bromide, 0.75 mmol phenylboronic acid, 1.5 mmol K2CO3, 0.5 mol% Pd(OAc)2, 1 mol% 7, 3.0 mL toluene, 110 °C. b GC yield.)
The reactions of various aryl halides with phenylboronic acids were then investigated under the optimized conditions, and the results are summarized in Table 3. Only 5 minutes were needed for the complete reaction of 4-nitrophenyl bromide and 4-acetylphenyl bromide, which are electron-deficient and more reactive in this kind of reaction. For aryl halides with electron-donating groups (entries 7, 10, 13) or sterically hindered aryl bromides (entry 11), longer reaction times were needed. Generally, almost quantitative yields of coupling products could be obtained in 1 h. It is worth noting that 4-hydroxyphenyl bromide, which is very electron-rich in both the phenyl ring and the C-Br bond, was highly reactive in the coupling. As expected, a wide range of functional groups, including keto, nitro, cyano, ester, amide, hydroxy, and ether, were tolerated. The coupling reactions could be extended to benzyl bromide, in which the halide is borne by an sp³ carbon atom, so that a C-C coupling between an sp³ carbon and an sp² carbon was achieved (entry 14). The reaction of 4-chlorophenyl bromide gave 4-chlorobiphenyl as the only coupling product in high yield, demonstrating the inertness of the C-Cl bond in the presence of C-Br (entry 6). When phenyl chloride was used, only a 35% yield of biphenyl was obtained (entry 15). Introduction of an acetyl group into phenyl chloride increased the activity, and a 70% yield of the coupling product was achieved (entry 16). Gratifyingly, benzyl chloride also gave a reasonable yield of coupling product (entry 17), although lower than that obtained with benzyl bromide. Palladium black was observed only rarely, in reactions performed at 110 °C after long reaction times.
In addition, reactions of various arylboronic acids with substituted phenyl bromides were performed. The reactions proceeded smoothly in high yield using 2-tolylboronic acid. In contrast to the aryl halides, the presence of an electron-withdrawing group on the arylboronic acid resulted in a low coupling yield, which may be attributed to the tendency of electron-deficient arylboronic acids to hydrolyze (Table 3, entry 21) [45]. Although the presence of an ortho-Me group in either the aryl halide or the arylboronic acid did not affect the coupling yield, it was still difficult to obtain tri-ortho-substituted biphenyls in high yields with our catalyst system, a reaction recognized as challenging in the literature owing to the large steric hindrance. In the case of a very sterically hindered reaction (entry 22), the less bulky 2,2′-dimethylbiphenyl (the homocoupling product of two arylboronic acid molecules) was formed in preference to the bulkier 2,6,2′-trimethylbiphenyl expected from the Suzuki-Miyaura coupling when 0.5 mol% of Pd catalyst was employed. This indicated the relative inertness of 2,6-dimethylphenyl bromide. Therefore, the amount of the Pd salt was decreased to 0.05 mol% to suppress the faster homocoupling of the arylboronic acid. In this case, the yield of the trimethylbiphenyl increased to 56%, as judged by gas chromatography, over a prolonged reaction time (entry 23). Similar results were observed for the reaction of 4-bromoacetophenone with phenylboronic acid (entries 1, 2 in Table 4 vs. entry 1 in Table 3). The efficiency of Pd(OAc)₂/7a was further tested at different catalyst loadings, and the results are summarized in Table 4.
General
All chemicals were purchased from Alfa Aesar Co., Ltd. (Tianjin, China) and Accela ChemBio Co., Ltd. (Shanghai, China), except the arylboronic acids, which were products of Ally Chemical Ltd. (Dalian, China). The solvents were freshly distilled prior to use. NMR spectra were recorded on a Varian 400 MHz spectrometer or on a Bruker DRX500 spectrometer, using TMS as an internal standard. IR spectra were recorded on a Nicolet 550 spectrometer. MS spectra were measured on a Hewlett-Packard HP-6890/5973 gas chromatograph-mass spectrometer. HRMS were recorded on a Micromass UPLC/Q-Tof Micro spectrometer. The reaction mixtures were analyzed by gas chromatography (Shimadzu GC-2010; capillary column SE-54, 30 m × 0.32 mm × 4 μm; FID detector; N₂ carrier gas). Column chromatography was performed with silica gel (200-300 mesh).
2-(2-Naphthoxyl)propan-1-ol (9f)
To an ice-water-cooled round-bottom flask containing a solution of 8f (1.243 g, 5.09 mmol) in THF (10.0 mL) was added NaBH₄ (0.380 g, 10 mmol) in portions, and the mixture was stirred for 30 min at 0 °C. The mixture was then allowed to warm slowly to room temperature and stirred for a further 6 h. The volatiles were removed by evaporation in vacuo. Water was added to the residue, and the aqueous phase was extracted with dichloromethane. The combined organic phase was dried over Na₂SO₄ and purified by column chromatography, yielding 9f (0.85 g, 84.2% yield) as a colorless oil.
General Procedure for the Suzuki-Miyaura Coupling Reactions
Under an Ar atmosphere, Pd(OAc)₂ (0.6 mg, 0.0025 mmol, 0.5 mol%), imidazolium salt (0.005 mmol, 1 mol%), arylboronic acid (0.75 mmol), aryl halide (0.5 mmol), K₂CO₃ (207.30 mg, 1.5 mmol), and toluene (3.0 mL) were added to a dried Schlenk tube in sequence. The mixture was stirred at 110 °C, and the progress of the reaction was monitored by TLC and gas chromatography. Upon consumption of the aryl halide, the mixture was cooled to room temperature, and H₂O (3.0 mL) was added to quench the reaction. The organic layer was separated, and the aqueous layer was back-extracted with CH₂Cl₂ (3.0 mL × 3). The organic phases were combined, dried over Na₂SO₄, and concentrated. The product was isolated by column chromatography with petroleum ether-ethyl acetate as the eluent or analyzed by gas chromatography using diethylene glycol di-n-butyl ether as an internal standard. The structures of the coupling products were confirmed by comparison of their ¹H-NMR and ¹³C-NMR spectra with those reported in the literature. All products, 12a-q, showed molecular ion peaks in their MS spectra [62,63].
Conclusions
In conclusion, we have synthesized a range of novel C₂-symmetric NHC precursor imidazolium salts containing side arms substituted with aryloxy groups. The Suzuki-Miyaura coupling reaction was catalyzed remarkably well by the Pd/NHC catalysts formed in situ. Various functionalized and sterically hindered aryl halides and arylboronic acids could be used. A TON of up to 10⁵ was achieved with 5 × 10⁻⁴ mol% Pd catalyst.
Standardized Judgment Method of Shooting Training Action Based on Digital Video Technology
Aiming at the difficulty of standardizing the action of basketball shooting training, a new method of standardizing the action of basketball shooting training is proposed based on digital video technology. The digital video signal representation, the video sequence coding data structure, and the video sequence compression coding method are analyzed, and the pixels of the basketball shooting training action position space are sampled to collect basketball shooting training images. The time difference method is used to extract the movement target of basketball shooting training from a digital video sequence. Based on digital video technology, the initial background image is estimated, and an update rate is introduced to update the background estimation image. According to the pixel value sequence of the basketball shooting training image, the pixel model of the basketball shooting training image is defined and modified. By judging whether the defined pixel value matches the background parameter model, the standardization of shooting training can be judged. The experimental results show that the proposed method has good stability, high precision, and short running time in determining the standardization of the shooting movement, can correct wrong shooting movements in real time, and can effectively guide basketball shooting training.
Introduction
Basketball is quite different from other sports. It is a high-intensity and comprehensive sport [1,2]. Basketball belongs to the group of same-field antagonistic events dominated by technical and tactical ability, and the technical and tactical level is the decisive factor in the competitive level of basketball [3]. In actual competition, basketball players need diversified basketball qualities to secure victory. Among these, a particularly important point is the coordination and stability of athletes' physical functions. In the development of basketball, shooting is the key offensive technique. The essence of a basketball game is a shooting game, which shows that the stability of shooting has an important relationship with the outcome of the game [4]. The factors affecting the standardization of athletes' shooting action mainly include the athletes' bodies, technique, and psychology. In the process of competition, athletes need to keep track of the nature, time, and score of the competition [5]. In an actual basketball game, the attacking team needs to use different techniques or tactics to create more shooting opportunities and ensure shooting scores [6,7]. The defensive team should actively defend and prevent the other team from scoring. The accuracy and standardization of shooting in basketball are directly related to the score. Therefore, it is of great significance to reasonably test and judge the shooting action in basketball.
At present, scholars in related fields have carried out research on action judgment and obtained some results. Reference [8] proposed a motion similarity judgment method based on motion primitives. Based on a computational model of kinematics, the similarity of motions, and how well it matches human similarity judgments of the same motions, is determined. By performing the action similarity task and comparing it with the computational model solving the same task, action similarity judgment was realized by classifying actions based on learned kinematic primitives. The method has high reliability and provides a necessary basis for human action classification. Reference [9] proposed a basketball motion image target detection method based on an improved Gaussian mixture model. Edge detection, gray-scale processing, target capture, target recognition, image detection, and other techniques are integrated into basketball sports video, and Gaussian probability density mixing is used to select the appropriate number of continuously updated parameters and pixel areas to achieve basketball sports image detection. The method is effective to some extent. However, the above methods have difficulty determining the standardization of basketball shooting training movements.
In view of the above problems, a method to judge the movement standardization of shooting training based on digital video technology is proposed. The innovation of the method is to use the time difference method to extract the movement target of basketball shooting training. Based on digital video technology, the initial background image is estimated, and an update rate is introduced to update the image. According to the pixel value sequence of the basketball shooting training image, the pixel model of the basketball shooting training image is defined and modified. By discriminating the matching relation between the pixel value and the background parameter model, the standardization judgment of the shooting training movement can be realized. Compared with previous research results, the method designed based on digital video technology has better stability, higher accuracy, and shorter running time, and can correct wrong shooting actions in real time, effectively guiding basketball shooting training.
Digital Video Technology
Digital video technology first uses video capture equipment such as cameras to convert the color and brightness information of external images into electrical signals, which are then recorded onto storage media [10][11][12]. Digital video is video recorded in digital form, as opposed to analog video. Digital video has different production, storage, and broadcast methods. For example, digital video signals are generated directly by digital cameras and stored on digital tape, P2 cards, Blu-ray discs, or disks, so as to obtain digital video in different formats, which is then played on a PC, a dedicated player, etc.
Representation of Digital Video Signal

Video is described as a group of continuous images, and each image is regarded as a two-dimensional pixel array. The color representation of each pixel includes three components: red R, green G, and blue B, which is called the RGB space representation of the image. The color coordinates used by the three digital TV systems differ. For digital video capture and display, all three digital TV systems use RGB primary colors, but the definition of each primary color spectrum is slightly different. For the transmission of the digital video signal, in order to reduce the required bandwidth and be compatible with monochrome digital TV systems, a brightness/chroma coordinate system is adopted [13,14]. The color coordinates used in the NTSC, PAL, and SECAM systems are all derived from the YUV coordinates used for PAL, and YUV is derived from the XYZ coordinates. According to the relationship between the RGB primaries and the YUV components, the value of the luminance component Y can be determined from the RGB values. The two chromaticity values U and V are proportional to the color differences B − Y and R − Y, respectively, and are adjusted to the desired range; the classic conversion relationship between the YUV coordinate system and the RGB primary values is the standard luma/color-difference transform (a sketch follows below). The conversion between the two color spaces is based on a characteristic of the human visual system: in RGB space, if any one of the three signals R, G, and B changes, the color of the overall image changes, and the human eye can easily detect this change. However, human eyes respond differently to changes in the Y, U, and V signals: they are sensitive to changes in the luminance signal but not very sensitive to changes in the chrominance signals. In this way, we can allocate more precision to the luminance signal and apply additional processing to the chrominance signals to improve the compression ratio.
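As a reference for the conversion just described, here is a minimal sketch assuming the standard PAL/BT.601 YUV transform (Y as a weighted sum of R, G, B; U and V proportional to B − Y and R − Y). The scaling factors 0.492 and 0.877 are the conventional PAL values, not taken from this paper.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert RGB pixels (floats in [0, 1]) to YUV.

    Uses the standard BT.601/PAL coefficients: Y is a weighted sum of
    R, G, B, while U and V scale the color differences B - Y and R - Y.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # proportional to B - Y
    v = 0.877 * (r - y)   # proportional to R - Y
    return np.stack([y, u, v], axis=-1)

# Example: a single orange-ish pixel
print(rgb_to_yuv(np.array([0.9, 0.5, 0.1])))
```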
Data Structure of Digital Video Sequence Coding.
In the coding scheme, the video sequence is divided and multiplexed at multiple layers to establish the following data structure.
(1) Sequence: the video sequence starts with a sequence header, includes several groups of pictures, and ends with a sequence end code. (2) Group of pictures (GOP): a GOP is a header followed by a series of images, which allows fast random access to the sequence, fast search, and editing. It is the smallest coding unit in the sequence that can be decoded independently [15]. The first image in the GOP is an intra-coded image (I frame), followed by forward-prediction coded images (P frames) and bidirectional-prediction images (B frames). Each GOP has only one I frame, and this I frame is used as the first frame to start coding. A P frame is encoded by motion-compensated prediction relative to the previous I frame or P frame, and a P frame can serve as a reference frame for coding other P frames or B frames. A B frame is encoded by motion-compensated prediction from two frames, one past and one future. The frame arrangement of the GOP is shown in Figure 1. Apart from requiring the first frame to be the only I frame, the standard does not specify the number of P and B frames in a GOP, nor their specific order; any sequence and frame count can be used to design the encoding scheme. The prediction of each P and B frame is based on the previous reference frame. Too many frames in the group layer will affect the coding quality and compression ratio; therefore, a group layer generally contains 10-15 frames, with 2-3 B frames between two P frames (a sketch of such a GOP pattern follows this list).
(3) Image: the image is the basic coding unit of the video sequence [16]. An image is composed of three rectangular matrices, one representing the luminance Y and two representing the chrominance components.
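As a concrete illustration of the group-of-pictures layout described above, here is a minimal sketch; the `gop_pattern` helper and its defaults (12 frames, 2 B frames between P frames) are illustrative choices, since, as noted, the standard does not fix these numbers.

```python
def gop_pattern(n_frames=12, b_between_p=2):
    """Build a GOP frame-type sequence in display order: one leading
    I frame, then runs of B frames separated by P frames."""
    frames = ["I"]
    while len(frames) < n_frames:
        run = min(b_between_p, n_frames - len(frames))
        frames.extend(["B"] * run)
        if len(frames) < n_frames:
            frames.append("P")
    return frames

print("".join(gop_pattern()))  # IBBPBBPBBPBB
```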
Transform Coding.
Transform coding does not encode the spatial image signal directly; instead, it first maps the spatial signal into another orthogonal vector space, generating a batch of transform coefficients, and then encodes these coefficients [18][19][20]. In digital video sequence image compression and coding, the compression performance and error of the discrete cosine transform (DCT) are very close to those of the K-L transform, and the DCT has the advantages of moderate computational complexity, separability, and fast algorithms. Therefore, many image data compression schemes use DCT coding [21][22][23]. At present, the DCT is used in almost all transform-based image encoders.
In the MPEG series, since the basic unit of the DCT transform is an 8 × 8 luminance or chrominance block, N = 8 can be used in formula (2). In practical applications, exploiting the separability of the variables in formula (4), the latter part of formula (4) can be rewritten so that the coefficients factor into one-dimensional transforms. After such processing, the two-dimensional DCT is decomposed into a row DCT followed by a column DCT, which is easy to implement on a computer. For the inverse transform, the variables can likewise be separated, facilitating implementation of the algorithm.
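The row/column decomposition can be made concrete in a few lines; this is a minimal sketch of the separable 2-D DCT using SciPy's one-dimensional `dct`, not the paper's own implementation:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D DCT of a block via separability: a 1-D DCT along the
    columns followed by a 1-D DCT along the rows."""
    return dct(dct(block, type=2, norm='ortho', axis=0),
               type=2, norm='ortho', axis=1)

def idct2(coeffs):
    """Inverse 2-D DCT, likewise separated into two 1-D transforms."""
    return idct(idct(coeffs, type=2, norm='ortho', axis=0),
                type=2, norm='ortho', axis=1)

block = np.random.rand(8, 8)                   # an 8 x 8 luminance block
assert np.allclose(idct2(dct2(block)), block)  # round trip recovers it
```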
Predictive Coding.
Predictive coding is a technique that improves compression performance by exploiting statistical redundancy. Based on previously encoded pixel values, the encoder can estimate and predict the pixel values still to be encoded and decoded [26][27][28]. For the large static or slowly varying regions in a sequence of images, a conditional replenishment (patching) method can be used: the first frame image is stored as the reference frame and sent to the other party. Then the predicted value Î_k(z) of the pixel sampling value I_k(z) of the k-th frame image at z = (x, y) is the reconstructed value I′_{k−1}(z) of the pixel at the same position in the (k − 1)-th frame image. The interframe difference is expressed as FD_k(z) = I_k(z) − I′_{k−1}(z). For sequence images with a moderate amount of motion, the frame difference FD_k(z) of the transmitted subblock is encoded, where k is the index of the subblock, and the subblock is restored using Î_k(z) = I′_{k−1}(z) + FD_k(z).
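In code, the encode/decode pair is just a subtraction and an addition around the stored reference frame; the following is a minimal numpy sketch of the frame-difference scheme above, with illustrative names:

```python
import numpy as np

def encode_frame(current, reference):
    """Interframe difference FD_k = I_k - I'_{k-1}: each pixel is
    predicted by the reconstructed pixel at the same position in the
    previous frame, so only the difference is transmitted."""
    return current.astype(np.int16) - reference.astype(np.int16)

def decode_frame(fd, reference):
    """Restore the frame/subblock: I_k = I'_{k-1} + FD_k."""
    return (reference.astype(np.int16) + fd).astype(np.uint8)

ref = np.random.randint(0, 256, (8, 8), dtype=np.uint8)  # stored first frame
cur = ref.copy()
cur[2:5, 2:5] = 200          # a small moving region; the rest is static
fd = encode_frame(cur, ref)  # mostly zeros, cheap to encode
assert np.array_equal(decode_frame(fd, ref), cur)
```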
Standardized Judgment Method of Shooting Training Action

The standardized judgment method of shooting training action based on digital video technology mainly collects basketball shooting training images by sampling and analyzing the pixel features of the position space of the basketball shooting training action. Using the time difference method, the basketball shooting training target is found and extracted in real time from the digital video sequence. Based on digital video technology, an initial background estimation image is obtained, and an update rate is introduced to update the background estimate. According to the pixel value sequence of the basketball shooting training image, the pixel model of the basketball shooting training image is defined, and the defined pixel value model parameters are corrected. The standardized judgment of the shooting training action is realized by judging whether the defined pixel value matches the background parameter model.
Collecting Basketball Shooting Training Images.
Assuming that a Gaussian mixture model labels the spatial position rotation of the basketball shooting action, at multiple points in the basketball shooting space, the shape coordinate of the basketball shooting action under the initial deformation is X, the width of the entire characteristic image of the basketball court is W, and the height is H. The three-dimensional spatial feature image I of basketball shooting is divided into several subblocks using a grid model. The matching coordinate of the central point of the matching point along the gradient direction on the grid model is calculated as X′, and then the spherical grid model of the basketball in the players' hands is computed. The triangular partition pheromone of the single-frame basketball shooting action is obtained at the j-th manually calibrated point (x_ij, y_ij). The basketball shooting action sampling image has 8 × 8 pixels per grid cell. The sampling point density feature is extracted, and the mean square error between the standardized feature points (x_ij′, y_ij′) of the shooting action is computed, where N is the total number of uniformly distributed grid cells of the image. Considering all the pixel feature points of the n spatial positions, the difference error vector of basketball players in shooting and lifting the ball is obtained. Thus, pixel sampling and feature analysis of the three main position spaces of the basketball shooting training action are realized, and the acquisition of basketball shooting training images is completed.
Extracting the Goal of Basketball Shooting Training.
The time difference method mainly uses the difference between two or more consecutive frames in the digital video sequence to extract the moving target of basketball shooting training [29][30][31]. The basic process of the time difference method is shown in Figure 2.
(1) Calculate the difference image D_k between the k-th frame image f_k and the (k − 1)-th frame image f_{k−1}. According to the following three methods, the resulting difference image is expressed as:

Positive difference: D_k(x, y) = max(f_k(x, y) − f_{k−1}(x, y), 0);
Negative difference: D_k(x, y) = max(f_{k−1}(x, y) − f_k(x, y), 0);
Full difference: D_k(x, y) = |f_k(x, y) − f_{k−1}(x, y)|.

(2) Binarize the differenced image D_k to obtain R_k. Here D_k contains the change of the scene between two consecutive frames. This change is composed of many factors, among which the change caused by the moving target can be considered the most obvious. Given a threshold, when the difference of a pixel value in the difference image is greater than the given threshold, the pixel is considered a foreground pixel, possibly a point on the target; otherwise, it is considered a background pixel [32][33][34].

(3) Postprocess the image R_k to obtain R_k′, in which the area of the moving target should be greater than a given threshold. Morphological filtering and noise removal can be used to eliminate noise in small areas.

(4) Judge the postprocessing result R_k′, mark the regions larger than the given threshold as the target, and obtain its complete location information. Through the above steps, the moving target of basketball shooting training is extracted.
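The four steps map directly onto a few array operations; the following is a minimal sketch (full difference, fixed threshold, morphological opening, and area filtering), with parameter values chosen for illustration rather than taken from the paper:

```python
import numpy as np
from scipy import ndimage

def time_difference_target(f_prev, f_curr, thresh=25, min_area=50):
    """Extract a moving target by frame differencing:
    (1) full difference, (2) threshold to a binary image,
    (3) morphological filtering of small noise, (4) keep regions
    whose area exceeds min_area."""
    d = np.abs(f_curr.astype(np.int16) - f_prev.astype(np.int16))  # step 1
    r = d > thresh                                                 # step 2
    r = ndimage.binary_opening(r, structure=np.ones((3, 3)))       # step 3
    labels, n = ndimage.label(r)                                   # step 4
    sizes = ndimage.sum(r, labels, range(1, n + 1))
    keep_ids = np.nonzero(sizes >= min_area)[0] + 1
    return np.isin(labels, keep_ids)  # binary mask of the target(s)
```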
Judging the Standardization of Basketball Shooting Training Action
(1) Initial background estimation image: the single-Gaussian background model is suitable for unimodal background situations. It establishes a model η(x, μ_t, σ_t²), a single Gaussian distribution for the color of each image point, where t represents time. Let the current color value of a basketball shooting training image point be f_i; calculate the average brightness μ_0 of each pixel and the variance δ_0² of pixel brightness over the basketball shooting training images in the digital video sequence, and take the image B_0 with the Gaussian distribution composed of μ_0 and δ_0² as the initial background estimation image. Each single model p_{i,t} = [w_{i,t}, m_{i,t}, l_{i,t}] consists of three parameters: w_{i,t} is the weight of the single model, whose size reflects the current reliability of the pixel value represented by that model; m_{i,t} is the mean of the single model, which reflects the center of each unimodal distribution; and l_{i,t} is the width of the unimodal distribution of the single model, whose size reflects the degree of instability of the pixel value, playing the same role as in the aforementioned single model. K is the number of single models, which reflects the number of peaks in the multimodal distribution of pixel values; its choice depends on the pixel value distribution and on the computing power of the system, and the usual value is between 3 and 5. To keep the model close to the current distribution of pixel values, the model parameters must be updated for each defined pixel value [35]. (4) Correct the pixel value model parameters, as follows. Step 1: for each new pixel value, first check whether it matches the model. Step 2: after the detection in Step 1, correct the weight of the single model that matches the defined pixel value, and then correct the parameters (mean and width) of that matching single model. Step 3: after completing the above corrections, normalize the weights of the single models in the model. (5) Establishment of the background pixel model: the above model is used to classify the pixel values of the basketball shooting training image; that is, it must be judged whether a defined pixel value is a target pixel or a background pixel, so as to realize the standardized judgment of the shooting training action. Calculate w_{i,t}/l_{i,t} for each single model, arrange the models in descending order of this ratio, and take the leading models as the background model, obtaining the model of background pixels [36]. Through the above steps, the pixel values of the basketball shooting training image are classified using the background parameter model, and the standardized judgment of shooting training action is realized by judging whether the defined pixel value matches the background parameter model.
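The matching-and-update loop described above can be sketched compactly for the single-Gaussian case; the blending constant `rate` and the match threshold `c` (in units of standard deviations) are illustrative assumptions, not the paper's values:

```python
import numpy as np

class GaussianBackground:
    """Per-pixel single-Gaussian background model with an update rate.

    mu and var are initialized from a few starting frames; each new
    frame is matched against the model (|f - mu| <= c * sigma means
    background) and matched pixels' parameters are blended toward the
    new values at the given update rate.
    """
    def __init__(self, init_frames, rate=0.05, c=2.5):
        stack = np.stack([f.astype(np.float64) for f in init_frames])
        self.mu = stack.mean(axis=0)           # initial background estimate
        self.var = stack.var(axis=0) + 1e-6    # avoid zero variance
        self.rate, self.c = rate, c

    def update(self, frame):
        f = frame.astype(np.float64)
        match = np.abs(f - self.mu) <= self.c * np.sqrt(self.var)
        a = self.rate
        new_mu = (1 - a) * self.mu + a * f
        new_var = (1 - a) * self.var + a * (f - new_mu) ** 2
        # blend parameters only where the pixel matched the background
        self.mu = np.where(match, new_mu, self.mu)
        self.var = np.where(match, new_var, self.var)
        return ~match  # foreground (candidate target) mask
```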
Figure 2: Basic process of the time difference method (frame difference, binarization, postprocessing, and discrimination of the result).
Experimental Environment and Data.
In order to verify the effectiveness of the method for determining the standardization of shooting training movements based on digital video technology, experiments were conducted on a computer with an Intel Core i7-6800K (3.4 GHz), an Nvidia GeForce GTX 1080 (8 GB) graphics card, and 24 GB of memory. The operating system is Windows 10, and the software platforms are Anaconda3 and Visual Studio 2015.
The resolution of the basketball shooting action visual image sampling is 320 × 240. One group of basketball shooting action visual image simulation data expresses one basketball shooting action.
There are 100 test sample image sets for each shooting action mode and a total of 1024 × 1000 test sets in the basketball shooting action visual image database. The reference background neighborhood is 5 × 5 image blocks (i.e., 20 × 20 pixels), and the model update parameters are β_0 = 0.95, β_1 = 0.99, and β_2 = 0.90; the threshold t_IBSCI was determined after many experiments. In order to ensure the absolute fairness of the experimental results, the ball selection throughout the experiment is completed by an artificial intelligence robot, and the participants only serve as detection and verification personnel to supervise and inspect the robot's ball selection. According to the above parameters, SolidWorks is used to establish a simplified visual analysis model of the basketball shooting action, the analysis data are imported into ADAMS software for image processing and analysis, and a standardized judgment is made on the basketball shooting action. The standardized action mode of basketball shooting is shown in Figure 3.
The standardized basketball shooting action data shown in Figure 3 are saved as .TXT text data, loaded into the image data processing software, and subjected to computer vision analysis to guide the actual shooting action, collect the basketball shooting training images, and obtain the original basketball shooting information. The collection results are shown in Figure 4. In order to realize the standardized judgment of the shooting training action, the training target must be extracted from the collected original basketball shooting information. The proposed standardized judgment method of shooting training action based on digital video technology is used to extract the target from the collected original basketball shooting data and to judge the standardization of the shooting training action. The results are shown in Figure 5.
It can be seen from the analysis of Figure 5 that the standardized judgment method of shooting training action based on digital video technology can effectively realize the extraction and detection of moving targets in basketball shooting training, correct shooting errors in real time, and effectively guide basketball shooting training.
In order to evaluate and compare the proposed standardized judgment method of shooting training action, the accuracy rate and recognition rate are used as evaluation indexes, computed as accuracy rate = TP/(TP + FP) and recognition rate = TP/(TP + FN), where TP is the number of foreground pixels that are correctly detected, FP is the number of pixels whose background is misjudged as foreground, and FN is the number of pixels whose foreground is misjudged as background.
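A quick sketch of the two indexes, assuming the standard precision/recall forms implied by the TP/FP/FN definitions above (the displayed equations did not survive in the text):

```python
def accuracy_rate(tp, fp):
    """Fraction of pixels detected as foreground that truly are."""
    return tp / (tp + fp)

def recognition_rate(tp, fn):
    """Fraction of true foreground pixels that were detected."""
    return tp / (tp + fn)

# Illustrative counts only:
print(accuracy_rate(952, 48))     # 0.952
print(recognition_rate(960, 40))  # 0.96
```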
Comparison of Standardized Judgment Accuracy of Shooting Training Action

In order to further verify the judgment accuracy of the proposed method, the accuracy rate is taken as the evaluation index: the higher the accuracy rate, the better the judgment accuracy. Comparing against the method of reference [8] and the method of reference [9], the standardized judgment accuracy of shooting training action for the different methods is obtained; the comparison results are shown in Figure 6.
According to the analysis of Figure 6, when the number of experiments is 30, the average standardized judgment accuracy of the shooting training action is 84.6% for the method of reference [8], 70.3% for the method of reference [9], and as high as 95.2% for the proposed method. Therefore, compared with the methods of references [8] and [9], the proposed method achieves higher accuracy in the standardized judgment of shooting training action and can effectively improve it.
Comparison of Standardized Judgment Stability of Shooting Training Action

To further verify the judgment stability of the proposed method, the recognition rate is taken as the evaluation index: the higher the recognition rate, the better the judgment stability of the method. Comparing the method of reference [8], the method of reference [9], and the proposed method, the standardized judgment stability of shooting training actions of the different methods is obtained, as shown in Figure 7.
According to the analysis of Figure 7, when the number of experiments is 30, the average recognition rate of the standardized judgment of shooting training action is 88.4% for the method of reference [8], 80.2% for the method of reference [9], and as high as 96% for the proposed method. It can be seen that, compared with the methods of references [8] and [9], the proposed method has better stability in judging the standardization of shooting training action.
Comparison of Standardized Judgment Time of Shooting Training Action

On this basis, the judgment time of the proposed method is verified, and the method of reference [8], the method of reference [9], and the proposed method are compared. The standardized judgment times of the shooting training action for the different methods are compared, with the results shown in Table 1.
According to the data in Table 1, the standardized judgment time of shooting training actions increases for all methods as the number of experiments increases. When the number of experiments reaches 30, the standardized judgment time of the shooting training action is 23.9 s for the method of reference [8] and 26.5 s for the method of reference [9], whereas it is only 15.3 s for the proposed method. Therefore, compared with the methods of references [8] and [9], the standardized judgment time of the proposed method is shorter.
Conclusion
(1) The proposed standardized judgment method of shooting training action based on digital video technology gives full play to the advantages of digital video technology. (2) The accuracy of the standardized judgment of shooting training action is high, the judgment time is effectively shortened, and the judgment stability is good. (3) The method corrects shooting mistakes in real time and effectively guides basketball shooting training. However, in the process of standardized judgment of shooting training action, dimensionality reduction of the shooting training action features is not considered, which could reduce the amount of computation. Therefore, in future research, the dimensionality of the shooting training action features will be reduced to further shorten the judgment time.
Data Availability

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding this work.
Lightweight Anomaly Detection for Wireless Sensor Networks
Anomaly detection in wireless sensor networks (WSNs) is critical to ensure the quality of sensor data, secure monitoring, and reliable detection of interesting and critical events. The main challenge for an anomaly detection algorithm in WSNs is identifying anomalies with high accuracy while consuming minimal network resources. In this paper, two lightweight anomaly detection algorithms, LADS and LADQS, are proposed for WSNs. Both algorithms utilize the one-class quarter-sphere support vector machine (QSSVM) and convert the linear optimization problem of QSSVM into a sorting problem to reduce computational complexity. Experimental results show that, compared to QSSVM, the proposed algorithms maintain lower computational complexity without reducing anomaly detection accuracy.
Introduction
Wireless sensor networks (WSNs) have been widely used in various applications including civil and military domains [1]. However, the harsh deployment environment and the constrained capabilities of sensors (energy, CPU, memory, etc.) make WSNs more vulnerable to different types of misbehaviors or anomalies. In WSNs, an anomaly or outlier (these terms are used interchangeably in this paper) is defined as the measurement that significantly deviates from the normal pattern of the sensed data [2]. Sensor data are affected by these anomalies that always correspond to node software or hardware failure, reading errors, unusual events, and malicious attacks. Therefore, it is critical to efficiently and accurately identify anomalies in the sensor data to ensure data quality, secure monitoring, and reliable detection of interesting and critical events.
The context of sensor networks and nature of sensor data make design of an appropriate anomaly detection technique challenging [3]. The constrained environment of a WSN impacts on anomaly detection algorithms. Node constraints on computational power and memory mean that algorithms for anomaly detection should have low computational complexity and occupy little memory space. Moreover, prelabelled data are expensive or difficult to obtain in WSNs. Anomaly detection for WSNs should be able to operate on unlabelled data. In general, the key challenge of anomaly detection algorithm in WSNs is identifying anomalies with high accuracy while consuming minimal resource of the network.
Recently, support vector machines (SVM) in the form of the one-class quarter-sphere SVM (QSSVM) have been used for anomaly detection in WSNs due to their reduced computational complexity and ability to operate on unlabelled data [4,5]. QSSVM converts the quadratic optimization problem of the one-class SVM into a linear optimization problem. A family of algorithms based on QSSVM has been proposed for anomaly detection in WSNs and has shown potential for this task. However, the main disadvantage of the algorithms derived from QSSVM is the high computational cost of solving a linear programme.
In this paper, we convert the linear optimization problem of QSSVM to a sort problem and propose two lightweight anomaly detection algorithms for WSNs to identify anomalies. Simulations show that our proposed algorithms are able to reduce the computational complexity without reducing the accuracy for anomaly detection.
Our paper makes the following contributions: (1) We present a mathematical method to prove that the linear optimization problem of QSSVM can be converted into a sorting problem. (2) Based on the presented method, we propose two lightweight anomaly detection algorithms for WSNs, namely, the lightweight anomaly detection algorithm using sort (LADS) and the lightweight anomaly detection algorithm using quick select (LADQS). We show that our algorithms are equivalent to QSSVM but have lower computational complexity.
(3) An experimental evaluation of the effectiveness and efficiency of the proposed algorithms on a real-world WSN dataset is presented.
The remainder of this paper is structured as follows. Related works on QSSVM-based anomaly detection models are presented in Section 2. In Section 3, first the principle of QSSVM is described, and then a method to convert the linear optimization problem of QSSVM to a sort problem is discussed. Furthermore, our proposed lightweight anomaly detection algorithms are also explained. Experimental results and performance evaluation are reported in Section 4. Section 5 concludes the paper and suggests some directions for future research.
Related Work
Anomaly detection typically makes use of data mining and machine learning techniques to detect abnormal activities in systems [2]. Anomaly detection techniques for WSNs can be categorized into statistical-based, nearest neighbor-based, clustering-based, classification-based, and spectral decomposition-based approaches [6]. Classification models are important models in the data mining and machine learning community, in which a classification model is learned using known training data and then used to classify unseen testing data into different classes. SVM-based techniques are among the popular classification-based approaches and have been widely used to detect anomalies, owing to the advantages of requiring no explicit statistical model and avoiding the curse of dimensionality.
In WSNs, prelabelled data are expensive or difficult to obtain. Several one-class SVM-based anomaly detection techniques have been proposed to process unlabelled data. Their main idea is to use a nonlinear function to map the data vectors collected in the original input space to a higher-dimensional space called the feature space. A decision boundary of normal data is then found, which encompasses the majority of the data vectors in the feature space. Data vectors falling outside the normal boundary are classified as anomalous. To this end, Schölkopf et al. presented a hyperplane-based one-class SVM that fits a hyperplane separating the data from the origin; data vectors near the origin are considered anomalous [7]. Tax and Duin proposed a hypersphere-based one-class SVM that fits a hypersphere of minimum radius [8]; data vectors falling outside the hypersphere are considered anomalous. However, these one-class SVM-based techniques still require solving a quadratic optimization, which is extremely costly.
In order to reduce the expensive computational complexity of the quadratic optimization, Campbell and Bennett formulated a linear programming approach for the hyperplane-based SVM proposed in [9], which is based on attracting the hyperplane towards the average of the distribution of mapped data vectors. Laskov et al. extended the work of Tax and Duin by proposing a quarter-sphere one-class SVM, which converts the quadratic optimization problem into a linear optimization problem by fitting a hypersphere centered at the origin, and consequently reduces the computational complexity of learning the normal boundary of the data vectors [5,8]. Based on QSSVM, several distributed outlier detection techniques for WSNs were proposed by Rajasegarar et al. and Zhang et al. [10,11]. Rajasegarar et al. further extended the work in [12] by proposing a hyperellipsoidal one-class SVM using a linear optimization. However, the solution of a linear optimization, rather than a quadratic one, still requires expensive computation, since solving the linear programme is O(n³), where n is the number of data vectors in the training set.
In this paper, we propose two lightweight anomaly detection techniques which convert the linear optimization problem of QSSVM to a sort problem and consequently reduce computational complexity of learning the normal boundary of data vectors.
Lightweight Anomaly Detection Algorithm for WSNs
In this section, we first introduce the principles of modeling the one-class quarter-sphere support vector machine (QSSVM) proposed in [5]. After that, we use a mathematical method to prove that the linear optimization problem of QSSVM can be converted to a sort problem and further propose two lightweight anomaly detection algorithms for WSNs.
Principles of the One-Class Quarter-Sphere SVM.
In this section we discuss the principles of QSSVM proposed by Laskov et al. in [5]. They converted the quadratic optimization problem of the one-class SVM into a linear optimization problem by fixing the center of the quarter-sphere at the origin. The geometry of the hypersphere SVM is shown in Figure 1.
Assume that data vectors {x_i ∈ R^d, i = 1, ..., n} of d variables in the input space are mapped into the feature space using a certain nonlinear mapping function φ(·). The constrained optimization problem of QSSVM can be formalized as minimizing the radius of the quarter-sphere subject to constraints, where {ξ_i : i = 1, 2, ..., n} are the slack variables that allow some of the data vectors to fall outside the quarter-sphere. The regularization parameter ν ∈ (0, 1) represents the fraction of data vectors that are expected to be anomalies.
Obtaining the dual form of the optimization problem allows its formulation in terms of dot products of the data vectors in the training set. Using the kernel trick, the dot products ‖φ(x_i)‖² are replaced by the kernel values k(x_i, x_i). The resulting dual formulation of (1), denoted (2), is stated in terms of the dot product of each image vector with itself; this causes an issue with distance-based kernels, such as the RBF kernel, because the diagonal terms k(x_i, x_i) become equal for all vectors. This can be solved by centering the kernel matrix in feature space, where the mean of the image vectors is subtracted from each image vector. There is no explicit vector in feature space that represents the mean; however, the dot products of the centered image vectors can be obtained from the kernel matrix K = [k(x_i, x_j)] = [(φ(x_i) · φ(x_j))] using K̃ = K − 1_n K − K 1_n + 1_n K 1_n, where 1_n is an n × n matrix with all values equal to 1/n [13]. When the kernel matrix is centered in feature space, the norms of the kernels are no longer equal, and the terms k(x_i, x_i) of (2) are replaced by the diagonal elements k̃(x_i, x_i) of the centered kernel matrix K̃. √k̃(x_i, x_i) can be regarded as the distance between data vector x_i and the origin of its centered quarter-sphere in the feature space. Consequently, the dual problem (2) can be solved.
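A minimal numpy sketch of this centering step (the formula K̃ = K − 1_nK − K1_n + 1_nK1_n above), with an illustrative function name:

```python
import numpy as np

def center_kernel(K):
    """Center a kernel matrix in feature space:
    K_tilde = K - 1n K - K 1n + 1n K 1n,
    where 1n is an n x n matrix with all entries equal to 1/n."""
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    return K - one_n @ K - K @ one_n + one_n @ K @ one_n

# The diagonal of K_tilde holds each vector's squared distance to the
# feature-space mean, i.e., to the origin of the centered quarter-sphere.
```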
After solving (2) for {α_i}, the data vectors can be classified as follows. Data vectors with α_i = 0 lie inside the quarter-sphere and are considered normal; their distances from the origin are smaller than the radius of the quarter-sphere. Data vectors with α_i = 1/(νn) are considered anomalies; they lie outside the quarter-sphere. Data vectors with 0 < α_i < 1/(νn) lie on the surface of the quarter-sphere and are called the border support vectors. Moreover, the minimal radius R of the quarter-sphere can be obtained using R² = k̃(x_i, x_i) for any border support vector x_i.
Lightweight Anomaly Detection Algorithm Using Sort.
In the previous section, we introduced the principle of QSSVM, which converts the quadratic optimization problem into a linear optimization problem. The learning process of QSSVM amounts to finding the minimal radius of the one-class quarter-sphere for the training set, which can be obtained from the distance between a border support vector and the origin in the feature space using R² = k̃(x_i, x_i). This process has high computational and memory complexity, because it requires solving the linear programme and finding the data vectors with 0 < α_i < 1/(νn), that is, the border support vectors. To reduce the cost of modeling QSSVM, we propose a lightweight anomaly detection algorithm using sorting to identify anomalies. Our algorithm obtains the minimal radius of QSSVM, that is, the distance √k̃(x_i, x_i) between a data vector with 0 < α_i < 1/(νn) and the origin, from a descending sorted sequence instead of from the solution of a linear optimization problem. That is to say, our algorithm is equivalent to QSSVM but converts the linear optimization problem into a sorting problem. To do so, we first fix the data vectors at the origin in the feature space through generation of the kernel matrix and the centering transformation, as in QSSVM [5], and obtain the formulation of (2). Note that the terms k(x_i, x_i) of (2) have already been replaced by the diagonal elements k̃(x_i, x_i) of the centered kernel matrix, as discussed in the previous subsection. Next we need to find the minimal radius R from (2). Clearly, computing R only requires finding the distance √k̃(x_i, x_i) corresponding to an α_i satisfying the formulation of (2) with 0 < α_i < 1/(νn), instead of solving the linear programme for (2). To find the minimal radius R, the sequence {k̃(x_i, x_i)} is sorted in descending order; for convenience, we denote the sorted values by d_i. Consequently, the dual formulation of (2) simplifies to a problem over the sorted values, where d_1 ≥ d_2 ≥ ⋯ ≥ d_n, ν ∈ (0, 1) is the regularization parameter that represents the fraction of outliers, and n is the number of data vectors in the training set.
Actually, there exists an implied constraint that α_i = α_j if and only if d_i = d_j. This implies that data vectors which have the same distance to the origin in the feature space are either all normal or all anomalous. We now prove that the minimal radius R can be computed directly from the descending sequence {d_1, d_2, ..., d_n} through the parameters ν and n.
Suppose {α_1, α_2, ..., α_n} is the solution of (4); the objective function attains its maximum under the constraints, and its value can be denoted W_1 = α_1 d_1 + α_2 d_2 + ⋯ + α_n d_n. Firstly, we prove that α_1 is the maximum of {α_1, α_2, ..., α_n} if d_1 is the maximum of {d_1, d_2, ..., d_n}. Assume the contrary, i.e., that α_1 is not maximal although d_1 is maximal. Then there exists j such that α_j is the maximum of {α_i}. Exchange the value of α_j with α_1, so that the value of the objective function becomes W_2. Subtracting W_1 from W_2 gives W_2 − W_1 = (α_j − α_1)(d_1 − d_j). By the assumptions, α_j ≥ α_1 and d_1 ≥ d_j, so W_2 − W_1 ≥ 0. This contradicts the fact that W_1 is the maximum of the objective function. Thus we have derived that α_1 is the maximum of {α_i | i = 1, 2, ..., n}.
Step 4. If k̃(x_i, x_i) > R², then x_i is an outlier; else x_i is normal. End If (Algorithm 1)

Theorem 4. The minimal radius R of the quarter-sphere in QSSVM can be obtained by R² = d_(m+1) with m = ⌊νn⌋, where d_(m+1) is the (m + 1)-th largest squared distance k̃(x_i, x_i) between x_i and the origin of its centered quarter-sphere in the feature space.
From Theorem 4, the minimal radius of the quarter-sphere in QSSVM can be obtained from a descending sorted sequence using (9). So the linear optimization problem of QSSVM is converted into a sorting problem over {k̃(x_i, x_i)}. We now propose a lightweight anomaly detection algorithm using sorting (LADS), which is equivalent to QSSVM. The details of LADS are described as follows.
After the generation of the kernel matrix and the centering transformation, the data vectors are centered at the origin in the feature space, the same as in QSSVM. The squared distances to the origin are sorted in descending order to form a descending sequence {k̃(x_i, x_i)}, where k̃(x_1, x_1) ≥ ⋯ ≥ k̃(x_n, x_n). The (m + 1)-th largest squared distance, with m = ⌊νn⌋, is chosen from the descending sequence as the minimal squared radius R² by (9). The data vectors can then be classified depending on R: data vectors whose distances to the origin are not larger than R are considered normal, and data vectors whose distances to the origin are larger than R are considered outliers. The algorithm is described in Algorithm 1.
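A minimal sketch of the LADS decision rule, assuming the centered kernel matrix from the earlier `center_kernel` sketch; names and structure are illustrative, not the paper's code:

```python
import numpy as np

def lads_radius_sq(K_tilde, nu):
    """LADS: the squared radius R^2 is the (floor(nu*n)+1)-th largest
    diagonal entry of the centered kernel matrix, found by sorting."""
    d = np.diag(K_tilde)            # squared distances to the origin
    m = int(np.floor(nu * len(d)))
    return np.sort(d)[::-1][m]      # (m+1)-th largest value

def lads_classify(K_tilde, nu):
    """True marks an outlier: its squared distance exceeds R^2."""
    return np.diag(K_tilde) > lads_radius_sq(K_tilde, nu)
```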
Lightweight Anomaly Detection Algorithm Using Quick Select

As discussed in the previous section, the minimal radius of the quarter-sphere in QSSVM can be obtained from the descending sequence {k̃(x_i, x_i)}. The computational complexity of obtaining the descending sequence is O(n log₂ n). In fact, producing the full descending sequence is unnecessary, because R can be determined by (9) as soon as the (⌊νn⌋ + 1)-th largest element of the original sequence {k̃(x_i, x_i)} is found. So we propose an anomaly detection method, LADQS, based on the Quickselect algorithm [14] to find the (⌊νn⌋ + 1)-th largest distance to the origin, that is, the minimal radius R. In computer science, Quickselect is a selection algorithm for finding the k-th largest element in an unordered list. Quickselect uses the same overall approach as quicksort [14], choosing one element as a pivot and partitioning the data in two around the pivot, according to whether elements are less than or greater than it. However, instead of recursing into both sides, as in quicksort, Quickselect only recurses into one side, the side containing the element it is searching for. This reduces the average computational complexity from O(n log₂ n) to O(n). The pseudocode of the LADQS algorithm is shown in Algorithm 2.
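In practice the selection step can be delegated to a library partition routine; the following is a minimal sketch using numpy's `np.partition` (an introselect-based selection) in place of a hand-written Quickselect, so it illustrates the idea rather than reproducing Algorithm 2:

```python
import numpy as np

def ladqs_radius_sq(K_tilde, nu):
    """LADQS: find the (floor(nu*n)+1)-th largest squared distance by
    selection (average O(n)) instead of a full O(n log n) sort."""
    d = np.diag(K_tilde)
    m = int(np.floor(nu * len(d)))
    k = len(d) - m - 1   # index of that element in ascending order
    return np.partition(d, k)[k]

def ladqs_classify(K_tilde, nu):
    """Same decision rule as LADS; only the radius search differs."""
    return np.diag(K_tilde) > ladqs_radius_sq(K_tilde, nu)
```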
Experimental Results and Evaluation
This section presents the performance evaluation of our two techniques compared to QSSVM. In our experiments, we used real data gathered at the Grand-Saint-Bernard [15], similar to the dataset used in [16]. For simulation, we use Matlab to implement our algorithms and QSSVM on a single node of a WSN. For fairness, we report the average of the tests run on 7 different nodes as the experimental results.
Experimental Datasets.
The real data are collected from a closed neighborhood from a WSN deployed in Grand-Saint-Bernard as shown in Figure 2. The closed neighborhood consists of node 2 and its 6 spatially neighboring nodes, namely, nodes 3, 4, 8, 12, 20, and 14. In our simulations, we test the real data collected during the period of 6 am-6 pm on September 20, 2007, with two attributes: ambient temperature and relative humidity for each sensor measurement. The data is preprocessed and normalized to the range [0, 1]. The number of anomalous data is about 10% of normal data. Measurements are labeled depending on the degree of dissimilarity between one another.
Experimental Results and Evaluation.
We choose the radial basis function (RBF) kernel to generate the kernel matrices: in the conventional form, k(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²)), where σ is the width parameter of the kernel function, set to 0.25 in our experiments.
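A minimal sketch of building such a kernel matrix, assuming the conventional RBF parameterization above (the paper's exact normalization of the width parameter may differ):

```python
import numpy as np

def rbf_kernel_matrix(X, sigma=0.25):
    """Pairwise RBF kernel K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
    for the rows of X; sigma = 0.25 as used in the experiments."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # squared distances
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))
```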
We examined the effect of the regularization parameter ν for our two anomaly detection algorithms and QSSVM. ν represents the fraction of anomalies in the training set, and we varied it in the range from 0.01 to 0.25 in intervals of 0.03. We also examined the training time of the three algorithms. Figures 3 and 4 show the detection rate and the false alarm rate, respectively, obtained by our algorithm LADS with the RBF kernel on the real data. As discussed in the previous section, LADS, LADQS, and QSSVM share the same principle of data classification and differ only in how they find the minimal radius R. This means that LADS behaves in the same manner as LADQS; therefore, the results for LADQS and QSSVM have been omitted. Figure 5 shows the training time elapsed for the three algorithms. We can see that the training time of our algorithms LADS and LADQS is significantly less than that of QSSVM, indicating that our two algorithms require less time and have lower computational complexity than QSSVM.
Simulation results show that our two algorithms LADS and LADQS have lower computational complexity without reducing anomaly detection accuracy, compared to QSSVM.
The computational complexity of our techniques is presented in Table 1, where n denotes the number of data vectors in the training set, d represents the dimensionality of the measurements, ν represents the fraction of anomalies in the training set, and L(n) represents the computational complexity of solving a linear optimization problem.
Conclusion
In this paper we propose two lightweight anomaly detection algorithms for WSNs, LADS and LADQS. Both algorithms are based on QSSVM but convert the linear optimization problem of QSSVM into a sorting problem. Simulation results show that our algorithms reduce the computational complexity while achieving the same anomaly detection accuracy. Our future research includes selecting the optimal value of the parameter ν and implementing our algorithms on multiple sensor nodes in real-life deployments.
Anticancer Activities of Pterostilbene-Isothiocyanate Conjugate in Breast Cancer Cells: Involvement of PPARγ
Trans-3,5-dimethoxy-4′-hydroxystilbene (PTER), a natural dimethylated analog of resveratrol, preferentially induces certain cancer cells to undergo apoptosis and could thus have a role in cancer chemoprevention. Peroxisome proliferator-activated receptor γ (PPARγ), a member of the nuclear receptor superfamily, is a ligand-dependent transcription factor whose activation results in growth arrest and/or apoptosis in a variety of cancer cells. Here we investigated the potential of PTER-isothiocyanate (ITC) conjugate, a novel class of hybrid compound (PTER-ITC) synthesized by appending an ITC moiety to the PTER backbone, to induce apoptotic cell death in hormone-dependent (MCF-7) and -independent (MDA-MB-231) breast cancer cell lines and to elucidate PPARγ involvement in PTER-ITC action. Our results showed that when pre-treated with PPARγ antagonists or PPARγ siRNA, both breast cancer cell lines suppressed PTER-ITC-induced apoptosis, as determined by annexin V/propidium iodide staining and cleaved caspase-9 expression. Furthermore, PTER-ITC significantly increased PPARγ mRNA and protein levels in a dose-dependent manner and modulated expression of PPARγ-related genes in both breast cancer cell lines. This increase in PPARγ activity was prevented by a PPARγ-specific inhibitor, in support of our hypothesis that PTER-ITC can act as a PPARγ activator. PTER-ITC-mediated upregulation of PPARγ was counteracted by co-incubation with p38 MAPK or JNK inhibitors, suggesting involvement of these pathways in PTER-ITC action. Molecular docking analysis further suggested that PTER-ITC interacted with 5 polar and 8 non-polar residues within the PPARγ ligand-binding pocket, which are reported to be critical for its activity. Collectively, our observations suggest potential applications for PTER-ITC in breast cancer prevention and treatment through modulation of the PPARγ activation pathway.
Introduction
The incidence of cancer, in particular breast cancer, continues to be the focus of worldwide attention. Breast cancer is the most frequently occurring cancer and the leading cause of cancer deaths among women, with an estimated 1,383,500 new cases and 458,400 deaths annually [1]. Many treatment options, including surgery, radiation therapy, hormone therapy, chemotherapy, and targeted therapy, are associated with serious side effects [2][3][4][5]. Since cancer cells exhibit deregulation of many cell signaling pathways, treatments using agents that target only one specific pathway usually fail in cancer therapy. Several targets can be modulated simultaneously by a combination of drugs with different modes of action, or using a single drug that modulates several targets of this multifactorial disease [6].
Peroxisome proliferator-activated receptors (PPAR) are ligand-binding transcription factors of the nuclear receptor superfamily, which includes receptors for steroids, thyroids and retinoids [7,8]. Three types of PPAR have been identified (α, β, γ), each encoded by distinct genes and expressed differently in many parts of the body [8]. They form heterodimers with the retinoid X receptor, and these complexes subsequently bind to a specific DNA sequence, the peroxisome proliferating response element (PPRE), that is located in the promoter region of PPARγ target genes and modulates their transcription [9]. PPARγ is expressed strongly in adipose tissue and is a master regulator of adipocyte differentiation [10]. In addition to its role in adipogenesis, PPARγ is an important transcriptional regulator of glucose and lipid metabolism, and is implicated in the regulation of insulin sensitivity, atherosclerosis, and inflammation [10,11]. PPARγ is also expressed in tissues such as breast, colon, lung, ovary, prostate and thyroid, where it regulates cell proliferation, differentiation, and apoptosis [12][13][14].
Although it remains unclear whether PPAR are oncogenes or tumor suppressors, research has focused on this receptor because of its involvement in various metabolic disorders associated with cancer risk [15][16][17]. The anti-proliferative effect of PPARγ is reported in various cancer cell lines including breast [18][19][20][21], colon [22], prostate [23] and non-small cell lung cancer [24]. Ligand-induced PPARγ activation can induce apoptosis in breast [13,20,25,26], prostate [23] and non-small cell lung cancer [24], and PPARγ ligand activation is reported to inhibit breast cancer cell invasion and metastasis [27,28]. Results of many studies and clinical trials have raised questions regarding the role of PPARγ in anticancer therapies, since its ligands involve both PPARγ-dependent and -independent pathways for their action [29].
Previous studies showed that thiazolidinediones can inhibit proliferation and induce differentiation-like changes in breast cancer cell lines both in vitro and in xenografted nude mice [13,30]. Alternately, Abe et al. showed that troglitazone, a PPARγ ligand, can inhibit KU812 leukemia cell growth independently of PPARγ involvement [31]. In addition to in vitro studies, in vivo administration of PPARγ ligands also produced varying results. The use of troglitazone was reported to inhibit MCF-7 tumor growth in triple-negative immunodeficient mice [13] and in DMBA-induced mammary tumorigenesis [32], and administration of a PPARγ ligand (GW7845) also inhibited development of carcinogen-induced breast cancer in rats [33]. In contrast, a study by Lefebvre et al. showed that PPARγ ligands, including troglitazone and BRL-49653, promoted colon tumor development in C57BL/6J-APCMin/+ mice, raising the possibility that PPARγ acts as a collaborative oncogene in certain circumstances [34]. It thus appears that PPARγ activation or inhibition can have distinct roles in tumorigenesis, depending on the cancer model examined. Hence determining possible crosstalk between PPARγ and its ligand in cancer is critical for the development of more effective therapy.
Trans-3,5-dimethoxy-4′-hydroxystilbene (PTER) is an antioxidant found primarily in blueberries. This naturally occurring dimethyl ether analog of resveratrol has higher oral bioavailability and enhanced potency compared with resveratrol [35]. Based on its antineoplastic properties in several common malignancies, studies suggest that PTER has the hallmark characteristics of an effective anticancer agent [36][37][38][39][40]. Recent research from our laboratory showed that the PTER-ITC conjugate (Fig. 1A), a novel class of hybrid compound synthesized by appending an isothiocyanate moiety to the PTER backbone, can induce greater cytotoxicity in tumor cells than PTER alone [41,42]. In human breast and prostate carcinoma cells, PTER-ITC induces strong anticancer activity at a much lower dose than the PTER parent compound [41,42].
Here we analyzed the anti-cancer activity of PTER-ITC in MCF-7 and MDA-MB-231 breast cancer cells. As PPARγ mediates anti-tumor activity in a variety of cancer types, we hypothesized that PTER-ITC could modulate the activity of the PPARγ pathway in breast cancer cells and inhibit tumor cell growth. Our results show that PTER-ITC induced apoptosis in breast cancer cells through caspase activation, which increased the Bax/Bcl-2 ratio and downregulated survivin. Our molecular docking study also demonstrated that PTER-ITC makes contact with amino acids within the ligand-binding pocket of PPARγ that are crucial for its activation. We found that PPARγ activation has an important role in PTER-ITC-induced apoptosis and reduced survivin levels. Our studies thus provide evidence for the usefulness of PTER-ITC in breast cancer therapy involving various pathways, including PPARγ.
Cell lines and culture
Three breast cancer cell lines (MCF-7, MDA-MB-231 and T47D) with distinct characteristics were obtained from the National Center for Cell Science (NCCS; Pune, India). MCF-7 and T47D are estrogen receptor (ER)-positive and lack HER-2 expression, while MDA-MB-231 is ER-negative and has low HER-2 expression. MCF-7 cells express wild-type p53, whereas MDA-MB-231 and T47D express mutant p53. All three cell lines express PPARγ protein. T47D cells were maintained in RPMI medium supplemented with 2 mM L-glutamine, 4.5 g/L glucose and 0.2 U/ml insulin. MCF-7 and MDA-MB-231 cells were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (heat-inactivated) (both from Invitrogen, Life Technologies) and 1% antibiotic mix (100 U/ml penicillin, 100 µg/ml streptomycin) at 37°C, 5% CO2 in a humidified atmosphere.
Cytotoxicity assays
The anti-proliferative effect of PTER and PTER-ITC was determined by the MTT assay as described [41]. Briefly, MCF-7 and MDA-MB-231 cells were seeded at a density of ~5×10³ cells/well in a 96-well microtiter plate and incubated overnight. Cells were then exposed to increasing PTER and PTER-ITC concentrations (1, 10, 20, 40 and 60 µM) for 24 h. Control cells were treated with 0.1% DMSO (vehicle control). The effect of the inhibitor GW9662 on PTER-ITC-induced cell death was also studied to evaluate involvement of PPARγ activation in this process. After 24 h, cultures were assayed by addition of 20 µl MTT (5 mg/ml) and incubation (4 h, 37°C). MTT-containing medium was then aspirated and 200 µl DMSO was added to dissolve the formazan crystals. Optical density (OD) was measured at 570 nm in an ELISA plate reader (Fluostar Optima, BMG Labtech, Germany). Absorbance values were expressed as percentage of control.
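The percentage-of-control calculation in the last step is simple arithmetic; the following is a minimal sketch with hypothetical OD570 readings (the numbers and group labels are illustrative, not data from the study):

```python
import numpy as np

# Hypothetical OD570 readings from triplicate wells; "vehicle" is the
# 0.1% DMSO control (background/blank subtraction omitted for brevity).
od570 = {
    "vehicle":        [0.82, 0.85, 0.80],
    "PTER-ITC 10 uM": [0.61, 0.63, 0.60],
    "PTER-ITC 20 uM": [0.44, 0.46, 0.43],
}

control_mean = np.mean(od570["vehicle"])
for treatment, wells in od570.items():
    viability = 100.0 * np.mean(wells) / control_mean  # % of vehicle control
    print(f"{treatment}: {viability:.1f}% viable")
```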
Change in nuclear morphology of apoptotic cells
Changes in nuclear morphology of apoptotic cells were examined by fluorescence microscopy of DAPI-stained cells. In brief, 0.5×10⁶ cells were seeded in a 6-well plate and incubated (24 h) with 10 and 20 µM concentrations of PTER-ITC in the presence and absence of GW9662. For this, the cells were pretreated with 10 µM GW9662 for 1 h, followed by treatment with 10 and 20 µM PTER-ITC (for the next 24 h). The cells were then washed with PBS (phosphate-buffered saline), incubated with 500 µl DAPI (0.5 µg/ml; 10 min, in the dark) and observed by fluorescence microscopy (Zeiss, Axiovert 25). For apoptosis analysis, cells were stained with annexin V-Alexa Fluor 488 (Alexa488) and propidium iodide in binding buffer (room temperature, 15 min in the dark). Stained cells were analyzed on a fluorescence activated cell sorter (FACS Calibur, BD Biosciences, San Jose, CA) and data were analyzed using Cell Quest 3.3 software.
Immunofluorescence staining
For immunofluorescence staining, cells were washed with PBS and fixed in 3% paraformaldehyde, permeabilized with 0.1% Triton X-100 and blocked with 1% BSA (bovine serum albumin; 30 min, room temperature). Cells were then incubated with anti-PPARγ antibody (1:200 in blocking buffer; 1 h, room temperature). Finally, the cells were washed with PBS and incubated with FITC-labeled anti-rabbit secondary antibody (1:1000 in blocking buffer; 30 min, room temperature) and observed by fluorescence microscopy (Zeiss, Axiovert 25).
Luciferase assay
PPARγ activity was studied by luciferase assay as described [18]. Briefly, cells were seeded at a density of ~4×10⁴ cells/well in 12-well microtiter plates and incubated overnight. Cells were then incubated in serum-free DMEM for ≥1 h before transfection with PPREx3-tk-Luc (three PPRE from the rat acyl-CoA oxidase promoter under the control of the Herpes simplex virus thymidine kinase promoter) and Renilla-luc plasmids as an internal control. For the PPAR study, cells were transfected with 25 ng pcMX-PPARα, pcMX-PPARβ and pcMX-PPARγ plasmids, each with 250 ng of reporter gene plasmid using Polyfect transfection reagent (Qiagen), according to instructions. Transfected cells were exposed to vehicle, various concentrations of PTER, PTER-ITC and PPAR agonist or antagonist in charcoal-stripped medium (24 h). Cells were then lysed and luciferase activity measured according to kit instructions (Promega, Madison, WI). Triplicates were measured for each experimental point; variability was <10%. Luciferase values for each lysate were normalized to Renilla luciferase activity.
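The normalization step amounts to dividing the firefly reporter signal by the Renilla signal per well, then expressing each treatment relative to vehicle. A minimal sketch with hypothetical luminescence counts (names and numbers are illustrative assumptions):

```python
# Hypothetical raw luminescence counts (firefly reporter, Renilla control).
wells = [
    {"name": "vehicle",        "firefly": 12000, "renilla": 9500},
    {"name": "PTER-ITC 20 uM", "firefly": 30500, "renilla": 9800},
    {"name": "rosiglitazone",  "firefly": 41000, "renilla": 9100},
]

baseline = wells[0]["firefly"] / wells[0]["renilla"]  # vehicle ratio
for w in wells:
    ratio = w["firefly"] / w["renilla"]  # corrects for transfection efficiency
    print(f"{w['name']}: {ratio / baseline:.2f}-fold over vehicle")
```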
Oil Red O staining of MCF-7 cells
Approximately 10⁵ cells were cultured on glass coverslips and treated at different PTER-ITC and rosiglitazone concentrations. After 2 days, and every 2 days thereafter, cells were switched to fresh drug-containing medium. MCF-7 cells differentiated for a total of 7 days were washed twice with PBS (pH 7.4) and fixed with 2 ml 10% formalin in PBS (30 min, room temperature). Cells were then washed twice with 2 ml distilled water and stained with 0.5% Oil Red O (Sigma, St. Louis, MO) for 10 min with gentle agitation. Excess stain was removed with 60% isopropanol and cells were washed twice with distilled water before imaging under a light microscope. Accumulated lipids were extracted in 2 ml 100% isopropanol and absorbance measured at 510 nm.
RT-PCR
Total RNA was extracted from the treated cells using an RNA isolation kit (Genei). Samples were then quantified and equal amounts from the individual treatments were transcribed with the RT-PCR kit (Genei) according to instructions. Similar treatments, followed by RNA isolation and RT-PCR, were carried out three times to eliminate inter-assay variations. Primers for PPARγ, PTEN and β-actin were designed using Primer 3 software and standardized in the laboratory. Primer sequences were 5′-TCTGGCCCACCAACTTTGGG-3′ (sense) and 5′-CTTCACAAGCATGAACTCCA-3′ (anti-sense) for PPARγ, 5′-ACCAGGACCAGAGGAAACCT-3′ (sense) and 5′-GCTAGCCTCTGGATTTGACG-3′ (anti-sense) for PTEN, and 5′-TCACCCACACTGTGCCCCATCTACGA-3′ (sense) and 5′-CAGCGGAACCGCTCATTGCCAATGG-3′ (anti-sense) for β-actin. Amplification of PPARγ and PTEN comprised 29 cycles (PPARγ: 94°C for 60 s, 55°C for 45 s, 72°C for 2 min; PTEN: 94°C for 60 s, 58°C for 45 s, 72°C for 2 min), and the β-actin control 25 cycles (94°C for 60 s, 57°C for 45 s, 72°C for 2 min). PCR conditions were optimized to maintain amplification in the linear range to avoid the plateau effect. PCR products were then separated on a 2% agarose gel and visualized in a gel documentation system (BioRad, Hercules, CA). Band intensity on gels was analyzed using ImageJ 1.43 software (NIH, Bethesda, MD) and normalized to β-actin PCR products. Each RT-PCR was carried out three times.
Western blot analysis
For western blot analysis, lysates were prepared by harvesting cells in lysis buffer [20 mM Tris pH 7.2, 5 mM EGTA, 5 mM EDTA, 0.4% (w/v) SDS and 1X protease inhibitor cocktail]. Protein was quantified with a BCA protein estimation kit (Sigma). Total protein samples (~40 µg) were analyzed on 12% polyacrylamide gels, followed by immunoblot analysis using a standard protocol. In brief, proteins were transferred to nylon membrane, which was blocked with TBS-T buffer (20 mM Tris-HCl, pH 7.5, 150 mM NaCl, 0.05% Tween-20) containing 5% skim milk powder. The blots were washed with TBS-T buffer and incubated (overnight, 4°C) in the same buffer with primary anti-PPARγ, -PTEN, -survivin, -Bcl-2, -Bax, -caspase-9 (1:500) or -β-actin (1:1000) antibodies (all from Santa Cruz Biotechnology). Blots were then washed and incubated with HRP (horseradish peroxidase)-conjugated anti-rabbit or -mouse secondary antibody (1:20,000). Color was developed in the dark using the ECL kit (GE Healthcare, Bucks, UK) and blots were analyzed by densitometry with ImageJ 1.43 using β-actin as internal control.
Molecular docking study
Docking simulations were performed with Glide using the Maestro module of the Schrödinger suite (Suite 2011: Maestro v. 9.2, Schrödinger, New York NY). The crystal structure of PPARγ bound to the ligand Telmisartan was used as the starting model (PDB ID 3VN2) [43]. Using the protein preparation wizard, the complex was prepared by addition of hydrogens and sampling at neutral pH. The structure was refined with the optimized potential for liquid simulations (OPLS) 2005 force field [44] and minimized to a root mean square deviation (RMSD) of 0.30 Å. The Telmisartan binding pocket, which lies within the protein ligand-binding domain (LBD; residues 225-505), was identified on the PPARγ/Telmisartan complex and the receptor grid was generated. During this process, no Van der Waals radius sampling was done; the partial charge cut-off was set at 0.25 and no constraints were enforced [45]. Ligands under study were drawn with ChemDraw [46] and 3-D structure files were generated at the Online SMILES Translator and Structure File Generator (http://cactus.nci.nih.gov/services/translate/), followed by preparation with the Maestro LigPrep wizard. Each ligand was subjected to a full energy minimization in the gas phase employing the OPLS2005 force field [44], with the generation of structures by different combinations of ionized states and considering all possible tautomeric states in a pH range of 5 to 9. Docking calculations were done using the Extra Precision (XP) mode of Glide [47], maintaining the receptor fixed and the ligand flexible. This mode incorporates a more refined and advanced scoring function for protein-ligand docking, which gives an overall approximation of the ligand binding free energy. The function is given by XP GlideScore = E_coul + E_vdw + E_bind + E_penalty, where E_coul is the Coulomb interaction energy, E_vdw is the Van der Waals interaction energy, E_bind is the binding energy, and E_penalty is the energy due to desolvation and ligand strain. Finally, post-docking energy minimization was used to improve the geometry of the poses.
Statistical analysis
Data are expressed as mean ± SEM and statistically evaluated with one-way ANOVA followed by the Bonferroni post hoc test using GraphPad Prism 5.04 software (GraphPad Software, San Diego, CA). A p value of <0.05 was considered statistically significant.
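The same analysis (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) can be reproduced programmatically; below is a minimal sketch with hypothetical replicate values, not data from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements (e.g., % apoptotic cells) per group.
groups = {
    "vehicle":        [4.1, 5.0, 4.6],
    "PTER-ITC 10 uM": [18.2, 20.1, 19.0],
    "PTER-ITC 20 uM": [34.5, 36.8, 33.9],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Bonferroni post hoc: pairwise t-tests, p multiplied by number of comparisons.
names = list(groups)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```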
Results
PPARγ is involved in PTER-ITC-induced inhibition of cell proliferation
MCF-7 and MDA-MB-231 cells were treated with increasing concentrations (1-60 µM) of PTER and PTER-ITC for 24 h and cell survival was determined by MTT assay. Our data showed that treatment of these cells with PTER and PTER-ITC resulted in dose-dependent inhibition of cell proliferation, which was more pronounced after PTER-ITC treatment compared to vehicle-treated control cells (Fig. 1B, C). In MCF-7 cells treated with 10 and 20 µM PTER-ITC, viable cell numbers decreased to 75% and 55%, respectively, compared with about 92% and 85% after PTER treatment (Fig. 1D). Preincubation of cells with 10 µM GW9662 (a PPARγ antagonist) increased cell survival from 75% to 87% in the presence of 10 µM PTER-ITC, and from 55% to 67% in the case of 20 µM PTER-ITC (p<0.05) (Fig. 1D). PTER treatment did not lead to improvement in viability when cells were pretreated with GW9662. Results were similar for MDA-MB-231 cells, in which pretreatment with 10 µM GW9662 increased cell survival from 82% to 97% in the presence of 10 µM PTER-ITC, and from 70% to 87% after 20 µM PTER-ITC treatment (p<0.05) (Fig. 1E).
Differential PPARγ expression in distinct breast cancer cell lines
Three breast cancer cell lines (MCF-7, MDA-MB-231, T47D) were analyzed for PPARγ expression. RT-PCR results showed that PPARγ transcription was highest in MDA-MB-231 cells compared to the other two cell lines (Fig. 2A, left). In accordance, we found that PPARγ protein expression was also higher in MDA-MB-231 cells, followed by the MCF-7 and T47D cell lines (Fig. 2A, right). Based on these results, we selected MCF-7 and MDA-MB-231 cells as in vitro models for the remaining part of the study.
PTER-ITC upregulates PPARγ expression and activity
To examine changes in PPARγ mRNA and protein expression following exposure to different drugs, we used RT-PCR, immunoblot and immunofluorescence analysis. In MCF-7 cells, the PPARγ transcript level increased in response to PTER-ITC in a dose-dependent manner, reaching ~1.5-fold at the highest dose tested (Fig. 2B). In contrast, PTER showed no significant increase, while the PPARγ agonist rosiglitazone caused a 1.7-fold upregulation in its expression, as anticipated. Results were similar in MDA-MB-231 cells, in which PTER-ITC, PTER and rosiglitazone showed 1.6-, 1.1- and 1.8-fold increases in PPARγ mRNA levels at a 20 µM concentration (Fig. 2C). This result was validated by immunoblot analysis, in which we observed a dose-dependent increase in PPARγ protein expression after PTER-ITC treatment in MCF-7 (2.1- to 2.8-fold) and MDA-MB-231 cells (1.5- to 2.6-fold) (Fig. 2D, E) (p<0.05). Treatment with 20 µM PTER had little or no effect, while treatment with the same dose of rosiglitazone led to a significant increase in PPARγ expression in MCF-7 and MDA-MB-231 cells (p<0.05). Immunofluorescence analysis of PPARγ localization also showed increased nuclear accumulation of PPARγ in PTER-ITC- and rosiglitazone-treated MCF-7 (Fig. 3A) and MDA-MB-231 cells (Fig. 3B) compared to control cells, which was markedly inhibited by GW9662. PTER treatment led to no increase in PPARγ expression or activity. These data show that PPARγ expression was upregulated by PTER-ITC at both the transcriptional and translational levels.
PPARγ participates in PTER-ITC-mediated upregulation of the PTEN tumor suppressor gene
To determine the effect of PTER, PTER-ITC and rosiglitazone on the expression pattern of the tumor suppressor gene PTEN, we treated MCF-7 and MDA-MB-231 cells with various concentrations of the drugs for 24 h. RT-PCR and immunoblot analysis showed that PTER-ITC increased PTEN expression at both the transcriptional (Fig. 2B, C) and translational levels (Fig. 2D, E) in a dose-dependent manner (p<0.05). The most effective dose was 20 µM PTER-ITC, which caused an increase almost comparable to that of rosiglitazone. There was little or no difference in the relative level of PTEN in the PTER-treated group compared to controls (Fig. 2D, E) (p<0.05).
PTER-ITC increased PPARγ and PPARβ activity in MCF-7 cells
We used a luciferase reporter-based transactivation assay to study the effect of PTER-ITC on the activity of various PPAR types in breast cancer cells. Cells were transfected with plasmids encoding each PPAR protein (pcMX-PPARα, pcMX-PPARβ or pcMX-PPARγ) and with PPRE-tk-Luc and Renilla luciferase plasmids as internal control. Cells were then treated with PTER and PTER-ITC (24 h), followed by extraction of whole-cell lysates for analysis of luciferase activity. PTER-ITC induced PPARβ and PPARγ activities, but had no significant effects on PPARα (Fig. 4A; p<0.05), whereas PTER induced PPARα activity, with no significant change in PPARβ and PPARγ activities (Fig. 4A; p<0.05). We examined the specificity of PTER-ITC on PPARγ and PPARβ activity, using their respective agonists and antagonists. The PPARβ antagonist GSK0660 did not reverse PTER-ITC-induced PPARβ activity (Fig. 4B), suggesting that the PTER-ITC effect on PPARβ was non-specific. The PPARγ antagonist GW9662 reversed PTER-ITC-induced PPARγ activity significantly (Fig. 4C, left), as well as the activity of rosiglitazone, a PPARγ agonist (Fig. 4C, right). These data suggest that PTER-ITC activity is mediated via the PPARγ but not the PPARβ pathway.
Effects of PTER-ITC on MCF-7 cell differentiation
PPARγ activation induces cells to a more differentiated, less malignant state and causes extensive lipid accumulation in cultured breast cancer cells [30]. We thus used Oil Red O staining to test whether addition of PTER-ITC and rosiglitazone to MCF-7 cells also induces differentiation. Untreated MCF-7 cells showed nominal lipid accumulation as measured by Oil Red O staining (Fig. 4D, left). In contrast, rosiglitazone treatment (10 µM) strongly induced lipid accumulation; PTER-ITC treatment also caused a dose-dependent increase in lipid accumulation, albeit to a lesser extent than rosiglitazone (Fig. 4D). Maximum lipid accumulation was found at 5 µM PTER-ITC (Fig. 4D, right).
Molecular modeling of PPARγ LBD/PTER-ITC binding
Since PTER-ITC increased PPARγ transactivation by acting as a selective PPARγ ligand, we used molecular docking analysis to further study the PPARγ LBD (ligand-binding domain)/PTER-ITC interaction at the molecular level. PTER-ITC, its parent compound (PTER), and resveratrol were docked into the PPARγ LBD (see Methods); the binding mode of each ligand to the PPARγ LBD is shown in Fig. 5A, with their respective docking scores and interaction energies in Table 1. The terms "XP GlideScore" (docking score) and "Emodel" were used to denote interactions between ligand and receptor. Based on these two scores, we observed that the PTER-ITC molecule might have better binding affinity for PPARγ (Table 1). In terms of interaction with different residues, PTER-ITC showed better performance than PTER and resveratrol. In the best-docked position, PTER-ITC formed two hydrogen bonds with the receptor, involving residues His323 and Tyr327 (Table 1; Fig. 5B). In addition, through extensive hydrophobic interactions, it bound more firmly to the receptor than the other two ligands (Fig. 5C). Tyr473 is involved in hydrogen bond formation with both PTER and resveratrol, indicating a similar orientation of the two molecules, which is also evident from close analysis of their docking positions (Fig. 5A). Besides hydrogen bonds and hydrophobic interactions, PTER-ITC is also involved in π-π stacking between LBD residues His449 and Phe282 and its central benzene rings. This stacking could stabilize PTER-ITC after binding and strengthen the interaction. Similar stacking is partially observed for PTER, which involves only His449.
PPARγ antagonist GW9662 inhibits PTER-ITC-induced apoptosis
We analyzed PTER-ITC apoptosis induction by flow cytometry, using annexin V and propidium iodide (PI) double staining to assess the cause of decreased cell survival after PTER-ITC treatment. We incubated MCF-7 cells with varying concentrations of PTER-ITC, alone or with GW9662 (10 µM; 24 h). PTER-ITC treatment significantly increased the percentage of apoptotic cells, and the effect was partly attenuated by pre-incubation with GW9662 (Fig. 6A; p<0.05). Results were similar for MDA-MB-231 cells (not shown). PTER-ITC also induced apoptosis-associated morphological changes, as cells with condensed nuclei and nuclear fragmentation were apparent after treatment (Fig. 6B), which was minimal in vehicle-treated MCF-7 and MDA-MB-231 cells. The apoptotic nuclear changes were clearly reduced in cells pre-treated with 10 µM GW9662 (Fig. 6B). These data suggest that blockade of PPARγ activity blunted the drug-induced cell apoptosis.
PTER-ITC induces caspase-dependent apoptosis
Apoptosis is a complex activity that mobilizes a number of molecules, and its mechanisms are classified as caspase-dependent or -independent. The caspase-dependent pathway can be further divided into extrinsic or intrinsic pathways, determined by involvement of caspase-8 or caspase-9, respectively. Both of these pathways involve activation of caspase-3/7, which is important for inducing downstream molecules responsible for DNA cleavage. To further examine the mechanism that underlies PTER-ITC-induced death of breast cancer cells, we studied a possible role for caspases in this process by measuring the enzymatic activity of caspase-3/7, -8 and -9. We observed a gradual increase in caspase-9 and caspase-3/7 activities in MCF-7 and MDA-MB-231 cells treated with 10 and 20 µM PTER-ITC for 24 h (Fig. 7A). In contrast, there were no significant changes in caspase-8 activity in MCF-7 cells, whereas we found a dose-dependent increase in activity in MDA-MB-231 cells. Our data thus suggest that PTER-ITC induced activation of the intrinsic caspase pathway in MCF-7 cells, while it induced both extrinsic and intrinsic caspase pathways in MDA-MB-231 cells.
To determine whether caspase activation was involved in PTER-ITC-induced death of cultured breast cancer cells, we used pharmacological caspase inhibitors to test whether they protect cells from undergoing apoptosis. In the case of MDA-MB-231 cells, the general caspase inhibitor Z-VAD-FMK inhibited apoptosis most efficiently (up to 70-80%; Fig. 7B, p<0.05), suggesting that apoptosis is the predominant form of cell death induced by PTER-ITC in these cells. Z-LEHD-FMK, a specific inhibitor of caspase-9, inhibited PTER-ITC-induced apoptosis by 50-55% (p<0.05), while Z-IETD-FMK, a specific inhibitor of caspase-8, inhibited PTER-ITC-induced apoptosis by 65-70% (p<0.05). In contrast, Z-LEHD-FMK inhibited PTER-ITC-induced apoptosis by 66-70% in MCF-7 cells, while Z-IETD-FMK did not effectively block PTER-ITC-induced apoptosis in this cell line, which confirmed previous reports [41]. Our data thus demonstrate that PTER-ITC-induced apoptosis is a caspase-dependent process that involves both caspase-8 and -9 in MDA-MB-231 cells and only caspase-9 in MCF-7 cells.
MAPK and JNK are involved in PTER-ITC-induced PPARγ activation and apoptosis
To test for a role of MAPK (mitogen-activated protein kinase) in PTER-ITC-induced PPARγ activation and apoptosis of breast cancer cells, we pre-treated MCF-7 and MDA-MB-231 cells with 20 µM ERK inhibitor (PD98059), 10 µM JNK inhibitor (SP600125) or 10 µM p38 MAPK inhibitor (SB203580) for 1 h, followed by PTER-ITC treatment for an additional 24 h. Total proteins were then isolated for analysis of PPARγ expression patterns. In both breast cancer cell lines, SB203580 and SP600125 pre-treatment completely blocked PTER-ITC-induced PPARγ expression, whereas pre-treatment with PD98059 or DMSO had no effect (Fig. 8A). We therefore suggest that PTER-ITC induces the p38 MAPK and JNK pathways to upregulate PPARγ expression in MCF-7 and MDA-MB-231 cells.
Since both p38 MAPK and JNK pathways had important roles in PTER-ITC-induced PPARγ expression, we evaluated whether inhibition of either pathway protected cells from PTER-ITC-induced apoptosis. The breast cancer cells were pre-treated with 10 µM SB203580 (p38 MAPK inhibitor) or SP600125 (JNK inhibitor) for 1 h, followed by PTER-ITC treatment for an additional 24 h.
PTER-ITC induces apoptosis by targeting PPARγ-related proteins
To elucidate the mode of action of PTER-ITC as an apoptotic agent in the PPARγ-dependent pathway, we studied its effect on the regulation of PPARγ-related genes in both breast cancer cell lines. PTER-ITC significantly increased PPARγ, PTEN and Bax, and decreased Bcl-2 expression in a dose-dependent manner both at the level of transcription (not shown) and translation (Fig. 9A, B). Moreover, PTER-ITC significantly decreased expression of survivin, which blocks caspase-9 and -3, thereby inhibiting apoptosis.
To determine whether the increase in apoptosis and decrease in PPARγ-related genes was due to PTER-ITC-induced PPARγ activation, we performed two sets of experiments. First, we used the PPARγ antagonist GW9662 to block PPARγ pathway activation, followed by 24 h PTER-ITC treatment. Second, PPARγ protein expression was knocked down in MCF-7 and MDA-MB-231 cells by transfection of PPARγ siRNA, followed by 24 h PTER-ITC treatment. Our results showed that in MCF-7 and MDA-MB-231 cells both treatment protocols reversed the inhibition of Bcl-2 and survivin caused by PTER-ITC alone (Fig. 9A-D). In addition, PTER-ITC upregulated Bax and PTEN protein expression in a dose-dependent manner, which was inhibited by the PPARγ antagonist or PPARγ siRNA (Fig. 9A-D), indicating that PTER-ITC modulation of Bax and PTEN is PPARγ-dependent. Furthermore, PTER-ITC induction of cleaved caspase-9 in both MCF-7 and MDA-MB-231 cells was attenuated by GW9662 or PPARγ siRNA treatment (Fig. 9). These data suggest that PTER-ITC induced PPARγ expression, which subsequently enhanced expression of downstream components of this pathway, finally leading to apoptosis.
Discussion
Breast cancer is the most commonly diagnosed cancer and the second leading cause of cancer death [48]. The mortality rate of breast cancer is high because of disease recurrence, which remains the major therapeutic barrier in this cancer type. Although many cytotoxic drugs have been developed for clinical use, cancer chemotherapy is always accompanied by adverse effects, which can be fatal in some cases. Due to the lack of satisfactory treatment options for breast cancer to date, there is an urgent need to develop preventive approaches for this malignancy. There is a growing interest in combination therapy using multiple anticancer drugs that affect several targets/pathways. A single molecule containing more than one pharmacophore, each with a different mode of action, could be beneficial for cancer treatment. Here, we studied the effectiveness of a new synthetic derivative of pterostilbene, a phytochemical isolated from Pterocarpus marsupium stem heart wood, in hormone-dependent (MCF-7) and -independent (MDA-MB-231) breast cancer cell lines.
PPARγ is widely expressed in many tumors and cell lines, and has become a promising target for anticancer therapy. This nuclear receptor has a critical role in breast cancer proliferation, survival, invasion, and metastasis [13,18,20,21,[25][26][27][28]. The effectiveness of PPARγ agonists as anticancer agents has been examined in various cancers including colon, breast, lung, ovary and prostate [49]. We tested whether PTER-ITC mediates its anti-proliferative and pro-apoptotic effects in breast cancer cells through activation of the PPARγ signaling cascade. Our results showed that PTER-ITC activated PPARγ expression in a dose-dependent manner, followed by downregulation of its anti-apoptotic genes (Bcl-2 and survivin), to induce noteworthy levels of apoptosis in hormone-dependent (MCF-7) and -independent (MDA-MB-231) breast cancer cells.
The PTER-ITC conjugate can be considered more advantageous than existing PPARγ ligands such as rosiglitazone or pioglitazone for breast cancer treatment, as PTER-ITC causes more pronounced cell death at a much lower dose than other ligands [50][51][52]. In addition, most (if not all) of the other ligands are estrogenic in nature [53], and could thus act as positive factors for ER-dependent breast, ovary and uterine cancers, whereas PTER-ITC is anti-estrogenic at the dose used for this study. Considering these two major points, we consider that the drug could be used at much lower concentrations, which might help reduce the side effects reported for most other PPARγ ligands. The PTER-ITC molecule nonetheless requires further validation before use in clinical trials that target the PPARγ pathway.
The most important characteristic of a cancer cell is its ability to sustain proliferation [54]. The pathways that control proliferation in normal cells are altered in most cancers [55]. We thus analyzed the PTER-ITC effect on proliferation of breast cancer cells, and found that PTER-ITC caused significant, dose-dependent inhibition of breast cancer cell growth in vitro. This effect was partially reversed, however, when PTER-ITC was combined with PPARγ antagonists. This result suggests that the PTER-ITC anticancer effects are mediated through the PPARγ activation pathway. These data coincide with findings in several in vivo and in vitro studies in which PPARγ agonists such as rosiglitazone or troglitazone decreased proliferation of breast cancer cell lines, mediated in part by a PPARγ-dependent mechanism [26,56].
To elucidate the molecular mechanisms that underlie the anticancer effects observed for PTER-ITC, we studied its effect on activation of PPARγ. To the best of our knowledge, this is the first report showing PTER-ITC participation in the PPARγ-dependent signaling pathway. Our data show that PTER-ITC increased PPARγ transcriptional and translational activity in MCF-7 and MDA-MB-231 cells. To establish the essential role of PTER-ITC in PPARγ-mediated apoptosis of breast cancer cells, we used PPARγ siRNA and its drug antagonist to inhibit PPARγ signaling, and demonstrated prevention of apoptosis and caspase activation. We also observed an increase in PPARβ activity after PTER-ITC treatment, with no significant reduction after antagonist treatment, suggesting that the increase was non-specific. Although some earlier studies reported involvement of PPARβ activity in tumorigenesis, many others contradicted this idea. The PPARβ ligand GW501516 was reported to promote human hepatocellular growth [57], although another study showed that certain PPARβ ligands such as GW0742 and GW501516 reduced growth of MCF-7 and UACC903 cell lines [58]. The role of PPARβ in cancer therapeutics is therefore complex and not yet fully defined [59]. Hence the relationship between PTER-ITC and PPARβ could provide an alternative platform to study the involvement of this pathway in cancer therapy.
PPARγ is a phosphoprotein, and many kinase pathways, such as cAMP-dependent protein kinase (PKA), AMP-activated protein kinase (AMPK) and mitogen-activated protein kinases (MAPK) such as ERK, p38 and JNK, have been implicated in the regulation of its phosphorylation [60,61]. Phosphorylation notably inhibits PPARγ ligand-independent and -dependent transcriptional activation [60,61]. Research showed that PPARγ agonists activate different MAPK subfamilies, depending on cell type [62][63][64][65], and that these kinases are involved in cell death [66][67][68][69]. The role of MAPK signaling pathways in cell death induced by PPARγ agonists is controversial. According to certain studies, PPARγ agonist-induced ERK activation mediates anti-apoptotic signaling [64], while others showed its involvement in inducing cell death [66,70]. p38 activation by PPARγ agonists is also reported to be regulated differently in various cell types. PPARγ agonist-induced p38 activation leading to apoptosis of cancer cells has been reported in chondrocytes [64], human lung cells [68], liver epithelial cells [62] and skeletal muscle [71]. This coincides with our data, where, using pharmaceutical inhibitors, we show that activation of the p38 and JNK pathways, but not of ERK, is necessary and sufficient to phosphorylate PPARγ and cause subsequent apoptosis in the breast cancer cell lines studied. At present, we do not know whether PTER-ITC activates p38 and JNK directly, or if it activates other cellular kinase pathways such as PKA and AMPK, which in turn could activate MAPK. Further validation is needed to conclusively establish the pathway(s) involved.
PTEN is a tumor suppressor gene involved in the regulation of cell survival signaling through the phosphatidylinositol 3-kinase (PI3K)/Akt pathway [72]. PI3K/Akt signaling is required for an extremely diverse array of cellular activities that participate mainly in growth, proliferation, apoptosis and survival mechanisms [73,74]. Activated Akt protects cells from apoptotic death by inactivating components of the cell death machinery such as procaspases [73]. PTEN exercises its role as a tumor suppressor by antagonizing the PI3K/Akt pathway [73]. The PPARγ-dependent increase in PTEN caused by PTER-ITC in our experiments not only indicates that the tumor suppressor gene contributes to the growth-inhibitory activities of the compound, but might also trigger its pro-apoptotic actions.
Our results further showed that PTER-ITC downregulated PPARγ-related genes, including Bcl-2 and survivin. These genes are commonly associated with increased resistance to apoptosis in human cancer cells [75]. PTER-ITC-induced PPARγ activation was reduced in the presence of GW9662, together with reversal of decreased survivin and Bcl-2 levels. Furthermore, molecular docking analysis suggested that PTER-ITC could interact with amino acid residues within the PPARγ-binding domain, including five polar and eight non-polar residues within the PPARγ ligand-binding pocket that are reported to be critical for its activity. Together these results suggest that PTER-ITC can be considered a PPARγ agonist, and the survivin and Bcl-2 decrease is due to activation of the PPARγ pathway by PTER-ITC.
Two cellular pathways, differentiation and apoptosis, are the main focus in the development of anti-cancer therapies. Induction of differentiation is one potent mechanism by which some cancer therapeutic and chemopreventive agents act [76][77][78]. Lipid accumulation in MCF-7 cells is supported by the fact that tamoxifen and a few other anti-cancer agents, such as ansamycins and suberoylanilide hydroxamic acid, induce high lipid production (as high as 5-fold in the case of ansamycins) and triglyceride accumulation, which results in MCF-7 cell differentiation to a more epithelial-like morphology [79][80][81]. In a previous study, we showed that long-term exposure to PTER causes growth arrest in MCF-7 cells, which might be linked to mammary carcinoma cell differentiation into normal epithelial cell-like morphology and activation of autophagy [38]. In the present study, PTER-ITC also caused differentiation of MCF-7 cells, albeit to a higher level than previously reported for its parent compound PTER [38]. Based on these data, it can thus be suggested that PTER-ITC inhibits MCF-7 cell growth mainly through apoptosis, while it can also induce differentiation of these breast cancer cells.
Conclusions
In conclusion, this study highlights the anticancer effects of the novel conjugate of PTER and ITC, and shows that the mechanism involves activation of the PPARγ pathway via PTER-ITC binding to the receptor, which affects its regulated gene products (Fig. 10). PTER-ITC induces apoptosis by enhancing expression of PPARγ at both the transcriptional and translational levels, which appears to be triggered at least in part by modulation of PTEN. In addition, activation of caspase-9 and downregulation of Bcl-2 and survivin contribute to PTER-ITC-induced cell death. PTER-ITC exhibits differentiation-promoting as well as anti-proliferative effects on MCF-7 cells. Together these results suggest that the PTER-ITC conjugate acts as a PPARγ agonist and is a promising candidate for cancer therapy, alone or in combination with existing therapies. These preliminary data show that further studies are warranted in in vitro and in vivo models to elucidate the exact mode of action responsible for the effects of this compound.
Donald Trump’s Denial Speeches of the 2020 United States Presidential Election’s Results: A Critical Discourse Analysis Perspective
The primary concern of the present study is to provide a critical discourse analysis of Donald Trump’s denial speeches of the 2020 United States presidential election’s results. Using Van Dijk’s framework of critical discourse analysis, this study investigates the linguistic features in five speeches of Donald Trump delivered after announcing the results of the US presidential election. The data analysis is conducted focusing on the use of 25 discursive devices presented by Van Dijk (2006), which represent the micro-level of text analysis to reveal the ideologies of positive self-representation and negative other-representation which represent the macro-level of text analysis. The findings of the study show that Trump made use of the majority of the discursive devices, with a special emphasis on using the following: lexicalization, evidentiality, example/illustration, number game, polarization, actor description, hyperbole, categorization, victimization, and authority. Furthermore, the analysis at the macro-level shows that Donald Trump used the ideologies of positive self-representation and negative other-representation, but he relied more on using negative other-representation. The findings also show that Trump used these discursive devices to justify his denial of the election results and gain the empathy of American people by showing a positive image of himself and his supporters while portraying others negatively by emphasizing their bad deeds during the election.
INTRODUCTION
The 2020 presidential election in the United States took place on Tuesday, November 3, 2020, and it was the United States of America's 59th quadrennial presidential election. In this election, the Democratic candidate, Joe Biden, defeated the Republican candidate, Donald Trump, the then incumbent President of the United States. However, Donald Trump gave several speeches after Election Day in which he denied and questioned the election's results and attempted to overturn them by claiming widespread voter fraud, as well as by interfering with the vote-counting process. This study follows a qualitative and quantitative approach in analyzing the speeches of Donald Trump delivered after the United States presidential election. The framework of critical discourse analysis proposed by Teun A. van Dijk (2006) is adopted in order to unveil the discursive devices and the embedded ideologies used in the language of Donald Trump.
This study sheds light on the way Donald Trump expresses his denial of the United States presidential election results by using different linguistic discursive devices and various embedded ideologies in terms of Critical Discourse Analysis. Therefore, it integrates micro-level text analysis, based on Van Dijk's (2006) 25 discursive devices, with macro-level text analysis based on the employment of positive self-representation and negative other-representation.
Based on the literature review, it is found that many research papers have tackled Donald Trump's speeches in terms of Critical Discourse Analysis. These papers shed light on the speeches that were delivered on different occasions, such as presidential campaign speeches, announcement speeches, and so forth, while no research papers have investigated the speeches Trump delivered after the presidential election as a denial of the election results. Accordingly, this point has prompted the researchers to investigate this significant topic. Generally, the study's primary objective is to investigate the discursive devices used in Donald Trump's denial speeches of the election results in terms of Critical Discourse Analysis, considering the fact that these speeches have received no linguistic attention to begin with. The aim of this study is to point out the linguistic discursive devices involved in Donald Trump's denial speeches of the election's results. Furthermore, the study aims to elucidate the primary intended ideologies presented in the language under analysis. As a result, the current paper aims at investigating Donald Trump's language in relation to CDA's central tenets and principles, as well as the linguistic discursive devices used, to identify how Trump convinces his addressees to believe in his ideas. In addition, the study attempts to uncover the primary ideologies expressed in Donald Trump's speeches.
METHODOLOGY AND DATA OF THE STUDY
The corpus of this study includes the transcripts of five speeches delivered by Donald Trump after the 2020 presidential election as a denial of the results of the election. These speeches were delivered in English. The speeches are named and ordered according to their chronological delivery. The corpus comprises a total of 23,284 words. The transcripts of the speeches were retrieved from the internet, on the following website: (https://www.rev.com/blog/transcript-category/donald-trump-transcripts).
The study started by gathering the data needed for qualitative and quantitative analysis. To accomplish this, the scripts of five speeches delivered by Donald Trump were collected from the internet. To double-check the accuracy and authenticity of the speeches, the video files of the five speeches were downloaded and reviewed.
To carry out the qualitative analysis at the micro-level, the researchers read each script in order to identify how frequently Donald Trump employed Van Dijk's 25 discursive devices. To determine which phrases or words are considered to be one of Van Dijk's discursive devices, the researchers depended mainly on the definitions of these devices introduced by Van Dijk (2006) and several studies. Furthermore, the researchers used the AntConc software tool to identify keywords and study the linguistic context in which they occur. Moreover, the researchers read several articles and papers that adopt these 25 devices for analyzing different speeches. Using these resources, the researchers were able to determine which phrases or words fit into each of these discursive devices. Furthermore, for the qualitative analysis at the macro-level, the researchers investigated how these devices are used by Donald Trump to spread the ideologies of positive self-representation and negative other-representation of his group and out-groups.
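Keyword frequency counts of the kind produced with AntConc can be approximated programmatically. Below is a minimal Python sketch; the keyword lists and their mapping to devices are illustrative assumptions for demonstration, not the study's actual (manual) coding scheme:

```python
import re
from collections import Counter

# Illustrative patterns for two devices; the study's coding was done
# manually with AntConc support, so this mapping is an assumption.
DEVICE_PATTERNS = {
    "lexicalization (negative)": ["fraud", "corrupt", "illegal", "steal"],
    "number game": re.compile(r"\b\d[\d,.]*\b"),
}

def count_devices(transcript: str) -> Counter:
    """Count occurrences of device-indicative patterns in one transcript."""
    counts = Counter()
    text = transcript.lower()
    for device, pattern in DEVICE_PATTERNS.items():
        if isinstance(pattern, list):  # stem match: "fraud" also hits "fraudulent"
            counts[device] = sum(len(re.findall(rf"\b{w}\w*", text)) for w in pattern)
        else:
            counts[device] = len(pattern.findall(text))
    return counts

speech = "This fraud is illegal. They tried to steal it. We won by 700,000 votes."
print(count_devices(speech))
# Counter({'lexicalization (negative)': 3, 'number game': 1})
```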
For the quantitative analysis, the researchers used Microsoft Word 2010 tables to show the results of data analysis regarding the 25 discursive devices and the ideologies of positive/negative representation. The first data set involves the frequency of each discursive device in each of the five speeches, the total number of discursive devices used in each speech, and the overall frequency of the 25 devices used in the five speeches. The second data set involves the frequency of using the ideologies of positive/negative representation in each speech, and the overall frequency of these two ideologies utilized in all the five speeches. Moreover, the quantitative analysis includes the percentage of the use of the ideologies of positive/negative representation in each speech and in all five speeches.
The researchers in this study limit themselves to only five speeches delivered by Donald Trump. Other issues that merit significant attention and academic investigation, such as the tweets and Facebook posts of Donald Trump, are left for future research. Additionally, the study's central theme is geared toward linguistic objectives apart from political ones. Therefore, this study is not intended to make any political allegations.
Critical Discourse Analysis
Critical Discourse Analysis was introduced in the late 1980s and has become a well-established domain within the social sciences. Wodak states that CDA can be viewed as a problem-oriented interdisciplinary research program that encompasses a range of approaches, each based on a distinct set of epistemological principles and using a distinct set of theoretical models, research methods, and agendas (Wodak, 2001). According to Van Dijk (1993), CDA should focus specifically on the discourse aspects of power abuse and the resulting oppression and inequality. In other words, CDA, unlike other areas of discourse analysis, is characterized by an emphasis on domination and inequality, since it is mainly concerned with social issues that it hopes to better comprehend via discourse analysis. Van Dijk also states that critical discourse analysis is concerned with the strategies and the characteristics of text, talk, verbal actions, and communicative events that contribute to discourse production. Furthermore, CDA's goal is to explain, interpret and investigate language's form and function. That is to say, grammar, morphology, semantics, syntax, and pragmatics all contribute to the form of language, while language's function encompasses how people employ language in a variety of situations in order to accomplish their goals (Rogers, 2004). Coffin (2001) argues that CDA's primary objective is to demonstrate the use of language within the confines of text to create particular ideological views characterized by unequal power relations. As a result, CDA is concerned not only with the linguistic characteristics of language but also with its use. According to Orpin (2005), CDA may offer useful insights into language relationships because it provides a Hallidayan view of language, in which language is indivisibly linked to its sociolinguistic context, its ideological mediation, and its relationship to social power structures. Therefore, through recognizing the linguistic mechanisms or semantic frameworks used to create ideology, CDA may illuminate the hidden strategy an author might use through discourse to construct views of the world, either consciously or unconsciously (Orpin, 2005, cited in Post, 2009). Wodak (1997) states that critical discourse analysis "studies real, and often extended, instances of social interaction which take (partially) linguistic form. The critical approach is distinctive in its view of (a) the relationship between language and society, and (b) the relationship between analysis and the practices analyzed" (1997, p. 174). According to her, CDA aims to decode the opaque and obvious structural connections among domination, discrimination, control, and hegemony, whether they are manifested in written or spoken discourse, as well as the social context underlying the discourse.
To conclude, CDA is an effective tool for dismantling the ideological plan formed through discourse, which enables its participants to view the actual world through unique and often biased lenses, therefore preferring the dominant group's desires (Coffin, 2001). Therefore, CDA is theoretically needed to link the eminent "distance" between micro and macro levels of discourse, that is, obviously, a sociological framework in and of itself (Van Dijk, 2003).
Van Dijk's 2006 Framework on Analyzing Political Discourse
Van Dijk's (2006) framework has been recognized as a detailed and accurate conceptual framework that provides researchers with the aspects of ideological manipulation. In contrast to other frameworks introduced in the field of CDA, Van Dijk's (2006) design incorporates argumentation, political strategies, rhetorical devices, semantic strategies, and stylistic information, making it an effective framework for identifying reality distortions during the discourse production process (Sardabi, Biria, & Azin, 2014). Political discourse is established in order to achieve political objectives, such as power, dominance, and hegemony. Additionally, politicians produce or reproduce political language in order to engage in political abuse, justify their political pleas, and increase their public approval (Bayram, 2010). Therefore, the use of language in the realm of politics is to encapsulate the people's vision, interpretation, and worldview, and its intended perlocutionary influence is to have the views expressed or lines of action taken directly believed or adopted (Bello, 2013). In this respect, Van Dijk (2006) claims that it is essential to link such use to particular aspects of the political situation, such as who is speaking, when, where, and with/to whom. He also states that a cognitive interface between such a situation and talk or text is needed, namely, a mental model of the political situation. These mental models describe how participants experience, interpret and reflect the political situation that is significant for them.
Normally, the relationship between discourse and political ideologies is examined based on political discourse structure, as with the usage of biased lexical items, syntactic structures like active and passive voice, the use of pronouns like we and them, metaphors or topoi, argumentation, implication, and a variety of other discourse characteristics (Van Dijk, 2006). Van Dijk (2002) states that, even though the defining properties of political discourse are primarily contextual, this does not mean we can abandon our analysis of political discourse structures: analysis of "topics, topoi, coherence, arguments, lexical style, disclaimers, and several rhetorical features (metaphors, euphemisms, hyperboles, etc.)" (Van Dijk, 2002, p. 214).
Thus, politicians through political discourse can legitimize their own actions and delegitimize others' actions. Legitimization, which is typically directed to the self, involves acts of positive self-representation, like self-praise, self-apology, self-justification, and so forth. On the other hand, delegitimization can take the forms of negative other-representation, marginalization, exclusion, and so forth (Chilton, 2004).
Van Dijk (2006) states that ideologies are usually polarized in their structure, especially in representing or categorizing a competing or conflicting group membership between ingroups and outgroups. Furthermore, these structures often manifest themselves in more specific political views, and essentially in group members' personal mental models. Thus, discourse contents are influenced by these mental models; i.e., if they are polarized, discourse is likely to exhibit different forms of polarization as well.
The framework developed by Van Dijk (2006) appears to be a systematic practical method for investigating such ideological polarization of political discourses. In this framework, Van Dijk introduced what he called the "ideological square", which has different strategies for analyzing ideological discourses. These strategies are the following:
• Emphasize Our good things
• Emphasize Their bad things
• De-emphasize Our bad things
• De-emphasize Their good things (Van Dijk, 2006, p. 734)
Rashidi & Souzandehfar (2010) described this square as a fundamental dichotomy, with an emphasis on "positive self-representation and negative other-representation". Bello (2013) states that actors are polarized by this square into ingroups and outgroups, in which the former emphasizes their positive characteristics and ignores their negative ones, while the latter emphasizes their negative characteristics and ignores their positive ones (Bello, 2013, p. 86). Therefore, the main focus of political speeches, interviews, programs, etc. is devoted to the favored issues of the group or party, i.e., our well-done achievements, while issues like war, violence, drugs, and a lack of liberty and so on are associated with political opponents (Van Dijk, 2006).
In addition to the general strategies of positive self-representation and negative other-representation that represent the macro-strategy of investigating discourses, Van Dijk (2006) introduced 25 discursive devices that operate at the micro-level of analysis.
Quantitative Analysis
This part presents the findings of the quantitative analysis of Donald Trump's five speeches at the micro and macro levels of analysis. The analysis is summarized in tables accompanied by explanations. The descriptive statistics presented in Tables 2-3 illustrate the results of the two levels of analysis of Donald Trump's five speeches: the analysis of the 25 discursive devices (micro-level), and the ideologies of positive/negative representation (macro-level). Table 2 illustrates the frequency of use of each of the 25 discursive devices in Donald Trump's five speeches, which represents the micro-level analysis.
At the macro-level of analysis, Table 3 illustrates the frequency of use of the ideologies of positive self-representation and negative other-representation in Donald Trump's five speeches.
Qualitative Analysis
This part is devoted to the qualitative analysis of Donald Trump's five speeches at micro and macro-levels of analysis. Therefore, the quantitative analysis illustrated in the previous part and some illustrative examples of the five speeches will be used to investigate the most frequent discursive devices used by Donald Trump at the micro level, and how they are used to invalidate Biden's victory. Furthermore, these results and examples will be used to investigate the employment of the ideologies; positive self-representation and negative other-representation at the macro level.
Lexicalization
As illustrated in Table 2, lexicalization is used 179 times, making it the most frequent device utilized by Donald Trump in his five speeches. As mentioned in chapter two, lexicalization is defined as the process of using the semantic qualities of words in order to positively or negatively depict someone or something. The reason for Donald Trump's increased usage of lexicalization is that political speakers frequently employ lexicalization to ingrain their ideas in the minds of people (Van Dijk, 2006; Matic, 2012). This is specifically apparent when speakers have a tendency to portray themselves positively while portraying others negatively. Matic (2012) states that lexicalization is the primary means of achieving positive self-presentation and negative other-presentation. Therefore, lexicalization is utilized to represent others negatively or to delegitimize their behaviors through strongly negative words (Van Dijk, 1995). As shown in Table 3, Trump employs the ideology of negative other-representation more frequently than positive self-representation. This means that he used more negative words to describe others in his speeches. In his five speeches, Trump focused on using negative words such as fraud (43 times), corrupt (24 times), illegal (22 times), bad (18 times), fraudulent (13 times), horrible (12 times), steal (11 times), and suppression (10 times) to describe the Democrats and those responsible for the election process negatively, in order to invalidate Biden's victory in the presidential election. It is worth noting that using one word many times can be considered a circumlocution strategy by which politicians emphasize certain messages and deepen the understanding of these messages. The following are some examples that show the use of the lexicalization device in the five speeches.
1. "Democrat officials never believed they could win this election honestly. I really believe that. That's why they did the mail-in ballots, where there's tremendous corruption and fraud going on." (Donald Trump's second speech)
In this example, the lexicalization device is used through the negative words "fraud" and "corruption" to describe negatively the way by which Democrats won the election by using mail-in ballots.
"While it has long been understood that the Democrat political machine engages in voter fraud from Detroit
to Philadelphia, to Milwaukee, Atlanta, so many other places." (Donald Trump's fourth speech) In this example, Trump also uses the word "fraud" to represent the democrats negatively by alleging that Democrats commit voting fraud in different states.
On the other hand, Trump uses lexicalization with positive words to represent himself or his supporters positively. In the following example, Trump uses the words "fantastic" and "great Patriots" to describe the people who defended his right to win the presidential election, thereby indicating that those who supported him are the ones who love and support their country.
"I want to thank all of the people that signed affida-
vits and all of the speakers. You fantastic people. You're great Patriots." (Donald Trump's third speech)
Example/illustration & Evidentiality
Example/illustration device is defined in chapter two as the process of giving evidence by discourse producers in order to justify their opinions, while evidentiality is defined as a discourse producer's use of evidence or facts to reinforce their views and beliefs. Example/illustration and evidentiality are the second most frequent devices used by Donald Trump in his five speeches. Table 3, referenced above, gives the macro-level frequencies:

Table 3. Frequency of positive self-representation and negative other-representation in the five speeches

Name of the speech    Positive self-representation    Negative other-representation
The first speech              12                               5
The second speech             21                              50
The third speech              10                              31
The fourth speech             18                              74
The fifth speech              48                              85

In this example, Trump represents himself positively by mentioning examples of his achievements in gaining the votes of some American society groups, such as African Americans and Asian Americans. He also mentions his achievement of growing the Republican Party's voters by 4 million. By giving this example, Trump tries to indicate the high number of votes that he gained in the election, implying that he actually won it. 5. "In Pennsylvania, partisan Democrats have allowed ballots in the state to be received three days after the election, and we think much more than that. And they are counting those without even postmarks or any identification whatsoever. So you don't have postmarks; you don't have identification. There have been a number of disturbing irregularities across the nation." (Donald Trump's second speech) In this example, Trump gives an example of the fraud that allegedly happened during the election in the state of Pennsylvania. He again focuses on the case of ballots counted after the end of Election Day, and implies that these ballots were fake by stating that they lack any postmark or identification.
Number game
Number game is defined in chapter two as the use of numbers in discourse to bolster the credibility or legitimacy of the discourse producer's views or beliefs. As illustrated in Table 2, Donald Trump used the number game 120 times in his five speeches.
Politicians use numbers in their political speeches to enhance the credibility of their speeches and show objectivity (Van Dijk, 2004). Accordingly, Trump used numbers in most of the examples he mentioned regarding the election results, to increase the credibility of these examples and to enhance the legitimacy of his demand to overturn the election results. The following is an example that shows the use of this device by Donald Trump in his five speeches. 6. "We won Texas by 700,000 votes and they don'…" (Donald Trump's first speech) This example is taken from the first speech, which was delivered by Trump on election night. Here, Trump presents the names of the states that he won and the number of votes that he had gained in those states up to that moment. He does not only mention the names of these states; he gives the precise numbers of votes that he gained and the percentage of remaining votes to point out that Biden does not have any chance to catch him in vote numbers. By giving these numbers and percentages, Trump tries to lend his claims an air of objectivity and credibility.
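Instances of the number game can likewise be extracted automatically. The sketch below is illustrative only (the study's count of 120 was produced by manual coding); it uses a deliberately simplified regular expression for integers with thousands separators, decimals, and percentages.

```python
import re

# Simplified pattern: integers with optional thousands separators, an optional
# decimal part, and an optional trailing percent sign.
NUMBER_RE = re.compile(r"\d{1,3}(?:,\d{3})*(?:\.\d+)?%?")

def number_mentions(text):
    """Return every numeric expression found in the text."""
    return NUMBER_RE.findall(text)

print(number_mentions("We won Texas by 700,000 votes, about 52.1% of the total."))
# ['700,000', '52.1%']
```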
Polarization
As mentioned in chapter two, polarization is defined as the process of classifying discourse participants into a positively represented 'US' and a negatively represented 'THEM'. Table 2 shows that Donald Trump used polarization 86 times in his five speeches. It is worth noting that Trump mostly uses polarization to represent the Democrats negatively by focusing on the irregularities they allegedly committed in the election to make him lose it. On the other hand, he uses polarization to represent himself positively by focusing on his achievements in gaining a large number of votes and winning many states. The following are some examples that show the use of this device in Trump's speeches. 7. "I'd like to provide the American people with an update on our efforts to protect the integrity of our very important 2020 election. If you count the legal votes, I easily win. If you count the illegal votes, they can try to steal the election from us." (Donald Trump's second speech) This example is taken from the second speech, which Trump begins by using polarization to represent himself positively as protecting the integrity of the election ("our efforts"), and to represent the Democrats negatively by saying that the votes they got in the election were "illegal". Trump also uses the verb "steal" to represent them negatively as criminals who want to steal the election, when he says that "if you count these illegal votes, they can steal the election". The positive representation thus follows the pronoun "our", while the negative representation follows the pronoun "they", showing how polarization separates discourse participants into a positively represented 'us'/'our' and a negatively represented 'them'/'they'. In many other instances, Trump likewise uses polarization to represent himself positively by using "we" whenever he talks about victory.
Actor description
Actor description is defined in chapter two as the way in which members of a particular group are described or portrayed, whether positively or negatively. Discourse producers tend to portray their own groups positively while portraying other groups negatively. In his five speeches, Donald Trump used the actor description device to achieve the ideologies of positive self-representation and negative other-representation by portraying his constituents and supporters positively, while portraying the Democrats and poll workers negatively. The following are some examples that show the use of this device in Donald Trump's five speeches.
"The officials overseeing the counting in Pennsylvania
and other key states are all part of a corrupt Democrat machine." (Donald Trump's second speech) In this example, Donald Trump portrays the officials, who are in charge of ballots counting, negatively, by saying that they are involved in a fraudulent Democrats machine. It is worthnoting that Trump tries to point out the irregularities that happened in the ballots counting by classifying those officials as a part of a corrupt Democrats machine. 9. "In Michigan, a career employee of the city of Detroit, with the city workers, coaching voters to vote straight Democrat, while accompanying them to watch who they ALLS 13(1):32-40 were voting for, violating the law and the sanctity of the secret ballot." (Donald Trump's fourth speech) In this example, Donald Trump uses actor description device to represent an employee and Detroit workers negatively, by giving an example of how they were urging voters illegally to vote for Biden. It is worth noting that by giving this example, Trump tries to shed the light on the way Biden won the election illegally, and how the voters were treated and urged to vote for him.
Hyperbole
As mentioned in chapter two, hyperbole is defined as a semantic rhetorical strategy used to intensify meaning within the framework of positive self-representation and negative other-representation. Table 2 shows that Donald Trump used the hyperbole strategy 43 times in his five speeches. It is worth noting that Trump employs hyperbole to exaggerate the positive qualities of himself and the Republicans, and to exaggerate the negative qualities of Biden and the Democrats regarding the alleged irregularities in the election. The following are some examples that show the use of this device in Donald Trump's five speeches. 10. "The results tonight have been phenomenal and we are getting ready… I mean, literally we were just all set to get outside and just celebrate something that was so beautiful, so good." (Donald Trump's first speech) In this example, Donald Trump uses the hyperbolic term "phenomenal", which indicates something extraordinary, to intensify the positive representation of his achievement in gaining a high number of votes in the election.
Another hyperbolic term used by Trump is "tremendous", meaning extraordinarily large. It appears 24 times across his five speeches, whether to represent positively his supporters, the votes that he gained in the election, or his achievements as a president, or to represent negatively the alleged fraud and irregularities of the Democrats in the election.
Categorization
As mentioned in chapter two, categorization is defined as the process of classifying people based on their political or religious beliefs and attitudes. "…which is what they're doing and stolen by the fake news media." (Donald Trump's fifth speech) In the three examples discussed here, the negative representation of the out-group is achieved through the categorization device. In the first, Trump represents the Democrats negatively by using the word 'hopeless', indicating that there was no possibility that they won the election. In the second, he represents the way they won the election as a theft, using the pejorative words 'brazen' and 'outrageous'. In the third, he represents the Democrats negatively as "radical left Democrats" who stole his victory.
Victimization
As mentioned in chapter two, victimization is defined as the process by which discourse producers portray people who are not members of their group negatively, while portraying members of their own group as victims of bias or unfair treatment through the use of horrifying stories about them. Table 2 shows that Donald Trump used victimization 37 times in his five speeches. It is worth noting that Donald Trump used victimization for two reasons: first, to represent the Democrats negatively by showing how they treated Republican voters and observers during the election; second, to gain the empathy of the American people by telling horrifying stories about the bad treatment that Republican voters received during the election. The following is an example that shows the use of this device in Trump's five speeches. 14. "But the poll watchers weren't allowed to watch. They were in many cases, whisked out of the room. Not only into pens that were 20, 30, 40, 60, 100 feet away where you couldn't even see. They were using binoculars. People are reporting that they had to use binoculars, and that didn't work. If you were a Republican poll watcher, you were treated like a dog and the Democrats had no problem, but they were rough." (Donald Trump's third speech) In this example, Trump uses the victimization strategy to shed light on the way Republican poll watchers were treated while observing the election: they were prevented from watching anything inside the election halls and had to use binoculars to observe the process. Trump thereby tries to point out that those observers were deliberately kept away in order to give the Democrats a chance to steal the election by rigging the votes. Another point worth noting is the harsh image Trump uses to depict the poll watchers, who "were treated like a dog". By using the victimization strategy, Trump represents the Democrats negatively in order to invalidate Biden's victory, while also trying to gain the empathy of the American people.
Authority
As mentioned in chapter two, authority is defined as the discourse producers' use of information given by authorities to support their claims. "…November 3rd, when people put votes in and they put them in illegally, they put them in after the polls closed. And one of our great Supreme Court Justices made mention of that. And I can't imagine that any Justice or anybody looking at it could be thrilled when they vote after the election is over." (Donald Trump's third speech) In this example, Trump talks about the allegedly illegal votes that were counted after the end of Election Day. To appear more credible, Trump uses the authority strategy by stating that these irregularities were mentioned by "one of our great Supreme Court Justices".
CONCLUSION
The study has presented a critical discourse analysis of Donald Trump's speeches denying the results of the 2020 United States presidential election, based on Van Dijk's (2006) CDA framework. The researchers analyzed five speeches delivered by Donald Trump after the presidential election to unveil how Trump utilized discursive devices to convey his dogmatic ideological stance. To answer the research questions, the researchers analyzed the five speeches on two levels: the micro-level of analysis, with a particular emphasis on the use of Van Dijk's (2006) discursive devices, and the macro-level of analysis, with an emphasis on Donald Trump's use of the ideologies of positive self-representation and negative other-representation. The analysis of the five speeches revealed that Donald Trump made use of the majority of the discursive devices: 24 of the 25 discursive devices of Van Dijk's framework were employed by Trump to some degree. The findings reveal that Trump most often used discursive devices such as lexicalization, evidentiality, example/illustration, number game, polarization, actor description, hyperbole, categorization, victimization, and authority. Regarding the macro-level of analysis, the results show that Donald Trump used the ideologies of positive self-representation and negative other-representation, with greater emphasis on negative other-representation. It is worth noting that Trump used these discursive devices and ideologies to achieve several communicative goals. First, Trump tried to appear more credible in the eyes of the American people and to justify his denial and invalidation of the election results; hence he used devices like example/illustration, evidentiality, number game, and authority to persuade his audience and make them adopt his ideas and beliefs regarding the election results. Furthermore, Trump used devices such as victimization, actor description, hyperbole, and categorization to gain the empathy of the American people by projecting a negative image of the other group. Negative other-representation was achieved through the negative use of lexicalization, polarization, actor description, hyperbole, and categorization, in which Trump focused on using negative terms to portray the out-groups.
On the basis of the study's findings, the researchers recommend that future research incorporate both linguistic and psychiatric perspectives in order to rigorously analyze and understand the role of linguistic and psychological factors in such political speeches. Accordingly, holding discourse and language training workshops for politicians, statesmen, and senior leaders could be of paramount importance in order to enhance their overall performance in public speeches.
"Political Science",
"Linguistics"
] |
Nanoengineering room temperature ferroelectricity into orthorhombic SmMnO3 films
Orthorhombic RMnO3 (R = rare-earth cation) compounds are type-II multiferroics in which ferroelectricity is induced by inversion-symmetry breaking of spin order. They hold promise for magnetoelectric devices. However, no spontaneous room-temperature ferroic property has been observed to date in orthorhombic RMnO3. Here, using 3D straining in nanocomposite films of (SmMnO3)0.5((Bi,Sm)2O3)0.5, we demonstrate room-temperature ferroelectricity together with ferromagnetism with TC,FM ~ 90 K, matching theoretical predictions for the induced strain levels. Large in-plane compressive and out-of-plane tensile strains (−3.6% and +4.9%, respectively) were induced by the embedded stiff (Bi,Sm)2O3 nanopillars. The room-temperature electric polarization is comparable to that of other spin-driven ferroelectric RMnO3 films. Also, while bulk SmMnO3 is antiferromagnetic, ferromagnetism was induced in the composite films. The Mn-O bond angles and lengths determined from density functional theory explain the origin of the ferroelectricity, i.e. modification of the exchange coupling. Our structural tuning method gives a route to designing multiferroics.
Type-II multiferroic materials have exquisitely coupled magnetic and ferroelectric orders and are interesting for future magnetoelectric devices for non-volatile memory 1,2 and sensing applications. In type-II orthorhombic rare-earth manganites (o-RMnO3) ferroelectricity is induced by inversion-symmetry breaking of the magnetic order through the Dzyaloshinskii-Moriya interaction 3. o-RMnO3 has a very rich functional phase diagram owing to a changing magnetic spin state, from an A-type antiferromagnet (A-AFM) to an E-type antiferromagnet (E-AFM), and electrically from paraelectric (PE) to ferroelectric (FE) 3. However, the ferroic order in o-RMnO3 originating from cycloidal spiral spin order occurs at a very low temperature, typically below 40 K for bulk and below 100 K for thin films, which makes it impractical for applications. In addition, the electric polarisation (P) in these materials is much smaller (P < 0.1 μC cm−2), even at very low temperature, than that of Bi-based FEs in which polarisation originates from the ordering of lone pairs 1,4-9.
For device applications such as energy-efficient non-volatile random access memory (RAM) (whether ferroelectric RAM or multistate multiferroic RAM), a high P (>1 μC cm−2) is strongly desired at room temperature (RT) and above 10.
In terms of the magnetic properties, low-temperature ferromagnetism with magnetisation (M) values near ~1 µB Mn−1 has been reported in thin o-RMnO3 films 11,12. Ferromagnetic (FM) ordering in o-RMnO3 originates from epitaxial strain, which changes the balance between AFM and FM interactions 13,14 and breaks the long-range AFM order at the boundary of different domains 5,15. Spin-driven ferroelectricity with large FE polarisation is expected with large Mn-O-Mn bond angles and small Mn-O bond lengths 16. Hence, structural distortion of o-RMnO3 could produce an RT FE-FM multiferroic. According to the Goodenough-Kanamori (GK) rules, a large bond angle will destroy the E-AFM ordering (collinear up-up-down-down: ↑↑↓↓) and simultaneously stabilise a FM phase, but will also lead to a non-collinear spin configuration, giving a low P or a PE A-AFM 16,17. Therefore, achieving multiferroicity requires a delicate balance of bond angle and bond length tuning. Such tuning has not been demonstrated to date, and hence there are no reports of RT ferroelectricity in o-RMnO3 13,14. The highest reported FE transition temperature (TC,FE) in o-RMnO3 films is below ~75 K, and the highest spin-driven FE polarisation is P ~1.5 μC cm−2 under high pressure and high magnetic field 5,18. At the same time, the highest TC,FM of the FM phase is ~105 K 5. Recently, however, theoretical and experimental results have demonstrated the possibility of the coexistence of spontaneous FM order and enhanced FE polarisation in RMnO3 arising from structural distortions 5,17,19. From theoretical calculations, Iusan et al. 17 reported that under compressive strain (~−4%), the AFM phase of RMnO3 is not stable and FM ordering emerges with highly enhanced polarisation. However, a −4% strain is very difficult to achieve in RMnO3 thin films using epitaxial strain from the substrate.
Considering the fundamental theoretical works which promise RT ferroic properties in single-phase o-RMnO3, here we have designed and demonstrated self-assembled vertically aligned nanocomposite (VAN) thin films of SmMnO3 (SMO) + (Bi,Sm)2O3 (BSO), where BSO forms nanocolumns in a SMO matrix. VAN films represent a unique way to create 3D strain 20 and they have several advantages for tuning strongly correlated systems. It is possible to tune both the in-plane and out-of-plane strain independently, giving another degree of freedom for bond length and angle tuning 21. There is no intrinsic thickness limitation to the strain tuning 20. Very uniform and high strain states can be engineered into the self-assembled VAN films 22.
Of the different o-RMnO3 phases, SMO is of particular interest since, according to theoretical calculations of exchange coupling, a transition from A-AFM to E-AFM in SMO is possible due to strain or chemical pressure, as it sits close to the phase transition in the phase diagram 16. Hence, a very small perturbation in the bond angle or length can significantly modify the magnetic ordering of SMO. The relatively large Mn-O-Mn bond angle in SMO compared to other o-RMnO3 gives a higher possibility of achieving FM ordering by modifying the FM in-plane nearest-neighbour and AFM in-plane next-nearest-neighbour exchange interactions of the Mn moments, J1 and J2, respectively, and in doing so, of inducing ferroelectricity. Hence, owing to the type-II nature of the multiferroicity, if the magnetic properties are modified via strain, the FE properties should also be readily modified.
With the appropriate nanopillars, the VAN system can induce 3D strain into SMO. In this work, we chose BSO as the nanopillar phase because any Bi substitution into SMO should not be strongly detrimental to the magnetic or FE properties of SMO, and because the relatively low melting point of Bi2O3 should lead to high crystalline perfection of the pillars. Also, the relatively higher stiffness of BSO compared to SMO 21,23-26 means the BSO should control the strain in the SMO.
In our VAN SMO:BSO system, we find an increase of the FE transition temperature to above RT, which compares to TC,FE < 40 K in the bulk. Also, a FM transition temperature, TC,FM, of 90 K and a saturation moment of 1.02 μB Mn−1 at 10 K are obtained without any external pressure or field, whereas the bulk is AFM with TC,AFM ~60 K. The RT ferroelectricity in the film is confirmed by piezoresponse force microscopy (PFM), positive-up and negative-down (PUND) FE pulse tests, and second harmonic generation (SHG) measurements. The net switching polarisation (2PR, where PR = remnant polarisation) and piezoresponse amplitude (d33) are 3.9 μC cm−2 and 6.7 pm V−1, respectively. In addition, long-term retention of polarisation from PFM at RT shows the stable ferroelectricity. SHG polar plots indicate a breaking of the centrosymmetry of SMO. The spontaneous RT ferroelectricity with high-TC,FM ferromagnetism in SMO is consistent with the presence of a unique strain state in the VAN films. This is proven by growing VAN films of different thicknesses, and by showing that only the thicker, more highly strained VAN films contain the FE-FM phase. In the thicker VAN films, the nanopillars rather than the substrate control the strain state. Density Functional Theory (DFT) calculations of the Mn-O bond angles and lengths indicate strong exchange coupling, which explains the change in spin state and hence the ferromagnetism and thus the RT ferroelectricity. Overall, our work shows a route to achieving high-temperature multiferroics using a simple 3D strain approach.

Growth and structural investigations of SMO:BSO VAN films by XRD and HRTEM. The epitaxial quality of the three films was studied from XRD 2θ-ω scans, as shown in Supplementary Fig. 1a. All the films show sharp (001) peaks, just to the left of the STO peaks. The peak at lower 2θ, labelled 'S', is understood by the fact that the SMO (structural information in Supplementary Table 1) is in-plane compressed by the STO (the bulk average pseudo-cubic lattice parameter of SMO is 3.944 Å, and that of STO 3.905 Å) and hence out-of-plane tensed. For the 100 nm plain and VAN SMO films, an additional broad higher-angle peak corresponding to relaxed SMO is observed, labelled 'R', with a c-axis lattice parameter of ~3.746 Å. The amount of relaxed SMO is large in the 100 nm plain film, as would be expected for a standard film of this thickness well above the critical thickness. It is only minor in the 100 nm VAN film and is relaxed to a much lesser extent (as observed from the strong overlap with the STO peak). This is because in VAN films the vertical strain state is controlled by domain matching epitaxy between SMO and BSO, as discussed in more detail later. The relaxed peak is not observed for the 20 nm VAN film, since this film is thin enough for the strain to be dominated by the substrate.
From X-ray φ-scans (Supplementary Fig. 1b) the films are highly aligned in-plane (predominantly 45° rotated in-plane) with an epitaxial relationship of [100]SMO//[110]STO or [010]SMO//[110]STO. This is expected as SMO has the GdFeO3 structure (√2·a_p × √2·a_p × 2c), where a_p and c are, respectively, the in-plane and out-of-plane lattice parameters of a simple tetragonal perovskite unit cell.
As we show later, the 100 nm thick VAN SMO:BSO film shows the most interesting FM and FE properties of the films studied. Therefore, we focus here on the nanostructure, phase composition, and phase distribution in the 100 nm VAN film. Scanning transmission electron microscopy (STEM) high-angle annular dark-field (HAADF) images, both in cross-section (Fig. 1a, b) and plan-view (Fig. 1d, e), as well as STEM energy-dispersive X-ray spectroscopy (EDS) maps (Fig. 1c, f), show a clear phase separation between high-quality epitaxial SMO and BSO. The BSO is highly faceted with cubic facets, as expected for the (001) orientation of this phase 5,15.
EDS maps and elemental line profiles of the cross-section and plan-view images (Fig. 1c, f) show no measurable Bi in the SMO phase and a ~1:1 Bi:Sm ratio in the BSO. The phase boundaries between the two phases in the VAN are very clean, i.e. no secondary phases are present, as expected, since the structure forms by self-assembly.
Room temperature ferroelectric properties of VAN films. Strong RT FE properties were observed in the 100 nm SMO:BSO VAN films (Fig. 2). PFM measurements of the amplitude and phase of the piezoresponse as a function of bias voltage at RT are shown in Fig. 2a. The RT FE behaviour is very different from bulk SMO, which is not FE 13,27; we recall that the ground state of bulk SMO is A-AFM and PE 13,27,28. Our plain 100 nm SMO film likewise does not show any RT FE properties. The measured piezoresponse amplitude (d33) of 6.7 pm V−1 is as high as that of bismuth manganite (BiMnO3) thin films 29. Box-in-box phase mapping was measured over a 6 μm × 6 μm area, after polarising the film with a DC voltage from +5 V to −5 V to +5 V. A characteristic FE hysteretic behaviour is observed, and the phase contrast for the opposite voltages (±5 V) remains stable after 24 h (Fig. 2b). The long retention time also confirms the stable FE behaviour of the VAN film. FE polarisation switching is observed using PUND pulse tests (Fig. 2c).
PUND measurements allow the intrinsic polarisation to be determined, since they mitigate any parasitic or leakage contributions by measuring the remanent polarisation as the difference between the switching and non-switching polarisations. The PUND pulses used were 500 kV cm−1, with a 1 ms pulse width and 1000 ms pulse delays (additional 0.1 ms and 0.01 ms data are shown in Supplementary Fig. 4), allowing both the switching (*) and non-switching (^) polarisations to be measured. The maximum net switching polarisation (2PR) of 3.9 μC cm−2 was evaluated using the relation 2PR = (±P*) − (±P^). The measured net polarisation is enhanced compared to other FE o-RMnO3 films, which typically have P < 0.5 μC cm−2 and at a much lower temperature (TC,FE < 50 K) 2,4-9. The PUND measurement was also carried out locally on a ~200 nm area by the nano-PUND technique (Supplementary Fig. 5). Figure 2d shows the result of far-field reflection SHG polarimetry of the s- and p-waves, at a sample tilt angle of 45° and in-plane angles of 0° and 90° (see Supplementary Discussion on SHG). At a tilt angle of 0°, no SHG signal was detected, indicating that the tetragonal c-axis points out of the film's surface. The theory fit to the experimental SHG polar plots (lines in Fig. 2d) shows that the macroscopic pseudo-symmetry point group of the 100 nm SMO VAN film is the polar point group 4mm. This result is in good agreement with the X-ray φ-scan data. Temperature-dependent SHG up to 623 K (350 °C) was conducted in the 45° reflection p-in, p-out geometry (Supplementary Fig. 6).
The SHG signal decreases with increasing temperature, indicating a non-centrosymmetric to centrosymmetric structural transition over a broad temperature range. The change in slope places the onset of SHG saturation below 360-370 K. The origin of the second hump in the SHG near 500 K is currently unknown.
Magnetic properties of VAN films. The magnetic properties of the 20 nm- and 100 nm-thick VAN films are compared in magnetisation versus temperature M(T) plots in Fig. 3. The TC,FM values are determined to be ~70 K and ~90 K for the 20 nm and 100 nm films, respectively. In both films, a possible cluster-glass-like behaviour is observed at ~20 K, where the magnetic moment or susceptibility reaches a maximum 32,33. This is likely because of spin canting arising from the competition between AFM and FM couplings, as is commonly observed in BiMnO3 34. The saturation magnetisations (inset of Fig. 3) were … and 100 emu cc−1, respectively. The drastic increase in magnetisation in the thicker film is a prominent indication of a significantly modified spin structure in the upper part of the thicker film. The lack of magnetisation saturation in both films, even above 1 T, is a further indication of cluster-glass-like behaviour.
Discussion
To understand why the 100 nm VAN film shows the unusual RT FE behaviour at the same time as strong FM, a more detailed analysis of the crystal structures of the three different films was undertaken by high-resolution asymmetric X-ray reciprocal space maps (RSMs) around the (113) reflection of STO. For all the films, the region around the (113) reflection of STO revealed split (206) and (026) peaks of SMO, indicative of the orthorhombic SMO structure 5,15,35-37, as expected from the bulk structure of SMO (see Supplementary Table 1). We label this orthorhombic phase o-SMO1; its peak position overlaps strongly with (113) STO owing to the coherent growth on STO. This o-SMO1 phase corresponds to 'S', the strained phase labelled in Supplementary Fig. 1a. It is noted that split peaks from the orthorhombic structure are not observed in Supplementary Fig. 1a because of the close positions of the (206) and (026) peaks. A clear difference in the RSMs of Fig. 4 emerges for the 100 nm VAN film compared to the 20 nm VAN film and the 100 nm plain film. A new split peak appears at larger QX (smaller in-plane lattice parameter) and lower QZ (larger out-of-plane lattice parameter) than for o-SMO1. This peak is labelled o-SMO2. The splitting of the peaks again indicates an orthorhombic structure.
To understand the origin and evolution of the o-SMO2 phase, and why the VAN structure promotes it, the strain states of the films are summarised in Supplementary Table 1. In the top part of the table, the strain with respect to (w.r.t.) bulk SMO is given along each direction; for example, along the a direction the strain is (a_film − a_bulk)/a_bulk × 100%. The strain with respect to bulk STO is calculated only for the 100 nm VAN film, since the strain in this film was also calculated from STEM images and we are interested in particular in how the strain changes through the thickness of this film. The bottom part of the table gives the calculated strain (%) in bulk SMO required to match STO along the three orthogonal directions; for example, along the a direction the strain is (√2·a_STO − a_SMO)/(√2·a_STO) × 100%. Looking first at the calculated strain in bulk SMO required to match STO (bottom half of Supplementary Table 1), there is an anisotropic misfit strain with STO because of the different a- and b-lattice parameters of SMO, both in bulk and in the VAN. The strain equates to +3.0% along the a-axis and −4.7% along the b-axis, or an average in-plane strain of −1.0%. Hence, plain SMO films on STO will be compressed in-plane by the STO and tensed out-of-plane. This was confirmed in Supplementary Fig. 1a.
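As a quick numerical check, the strain definitions above can be evaluated directly. The sketch below is illustrative only; it uses the pseudo-cubic lattice parameters quoted in the text (SMO 3.944 Å, STO 3.905 Å), whereas per-axis values would require the orthorhombic a and b parameters from Supplementary Table 1.

```python
def misfit_strain(a_film: float, a_bulk: float) -> float:
    """Misfit strain in percent: (a_film - a_bulk) / a_bulk * 100."""
    return (a_film - a_bulk) / a_bulk * 100.0

A_STO = 3.905  # STO lattice parameter (Angstrom), from the text
A_SMO = 3.944  # bulk average pseudo-cubic SMO lattice parameter (Angstrom)

# Average in-plane strain of SMO coherently matched to STO:
print(f"{misfit_strain(A_STO, A_SMO):+.1f} %")  # -> -1.0 %, as quoted above
```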
Looking now at the strain in the three different films, we see that in the o-SMO1 phase the in-plane strain levels (2.7-2.8%) are very close to the calculated values for bulk SMO strained to STO (3%). Hence, for all the films, o-SMO1 is coherently strained to the STO. This occurs because o-SMO1 is in the bottom part of the film where SMO is adjacent to the STO substrate surface. The out-of-plane strain levels (i.e. along c) (>4% w.r.t. bulk SMO) are relatively high for all the films and this is likely related to non-stoichiometry effects which are common to manganite films.
In the 100 nm VAN film (which is of most interest to us here because this film shows RT FE behaviour), we observe that the o-SMO2 phase has a very different strain state to either the o-SMO1 phase or bulk SMO. Strains w.r.t. bulk SMO of −0.4% along a and −6.3% along b are observed. Along d110 this equates to −3.6% (or −2.5% w.r.t. STO). Notably, the average in-plane strain is much larger in o-SMO2 than in o-SMO1, where it is only ~−0.9% w.r.t. SMO, or 0.01% w.r.t. STO. This is because the o-SMO1 is coherently strained by the STO, as already mentioned.
Along the c-axis, the strain in o-SMO2 is 4.9% w.r.t. bulk SMO, or 0.64% w.r.t. STO. Normally, such high strain levels are not maintained in multiferroic films, as misfit strain is reduced by nanoscale twin domain structures 5,15. It is likely that relaxation is hindered in the VAN films because the critical twin domain size is larger than the nanopillar size (which is tens of nm in our films (Fig. 1)).
In the 100 nm VAN films, the in-plane and out-of-plane strain values determined from the X-ray data (discussed above) were fully confirmed by lattice strain calculations from high-resolution STEM. A continuously increasing in-plane compressive strain up to ~5.0% w.r.t. bulk SMO (0.64% w.r.t. STO) was measured from the STEM images. Details of these measurements and calculations are shown in Supplementary Fig. 2 and the accompanying discussion.
The origin of the large in-plane compression in o-SMO2 is linked to how the BSO nanopillars strain the SMO in-plane. In plain films, only the substrate influences the SMO lattice parameters, whereas in VAN films it is well known that the pillars can strongly influence both the vertical and in-plane strain states, with increasing effect as the film thickness increases 20,38.
During growth, the BSO nanopillars give a nano-pressure-chamber effect because of the relatively faster growth of BSO compared to SMO. The faster growth rate is confirmed by the BSO nanopillars being taller than the SMO in the HR-TEM image of Fig. 1a, and also by the fact that the pillars are connected together in-plane to form a maze-like structure, as shown in Fig. 1d. Hence, they must overgrow the SMO, impinging on it and squeezing it in-plane. Also, upon cooling, the pillar shrinkage can lead to a further in-plane compression of the SMO matrix 21, because the BSO is stiffer than the SMO 21,23-26.
A schematic of the 3D strain effect in the VAN films, showing how it emerges with thickness, is shown in Fig. 5. Figure 5a-c illustrates the thin VAN film, with Fig. 5a showing a 3D sketch of the film microstructure, Fig. 5b showing that the SMO is 45° rotated in-plane, and Fig. 5c showing a schematic of the crystal structure matching in a film cross-section. We label the thin VAN film as being in the 'a-region', corresponding to the o-SMO1 phase. The thick VAN film is shown in Fig. 5d. … Since 55.5085 Å > 54.2451 Å, the SMO is estimated to be stretched by ~2% at the interface with BSO. In fact, we observe an extension of just under 1% more than that in the plain film. It is likely less than calculated because the strain is also accommodated by stoichiometry modification.
Overall, there is a decrease of the cell volume in o-SMO2 compared to o-SMO1 (240.7 Å3, cf. 227.7 Å3). This is mainly dominated by the decrease of the a/b lattice parameters (d110 strain −3.6%). Hence the Mn-O bond length will be decreased in line with the in-plane compression, and this will modify the in-plane magnetic interactions of the Mn moments and the spin-order states 10. The out-of-plane Mn-O-Mn bond angle increases and the in-plane Mn-O bond length decreases.
We recall that there was a ~3 times stronger volume magnetisation signal in the 100 nm VAN film (containing o-SMO1 and o-SMO2) compared to the 20 nm VAN film (containing o-SMO1 and possibly some o-SMO2, the latter too weak to be observed in XRD). This proves that o-SMO2 is FM (Fig. 3). We now consider whether the o-SMO1 phase in the VAN films could be weakly FM and FE. Considering that the strain levels of o-SMO1 in the 20 nm and 100 nm VAN films are very similar to those in the plain SMO films (Supplementary Table 1), which are non-magnetic and PE, it is likely that o-SMO1 has the same non-magnetic and PE properties. From the volume magnetisation values, the critical thickness of o-SMO1 is estimated to be ~15 nm, and so the top ~5 nm of the 20 nm VAN film would be o-SMO2. This top layer would give the FM observed for the 20 nm VAN film in Fig. 3. The reason that no FE was measured in this top layer is that it would be too thin for the FE to be measured.
To quantitatively understand the effect of the strain on the Mn-O bond lengths and Mn-O-Mn bond angles, and hence to explain the origin of the ferroelectricity in o-SMO2, DFT calculations were performed for the o-SMO1 and o-SMO2 structures. Fixed-cell geometry optimisations were carried out using the experimentally measured lattice parameters tabulated in Supplementary Table 2. It is known that DFT systematically under- or overestimates the lattice constants, but only a relative comparison is needed here in order to reveal the effect of strain on the ionic coordinates. The calculated bond lengths (Supplementary Table 2) show a significant change of the crystal structure in o-SMO2 compared to o-SMO1.
For o-SMO1, the average in-plane Mn-O bond length is calculated to be 2.038 Å. For o-SMO2 the calculated value is lower, at 1.984 Å, i.e. by −2.65%. This is in close agreement with the experimentally measured difference from Supplementary Table 1. The calculations also show that the out-of-plane Mn-O-Mn bond angle increases moderately from 143.722° in o-SMO1 to 147.361° in o-SMO2 (i.e. a change of ~3.6°). These changes in bond lengths and angles are much larger than those achieved previously by physical pressure methods (e.g. 0.8° and 1.1%, respectively, at 10 GPa in TbMnO3, which has the highest reported P of all RMnO3 (at 5 K)) 18.
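These percentage and angle changes follow directly from the tabulated DFT values; a minimal arithmetic check:

```python
# DFT bond metrics quoted in the text for o-SMO1 vs o-SMO2.
d_smo1, d_smo2 = 2.038, 1.984          # in-plane Mn-O bond lengths (Angstrom)
ang_smo1, ang_smo2 = 143.722, 147.361  # out-of-plane Mn-O-Mn bond angles (deg)

print(f"bond length change: {(d_smo2 - d_smo1) / d_smo1 * 100:+.2f} %")  # -2.65 %
print(f"bond angle change:  {ang_smo2 - ang_smo1:+.1f} deg")             # +3.6 deg
```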
In RMnO3, it is well known that the bond angles and lengths (modified by changing R to induce chemical pressure, or by applying physical pressure 16) are critical to the magnetic properties. The magnetic spin ordering changes from A-AFM to E-AFM when the temperature is reduced or the structural distortion increases, causing J1 and J2 to increase 18. The large pseudo-cubic strain of −3.6% in o-SMO2 is expected to destroy the A-AFM ordering present in bulk SMO, changing it to E-AFM 17,18. Furthermore, theoretical calculations show that under a large compressive strain of ~−4% 17 the E-AFM structure is not stable and FM ordering emerges with highly enhanced polarisation. The predicted FM ordering temperature (TC,FM) is ~80 K 17.
Here, the reduction of the Mn-O bond length and the increase of the Mn-O-Mn bond angle in o-SMO2 are consistent with the proposed mechanism of Mn-Mn interactions with both J1 and J2 enhanced. These enhanced interactions explain the ferromagnetism and the RT FE polarisation in the 100 nm thick SMO:BSO VAN film. Hence, our experimental results for the o-SMO2 phase match the theory very well, both for the level of in-plane strain (−3.6%) and for the TC,FM of ~90 K with enhanced FE polarisation.
Moreover, under in-plane compressive strain, the staggered d3x2−r2/d3y2−r2-type orbital ordering in RMnO3, which has a small GdFeO3-type distortion (with A-AFM) 13, changes to a mixture with dx2−z2/dy2−z2 states 17. This enhances the asymmetric hopping of eg electrons between the Mn sites and subsequently enhances the net FE polarisation 17. Therefore, the main origin of the enhanced ferromagnetism and RT FE polarisation of o-SMO2 in the 100 nm thick SMO:BSO VAN film is the 3D strain exerted by the BSO nanopillars.
On a final note, it is possible that further tuning of the strain in o-RMnO3 phases using different VAN compositions could yield even higher-temperature multiferroicity. It is also possible that investigating 'asymmetric hopping' of Mn eg electrons in VAN structures containing other multiferroic materials, e.g. BiFeO3 films with a mixture of R- and T-phases on STO, or the h-RMnO3 structure, could lead to RT multiferroicity.
In this study, by 3D strain engineering of SmMnO3 in nanocomposite films, we report spin-driven ferroelectricity at room temperature with clear high-TC,FM ferromagnetic behaviour. This compares to bulk SmMnO3, which has a paraelectric, A-type antiferromagnetic ground state with a TN of ~60 K. The net switching polarisation (2PR) and piezoresponse amplitude (d33) are 3.9 μC cm−2 and 6.7 pm V−1, respectively. This compares with previous reports of GdMnO3 films 5 with a FE TC,FE of 75 K, and of TbMnO3 films having an electric polarisation of ~1.8 μC cm−2 (but at a very low temperature of 5 K, under an external pressure of 5.2 GPa and a magnetic field of 8 T) 18. In addition to the enhanced ferroelectricity, the ferromagnetic transition temperature (TC,FM) and saturation moment (MS) of SmMnO3 were ~90 K and 1.02 μB Mn−1 at 10 K, respectively. The enhanced ferroelectricity of SmMnO3 was only present in the thicker (100 nm) nanocomposite films, where the in-plane compression is much larger than in the thinner films. As determined from DFT calculations, the large in-plane compression leads to the change of Mn-O bond angle and length, which indicates an enhanced exchange interaction, consistent with the experimentally observed ferromagnetism and room-temperature ferroelectricity.
Methods
Sample preparation. Self-assembled VAN thin films of (SmMnO3)0.5:((Bi,Sm)2O3)0.5 were grown on both (001) SrTiO3 (STO) and Nb-doped SrTiO3 (Nb:STO) substrates by pulsed laser deposition (PLD). Film thicknesses of 20 nm and 100 nm were grown. As a reference, a SmMnO3 film of 100 nm thickness was also grown. The targets were prepared by mixing appropriate ratios of high-purity Bi2O3, Mn2O3 and Sm2O3 powders, with 10% Bi excess, to give a Bi:Sm ratio of 1:1 in the (Bi,Sm)2O3. The targets were sintered at 900 °C (VAN films) to 1000 °C (plain SMO). A deposition temperature of 650 °C was used for all films. The laser pulse rate was 2 Hz and the laser fluence was 1.5 J cm−2. The oxygen pressure was fixed at 100 mTorr during the deposition.
XRD analysis. To confirm the phase and the crystalline quality of the thin films, detailed high-resolution 2θ-ω XRD scans were carried out on a Panalytical Empyrean high-resolution XRD system at room temperature. To explore the 3D strain state, asymmetric RSMs around the (113) reflection of STO were also recorded. The a and b lattice parameters were determined by peak fitting of the RSM scans using the Epitaxy® software package. For the o-SMO1 phase, the c parameters were also calculated from the 2θ-ω scans, cross-checked against the RSM values, and found to be the same within ±0.01 Å. Details of the structural information can be found in the Supplementary Discussion on XRD.
HRTEM analysis. Detailed structural properties of the films were investigated by HRTEM (a JEOL 2010 microscope operating at 200 kV and a JEOL 4000 EX microscope operating at 400 kV) and by a FEI Titan G2 80-200 STEM with a Cs probe corrector, operated at 200 kV. EDS was used for the element distribution mapping.
Magnetic measurements. Detailed magnetic measurements were carried out using a superconducting quantum interference device (SQUID) magnetometer (Quantum Design, MPMS). The magnetic moment was converted from emu cc−1 to μB Mn−1 using the unit cell volume obtained from XRD.
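The conversion from volume magnetisation to moment per Mn follows from the unit-cell volume and the number of Mn ions per cell. The sketch below is illustrative, not the authors' code; it assumes Z = 4 Mn per orthorhombic (GdFeO3-type) unit cell and uses a placeholder magnetisation value.

```python
MU_B_EMU = 9.274e-21  # Bohr magneton in emu (erg/G)

def emu_cc_to_mub_per_mn(m_emu_cc: float, v_cell_a3: float, n_mn: int = 4) -> float:
    """Convert volume magnetisation (emu/cc) to Bohr magnetons per Mn ion."""
    v_cell_cm3 = v_cell_a3 * 1e-24  # 1 Angstrom^3 = 1e-24 cm^3
    moment_per_cell = m_emu_cc * v_cell_cm3
    return moment_per_cell / (n_mn * MU_B_EMU)

# Placeholder input: 100 emu/cc with the o-SMO2 cell volume quoted in the text.
print(f"{emu_cc_to_mub_per_mn(100.0, 227.7):.2f} muB per Mn")
```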
Ferroelectric measurements. Atomic force microscopy was used to determine the surface structure of the films. PFM measurements were performed using an Agilent 5500 Scanning Probe Microscope (Agilent SPM 5500) in PFM mode with three (MAC-3) lock-in amplifiers (LIAs) at RT. For the PFM measurements, an Olympus Pt-coated tip (Asylum Research AC240TM) was used at an excitation frequency of 15 kHz with an alternating current (AC) voltage VAC of 2 V, polarised by a direct current (DC) bias VDC of ±5 V. The inverse (also called converse) piezoelectric effect was used to induce longitudinal (thickness) film displacements on a local scale and beneath the Pt surface electrodes (~250 μm in diameter). The PFM displacement was influenced little by the electrostatic force between the sample and the cantilever assembly, since they were at the same electrical (the whole cantilever body was electrically screened by the tip) and contact (Pt tip and Pt patch) potential.
To characterise the FE properties, polarisation-electric field (P-E) hysteresis loops were measured using a Radiant precision LC analyzer at room temperature.
Further switching of polarisation was tested using the PUND pulse method. In the PUND measurements, a series of five pulses were applied at 300 kV cm−1 with pulse sequences of different pulse widths (1 ms, 0.1 ms and 0.01 ms) to capture both the switching and non-switching polarisations. The required switching electric field of 500 kV cm−1 was relatively low (e.g. 9 V applied on the SMO0.5:BSO0.5 nanocomposite films of 100 nm thickness). A delay of 1000 ms was set between the pulses. The initial pulse was applied to preset the film to the polarisation state, and no measurement was made at this point. A second pulse (Pulse 1) was applied to switch the sign of the polarisation. After the 1000 ms delay, the film was left to relax, allowing the non-remnant polarisation to dissipate. A third pulse (Pulse 2) was then applied and the polarisation was measured without pre-switching. Fourth and fifth pulses (Pulses 3 and 4) were then applied, similar to Pulses 1 and 2 but with the opposite polarity. To calculate the maximum net switching polarisation (2PR), the full area of the electrode was considered. Since the VAN film is made up of two phases, the calculated polarisation value represents a lower bound: the non-FE BSO phase forms around half the volume of the film, and so the polarisation values could be up to around two times larger than estimated.
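The net remanent polarisation follows from subtracting the non-switching response from the switching response for each polarity. The following is a minimal sketch of that arithmetic with placeholder values; P* and P^ denote the switching and non-switching polarisations measured on Pulses 1/3 and 2/4, respectively.

```python
def net_switching_polarisation(p_star: float, p_hat: float) -> float:
    """2P_R for one polarity: switching minus non-switching polarisation."""
    return p_star - p_hat

# Placeholder values in uC/cm^2, not measured data:
p_star_pos, p_hat_pos = 5.0, 1.1    # positive-polarity pulses (Pulses 1 and 2)
p_star_neg, p_hat_neg = -5.1, -1.2  # negative-polarity pulses (Pulses 3 and 4)

print(net_switching_polarisation(p_star_pos, p_hat_pos))  # +3.9
print(net_switching_polarisation(p_star_neg, p_hat_neg))  # -3.9
```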
PUND measurements at the nanoscale were also conducted with PFM using the nano-PUND (also termed AFM-PUND) method 30,31. The measurements were done by applying switching and non-switching voltages via the back (substrate) electrode of the sample and a conductive AFM tip connected to a trans-impedance amplifier with the input at virtual zero. In this method, triangular sweeps were applied with either positive and/or negative voltage, in pairs.

SHG experimental setup. Far-field reflection SHG polarimetry was performed with an 800 nm fundamental laser beam generated from an Empower 45 Nd:YLF-pumped Solstice Ace Ti:Sapphire femtosecond laser system (pulse width of 95 fs and repetition rate of 1 kHz). A schematic of the far-field reflection SHG setup is shown in Supplementary Fig. 7. The 800 nm laser beam is incident on the thin-film sample at a tilt angle θ (0° or 45°), defined as the angle between the sample normal and the wavevector of the incident beam. The in-plane orientation of the sample is defined by ψ (0° or 90°). The polarisation state of the light is defined by the angle φ, which is rotated from 0° to 360° by a λ/2 (half-wave) plate. The second-harmonic electric field (400 nm) reflected from the sample is separated into vertical (s_out) and horizontal (p_out) components by a polarising beam-splitter, and the SHG intensity of each component is measured by a photomultiplier tube (PMT). To ensure that there is no SHG signal from the substrate, the substrate contribution was removed by defocusing the fundamental beam.
Density functional theory. The plane-wave pseudopotential code CASTEP was used for the DFT calculations. A 6×6×6 Monkhorst-Pack grid for k-points and a plane-wave cut-off energy of 700 eV were used. The PBEsol exchange-correlation functional was used since it gives equilibrium lattice constants close to experimentally measured values. Pseudopotentials were used to treat the valence electrons for the 2s 2p states in O, the 3s 3p 3d 4s states in Mn, and the 5s 5p 6s 5d states in Sm. The A-type AFM arrangement of spins was used for the Mn atoms. A value of U = 4 eV was used to correct the self-interaction error for the d electrons in Mn. The 4f states in Sm were included in the core, as otherwise they could cause instabilities during the self-consistent cycles. Since we are only interested in the change in bond angle and bond length with applied strain, neither the choice of U value nor the omission of the Sm 4f electrons should significantly affect the results.
Data availability
All the experimental and calculation data that support the findings of this study are available from the corresponding authors upon reasonable request. All the codes used for this study are available from the corresponding authors upon reasonable request.
"Materials Science"
] |
CRISPR-Knockout of CSE Gene Improves Saccharification Efficiency by Reducing Lignin Content in Hybrid Poplar
Caffeoyl shikimate esterase (CSE) has been shown to play an important role in lignin biosynthesis in plants and is, therefore, a promising target for generating improved lignocellulosic biomass crops for sustainable biofuel production. Populus spp. has two CSE genes (CSE1 and CSE2) and, thus, the hybrid poplar (Populus alba × P. glandulosa) investigated in this study has four CSE genes. Here, we present transgenic hybrid poplars with knockouts of each CSE gene achieved by CRISPR/Cas9. To knockout the CSE genes of the hybrid poplar, we designed three single guide RNAs (sg1–sg3), and produced three different transgenic poplars with either CSE1 (CSE1-sg2), CSE2 (CSE2-sg3), or both genes (CSE1/2-sg1) mutated. CSE1-sg2 and CSE2-sg3 poplars showed up to 29.1% reduction in lignin deposition with irregularly shaped xylem vessels. However, CSE1-sg2 and CSE2-sg3 poplars were morphologically indistinguishable from WT and showed no significant differences in growth in a long-term living modified organism (LMO) field-test covering four seasons. Gene expression analysis revealed that many lignin biosynthetic genes were downregulated in CSE1-sg2 and CSE2-sg3 poplars. Indeed, the CSE1-sg2 and CSE2-sg3 poplars had up to 25% higher saccharification efficiency than the WT control. Our results demonstrate that precise editing of CSE by CRISPR/Cas9 technology can improve lignocellulosic biomass without a growth penalty.
Introduction
Plant lignocellulosic biomass (i.e., wood) is an important renewable and sustainable feedstock for the production of both biomaterials and biofuels [1,2]. The production of biofuels from biomass is gaining more attention due to the growing global climate crisis [3,4].
Polysaccharides in biomass are fermented into ethanol or other compounds by optimized microorganisms after saccharification [5]. However, biomass does not easily decompose due to the complex chemical and physical structure of the plant cell wall, which is referred to as biomass recalcitrance [6][7][8][9]. One of the major causes of biomass recalcitrance is the presence of lignin, a phenolic polymer that provides strength and hydrophobicity to the secondary cell wall. Lignin impedes the efficient enzymatic degradation of cellulose and hemicellulose into fermentable sugars by immobilizing hydrolytic enzymes and physically restricting access to the polysaccharide substrate [6,8,10,11].
Lignin is a heterogeneous polymer comprising three types of monomers synthesized in the phenylpropanoid pathway, starting with the aromatic amino acid phenylalanine. After deamination of phenylalanine by phenylalanine ammonia-lyase (PAL), the resulting cinnamic acid undergoes a series of aromatic ring and propene tail modifications resulting in three hydroxycinnamoyl alcohols with different degrees of methoxylation, namely p-coumaryl, coniferyl, and sinapyl alcohols. Once incorporated into the polymer, these monolignols produce p-hydroxyphenyl (H), guaiacyl (G), and syringyl (S) units, respectively [12,13].
A number of pretreatment methods have been developed to lower biomass recalcitrance, but pretreatment is still a relatively expensive step in the manufacturing process of biofuels [14,15]. Thus, bioengineering of trees that produce less lignin but maintain normal growth would reduce processing costs and the carbon footprint of biofuel production [7,[16][17][18][19].
Recently, Vanholme et al. [15] demonstrated that caffeoyl shikimate esterase (CSE) catalyzes the conversion of caffeoyl shikimate into caffeate in Arabidopsis, which, together with 4-coumarate:CoA ligase (4CL), bypasses the second hydroxycinnamoyl-CoA shikimate/quinate hydroxycinnamoyltransferase (HCT) reaction in the lignin biosynthetic pathway [20]. Loss of function of CSE by T-DNA insertion in Arabidopsis resulted in a reduction of lignin levels by up to 36%, with preferential accumulation of H units (30-fold) [15]. A similar phenotype was reported in a CSE loss-of-function mutant of Medicago truncatula generated by transposon insertion [21]. Saleme et al. [22] later demonstrated that downregulation of CSE by RNAi silencing resulted in a reduction in lignin deposition (up to 25%), with increased levels of H units (two-fold) in the lignin polymer and a higher cellulose content, in hybrid poplar (Populus tremula × P. alba). Recently, LkCSE was successfully cloned from the gymnosperm tree species Larix kaempferi and was shown to convert caffeoyl shikimate to caffeate and shikimate in in vitro assays using recombinant LkCSE protein [23].
In both Arabidopsis and hybrid poplar, saccharification efficiency was dramatically increased by mutation of CSE owing to the reduction in lignin deposition, while overall plant growth was not severely inhibited [15,22]. These results suggest that CSE is not only important for lignin biosynthesis but is also a promising target for generating improved lignocellulosic biomass crops for biofuel production [15,22].
In this study, we functionally characterized transgenic CSE-knockout hybrid poplars generated by clustered regularly interspaced short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9) technology. CRISPR/Cas9 technology is based on the Cas9 nuclease and a single-guide RNA (sgRNA) for target DNA sequence recognition, and can be used to make gene-specific insertion or deletion (indel) mutations [24]. CRISPR/Cas9 has been widely used for genome editing in plants due to its efficiency and simplicity [25-28]. We designed sgRNAs for the CSEs of the hybrid poplar (Populus alba × P. glandulosa, clone BH) and produced transgenic CSE-CRISPR poplar knockouts of either CSE1 (i.e., CSE1-sg2) or CSE2 (i.e., CSE2-sg3), or both genes; mutation of either CSE1 or CSE2 resulted in a reduction in lignin deposition by up to 29.1% and significantly increased saccharification efficiency (up to 25%). We discuss the significance of using this approach to improve woody biomass feedstock for biofuel production.
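For readers unfamiliar with sgRNA design, candidate Cas9 target sites are 20-nt protospacers immediately followed by an NGG PAM. The sketch below is a generic illustration, not the design pipeline used in this study; it scans the plus strand of a sequence for such sites, and the example sequence is made up rather than a real CSE fragment.

```python
import re

def find_cas9_sites(seq: str, protospacer_len: int = 20):
    """Yield (position, protospacer, PAM) for each NGG PAM site on the + strand."""
    pattern = re.compile(r"(?=([ACGT]{%d})([ACGT]GG))" % protospacer_len)
    for m in pattern.finditer(seq.upper()):
        yield m.start(), m.group(1), m.group(2)

# Hypothetical 40-bp fragment, not a real CSE sequence:
demo = "ATGGCTTCTGAGAAGCTAGTCGATCCTTACAAGCTTTGGA"
for pos, spacer, pam in find_cas9_sites(demo):
    print(pos, spacer, pam)
```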
Predicted CSE Protein and CSE Gene Expression in Transgenic CSE-CRISPR Hybrid Poplars
To visualize the functional significance of the CRISPR/Cas9-induced mutations in each line, we prepared a schematic diagram of the predicted CSE proteins by querying the edited gene sequences of each transgenic line with the NCBI ORF finder program (https://www.ncbi.nlm.nih.gov/orffinder/, accessed on 17 August 2021) (Figure 2a). In line #2 of the CSE1/2-sg1 poplar, three CSE genes (i.e., PaCSE1, PaCSE2 and PgCSE2) were edited in the 1st exon as per our experimental design; thus, N-terminally deleted proteins (PaCSE1 and PaCSE2; 309 amino acids) or a one-amino-acid-deleted protein (PgCSE2) were predicted. However, PgCSE1 remained intact (326 amino acids), with no gene editing (Figure 2a). The CSE1-sg2 poplars (lines 1, 16 and 28), which were targeted for knockout of CSE1, had nonsense mutations in both CSE1 genes (PaCSE1 and PgCSE1) in the 2nd exon, resulting in predicted C-terminally truncated CSE1 (PaCSE1 and PgCSE1) proteins of only 146 and 154 amino acids, respectively, while the two CSE2 proteins (PaCSE2 and PgCSE2) were intact (Figure 2a). The CSE2-sg3 poplars (lines 4, 17, and 19), which were targeted for CSE2 knockout, had mutations only in the two CSE2 genes but not in the two CSE1 genes, as expected (Figure 2a). Among these lines, line 19 had nonsense mutations in the second exon of both CSE2 genes (PaCSE2 and PgCSE2), which would result in C-terminally truncated CSE2 (PaCSE2 and PgCSE2) proteins of only 143 and 174 amino acids, respectively.
To quantify the expression of CSE genes in the CSE1-sg2 and CSE2-sg3 poplars, quantitative real-time PCR (RT-qPCR) was performed using primers amplifying the C-terminal region downstream of the sg2 and sg3 target sites (Figure 1a). The PtrACTIN7 (Potri.001G309500) gene was used as an internal quantitative control. Expression of CSE1 and CSE2 in CSE1-sg2 and CSE2-sg3 poplars, respectively, was considerably reduced compared to the expression of these genes in BH poplar (control) (Figure 2b,c). However, as expected, there was no significant change in PagCSE1 expression in CSE2-sg3 poplar, and vice versa (Figure 2b,c). This result is consistent with our previous report on PDS-CRISPR poplars [29] and can be explained by nonsense-mediated mRNA decay, a surveillance pathway present in all eukaryotes that eliminates mRNA transcripts containing premature stop codons, reducing gene expression errors [30].
CSE-CRISPR Hybrid Poplars Have Reduced Lignin Deposition
We measured the Klason lignin contents of CSE-CRISPR poplars together with that of the control BH poplar (three-month-old plants grown in pots), using cell wall materials obtained from stem tissues. As shown in Figure 3a, both CSE1-sg2 and CSE2-sg3 poplars had Klason lignin deposition that was reduced by up to 16 wt% compared to BH. All three lines of CSE1-sg2 poplars (lines 1, 16, and 28) had a similar reduction in lignin content. However, among CSE2-sg3 poplars, only line 19 showed a clear reduction in lignin. Interestingly, CSE1/2-sg1 poplars had no changes in lignin content compared to BH (Figure 3a). We attributed these results to the gene editing outcomes in each CSE-CRISPR poplar line and selected line 16 of CSE1-sg2 and line 19 of CSE2-sg3 poplar for further in-depth analyses.
To quantify the compositional changes of the cell wall components, we performed cell wall analysis using line 16 of CSE1-sg2 and line 19 of CSE2-sg3 poplar grown in the LMO field for 8 months (Figure 3b). Our results showed a significant reduction of total lignin content in CSE-CRISPR poplars, up to 29.1% compared to BH poplars. This reduction in lignin content is higher than that in Figure 3a, which may result from the different growth conditions (i.e., three months in pots vs. eight months in the LMO field). Interestingly, both cellulose and hemicellulose contents were slightly increased in the CSE-CRISPR poplars, consistent with the previous report [22]. However, there were no significant changes in the contents of the extractives.

(Figure 3 caption: Eight-month-old LMO field-grown stem tissues were used to analyze the composition of cell wall components; n = 3, error bars = S.E. Asterisks indicate significant differences compared to BH by the unpaired Student's t-test; * p < 0.05, *** p < 0.001.)
CSE-CRISPR Hybrid Poplars Have Collapsed Xylem Vessels with Decreased S-Lignin Content
Because both CSE1-sg2 and CSE2-sg3 poplars showed a significant reduction in lignin content, we examined secondary xylem formation by stem cross-sections. Both CSE1-sg2 and CSE2-sg3 poplars (line 16 and line 19, respectively) exhibited collapses of xylem vessel cells (e.g., irregularly shaped xylem) (Figure 4), which is commonly found in plants that have defective accumulation of secondary wall components (such as cellulose, lignin and xylan) (for a review, [31]). On the contrary, BH poplars showed normal xylem vessel development (Figure 4). This result is consistent with the reduced lignin content in CSE1-sg2 and CSE2-sg3 poplars shown in Figure 3.
Indeed, Wiesner (also known as phloroglucinol-HCl) and Mäule staining of both CSE1-sg2 and CSE2-sg3 poplars revealed weaker red coloration than observed in BH poplars, suggesting a decrease in lignin deposition and S-lignin content, respectively (Figure 4b,c).
Coordinated Expression Changes of Genes Involved in Lignin Biosynthesis
Next, we examined the expression of genes involved in the lignin biosynthetic pathway (Figure 5). As expected, genes upstream of CSE showed relatively stable expression levels compared to downstream genes, except for the PtrC4H1 and PtrC4H2 genes (Figure 5a,b). For example, expression of the downstream genes PtrCCoAOMT1 and PtrCCR2 was significantly suppressed in CSE-CRISPR poplars compared to BH control poplars (Figure 5b).
Both PtrMYB152 and PtrMYB92 have been shown to regulate secondary cell wall thickening and increase total lignin content in poplars [32,33]. Interestingly, expression of both transcription factor genes was significantly repressed in our CSE-CRISPR poplars (Figure 5c), which may also have contributed to the reduction in total lignin content of the CSE-CRISPR poplars.
Enhanced Saccharification Efficiency of CSE-CRISPR Transgenic Poplars with Normal Growth Performance
Saccharification efficiency of wood materials from CSE-CRISPR poplars was measured by quantifying the amount of glucose released at different incubation times after hot water or alkali (1% NaOH) pretreatment (Figure 6a). We found a significant increase (>25% at 72 h) in glucose release from NaOH-treated CSE-CRISPR poplars (CSE1-sg2 #16) compared to BH poplars (Figure 6a). These results suggest that biomass recalcitrance was reduced and thus glucose release was improved in CSE-CRISPR poplars, most likely due to the decreased lignin content and increased fermentable sugars shown in Figure 3.

(Figure 6 caption: Asterisks indicate significant differences compared to BH by the unpaired Student's t-test; * p < 0.05, ** p < 0.01, *** p < 0.001. Error bars indicate the standard errors of three independent experiments.)
Previously, Vanholme et al. [15] reported that an Arabidopsis CSE loss-of-function mutant (cse-2) exhibited a 40% reduction in plant growth. Furthermore, loss of function of CSE in transposon insertion lines of M. truncatula resulted in severe dwarfing and altered development [21]. We therefore investigated the overall growth phenotypes (e.g., stem height and diameter growth) of both CSE1-sg2 and CSE2-sg3 poplars compared to BH poplars. Interestingly, we detected no significant differences in growth among CSE1-sg2, CSE2-sg3, and BH poplars in a living modified organism (LMO) field test conducted over a year covering all four seasons (Figure 6b).
Discussion
Lignin is essential for the growth and development of terrestrial plants as it contributes to the creation of a very strong secondary cell wall. At the same time, lignin makes it difficult to process plant biomass into fermentable sugars [6,34]. Not only does CSE play an essential role in plant lignin biosynthesis, it is also an excellent target for producing improved biomass crops for sustainable biofuel production [15,22]. Here, we described the generation and functional characterization of transgenic hybrid poplars with knockouts of each CSE gene by CRISPR/Cas9 technology.
CSE-Knockout Reduces Lignin Deposition in Poplar Stems
We generated three different transgenic hybrid poplars with mutations of either CSE1 (CSE1-sg2), CSE2 (CSE2-sg3), or both genes (CSE1/2-sg1). However, we did not observe any phenotypic changes in CSE1/2-sg1 poplars (both CSE1 and CSE2 mutated), most likely because the N-terminus of the CSE1/2 proteins was targeted. In fact, CSE1/2-sg1 poplars are expected to have an intact PgCSE1 protein, a PgCSE2 protein with a single amino acid deletion, and PaCSE1 and PaCSE2 proteins lacking the 17 N-terminal amino acids, all of which could potentially function properly (Figure 2a). We also performed in-depth analyses on five additional CSE1/2-sg1 lines, namely two biallelic lines (#11, #12) and three homozygous lines (#30, #32, #34); however, all of these lines showed growth performance similar to that of BH poplars, with no significant changes in lignin deposition (data not shown). Thus, we focused on characterizing CSE1-sg2 and CSE2-sg3 poplars with mutations of CSE1 and CSE2, respectively (Figure 2). Consistent with previous reports, CSE1-sg2 and CSE2-sg3 poplars had up to 29.1% reduced lignin deposition (Figure 3) [15,21,22]. In our analysis of stem anatomy (Figure 4b), we found that both CSE1-sg2 and CSE2-sg3 poplars had collapsed xylem vessels with reduced Wiesner staining; as this stain reacts with O-4-linked coniferyl and sinapyl aldehydes in lignified cells [35], this further confirmed a reduction in lignin content. In addition, a decrease in S-lignin content was revealed by Mäule staining (Figure 4c), which specifically stains S units red [36][37][38]. This result is consistent with the previous finding that CSE proteins function after the branch point where G and S unit biosynthesis diverges from that of H units in the lignin pathway [15,22].
CSE1-sg2 and CSE2-sg3 Poplars Exhibit Normal Growth Performance Based on a Long-Term Field Test
CSE loss-of-function mutants of Arabidopsis and M. truncatula displayed severe dwarfing and altered development [15,21]. However, hpCSE lines (CSE-RNAi silencing) of hybrid poplar did not have drastically altered plant growth or development even though these lines had up to 25% reduced lignin deposition [22]. The mild phenotype in the hpCSE lines is likely due to residual expression of both PtxaCSE paralogues [22]. However, because RNAi silencing simultaneously downregulates both CSE genes, it was difficult for these researchers to investigate the individual roles of each of the two genes.
In the hybrid poplar used in this study (Populus alba × P. glandulosa, clone BH), both PagCSE1 (indicating PaCSE1 and PgCSE1, together) and PagCSE2 genes were strongly and preferentially expressed in mature developing xylem (MDX) tissue, whereas much lower transcript levels were detected in shoot apical meristem with leaf primordia (SL), intermediate or mature stem-derived cambium (IC or MC), and leaves without veins (ML) [39]. Therefore, if one of the two CSE genes is unavailable, it is very likely that the other can function as a paralog for lignin biosynthesis. Indeed, our CSE1-sg2 and CSE2-sg3 poplars grew like control poplars, as demonstrated in our long-term LMO field test covering all four seasons (Figure 6b). This result can be explained by the fact that unlike in Arabidopsis and M. truncatula, only one of the two CSE genes was knocked out in the CSE1-sg2 and CSE2-sg3 poplars, respectively. Furthermore, PagCSE1 and PagCSE2 appear to be functional paralogs in our hybrid poplar.
CSE-Knockout Improves the Saccharification Efficiency of Poplar Stems
It has been well documented that lignin is a major impediment to the conversion of plant biomass into fermentable sugars [6,34]. To produce economically feasible biofuels, many efforts have been made to reduce the lignin-derived recalcitrance of biomass feedstock [40][41][42][43]. Reducing CSE function has been shown to produce better biomass feedstock by lowering the recalcitrance of Arabidopsis and hybrid poplar biomass and thereby enhancing saccharification [15,22]. Very recently, de Vries et al. (2021) [44] reported CRISPR-Cas9 editing of CSE in Populus tremula × P. alba, an approach very similar to that used in this study. However, in their study, CRISPR-Cas9-generated cse1 and cse2 single mutants had no significant phenotype and a wild-type lignin level; only cse1 cse2 double mutants showed a reduction in lignin (35%), with a severe growth penalty. The cse1 cse2 double mutants had a four-fold increase in cellulose-to-glucose conversion upon limited saccharification [44].
Unlike the report of de Vries et al. (2021) [44], our CSE1-sg2 and CSE2-sg3 poplars had significantly reduced lignin levels (up to 29.1%) and thus showed a dramatic increase in saccharification efficiency (Figure 6a). It is not yet clear why these results differ, but the different species and the different CRISPR target sites may be responsible. Additionally, because the hpCSE line had no growth penalty with a 25% reduction in lignin [22], a phenotypic effect is unlikely as long as the amount of lignin remains above a certain threshold.
Although the saccharification efficiency of the CSE1-sg2 and CSE2-sg3 poplars was lower than that of cse1 cse2 double mutant poplars [44], there was no associated growth penalty and, thus, CSE1-sg2 and CSE2-sg3 transgenic poplars can be directly utilized as efficient biomass feedstock for biorefineries.
Plant Materials and Growth Conditions
Hybrid poplars (Populus alba × P. glandulosa, clone BH) were used as both wild-type controls and transgenic plants in this study. Plants were acclimated in soil and grown in a growth room (16 h light; light intensity, 150 µmol m−2 s−1; 24 °C) or in an LMO field at the Forest Bioresources Department of the National Institute of Forest Science, Republic of Korea (latitude 37.2 N, longitude 126.9 E).
Growth Measurements
Stem height was measured using a scale bar from the top of the plant to the soil level, and stem diameter was measured using digital calipers (Mitutoyo, Kawasaki, Japan) at 3 cm above soil level. Three biological replicates per line were analyzed.
CSE-CRISPR/Cas9 Vector Construction and Plant Transformation
Single guide RNAs (sgRNAs) targeting CSE genes were designed with Cas-Designer in the CRISPR RGEN Tools (http://www.rgenome.net/cas-designer/20210817) using full-length cDNA sequences of the CSE genes (i.e., PaCSE1, PgCSE1, PaCSE2, and PgCSE2) and the Populus alba × P. tremula var. glandulosa (Poplar 84K) genome as a reference sequence. Target sequences were selected for a low expected number of mismatches and a high out-of-frame score (Figure S2a). Finally, three single guide RNAs (sg1-sg3) were selected for knockout of CSE1, CSE2, or both genes, and each guide RNA length was set to 20 bp excluding the protospacer adjacent motif (PAM) sequence (Figure S2b). The binary vector pHAtC (GenBank: KU213971.1) and the AarI-mediated sgRNA cloning system [45] were used for Agrobacterium-mediated transformation of the hybrid poplar. In brief, the annealed target sgRNA sequence was inserted between the AtU6 promoter and the sgRNA scaffold after AarI digestion and then circularized by T4 DNA ligase (New England Biolabs, Ipswich, MA, USA). The vector construct was then introduced into Agrobacterium tumefaciens strain GV3101, which was used to transform poplar by the stem node transformation-regeneration method [46,47]. All constructs used in this study were verified by DNA sequencing (Macrogen, http://dna.macrogen.com/kor/20210818).
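As a rough illustration of the guide-design step, the sketch below scans a sequence for 20-nt protospacers followed by an NGG PAM on the plus strand; the example sequence is invented, and the mismatch and out-of-frame scoring performed by Cas-Designer is omitted:

```python
# Hedged sketch of the protospacer/PAM scan only; Cas-Designer additionally
# scores guides for specificity and out-of-frame probability.
import re

def find_sgrna_sites(seq: str, guide_len: int = 20):
    """Return (position, protospacer, PAM) for every NGG PAM on the + strand."""
    sites = []
    for m in re.finditer(r"(?=([ACGT]GG))", seq):  # overlapping NGG matches
        pam_start = m.start()
        if pam_start >= guide_len:                 # need 20 nt upstream
            sites.append((pam_start - guide_len,
                          seq[pam_start - guide_len:pam_start],
                          seq[pam_start:pam_start + 3]))
    return sites

cds = "ATGGCTAGCTTGGCCGAAATCGGTTACCTGGCGTTCAAGGACTCTGCTAGGTTTGCATGG"
for pos, protospacer, pam in find_sgrna_sites(cds):
    print(f"pos {pos:2d}  {protospacer}  PAM={pam}")
```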
Genotyping of Regenerated Transgenic Hybrid Poplars by Targeted Deep Sequencing
Genotyping of the mutated sequences in transgenic hybrid poplars was performed using the Illumina MiniSeq platform (KAIST Biocore Center, Daejeon, Korea). In brief, genomic DNA was extracted from shoot tissue of regenerated transgenic hybrid poplars using the DNeasy Plant Mini Kit (Qiagen, Hilden, Germany). The target region was amplified using nested PCR primer pairs containing adapter sequences. Then, amplicons were labelled with an index sequence (Illumina, Seoul, Korea) using index PCR primer pairs, and targeted deep sequencing was conducted using an Illumina MiniSeq (KAIST Biocore Center, Daejeon, Korea). The resulting deep sequencing data were analyzed using Cas-Analyzer (www.rgenome.net/cas-analyzer/20210817). Primer pairs used in this study are listed in Table S1.
Histological Analysis
Cross sections of poplar stems were prepared by hand-cutting and stained with 0.05% toluidine blue O or 2% phloroglucinol/HCl for 1 min, as described previously [48]. Mäule staining was performed following the method of Mitra and Loqué [49]. In brief, stem cross sections were incubated for 2 min in 1 mL of 0.5% (w/v) potassium permanganate. Sections were then rinsed with distilled water 3-4 times until the solution remained clear. Then, 1 mL of 3% HCl was added to remove the deep brown color of the stained sections. The 3% HCl solution was removed and 1 mL of 14.8 M ammonium hydroxide solution was added immediately. Sections were observed using a digital camera-equipped microscope (CHB-213; Olympus, Tokyo, Japan).
RNA Extraction and RT-qPCR
For RNA extraction of hybrid poplars, the cetyltrimethylammonium bromide (CTAB) method was used because of the high amounts of polysaccharides and polyphenols in poplars, as described previously [48,50]. One microgram of total RNA was reverse transcribed using Superscript III reverse transcriptase (Invitrogen, Carlsbad, CA, USA) in a 20 µL reaction volume. Subsequently, RT-PCR was performed using 1 µL of the reaction product as a template. Quantitative real-time PCR was performed using a CFX96 Touch™ Real-Time PCR platform (Bio-Rad) with iQ™ SYBR® Green Supermix (Bio-Rad, Hercules, CA, USA). Poplar ACTIN7 (Potri.001G309500) was used as the internal quantitative control, and relative expression levels were calculated by the 2^−ΔΔCT method [51]. All primer sequences were designed using Primer3 software (http://fokker.wi.mit.edu/20210807). Sequences are provided in Table S1.
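For illustration, a minimal sketch of the 2^−ΔΔCT calculation with PtrACTIN7 as the internal control; the Ct values below are made up:

```python
# Hedged sketch of the 2^-ddCt relative-expression calculation; Ct values
# are illustrative, not measured data from this study.
def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    d_ct_sample = ct_target - ct_actin            # normalize to ACTIN7
    d_ct_control = ct_target_ctrl - ct_actin_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# e.g., CSE1 in a CSE1-sg2 line vs. the BH control
fold = relative_expression(ct_target=28.4, ct_actin=19.1,
                           ct_target_ctrl=24.6, ct_actin_ctrl=19.0)
print(f"CSE1 relative expression vs. BH: {fold:.3f}")  # << 1, i.e., reduced
```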
Measurement of Klason Lignin Content
Klason lignin (i.e., acid-insoluble lignin) contents of transgenic poplars grown for 3 months in soil were measured [52]. Stem tissues were dried at 65 °C for 1 week and ground to a fine powder. Ground materials (~100 mg) were placed in glass screw-cap tubes, and 1 mL of 72% (v/v) sulfuric acid was added, followed by thorough mixing. Tubes were placed in a water bath set at 45 ± 3 °C and incubated for 90 ± 5 min until all samples were hydrolyzed. The acid was diluted to a 4% concentration by adding 28 mL of deionized water. Samples were mixed by inversion several times to eliminate phase separation. Sealed samples were autoclaved for 1 h at 121 °C and slowly cooled to room temperature before removing the caps of the tubes. The autoclaved hydrolysis solution was vacuum-filtered through pre-weighed filter paper. The filter paper was dried at 105 °C until a constant weight of acid-insoluble residue was achieved. The filter paper was allowed to cool to room temperature, and the weight of the filter paper and dry residue was recorded.
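The gravimetric calculation reduces to a simple mass ratio; a sketch with illustrative weights:

```python
# Hedged sketch: Klason lignin as the acid-insoluble residue fraction of the
# starting sample. All weights below are made up for illustration.
def klason_lignin_pct(filter_dry_mg, filter_tare_mg, sample_mg):
    residue_mg = filter_dry_mg - filter_tare_mg   # acid-insoluble residue
    return 100.0 * residue_mg / sample_mg

print(f"{klason_lignin_pct(filter_dry_mg=112.3, filter_tare_mg=90.1, sample_mg=100.2):.1f} wt%")
```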
Cell Wall Composition Analysis
The main stems of 8-month-old LMO field-grown hybrid poplars were used for cell wall composition analysis. Stem tissues were dried (65 °C for 2 weeks) and ground to a fine powder. To determine the amount of extractives [53], 50 mL of acetone was added to 700 mg of sample, followed by a 2 h incubation at 65 °C with shaking. After vacuum filtration and washing (5 mL of 10% (v/v) acetone, three times), the filter paper was dried in an oven at 65 °C until a constant weight was obtained, which was then recorded. To extract hemicellulose [54], 4 mL of 10% (w/v) NaOH was added to 200 mg of the extractive-free samples collected above, followed by a 3 h incubation at 50 °C with shaking. After vacuum filtration and washing (5 mL of distilled water, three times), samples were dried (65 °C) until a constant weight was obtained, and the final weight of the residue was recorded. Lignin content was determined using the Klason lignin method [52]. Cellulose content was obtained by calculating the difference between the initial samples (100%) and the percentages of the three other components.
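Since cellulose is obtained by difference, the mass balance can be sketched as follows (percentages are illustrative):

```python
# Hedged sketch of the sequential gravimetric mass balance described above.
def cell_wall_composition(extractives_pct, hemicellulose_pct, lignin_pct):
    cellulose_pct = 100.0 - extractives_pct - hemicellulose_pct - lignin_pct
    return {"extractives": extractives_pct,
            "hemicellulose": hemicellulose_pct,
            "lignin": lignin_pct,
            "cellulose": cellulose_pct}

print(cell_wall_composition(extractives_pct=4.0, hemicellulose_pct=27.5, lignin_pct=21.0))
```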
Saccharification Efficiency of Transgenic Poplar
Saccharification efficiency was measured as described previously [50], with determination of reducing sugar content by the method of Yang et al. [55] with slight modifications. Briefly, for pretreatment, ground materials (~2 mg) were transferred into 2-mL screw-cap tubes and incubated with 200 µL of distilled water or 180 µL of NaOH (1%, w/v) at 30 °C for 30 min and then autoclaved at 120 °C for 60 min. After cooling to room temperature, 20 µL of 2.5 N HCl was used to neutralize the 1% NaOH-treated sample. After pretreatment, 300 µL of 0.1 M sodium acetate buffer (pH 5.0) containing 40 µg of tetracycline, 10 mg of cellulase, and 1 mg of β-glucosidase was added. After 24, 48, and 72 h of incubation at 37 °C with shaking (180 rpm), samples were centrifuged (15,000× g for 3 min) and 5 µL of the supernatant was collected to measure reducing sugar content using the DNS (3,5-dinitrosalicylate) assay [56]. DNS reactions were performed by mixing 5 µL of sample and 5 µL of water with 90 µL of DNS reagent in a PCR tube, followed by incubation at 95 °C for 6 min. Reducing sugar content was quantified by measuring the absorbance at 550 nm against glucose solution standards.
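A minimal sketch of the standard-curve step, assuming hypothetical glucose standards and A550 readings:

```python
# Hedged sketch: converting DNS absorbance at 550 nm to reducing-sugar
# concentration via a glucose standard curve (simple linear least squares).
import numpy as np

# Hypothetical glucose standards (mg/mL) and their A550 readings
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
std_a550 = np.array([0.02, 0.21, 0.40, 0.79, 1.58])

slope, intercept = np.polyfit(std_a550, std_conc, 1)  # conc as a function of A550

def glucose_mg_per_ml(a550: float) -> float:
    return slope * a550 + intercept

print(f"{glucose_mg_per_ml(0.95):.2f} mg/mL released glucose")
```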
Statistical Analysis
All experiments were performed in triplicate and repeated at least three times. The number of plants used is indicated for each result presented. Statistical analyses were performed and graphs were generated using SigmaPlot v12.0 (Systat Software, Inc., Chicago, IL, USA). The significance of differences was calculated using Student's t-test.
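As an illustration of this significance testing, an unpaired Student's t-test on made-up triplicate lignin values:

```python
# Hedged sketch; the triplicate values are illustrative, not measured data.
from scipy import stats

bh_lignin  = [21.8, 22.4, 22.1]   # control (BH) triplicates
cse_lignin = [15.6, 16.2, 15.4]   # CSE-CRISPR triplicates

t, p = stats.ttest_ind(bh_lignin, cse_lignin)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.001 corresponds to '***' in the figures
```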
"Environmental Science",
"Biology",
"Materials Science"
] |
Automatically Select Emotion for Response via Personality-affected Emotion Transition
To provide consistent emotional interaction with users, dialog systems should be capable of automatically selecting appropriate emotions for responses, as humans do. However, most existing works focus on rendering specified emotions in responses or responding empathetically to the emotions of users, while individual differences in emotion expression are overlooked. This may lead to inconsistent emotional expressions and disengage users. To tackle this issue, we propose to equip the dialog system with personality and enable it to automatically select emotions in responses by simulating the emotion transition of humans in conversation. In detail, the emotion of the dialog system is transitioned from its preceding emotion in the context. The transition is triggered by the preceding dialog context and affected by the specified personality trait. To achieve this, we first model the emotion transition in the dialog system as the variation between the preceding emotion and the response emotion in the Valence-Arousal-Dominance (VAD) emotion space. Then, we design neural networks to encode the preceding dialog context and the specified personality traits to compose the variation. Finally, the emotion for response is selected from the sum of the preceding emotion and the variation. We construct a dialog dataset with emotion and personality labels and conduct emotion prediction tasks for evaluation. Experimental results validate the effectiveness of the personality-affected emotion transition.
Introduction
Emotional intelligence can be considered a mental ability to reason validly with emotional information, together with the capacity of emotions to enhance thought. Hence, to create dialog systems with emotional intelligence during communication, it is necessary to enable the machine to understand the emotions of users, select appropriate response emotions, and express them in conversation. (Our dataset is released at github.com/preke/PELD.)
Existing works either focus on rendering specified emotions in responses (Zhou et al., 2018; Colombo et al., 2019) or on understanding the emotions of users and responding empathetically (Zandie and Mahoor, 2020; Zhong et al., 2020; Lin et al., 2019); how to automatically select the emotion for a response is seldom discussed. Wei et al. (2019) propose to learn appropriate emotional responses from massive anonymous online dialogues. However, trained on conversations from different speakers, such a dialog system ignores individual differences in expressing emotions. This may lead to inconsistent emotional interactions and disengage users, who may feel they are still talking to rigid machines. In a dialog system, automatically selecting the emotion for a response means deciding which emotion to express in order to facilitate emotional response generation. Emotion selection can be modeled as the emotion transition of the dialog system reacting to the dialog context (Thornton and Tamir, 2017), i.e., how the preceding emotion changes to the next. Achieving this like humans requires the long-term patterns of thought and behavior associated with an individual (Ball, 2000). Mehrabian (1996a) shows that personality, e.g., the big-five personality model (Costa and McCrae, 1992), can also be represented as temperament in the Valence-Arousal-Dominance (VAD) space for emotions (Mehrabian, 1996b). This finding suggests that different personalities have different impacts on emotional expression. Inspired by these works, we propose a personality-affected emotion transition model to endow the dialog system with personality, enabling it to select emotions that react to the dialog context as shaped by its given personality.
In our method, we model the emotion transition of the dialog system as the variation in the VAD space from its preceding emotion to the next emotion in the response to users. We first obtain the preceding emotion of the dialog system from the dialog context and project it into the VAD space as an emotion vector. Simultaneously, we endow the dialog system with a personality trait, a 5-dimensional vector representing the strength of each dimension in the big-five personality traits. Then, we design neural networks to encode the dialog context and the personality traits into the VAD space to compose the variation of emotion. Finally, the emotion for response is selected based on the sum of the preceding emotion and the variation.
To facilitate related research, we construct the Personality EmotionLines Dataset (PELD), which includes 6,510 dialogue triples of daily conversations with emotion labels and annotated personality traits. The emotion labels and personality annotations are adopted from prior studies (Poria et al., 2018; Zahiri and Choi, 2017; Jiang et al., 2019) analyzing the script of the famous TV series Friends (https://en.wikipedia.org/wiki/Friends). We conduct emotion prediction tasks on the PELD dataset to evaluate the effectiveness of our method. The results suggest that the personality-affected emotion transition does contribute to better accuracy in emotion selection. In summary, our contributions are as follows: • We raise the problem of automatically selecting the emotion for response in conversation and propose a new perspective for solving it through personality-affected emotion transition.
• We construct a dialog script dataset with emotion and personality labels and analyze the patterns of emotion transitions in our dataset to facilitate related research.
• We evaluate the effectiveness of our proposed method on emotion prediction tasks and analyze the effects of personality and emotion transition respectively.
Related Works
Our research is related to emotional dialog systems and to the influence of personality on emotion expression in psychology and Human-Computer Interaction (HCI). We review existing works in these two areas as follows.
Emotional Dialog Systems
The concept of the emotional dialog system first appeared in (Colby, 1975), where a rule-based emotion simulation chatbot was proposed. Microsoft introduced Xiaoice (Zhou et al., 2020), an empathetic social chatbot able to recognize users' emotional needs, in 2014. Related research has become popular since Zhou et al. (2018) proposed the Emotional Chatting Machine, which exploits deep learning to build a large-scale emotionally aware conversational bot. Most existing works focus on incorporating specified emotion factors into neural response generation. Shantala et al. (2018) train emotional embeddings based on context and then integrate them into response generation. Colombo et al. (2019) control emotional response generation with both categorical emotion representations and continuous word representations in the VAD space (Mohammad, 2018). Moreover, Asghar et al. (2018) propose an affectively diverse beam search for decoding. Besides, reinforcement learning has been adopted to encourage response generation models to render specified emotions. Li et al. (2019) combine reinforcement learning with emotional editing constraints to generate meaningful and customizable emotional replies. Sun et al. (2018) also use an emotion tag to partially reward the model for expressing a specified emotion. However, it is impractical to always specify response emotions for dialog systems in real application scenarios. To simulate the emotional interaction among humans, Wei et al. (2019) design an emotion selector to learn the proper emotion for responses from massive dialogue pairs. But emotional expression is subjective: for the same post, different users may have different emotions in their responses. The pattern learned only from online dialogues therefore ignores user information and turns out to be impractical.
Personality Effects on Emotions
Emotion is a complex psychological experience of an individual's state of mind as interacting with people or environmental influences (Han et al., 2012). The Pleasure-Arousal-Dominance (PAD) (Mehrabian, 1996b) or Valence-Arousal-Dominance (VAD) emotion temperament model shows three nearly orthogonal dimensions providing a comprehensive description of emotional states. Based on this, several psychologists studied the relationship between human emotional factors and personality factors. However, most of them are rule-based models (Johns and Silverman, 2001) and probabilistic models (André et al., 1999). Mehrabian (1996a) utilized the five factors of personality (Costa and McCrae, 1992) to represent the VAD temperament model through linear regression analysis. This finding is widely used to design robots having non-verbal emotional interaction with users (Han et al., 2012;Masuyama et al., 2018), where the pre-defined personalities of robots affect their propensity of simulated emotion transitions.
To integrate the analysis above into Artificial Intelligence, some researchers in HCI have borrowed the idea and designed facial emotional expressions for humanoid robots. Ball (2000) utilizes models of emotions and personality encoded as Bayesian networks to generate empathetic behaviors or speech responses to users in conversation. Han et al. (2012) employ the five factors of personality in a 2D (pleasure-arousal) scaling model to represent a robotic emotional model. Masuyama et al. (2018) introduce an emotion-affected associative memory model for robots expressing emotions. In NLP, although the VAD space has been adopted to model emotions in several studies (Mohammad, 2018; Colombo et al., 2019; Asghar et al., 2018), the influence of personality on emotion in dialogues remains an open problem.
Problem Definition
We research how to enable the dialog system to automatically select emotions for responses through the personality-affected emotion transition.
Formally, a dyadic emotional conversation between the user and the dialog system contains the dialog context C = {U_1, U_2, ..., U_{n−1}}, including all the preceding n−1 utterances from both the user and the dialog system; the preceding emotion E_i expressed in U_i ∈ C, which is the last utterance from the dialog system; and the response emotion E_r for the dialog system, which facilitates generating the next emotional response U_n to the user. We specify a personality trait P_n for the dialog system and enable it to select the response emotion E_r through the personality-affected emotion transition model, where E_r is transitioned from E_i. The transition is triggered by the preceding dialog context C and affected by the specified personality trait P_n. In the following, we introduce how we model this process in detail.
Emotions in the VAD space
Assuming that, in the problem above, the emotions in all emotional utterances can be categorized into the six basic emotions (Anger, Disgust, Fear, Joy, Sadness, and Surprise; Ekman and Davidson, 1994), we project the basic emotions into the Valence-Arousal-Dominance (VAD) space as shown in Table 1, following the analysis results in (Russell and Mehrabian, 1977). The VAD space indicates emotion intensity in three different dimensions: valence measures positivity/negativity, arousal measures excitement/calmness, and dominance measures powerfulness/weakness. For utterances with no explicit emotion, we use Neutral, with (0.00, 0.00, 0.00) as its VAD vector.
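For illustration, the VAD projection and a nearest-neighbour lookup can be sketched as follows; the coordinates are approximate values after Russell and Mehrabian (1977), and Table 1 holds the ones actually used:

```python
# Hedged sketch: basic emotions (plus Neutral) as points in VAD space and a
# nearest-neighbour lookup from a VAD vector back to a discrete label.
import numpy as np

VAD = {
    "Anger":    (-0.51,  0.59,  0.25),
    "Disgust":  (-0.60,  0.35,  0.11),
    "Fear":     (-0.62,  0.82, -0.43),
    "Joy":      ( 0.81,  0.51,  0.46),
    "Sadness":  (-0.63, -0.27, -0.33),
    "Surprise": ( 0.40,  0.67, -0.13),
    "Neutral":  ( 0.00,  0.00,  0.00),
}

def nearest_emotion(vad):
    v = np.asarray(vad)
    return min(VAD, key=lambda e: np.linalg.norm(v - np.asarray(VAD[e])))

print(nearest_emotion((0.7, 0.4, 0.3)))   # -> Joy
```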
Personalities in the VAD space
Meanwhile, the big-five personality traits (OCEAN, shown in Table 2) are widely used for psychological analysis. Mehrabian (1996a) represents the personality temperament in the VAD space as a linear combination of the big-five traits:

P_V = 0.21·E + 0.59·A + 0.19·N
P_A = 0.15·O + 0.30·A − 0.57·N
P_D = 0.25·O + 0.17·C + 0.60·E − 0.32·A (2)
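A minimal sketch of Equation 2 as a function, assuming trait strengths in [0, 1]:

```python
# Hedged sketch of Equation 2 (Mehrabian, 1996a): big-five traits mapped to
# the VAD temperament used to initialize P_V, P_A, P_D.
def personality_to_vad(O, C, E, A, N):
    P_V = 0.21 * E + 0.59 * A + 0.19 * N
    P_A = 0.15 * O + 0.30 * A - 0.57 * N
    P_D = 0.25 * O + 0.17 * C + 0.60 * E - 0.32 * A
    return P_V, P_A, P_D

# e.g., an extraverted, agreeable, low-neuroticism personality
print(personality_to_vad(O=0.6, C=0.5, E=0.8, A=0.7, N=0.2))
```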
Personality-affected Emotion Transition
Based on the problem definition and the preliminaries above, we design the personality-affected emotion transition model as illustrated in Figure 1. Our model mainly includes three modules: the personality effect on emotions (lower left part of Figure 1), the context encoding (lower right part), and the emotion transition (top half). We introduce these three modules in detail as follows.
Personality Effect on Emotions
In our model, the personality of the dialog system is specified as a 5-dimensional vector P_n = [O, C, E, A, N] representing the strength in Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism, respectively. The temperament of personality in the VAD space (shown in Equation 2) is widely used to provide weighting parameters for the emotion transition of robots in HCI works (Han et al., 2012; Masuyama et al., 2018). However, the numeric coefficients in Equation 2 are summarized from an analysis of questionnaire results from 72 participants (Mehrabian, 1996a) and are therefore not suitable for direct adoption as hyper-parameters in the model design. Hence, we adopt the analysis results in Equation 2 as prior knowledge and learn suitable coefficients for personality with neural networks. First, we calculate P_V, P_A, P_D from the personality P_n by Equation 2; then we use P_V, P_A, P_D as the initialized input of an adaptation layer A_p to learn the weighting parameters that suit the training data.

Table 2: The big-five personality factors.
Factor | Description
Openness | Open-minded, imaginative, and sensitive.
Conscientiousness | Scrupulous and well-organized.
Extraversion | The tendency to experience positive emotions.
Agreeableness | Trusting, sympathetic, and cooperative.
Neuroticism | The tendency to experience psychological distress.
Context Encoding
The dialog context acts as a set of parameters that may influence a person to speak an utterance while expressing a certain emotion (Poria et al., 2018).
In the VAD space, the emotion transition is regarded as the variation from one point (the preceding emotion) to another point (the next emotion). Thus, we generate the emotion transition variations ∆V, ∆A, ∆D from the semantic representations of the preceding dialog context C.
We fine-tune the pre-trained RoBERTa encoder, a well-known pre-trained language model whose performance is widely validated in many natural language understanding tasks, to first extract the semantic representations E_n(U_1), ..., E_n(U_{n−1}) of all n−1 utterances in C. Then, we concatenate the semantic representations of the utterances to obtain the overall context semantics R_c. Finally, ∆V, ∆A, ∆D are calculated by feeding R_c into an affective encoder E_a, which extracts the affective information from R_c along the V, A, and D dimensions, respectively.

Figure 2: A triple example in PELD. The dyadic conversation is between Ross and Monica (two main roles in Friends); P_n is the personality of Ross. In this example, the dialog system is set as Ross and talks with the user, set as Monica.
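A hedged sketch of this module is shown below; the single linear layer standing in for the affective encoder E_a, the module sizes, and the function names are assumptions for illustration, not the authors' exact architecture:

```python
# Minimal sketch: RoBERTa embeds each utterance, the representations are
# concatenated into R_c, and a linear "affective encoder" maps R_c to
# (dV, dA, dD). Sizes/names are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

def encode_context(utterances, max_utts=3):
    reps = []
    for u in utterances[:max_utts]:
        toks = tokenizer(u, return_tensors="pt", truncation=True)
        out = encoder(**toks).last_hidden_state[:, 0]  # <s> token embedding
        reps.append(out)
    return torch.cat(reps, dim=-1)                     # overall context R_c

affective_encoder = nn.Linear(768 * 3, 3)              # -> (dV, dA, dD)

with torch.no_grad():
    r_c = encode_context(["Hey!", "What happened?", "I lost my keys..."])
    delta_vad = affective_encoder(r_c)
print(delta_vad.shape)   # torch.Size([1, 3])
```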
Emotion Transition
After we obtain the weighting parameters P_V, P_A, P_D and the emotion transition variation ∆V, ∆A, ∆D, the emotion for response is generated from the sum of the VAD vector of the preceding emotion and the weighted variation, as shown in Equation 4:

V_r = V_i + P_V · ∆V
A_r = A_i + P_A · ∆A
D_r = D_i + P_D · ∆D (4)

where V_i, A_i, D_i are the VAD vector of E_i, and V_r, A_r, D_r are the emotion transition results in the VAD space. To alleviate the errors of using the numeric values in the calculated VAD vectors, we add a linear layer F_c to transform V_r, A_r, D_r into a probability distribution over the discrete emotion categories. The output E_r is the emotion with the largest probability.
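A minimal sketch of the transition step and the classification head F_c; the tensor values below are illustrative:

```python
# Hedged sketch of Equation 4 plus the linear head F_c; all numeric values
# are made up for illustration.
import torch
import torch.nn as nn

def transition(prev_vad, delta_vad, p_weights):
    # [V_r, A_r, D_r] = [V_i, A_i, D_i] + P * [dV, dA, dD]   (Equation 4)
    return prev_vad + p_weights * delta_vad

f_c = nn.Linear(3, 7)   # VAD -> distribution over 7 emotion categories

prev = torch.tensor([[0.00, 0.00, 0.00]])    # preceding emotion: Neutral
delta = torch.tensor([[0.45, 0.30, 0.20]])   # from the context encoder
p_w = torch.tensor([[0.62, 0.33, 0.41]])     # learned P_V, P_A, P_D

logits = f_c(transition(prev, delta, p_w))
response_emotion = logits.softmax(-1).argmax(-1)  # index of selected emotion
print(response_emotion)
```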
Dataset Construction & Statistics
To facilitate related research, we construct the Personality EmotionLines Dataset (PELD), an emotional dialog dataset with personality traits for speakers. As labeling online conversations on social media with speakers' personalities is time-consuming and may cause privacy issues, we turn to the dialogue script of the famous TV series Friends. This classic script is widely analyzed in dialog research (Li et al., 2016; Li and Choi, 2020; Jiang et al., 2019).
In PELD, each sample is represented as a dialog triple (C = {U_1, U_2, U_3}, {E_i, E_r}, P_n), as shown in Figure 2, where E_i and E_r are the emotions expressed in U_1 and U_3, respectively. The utterances and their emotion labels are mainly adopted from the dialogues in MELD (Poria et al., 2018) and the EmoryNLP dataset (Zahiri and Choi, 2017), two well-known datasets analyzing emotional expressions in Friends. To keep consistency, each dialog triple in PELD is constructed within the same dialogue in the original datasets. The personality traits in our dataset are adopted from the personality annotations in 711 different dialogues (Jiang et al., 2019). According to the annotations, a role may exhibit different aspects of its personality in different dialogues. We only keep the personality traits of the six main roles in Friends, for confidence, as these annotations are the most frequent. For each of the main roles, we average their annotated personality traits over all dialogues by P_n = (1/K) Σ_{i=1}^{K} P_i for simplification, where K is the number of annotations. The averaged results are shown in Table 3.
We split PELD into Train, Valid, and Test sets in a ratio of about 8:1:1. The total number of utterances in PELD (10,648) is less than the sum of the original MELD (13,708) and EmoryNLP (9,489) because not all dialogues are suitable for constructing triples that include main roles. The overall statistics of the dataset are shown in Table 4. Similar to existing emotional conversation datasets (Li et al., 2017; Busso et al., 2008), PELD also suffers from the emotion imbalance issue. Utterances labeled as Neutral are the majority, while Fear and Disgust take only a small portion. Though this reflects the real emotion distribution in daily conversation, it also challenges machine learning models to identify and generate emotions. We tried several automatic methods for data augmentation, such as synonym substitution, back-translation, and the EDA method proposed in (Wei and Zou, 2019). However, most of the synthetic samples were either odd or the same as the original samples. The reason might be that short conversational utterances offer limited options for replacing synonyms or adding and deleting words. Another way to alleviate the imbalance issue is to expand the granularity of emotion to sentiment. As mentioned in Section 3.2, the Valence dimension of the VAD space measures positivity and negativity, so we can categorize the emotions into sentiments according to the Valence values; i.e., positive emotions: Joy and Surprise; negative emotions: Anger, Disgust, Fear, and Sadness. The distribution of sentiments in PELD is also shown in Table 4. Besides, the dialog triples of the six main roles (each triple corresponds to a main role with a personality trait) are evenly distributed across the train, valid, and test sets of PELD.
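The valence-based grouping can be sketched as a simple mapping:

```python
# Hedged sketch of the emotion-to-sentiment coarsening described above,
# grouping by the sign of the Valence dimension.
POSITIVE = {"Joy", "Surprise"}
NEGATIVE = {"Anger", "Disgust", "Fear", "Sadness"}

def emotion_to_sentiment(emotion: str) -> str:
    if emotion in POSITIVE:
        return "Positive"
    if emotion in NEGATIVE:
        return "Negative"
    return "Neutral"

print([emotion_to_sentiment(e) for e in ["Joy", "Fear", "Neutral"]])
```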
Emotion Transitions in PELD
After constructing PELD, we further explore the dataset from the perspective of emotion transitions, as the triples in PELD are constructed for analyzing the emotion transitions between E_i in U_1 and E_r in U_3. Table 5 shows the emotion and sentiment distributions in U_1 and U_3, respectively. Besides, we also count the sentiments of the emotions in U_1 and U_3, denoted as S_1 and S_3. We can see that for both emotions and sentiments, the distributions in U_1 and U_3 are similar, which means the transitions of emotions and sentiments are balanced in PELD triples. Besides, the proportions of all emotions and sentiments are also similar to the overall statistics of PELD, which suggests that the emotions and sentiments in PELD are evenly distributed across the triples.
Since emotion transitions are affected by personality traits, as discussed above, we exhibit the emotion transition patterns of the different roles with different personality traits in Figure 3. Although the emotion transitions are also correlated with the dialog context, we can still find patterns in these transition matrices.
In general, among the six transition matrices, all the first columns are in deeper colors, which indicates that most transitions occur from other emotions to Neutral, as it is the majority emotion in PELD. Besides, blocks with deeper colors are also more likely to occur on or around the diagonals of the transition matrices, suggesting that preceding emotions tend to transition to the same or similar emotions. As for individual roles, 59% of Rachel's Anger remains Anger in dialog triples, while for the other roles, most Anger emotions are transferred to Neutral and Anger. Besides, most Surprise from Ross transfers to Neutral, Joy, and Surprise, whereas most Surprise from the other five roles tends to transfer only to Surprise and Neutral.
Moreover, to highlight the individual differences in emotion transitions among the six main roles in detail, we also show the standard deviations (Std) of each row in the emotion transition matrices of the six main roles in Figure 4. The red bar chart shows the Std of the infinity norms of the rows in the emotion transition matrices, which indicates how much the most probable target emotion from the same source emotion varies across roles. The blue bar chart shows the Std of the L2-norms, which generally describes how differently the roles transition from one emotion to the other emotions.
Both charts show similar patterns of emotion transitions. Anger, Surprise, and Disgust vary the most across roles, while roles behave more similarly when processing Neutral and Joy in conversation. Besides, the deviations for negative emotions (Anger, Sadness, Fear, and Disgust) are, on average, relatively higher than those for positive emotions and Neutral. We can thus infer that personality traits exert more influence on transitions from negative emotions.
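A sketch of these per-row dispersion statistics on toy transition matrices (the real matrices come from the PELD triples):

```python
# Hedged sketch: for each source emotion, take the infinity norm and L2 norm
# of that row in each role's transition matrix, then the Std across roles.
import numpy as np

def row_norm_stds(matrices):
    """matrices: list of (7, 7) row-stochastic arrays, one per role."""
    inf_norms = np.array([np.abs(m).max(axis=1) for m in matrices])   # (roles, 7)
    l2_norms = np.array([np.linalg.norm(m, axis=1) for m in matrices])
    return inf_norms.std(axis=0), l2_norms.std(axis=0)

rng = np.random.default_rng(0)
toy = [rng.dirichlet(np.ones(7), size=7) for _ in range(6)]  # 6 toy roles
std_inf, std_l2 = row_norm_stds(toy)
print(std_inf.round(3), std_l2.round(3))
```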
Evaluation Tasks
To validate the effectiveness of our proposed emotion generation model, we set two evaluation tasks on PELD: Emotion Prediction and Sentiment Prediction. Emotion Prediction requires the model to predict the emotion of the upcoming utterance based on the preceding dialog context in a dyadic conversation scenario, while Sentiment Prediction has the same setting except that it predicts the sentiment of the upcoming utterance.
For both tasks, we evaluate the prediction performance by the F-scores of single emotions or sentiments. Besides, the overall performance is also measured from two aspects, with the macro-averaged (m-avg) and the weighted-averaged (w-avg) F-scores. A higher m-avg indicates the model performs relatively better at predicting all categories, while a higher w-avg indicates the model better predicts the emotions or sentiments with larger proportions in the dataset.
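Both averages correspond directly to scikit-learn's averaging modes, e.g.:

```python
# Hedged sketch on toy labels; 0..6 stand for the seven emotion categories.
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 1, 1, 2, 3, 3, 4, 5, 6, 0]
y_pred = [0, 0, 1, 1, 1, 2, 3, 0, 4, 5, 5, 0]

print("m-avg F1:", f1_score(y_true, y_pred, average="macro"))
print("w-avg F1:", f1_score(y_true, y_pred, average="weighted"))
```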
Ablation Study Setting
Although plenty of methods (Ghosal et al., 2019, 2020) have been proposed to analyze emotions in dialogues of Friends, most of them target recognizing the emotions of utterances in conversation. Compared with emotion recognition, the problem setting of selecting emotions is different, and it is more difficult to select the appropriate emotion for a response without knowing the response content. So, instead of comparing with emotion recognition models, we conduct ablation studies to evaluate the effectiveness of different parts of our model design.
The ablation study compares the performances of the following models. RoBERTa: the pre-trained RoBERTa encoder is widely validated in many downstream tasks. We use pre-trained RoBERTa, corresponding to E_n in our model, to encode the preceding dialog context into a semantic representation, and then directly predict the emotion for response through a classification head.
RoBERTa-P:
We concatenate the personality vector of the speaker with the dialog context representation from RoBERTa as the feature, then predict the response emotion. This method evaluates whether personality influences the expression of emotions.
PET-VAD: As emotions can be represented by either discrete category labels or vectors in the VAD space, PET-VAD is set up to compare different usages of emotion VAD vectors in our model. During training, PET-VAD regresses the VAD vectors of target emotions by minimizing the Mean Squared Error (MSE) between the generated vectors and the VAD vectors of the ground truth emotions. The prediction output of PET-VAD is the emotion whose VAD vector is the closest neighbor of the generated vector, measured by MSE.
PET-CLS: This is our method, Personality-affected Emotion Transition, with a classifier applied after obtaining the VAD vector of the generated emotion. PET-CLS predicts emotions in the upcoming utterances as described in Section 3.
For RoBERTa, RoBERTa-P, and PET-CLS, which directly output discrete emotions, we adopt the Focal loss (Lin et al., 2017) to relieve the emotion imbalance in prediction.
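A minimal sketch of the multi-class focal loss (Lin et al., 2017) used here; gamma is illustrative, and the optional class-balancing alpha term is omitted:

```python
# Hedged sketch of the focal loss: down-weights well-classified examples so
# that the rare emotion classes contribute more to the gradient.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p of true class
    pt = log_pt.exp()
    return (-((1 - pt) ** gamma) * log_pt).mean()

logits = torch.randn(4, 7)            # batch of 4, 7 emotion classes
targets = torch.tensor([0, 3, 6, 1])
print(focal_loss(logits, targets))
```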
Results and Analysis
In this section, we report and analyze the experimental results on the Test set of PELD in our ablation study. All results are chosen by the best performance on the Valid set within 50 training epochs.
Results for Emotion Prediction
The results on the Emotion Prediction task are reported in Table 6. First of all, as a seven-class prediction task that also suffers from the imbalance issue, the overall performance is moderately low, which indicates the difficulty of the task. As for the averaged F-scores, PET-CLS improves both the w-avg and m-avg by a large margin over all other methods, which verifies our personality-affected emotion transition method.
In detail, all models perform better on emotions with larger portions (Neutral and Joy), as these are more likely to occur as the response emotion. Moreover, PET-VAD and PET-CLS achieve moderately higher F-scores on the minority emotions (Anger, Sadness, Disgust, Fear, and Surprise), which shows that the emotion transition process is more important for generating these minority emotions. It also verifies the finding in Section 4.2. On the other hand, although PET-VAD is based on the designed personality-affected emotion transition, most single-emotion F-scores of PET-VAD are lower than those of RoBERTa or RoBERTa-P. We discuss the possible reasons as follows. One reason might be that the emotion imbalance issue cannot be alleviated by directly regressing the emotion VAD vectors. Another reason might be that the emotion VAD vectors in Table 1 are estimated rather than precisely calculated, and the distances among emotions in the theoretical VAD space do not match those in the emotion distribution of daily conversation.
Results for Sentiment Prediction
As predicting the emotions of upcoming responses is difficult due to the multiple imbalanced categories, we also report the results on the Sentiment Prediction task in Table 7. Different from the analysis above, which categorizes emotions by their portions in PELD, sentiment offers another aspect of emotion analysis. As sentiments are not directly described in the VAD space, we only report results for RoBERTa, RoBERTa-P, and PET-CLS. For this task, we only change the output size of PET-CLS from 7 (for emotions) to 3 (for sentiments) and preserve the emotion transition process.
In general, we can see that the prediction F-scores for sentiments are higher than those for emotions. Besides, predicting negative sentiment is much easier than predicting positive sentiment for all three methods. This may be because, although the numbers of sentiment samples are similar, there are more categories of negative emotions (Anger, Sadness, Fear, and Disgust) than positive emotions (Joy and Surprise). Equipped with our model design, PET-CLS outperforms both RoBERTa and RoBERTa-P except for the Neutral sentiment. This suggests that the personality-affected emotion transition also facilitates sentiment prediction. However, by only concatenating the personality vectors with the context representation, RoBERTa-P improves the F-score of Neutral but decreases those of Positive and Negative. Hence, direct concatenation limits the effect of personality information in sentiment prediction.
Conclusion and Future Work
In this work, we raise the problem of automatically selecting the emotion for response considering individual differences in conversation, and we propose a new perspective for solving it through personality-affected emotion transition. Besides, we construct a dialog script dataset, PELD, with emotion and personality labels to facilitate related research. We also validate our personality-affected emotion transition model in emotion prediction experiments.
Facial expressions, voices, gestures, and environmental information are also vital in emotional interaction, but they are not captured in purely text-based dialog systems. Besides, as seen from the statistics of PELD, the most common emotion in the dialog scripts is still Neutral. One possible reason is that other subtle affective information is not captured in the text. Therefore, our future work will continue to investigate personality effects on emotions in multi-modality scenarios.
Acknowledgement
This work is supported by the Hong Kong RGC Collaborative Research Fund with project code C6030-18G and Hong Kong Red Swastika Society Tai Po Secondary School with project code P20-0021.
"Computer Science"
] |
Mission-Critical Connectivity Enhanced by IAB in Beyond 5G: Interplay of Sidelink, Directional Unicasting, and Multicasting
Guaranteeing operational connectivity in emergency situations by means of prompt on-demand network re-configuration is crucial to carry out effective public protection and disaster relief actions. This article analyzes network configuration options enhanced by the integrated access backhaul (IAB) feature for 5G-Advanced and beyond mission-critical services. We specifically aim to investigate the possible interplay of sidelink communications, directional unicast transmissions, and multicasting. To this end, we offer a practical methodology based on a fluid model approximation to capture the time dynamics of mission-critical services in their transient phase. Simulation results show that the interplay of multicasting, unicasting, and sidelink is a highly effective solution for mission-critical communications. In contrast, standalone unicast, multicast, and sidelink transmissions are unable to support such services. A further emerging aspect is that, in a mixed unicast-multicast-sidelink configuration, optimal results are obtained when multicast exploits most of the available resources, specifically more than 70%.
I. INTRODUCTION
In emergency situations caused by natural or man-made events, telecommunication networks can be severely affected and no longer available, whereas guaranteeing operational connectivity is essential to carry out effective public protection and rescue actions. To fulfill this connectivity requirement with the current fifth generation (5G) network, emergency scenarios call for prompt on-demand network re-configuration, which may also benefit from ad-hoc deployed cells-on-wheels and/or cells-on-wings base stations (BSs), offering great flexibility in establishing temporary networks.
Recently, mission-critical public safety communications have shifted from voice-only to broadband low-latency services. Exemplary use cases include the remote operation of drones and robotics, the use of haptic sensors in firefighters' personal protective equipment for faster and safer search-and-rescue in a burning building with heavy smoke, and connected ambulance with remote assistance from a medical specialist [1].
Furthermore, the nature of public safety mission-critical communications is mostly group-oriented [2]. Indeed, first responders typically work in groups and need to coordinate their operations. Thus, providing the same content to a set of users through multicasting represents a means of achieving effective group communications, in terms of both network resource utilization and quality of service provision.
Driven by the rising demand for reliable, responsive, and broadband connectivity to improve safety, the 3rd Generation Partnership Project (3GPP) has been working since 2016 to include functionalities for the delivery of mission-critical applications over mobile cellular networks [3]. Among the solutions that can be effectively exploited in public safety scenarios, and that are currently included in the 5G New Radio (NR) standard, is the use of the millimeter wave (mmWave) spectrum to improve coverage. mmWaves bring the benefit of increasing the signal strength for target users, thus providing Gbps communications through proper beamforming techniques [4].
A key role in mission-critical communications delivery will also be played by integrated access backhaul (IAB) [5], which combines wireless backhauling and access capabilities by means of multi-hop network relaying. As a result, IAB enables extended coverage, improved network capacity, and enhanced reliability, making it particularly suitable for mission-critical scenarios where continuous and robust connectivity is vital. Furthermore, sidelink (SL) transmissions enable proximity communications among neighboring devices to provide coverage in areas where the network infrastructure is unavailable or damaged. Finally, 5G will support group communications via the multicast and broadcast services (MBS) system architecture [6], delivering real-time updates, emergency alerts, live video streaming, and other group-oriented services with improved efficiency and scalability.
In this article, we take as a reference a mission-critical scenario in which the BS is temporarily blocked/unavailable. Thus, IAB nodes need to be used to enhance coverage and provide service to all users in the area of interest with the required content. In order to take advantage of all the above-discussed solutions to cope with the emergency, we investigate the interplay of sidelink communications, directional unicast transmissions, and multicasting and provide a tool for analyzing the performance of all possible network configuration options. Specifically, we mathematically characterize the system behavior through a fluid-based model that captures the time dynamics of arrival/service/departure processes in their transient phase, which is the inherent nature of mission-critical services. The proposed model works as a means for the emergency management team to determine the network configuration option that best suits the mission-critical situation under control.
The remainder of this work is organized as follows. Section II presents the background of the work and the main related work in the field. In Section III, the contributions of the work are detailed. Section IV introduces the system model, while in Section V, we present the proposed model for network configuration option analysis. Simulative results are discussed in Section VI. Finally, Section VII concludes this work.
II. BACKGROUND AND RELATED WORK
A large body of literature has sought to provide techniques to mitigate the inherent limitations of current communication systems in meeting mission-critical service requirements. Along this line, a reliability analysis of mmWave access has been provided in [7], where a comprehensive methodology to model the softwarized 5G radio access network (RAN) managing high-rate mission-critical traffic has been developed. The framework also analyzes the corresponding impact of critical session transfers on other user sessions. The authors state that both high-rate critical sessions and the high velocity of the target user lead to a significant degradation of the other user sessions, which can be mitigated by proper usage of multi-connectivity in mmWave networks and by splitting the fallback traffic across multiple microwave technologies.
A different approach to reliability support in 5G networks has been reported in [8], wherein the Raft protocol has been used to achieve an ultra-reliable and low-latency consensus for mission-critical distributed Industrial Internet of Things (IIoT). Within this study, the "reliability gain" concept has been proposed to mathematically characterize the relationship between consensus reliability and communication link reliability. It has also been highlighted that consensus latency is in tension with reliability.
Decentralized event-triggered scheduling and fusion of information for mission-critical Internet of Things (IoT) sensors have been designed in [9]. Instead of a high-complexity Kalman filter, a fixed-gain remote state estimator has been offered, together with a novel algorithm that designs the fixed filtering gain by minimizing the remote state estimation mean square error, under the assumption of perfect symbol-level synchronization. More recently, in [10], the study in [9] has been extended by taking into account more complex remote state estimation with asynchronous mission-critical IoT sensors.
An analysis of mission-critical service from a different perspective has been proposed in [11], where the impact of device- and application-related parameters on the latency and reliability performance of public safety applications has been examined. Similarly, the effects of heterogeneous user and device mobility on the performance of mission-critical machine-type communications within a multi-connectivity 5G network have been examined in [12]. According to [12], alternative connectivity options, such as device-to-device (D2D) links and drone-based access, may contribute to fulfilling the requirements of mission-critical machine-type communication applications.
To this end, a considerable body of literature has also focused on using D2D communication for mission-critical applications to improve cellular network coverage. For example, in [13], edge-based mission-critical IoT applications rely on collaborative D2D links between the IoT devices in the presence of mobility. Similarly, in [14], [15], user clustering and D2D communications among the closest users have been considered as effective strategies to connect users and preserve their energy in the disaster region. In [16], the authors developed a cognitive approach to meet the required reliability and throughput in D2D transmissions while taking into account the interference constraints imposed by primary and inter-cell users. In [17], improvements in end-to-end latency and energy efficiency when using NR sidelink communication for mission-critical scenarios, compared to LTE, LTE sidelink, and NR transmissions, have been demonstrated.
Similarly, several studies on the exploitation of multicasting for mission-critical scenarios have been conducted, including but not limited to [1], [18], [19], [20], where different clustering methods and propagation models have been developed. In [21], a comparison among multicast broadcast single frequency network (MBSFN), single-cell point-to-multipoint (SC-PTM), and unicast transmission modes in mission-critical use cases has been presented from a resource use perspective. As per the results, SC-PTM might be considered as the best option for locally restricted and small-scale emergencies, whereas MBSFN might be preferable for emergencies during massive events or those affecting a large region.
A further feature of 5G NR that can be used for mission-critical applications is the IAB [22], [23]. IAB can also be used for its potential to wirelessly connect several unmanned aerial vehicles carrying BS (UAV-BS) and easily integrate them into an existing mobile network. In [24], a UAV-BS has been integrated into the mobile network using the 5G IAB technology to provide temporary coverage in a disaster area. Thanks to UAVs' excellent mobility and high flexibility, UAV-BSs are supposed to bring fast connectivity for mission-critical communications [25], [26].
A summary of the discussed related works is provided in Table 1, highlighting that existing works on mission-critical communications do not consider the possibility of exploiting different transmission modes jointly. Such options are necessary to improve the overall system reliability, flexibility, and coverage, especially in emergency situations.
Although the reviewed studies have gone some way towards enhancing the reliability, latency, security, and other crucial aspects of mission-critical services by exploiting approaches that facilitate cellular connectivity, no works have focused on providing an effective tool that can guide the selection of the network configuration to promptly and efficiently provide coverage in areas affected by critical situations. In this work, we fill this gap by mathematically characterizing the time dynamics of mission-critical services in their transient phase, delivered by means of the possible interplay of unicast, multicast, and NR sidelink transmissions.
III. FOCUS AND CONTRIBUTIONS OF THIS WORK
We consider a mission-critical scenario with multiple users coexisting in an indoor, outdoor, or mixed indoor/outdoor environment. An illustrative example of the scenario under analysis is shown in Fig. 1. We assume that the 5G NR BS is temporarily blocked or unavailable, thus, an IAB node needs to be activated to provide coverage to users. A set of IAB nodes are located within the area of interest, and their positions are chosen in such a way as to provide coverage to a high number of users. We underline that IAB node positioning is a problem itself, as discussed in [27], which is not the focus of our work.
The connection from the BS to the user may go through a chain of IAB nodes, i.e., by multi-hopping. Further, IAB nodes may be fixed nodes (i.e., cell tower BSs, rooftop-mounted UAVs, static vehicles) and/or on-demand deployed mobile nodes (i.e., flying UAVs, moving vehicular devices). In our study, we split the BS-to-user connection into (i) the backhaul connection from the BS to the "edge" IAB node and (ii) the access connection from the "edge" IAB node to the users. Specifically, we focus on the access connection under the assumption that the backhaul connection from the BS to the "edge" IAB node remains unchanged.
For the first time, this article seeks to analyze the operation of the network during emergency situations through the interplay of sidelink, directional unicasting, and multicasting technologies. More specifically, we investigate the following network configuration options for the access connection: (i) standalone unicast, (ii) standalone multicast, (iii) standalone sidelink, and (iv) mixed modes combining two or all three of these transmission options (detailed in Section VI). We note that, in this work, the concepts of service and delivery methods and of transmission modes are different. Specifically, we assume that all users require the same mission-critical service/content, while the multicast, unicast, and sidelink transmission options for the access connection can be utilized to deliver the given service/content to the users.
The main contributions of our work can be summarized as follows:
• We investigate the synergies resulting from a joint usage of unicast, multicast, and sidelink transmissions, as well as their individual performance, in the case of mission-critical services.
• We propose a novel analytical model based on a fluid approximation to mathematically characterize the system behavior.
• We derive closed-form expressions for the number of users involved in a mission-critical situation and capture the temporal dynamics of arrival/departure/transition processes in their transient phase.
• We implement an extensive performance evaluation campaign to investigate the impact of the input parameters on meaningful performance metrics.
• We offer an analysis of the time-dependent behavior of user requests in their transient phases for mission-critical scenarios, which works as a means for selecting a proper transmission configuration option.
• We demonstrate with the achieved results that the combination of multicasting, unicasting, and sidelink operations presents a remarkably effective solution for mission-critical communications. The best outcome is achieved by maximizing the utilization of multicast resources, which comprise the predominant portion (exceeding 70%) in a mixed unicast-multicast-sidelink configuration.
IV. SYSTEM MODELING
This section details the system model underlying our proposal aimed at analyzing the network configuration options in mission-critical scenarios. Notations used throughout this work are summarized in Table 2.
A. DEPLOYMENT
We consider a mixture of outdoor and indoor environments within an area of N m × M m. More specifically, the BS is located outdoors at the origin of the coordinate system, i.e., at (0, 0, h_BS), where h_BS is the BS height. The location of the first IAB node is fixed and set to (x_IAB1, y_IAB1, h_IAB1) outdoors, where h_IAB1 is the height of IAB 1. By analogy, the height of IAB 2 (also known as the "edge" IAB) is h_IAB2. It is located outdoors at a distance of d m from the building placed at the reference point with coordinates (x_R, y_R, h_EU), generated according to a uniform distribution. The indoor users are uniformly distributed in a circle of radius R around the reference point.
B. PROPAGATION AND BLOCKAGE MODELS
The basic outdoor path loss in decibel scale at three-dimensional (3D) distance d_3d for the urban micro (UMi) Street Canyon model, in the case of outdoor propagation between the BS and IAB 1 as well as between IAB 1 and IAB 2 (see Fig. 1), reads as in [29]:

$$PL_b(d_{3d}) = \beta + 10\,\zeta \log_{10}(d_{3d}) + 20\log_{10}(f_c), \qquad (1)$$

where f_c is the carrier frequency in GHz and d_3d is the 3D distance between the transmitter and the receiver (i.e., the BS-IAB 1 and IAB 1-IAB 2 distances). The coefficients β and ζ account for line-of-sight (LoS)/non-line-of-sight (nLoS) states as well as for LoS blocked and LoS non-blocked channel conditions. In detail, 3GPP recommends ζ = 2.1 and ζ = 3.19 for LoS and nLoS states [29], whereas the value of β depends on the carrier frequency. In non-blocked conditions for the lower part of the mmWave band (i.e., 28-78 GHz), β is 32.4 dB. The blockage attenuation in the blocked state is added on top, resulting in an additional loss in the range of 15-25 dB [30], [31]. We employ the UMi 3GPP path loss model considering outdoor-to-indoor (O2I) penetration loss, as described below, to model the propagation path between the outdoor transmitter (i.e., edge IAB) and the receiver (i.e., user) located inside the building. The path loss incorporating O2I building penetration loss is modeled as in [32]:

$$PL = PL_b(d_{3d}) + PL_{tw} + PL_{in} + \mathcal{N}(0, \sigma_P^2), \qquad (2)$$

where PL_b(d_3d) is the basic outdoor UMi path loss as per (1), PL_tw is the building penetration loss through the external wall, PL_in is the inside loss dependent on the depth into the building, and σ_P is the standard deviation of the penetration loss.
The path loss through the external wall, PL_tw, in the case of the low-loss (σ_P = 4.4 dB) and high-loss (σ_P = 6.5 dB) models, respectively, is given by [32]:

$$PL_{tw} = \begin{cases} 5 - 10\log_{10}\!\left(0.3\cdot 10^{-L_{glass}/10} + 0.7\cdot 10^{-L_{concrete}/10}\right), & \text{low-loss},\\[4pt] 5 - 10\log_{10}\!\left(0.7\cdot 10^{-L_{IRRglass}/10} + 0.3\cdot 10^{-L_{concrete}/10}\right), & \text{high-loss}, \end{cases} \qquad (3)$$

where L_glass, L_IRRglass, and L_concrete are the frequency-dependent material penetration losses of standard glass, IRR glass, and concrete [32], while the indoor loss, PL_in, can be calculated as

$$PL_{in} = 0.5\, d_{\text{2D-in}}, \qquad (4)$$

where d_2D-in is the minimum of two independently generated uniformly distributed variables between 0 and 25 m. For the UMi-Street Canyon model, the LoS probability for the two-dimensional distance d_2d, p_L(d_2d), is derived as:

$$p_L(d_{2d}) = \begin{cases} 1, & d_{2d} \le 18\ \text{m},\\[4pt] \dfrac{18}{d_{2d}} + \exp\!\left(-\dfrac{d_{2d}}{36}\right)\!\left(1 - \dfrac{18}{d_{2d}}\right), & d_{2d} > 18\ \text{m}. \end{cases} \qquad (5)$$

Further, the propagation loss in the case of indoor multi-hop relaying is assumed to follow the indoor 3GPP model (InH-office) [29]:

$$PL(d_{3d}) = \begin{cases} \beta + 17.3\log_{10} d_{3d} + 20\log_{10} f_c, & \text{LoS},\\ \beta + 31.9\log_{10} d_{3d} + 20\log_{10} f_c, & \text{nLoS}. \end{cases} \qquad (6)$$
In the case of the Indoor-Mixed office model, the LoS probability for the 2D distance d_2d is written as [29]:

$$p_L(d_{2d}) = \begin{cases} 1, & d_{2d} \le 1.2\ \text{m},\\[4pt] \exp\!\left(-\dfrac{d_{2d} - 1.2}{4.7}\right), & 1.2\ \text{m} < d_{2d} < 6.5\ \text{m},\\[4pt] 0.32\,\exp\!\left(-\dfrac{d_{2d} - 6.5}{32.9}\right), & d_{2d} \ge 6.5\ \text{m}. \end{cases} \qquad (7)$$

The general formulation of the signal-to-noise ratio (SNR) is given by:

$$S(d_{3d}) = \frac{P_t}{N_0\, W_{snr}\, M_I\, M_S\, PL(d_{3d})}, \qquad (8)$$

where P_t is the transmit power, N_0 is the power spectral density of noise per 1 Hz, W_snr is the bandwidth in Hz, M_I and M_S are the interference and shadow fading margins, and PL(d_3d) represents the path loss (which depends on the environment).
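To make the link-budget chain above concrete, the following Python sketch evaluates the O2I path loss and the resulting SNR for a single indoor user. It is a minimal illustration under assumed parameter values; the function names, the material-loss constants, and the −174 dBm/Hz thermal noise density are our choices, not the paper's MATLAB implementation.

```python
import numpy as np

# A minimal link-budget sketch (assumed helper names and constants).

def umi_pathloss_db(d3d_m, fc_ghz, los=True, beta=32.4):
    """Basic outdoor UMi path loss (1): beta + 10*zeta*log10(d3d) + 20*log10(fc)."""
    zeta = 2.1 if los else 3.19
    return beta + 10.0 * zeta * np.log10(d3d_m) + 20.0 * np.log10(fc_ghz)

def o2i_pathloss_db(d3d_m, fc_ghz, d2d_in_m, los=True, high_loss=False, rng=None):
    """O2I path loss (2): PL_b + PL_tw + PL_in + N(0, sigma_P^2)."""
    rng = rng or np.random.default_rng(0)
    l_glass, l_concrete, l_irr = 2 + 0.2 * fc_ghz, 5 + 4 * fc_ghz, 23 + 0.3 * fc_ghz
    if high_loss:
        pl_tw = 5 - 10 * np.log10(0.7 * 10 ** (-l_irr / 10) + 0.3 * 10 ** (-l_concrete / 10))
        sigma_p = 6.5
    else:
        pl_tw = 5 - 10 * np.log10(0.3 * 10 ** (-l_glass / 10) + 0.7 * 10 ** (-l_concrete / 10))
        sigma_p = 4.4
    pl_in = 0.5 * d2d_in_m                      # inside loss: 0.5 dB per metre of depth
    return umi_pathloss_db(d3d_m, fc_ghz, los) + pl_tw + pl_in + rng.normal(0.0, sigma_p)

def snr_db(pt_dbm, pl_db, w_hz, n0_dbm_hz=-174.0, margins_db=3.0):
    """SNR (8) in dB: transmit power minus path loss, noise floor, and margins."""
    return pt_dbm - pl_db - (n0_dbm_hz + 10 * np.log10(w_hz)) - margins_db

# Edge IAB (23 dBm, 28 GHz, 400 MHz) serving an indoor user 60 m away, 10 m deep.
pl = o2i_pathloss_db(d3d_m=60.0, fc_ghz=28.0, d2d_in_m=10.0, los=True)
print(snr_db(23.0, pl, 400e6))   # compare against the -9.478 dB MCS 0 threshold
```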
C. ANTENNA MODEL
We now introduce the antenna model, formulated as follows. The main antenna lobe is assumed to be symmetric w.r.t. the antenna boresight axis. The transmit antenna gain G_T(α) can then be simply provided as [33], [34]:

$$G_T(\alpha) = D_0\,\rho(\alpha), \qquad (9)$$

where D_0 represents the maximum directivity along the boresight, ρ(α) is the directivity function of the angular deviation from the boresight direction, and α ∈ [0, π]. The directivity function is normalized such that ρ(0) = 1.
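A small sketch of this gain model follows. The Gaussian-shaped ρ(α) and the half-power beamwidth parameter are illustrative assumptions, since the text only requires a symmetric main lobe with ρ(0) = 1.

```python
import numpy as np

# Symmetric main-lobe gain G_T(alpha) = D_0 * rho(alpha); rho() is an assumed
# Gaussian roll-off normalized so that rho(0) = 1.
def transmit_gain(alpha_rad, d0, hpbw_rad):
    rho = np.exp(-np.log(2.0) * (2.0 * alpha_rad / hpbw_rad) ** 2)
    return d0 * rho

print(transmit_gain(0.0, d0=100.0, hpbw_rad=np.radians(10)))              # boresight: D_0
print(transmit_gain(np.radians(5.0), d0=100.0, hpbw_rad=np.radians(10)))  # half power: D_0/2
```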
D. NETWORK TRAFFIC DYNAMICS
In a mission-critical scenario, we can identify two types of nodes: users (either waiting to be served or already served) and search & rescue operators. Their departure processes are as follows:
- waiting users depart from the system at the rate θ_x;
- served users depart from the system at the rate θ_y;
- search & rescue operators depart from the system at the rate θ_z.
We assume that wireless connectivity to the users via both unicast and multicast communications is provided by the "edge" IAB node, while sidelinks can be established with search & rescue operators. Radio resources are split among the three considered transmission modes (more insights will be given in the following section).
Unicast users equally share, in time and/or frequency, the radio resources exploitable for unicasting: each of the x(t) waiting users receives C_0^u/x(t). Therefore, the transition from waiting users to served users by means of unicast transmissions corresponds to:

$$\nu_u(t) = x(t)\,\frac{C_0^u}{x(t)\,B} = \frac{C_0^u}{B}, \qquad (10)$$

where x(t) is the actual number of waiting users, C_0^u is the unicast downlink capacity, and B represents the content size.
In the case of multicasting, users are organized into one multicast group that is assigned all channel resources devoted to the multicast transmission. The transition from waiting users to served users by means of multicasting is, therefore, given by

$$\nu_m(t) = \frac{C_0^m}{B}\,x(t), \qquad (11)$$

where C_0^m is the multicast downlink capacity. Finally, in the case of sidelink transmissions, the transition from waiting users to served users can be performed by search & rescue operators only and is as follows:

$$\nu_d(t) = \frac{C_0^d}{B}\,z(t), \qquad (12)$$

where C_0^d is the sidelink downlink capacity and z(t) is the actual number of search & rescue operators.
Note: the capacities (in Mbps) for unicast, multicast, and sidelink transmissions can be obtained from the Shannon bound as:

$$C_0 = W \log_2\bigl(1 + S(d_{3d})\bigr), \qquad (13)$$

where W is the available bandwidth in MHz and S(d_3d) is the SNR as per (8). As users are activated at arbitrary locations, the data rate between the users and the BS constitutes a random variable whose distribution can be obtained from the distribution of distances, taking into account the upper limit of the achievable data rate and the parameters related to signal propagation [35]. In the case of multicasting, the data rate of the group is determined by the user with the worst channel conditions. In the case of sidelink connections, direct links either perform or do not; therefore, a fixed data rate value is achieved within a fixed distance.
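The capacity mapping and the three transition flows can be prototyped in a few lines. The sketch below uses illustrative SNR and capacity values; the helper names are our own, not the paper's notation.

```python
import numpy as np

# Minimal sketch of (13) and the per-mode transition rates (10)-(12);
# example numbers are assumptions.
def capacity_mbps(w_mhz, snr_db):
    return w_mhz * np.log2(1.0 + 10.0 ** (snr_db / 10.0))   # Shannon bound

def flows(x, z, cu, cm, cd, B):
    """Aggregate waiting->served flows (per second) for a content of B Mbit."""
    return cu / B, cm * x / B, cd * z / B    # unicast, multicast, sidelink

cu = capacity_mbps(400, 10.0)                # e.g., 400 MHz at 10 dB SNR
print(flows(x=30, z=4, cu=cu, cm=600.0, cd=400.0, B=640.0))  # 80 MB content
```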
V. ANALYZING NETWORK TRAFFIC
This section presents our developed fluid approximation for characterizing the number of waiting users, served users, and search & rescue operators (such as policemen, ambulance personnel, and fire brigades) by capturing the time dynamics of arrival/departure/transition processes in their transient phases. Waiting users can receive the required content by means of either a unicast, multicast, or sidelink transmission. For a schematic representation of the processes underlying the proposed model, we refer readers to Fig. 2.
Users and search & rescue operators are considered as continuous fluids. The number of waiting users at time t corresponds to x(t), the number of served users at time t is denoted as y(t), and we refer to the number of search & rescue operators at time t as z(t). We underline that x(t), y(t), and z(t) are non-negative real numbers, not necessarily integers.
For the sake of simplicity, we omit the index t. The evolution of x, y, and z is then defined by the solution to the Cauchy problem.
In the considered mission-critical situation, waiting users and served users are assumed to be present in the area of interest, while search & rescue operators arrive after the emergency occurs. Balancing the arrival, transition, and departure flows introduced above, and writing φ_u = aC_0^u/B, φ_m = bC_0^m/B, and φ_d = cC_0^d/B for brevity, the formulation of the Cauchy problem in such a scenario is given by

$$\begin{cases} \dot{x} = -\varphi_u - \varphi_m x - \varphi_d z - \theta_x x + \mu_x y,\\ \dot{y} = \varphi_u + \varphi_m x + \varphi_d z - (\theta_y + \mu_x)\, y,\\ \dot{z} = \lambda_z - \theta_z z, \end{cases} \qquad (17)$$

where a, b, and c are weight parameters used to perform resource splitting among the transmission modes.
We first consider the third equation of (17), which is decoupled from the others, and define z as

$$z(t) = C_z e^{-\theta_z t} + \frac{\lambda_z}{\theta_z}. \qquad (18)$$

Taking into account the initial condition z(0) = 0, we obtain C_z = −λ_z/θ_z, so that

$$z(t) = \frac{\lambda_z}{\theta_z}\left(1 - e^{-\theta_z t}\right). \qquad (19)$$

Therefore, the Cauchy problem (17) can be transformed into:

$$\begin{cases} \dot{x} = -\varphi_u - \varphi_m x - \varphi_d z(t) - \theta_x x + \mu_x y,\\ \dot{y} = \varphi_u + \varphi_m x + \varphi_d z(t) - (\theta_y + \mu_x)\, y, \end{cases} \qquad (20)$$

under the initial conditions x(0) = N_x and y(0) = N_y. We then solve the problem at hand by expressing the variable y from the first equation of (20):

$$y = \frac{1}{\mu_x}\left[\dot{x} + (\varphi_m + \theta_x)\,x + \varphi_u + \varphi_d z(t)\right]. \qquad (21)$$

We further derive ẏ from equation (21) as

$$\dot{y} = \frac{1}{\mu_x}\left[\ddot{x} + (\varphi_m + \theta_x)\,\dot{x} + \varphi_d \dot{z}(t)\right]. \qquad (22)$$

Substituting y as per (21) and ẏ as per (22) into the second equation of (20) yields the non-homogeneous second-order differential equation

$$\ddot{x} + p\,\dot{x} + q\,x = -\theta_y \varphi_u - \varphi_d\left[\theta_y z(t) + \dot{z}(t)\right], \qquad (23)$$

with p = φ_m + θ_x + θ_y + μ_x and q = (φ_m + θ_x)(θ_y + μ_x) − μ_x φ_m. This means that the characteristic equation of the non-homogeneous differential equation (23) is given by:

$$k^2 + p\,k + q = 0. \qquad (24)$$

The roots of (24) are as follows:

$$k_{1,2} = \frac{-p \pm \sqrt{p^2 - 4q}}{2}.$$

The solution of the non-homogeneous differential equation (23), therefore, is given as:

$$x(t) = C_1 e^{k_1 t} + C_2 e^{k_2 t} + A(t),$$

where A(t) represents the particular solution of (23); since the forcing term in (23) is the sum of a constant and an exponential e^{−θ_z t}, A(t) takes the form A_0 + A_1 e^{−θ_z t}, with A_0 and A_1 obtained by substitution into (23). The corresponding solution of the system (20) for y then follows from (21). Taking into account the initial conditions x(0) = N_x and y(0) = N_y, we obtain a linear system of two equations in the unknowns C_1 and C_2, whose solution completes the closed-form characterization of x(t), y(t), and z(t).
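As a quick sanity check on the closed-form solution, the flow-balance system (17) can also be integrated numerically. The following Python sketch does so under Scenario 1-style rates; the variable names and the illustrative capacity values are our assumptions, not the paper's implementation (which the authors report in MATLAB).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative intensities phi = weight * capacity / content size
# (weights 1%/80%/19%; capacities 1000/600/400 Mbps; 80 MB = 640 Mbit; assumed).
phi_u, phi_m, phi_d = 0.01 * 1000 / 640, 0.80 * 600 / 640, 0.19 * 400 / 640
lam_z, th_x, th_y, th_z, mu_x = 5.5, 0.0, 0.0, 0.1, 0.0   # Scenario 1 rates

def rhs(t, s):
    x, y, z = s
    x = max(x, 0.0)                                   # fluid levels are non-negative
    serve = (phi_u if x > 0 else 0.0) + phi_m * x + phi_d * z
    return [-serve - th_x * x + mu_x * y,             # waiting users
            serve - (th_y + mu_x) * y,                # served users
            lam_z - th_z * z]                         # search & rescue operators

sol = solve_ivp(rhs, (0.0, 30.0), [30.0, 0.0, 0.0], max_step=0.05)
print(sol.y[:, -1])   # (x, y, z) after 30 s; the waiting pool should approach zero
```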
VI. PERFORMANCE EVALUATION
This section discusses the results obtained from the link-level evaluation of the proposed analytical fluid-based framework for mission-critical scenarios.
In the following, we will first describe the reference scenarios for our performance evaluation campaign and the main simulation parameters implemented in the MATLAB environment. Then, we verify the proposed analytical model through computer simulations and analyze the behavior of the configuration options introduced in Section III.
A. SCENARIOS AND PARAMETERS
In the simulated mission-critical scenario, illustrated in Fig. 1, two IAB nodes are located outdoors. The fixed IAB node connects the 5G NR BS to the vehicular edge IAB, which serves multiple users located in the indoor environment. This results in a mixed indoor/outdoor propagation scenario.
The distribution of the users inside the building depends on a reference point representing the center of the building, which is uniformly distributed within the area of interest. Users are uniformly distributed around the reference point within a radius of 20 m. In order to establish a reliable connection, the edge IAB vehicle's position is adjusted to the building location, at a 50 m distance from the reference point.
Regarding the access connection, all options (i.e., unicast, multicast, sidelink, and mixed modes) discussed in Section V are analyzed. Our system operates within the 5G frequency range 2 (FR2). We use the MATLAB Antenna Toolbox to model antenna directivity patterns for uniform rectangular arrays with isotropic elements, Chebyshev tapering, and no steering. We assume a fixed transmit power of 23 dBm from the BS and IABs, while the transmit power at the users is set to 10 dBm. We assume an SNR threshold of −9.478 dB at the receiver side, corresponding to the lowest modulation and coding scheme (MCS 0).
We note that waiting users can change their state to served users after receiving a video with instructions on how to proceed within the emergency scenario. We assume a video duration of 30 seconds, with a resolution of 1280x720 pixels and H.264 encoding, resulting in a content size of 80 MB.
We evaluate the following exemplary scenarios:
• Scenario 1. We assume that search & rescue operators arrive in the system at a rate of λ_z = 5.5 during the time interval (0, t], while waiting users and served users do not arrive in the system. At the time the critical event occurs (t = 0), we assume that N_x = 30 waiting users and N_y = 0 served users are in the area of interest. Waiting users and served users cannot leave the system, that is, the departure rates are θ_x = 0 and θ_y = 0, respectively. Search & rescue operators may leave the system at the rate θ_z = 0.1. The transition from served user to waiting user cannot happen, i.e., μ_x = 0.
• Scenario 2. Differently from the one previously described, in this scenario waiting users and served users may leave the system. In addition, served users may transition back to waiting users due to content loss or the need for additional instructions. The departure rates are defined as θ_x = 0.2 for waiting users and θ_y = 0.7 for served users. This means that served users are more likely to leave the system, since they have received instructions on how to behave in an emergency situation. The transition from served user to waiting user can be performed with μ_x = 0.4.
• Scenario 3. We test a situation where the departure rate for waiting users is set to θ_x = 0, indicating that no waiting users are able to leave the system, i.e., users may leave the system only after receiving the content. The departure rate for served users is defined as θ_y = 0.7. This scenario introduces additional complexity, as it requires sending the instructions multiple times, resulting in the transition from served users to waiting users occurring at a rate of μ_x = 0.9. The rest of the settings are based on Scenario 1.
These scenario parameters are also transcribed as a configuration sketch below. We provide the results in terms of the following metrics:
• number of waiting users, served users, and search & rescue operators;
• delivered content size [Gbit], calculated as the total amount of bits delivered to the users (by means of unicast, multicast, and/or sidelink transmissions);
• actual data rate [Gbps] for unicast, multicast, and/or sidelink transmissions.
Modeling parameters adopted in the simulations are reported in Table 3.
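As referenced above, the three scenarios can be transcribed directly into a configuration table. The key names in this sketch are our own shorthand for the rates quoted in the scenario descriptions; Scenarios 2 and 3 are assumed to inherit the remaining settings from Scenario 1.

```python
# Scenario parameters as quoted in the text (key names are illustrative).
SCENARIOS = {
    1: dict(lam_z=5.5, N_x=30, N_y=0, th_x=0.0, th_y=0.0, th_z=0.1, mu_x=0.0),
    2: dict(lam_z=5.5, N_x=30, N_y=0, th_x=0.2, th_y=0.7, th_z=0.1, mu_x=0.4),
    3: dict(lam_z=5.5, N_x=30, N_y=0, th_x=0.0, th_y=0.7, th_z=0.1, mu_x=0.9),
}
```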
B. UNDERSTANDING ANALYTICAL RESULTS
We recall that we aim to analyze mission-critical and emergency situations characterized by system operation in its transient phase. For this reason, we consider the evolution of users requesting service within the content lifetime. Importantly, we cannot invoke the system's functioning in a steady state (i.e., unchanged across time), since the content lifespan cannot be considered long enough to reach the stationary state.
We start by validating the proposed analytical framework (A) through computer simulations (S) for Scenario 1. Table 4 provides insights into the time required to serve all users depending on the configuration option (i.e., standalone transmission or mixed modes) and under different resource sharing ratios. The first aspect we wish to highlight is the close match observed between simulation results and analytical results. This assessment confirms the applicability of the proposed method in capturing the time dynamics of mission-critical services, particularly during the non-static transient phase.
First, we compare configurations 1-3, which represent different standalone modes of operation, i.e., unicast, multicast, and sidelink modes, which exhibit service times of 34.9, 10.3, and 11.3 seconds, respectively. These results can be explained by the fact that users inside the building are situated close to each other. Thus, one multicast transmission can cover them with a relatively strong link. We recall that multicast directional communications are characterized by wider beams compared to unicast and sidelink ones. Differently, simultaneous unicast transmissions share the power budget of the antenna, whereas sidelink transmissions are usually performed in proximity with reduced transmit power due to the hardware on the devices.
However, it is important to note that one technology alone may not always guarantee the best performance. For example, while multicasting can be more efficient in serving users located in close proximity to each other, there may be scenarios where unicast and sidelink transmissions are necessary to ensure optimal performance. Therefore, the possibility of combining multicasting, unicast, and sidelink operations needs to be considered when this allows fast and efficient transmissions to be achieved, taking into account the specific characteristics and requirements of the users and their locations.
In this vein, we also analyze mixed configurations, starting with those involving two technologies (Configurations 4, 5, and 6). In general, we can state that the distribution of resources among transmission modes significantly impacts the overall time required to serve users. Specifically, Configuration 4 (Mixed Unicast-Multicast mode) demonstrates that the service time decreases as the resource sharing ratio shifts towards multicasting. The lowest service time is 6.08 seconds at 20%/80% resource sharing ratio (multicast occupies 80% of the total 100% resources). Configuration 5 (Mixed Unicast-Sidelink mode) exhibits a similar trend when the sidelink technology is coupled with multicasting. The lowest time achieved corresponds to 11.7 seconds at 10%/90% resource sharing, and as the resource sharing ratio shifts towards sidelink, the time gradually increases. Configuration 6 (Mixed Multicast-Sidelink mode) shows that the lowest time is achieved at 70%/30% resource sharing, corresponding to 5.09 seconds. Thus, we can infer that the interplay of two technologies facilitates the efficient offloading of traffic from the "partner" mode.
Finally, we consider the combined exploitation of all considered technologies by analyzing Configuration 7 (Mixed Unicast-Multicast-Sidelink mode), which demonstrates a more complex relationship between resource sharing ratios and service time. The lowest service time, 5.15 seconds, is achieved at a 5%/80%/15% resource sharing ratio, with a further slight improvement of 0.13 seconds at a 1%/80%/19% ratio. Based on these comparisons, one may note that the most efficient configuration in terms of service time depends on the specific resource sharing ratios and modes. However, considering the lowest times achieved in Table 4, Configuration 7 shows the best overall performance, making it the preferred solution for mission-critical services. Furthermore, the observed trends in the results validate the following findings. The utilization of multicasting in combination with unicast and sidelink transmissions demonstrates its ability to facilitate efficient content delivery. The system exhibits improved performance when a higher proportion of multicast resources is allocated compared to unicast and sidelink resources. This highlights the advantage of prioritizing multicast to enhance overall system efficiency and optimize content dissemination. Thus, for the above-discussed reasons, from now on, we concentrate on Configuration 7 only.
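The resource-sharing analysis behind Table 4 can be emulated by sweeping the split (a, b, c) and integrating the fluid model until the waiting pool empties. The sketch below assumes illustrative capacities and a 0.05 grid step; it is not the paper's simulator, but it reproduces the qualitative finding that a large multicast share minimizes the service time.

```python
import numpy as np
from scipy.integrate import solve_ivp

cu, cm, cd, B = 1000.0, 600.0, 400.0, 640.0   # assumed capacities (Mbps), content (Mbit)

def service_time(a, b, c, n_users=30.0, lam_z=5.5, th_z=0.1, horizon=60.0):
    """Time until the waiting pool empties under the split (a, b, c)."""
    def rhs(t, s):
        x, y, z = max(s[0], 0.0), s[1], s[2]
        serve = (a * cu / B if x > 0 else 0.0) + b * cm * x / B + c * cd * z / B
        return [-serve, serve, lam_z - th_z * z]
    sol = solve_ivp(rhs, (0.0, horizon), [n_users, 0.0, 0.0], max_step=0.05)
    hit = np.nonzero(sol.y[0] < 0.01)[0]      # first step where waiting pool ~ empty
    return sol.t[hit[0]] if hit.size else np.inf

splits = [(a, b, round(1 - a - b, 2)) for a in np.arange(0.0, 0.31, 0.05)
          for b in np.arange(0.0, 1.01, 0.05) if 0 <= round(1 - a - b, 2) <= 1]
best = min(splits, key=lambda w: service_time(*w))
print(best)   # expect the multicast share b to dominate (around 0.8)
```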
In Fig. 3, we show results in terms of the metrics of interest achieved in Scenario 1 with a 1%/80%/19% resource sharing ratio. We point out that the slope of the green curves (representing served users) in subfigures (a) and (b) indicates the speed at which users acquire the content. Specifically, the steeper the upward slope in subfigures (a) and (b) (i.e., the higher the value), the better. Moreover, a considerable difference between the green and orange curves (served and waiting users) in subfigure (a) shows a fast delivery speed and fewer users waiting to be served. The results confirm that, for the considered settings and input parameters, the mix of the three transmission modes guarantees fast content dissemination due to its ability to capture the channel conditions of users, thus assuring mission-critical service delivery without considerable delay, which is essential in the case of disasters.
We then proceed to investigate Scenario 2 for Configuration 7 in Fig. 4, for the 1%/80%/19% and 5%/80%/15% resource sharing ratios, respectively. The trend of the curves in all subfigures differs from Scenario 1 due to the different input parameters. Since waiting users and served users have the possibility to leave the system and/or change their state, the number of served users and the delivered content will not continuously increase as in Fig. 3, but will change over time. Initially, they grow at a lower rate w.r.t. Scenario 1 and then slowly decrease due to the joint effect of the state transition and departure processes of both waiting and served users. The trend for the unicast data rate (see Fig. 4(c) and (f)) is the opposite: it initially drops as the number of users grows and then gradually increases because of the reduction in the number of users in the system and, consequently, the more likely establishment of unicast transmissions due to users' sparsity. It is important to highlight that the unicast data rate in the case of a lower resource ratio assigned to unicasting (see the dark green curve in Fig. 4(c) w.r.t. (f)) is higher due to the lower number of simultaneous unicast transmissions, leading to higher per-transmission power. On the other hand, the sidelink data rate shows constant growth, following the number of search & rescue operators (see the blue curve in Fig. 4(a) and (d)), while the multicast data rate remains constant throughout the emergency situation. Fig. 5 shows that in Scenario 3, there is a slower decrease in the number of waiting users compared to Scenario 2. This is because waiting users cannot leave the system without receiving the instructions. In this case, the number of waiting users can only decrease when they change their state to served users. Similarly, the number of served users decreases at a lower rate due to the possibility of acquiring the content multiple times. These factors impact both the amount of delivered content and the data rate values.
In conclusion, our analysis and simulations have demonstrated the effectiveness of a mixed transmission approach combining unicast, multicast, and sidelink modes for mission-critical service delivery. While standalone modes can be efficient in certain scenarios, a combination of these technologies is necessary to ensure fast and efficient content dissemination, considering the specific characteristics and requirements of the users and their locations.
VII. CONCLUSION AND FUTURE WORK
This article has analyzed a mission-critical situation in which the BS is temporarily unavailable or unreachable, and the network topology is enhanced by IAB nodes.
We have focused on the processes that tend to be in the non-static transient phase, usually referred to as short-term operating time intervals that occur in the transition between various steady-state conditions (constant or unchanged across time). Transient analysis, by definition, includes time-varying loads, i.e., loads that are a function of time, and depends on arrival, transition, and departure processes. The general forms for such processes are presented in systems of equations as per (17), and include the topological randomness of user deployment, assuming that user coordinates are random variables that may be parameterized in reality by a mobility pattern.
In such a scenario, we have investigated the interplay of sidelink communications, directional unicast transmissions, and multicasting. Notably, this study combines a mathematical model for analyzing network configuration options for mission-critical communications with extensive link-level simulations. We have deduced from the findings that the proposed model allows the emergency management team to select the network configuration that best fits the mission-critical circumstance under control. The model also guarantees exemplary functional connectivity in emergencies, which is critical for successful public protection and disaster relief efforts.
Specifically, among the technologies examined, multicasting consistently emerged as an efficient solution in all configuration options. Given the same settings and input parameters, multicast facilitates unicast and sidelink transmissions by delivering the content more efficiently. Allocating a higher proportion of multicast resources compared to unicast and sidelink resources resulted in improved system performance. This emphasizes the advantage of prioritizing multicast to enhance system efficiency and optimize content dissemination.
Looking ahead, aligning with the trends in 6G-oriented technology to ensure functional connectivity in emergencies is crucial. Reconfigurable Intelligent Surfaces (RIS) [37], [38], ubiquitous connectivity, integrated sensing and communication (ISAC), and integrated artificial intelligence (AI) and communications are expected to play a critical role [39] and could potentially be integrated into 5G-Advanced and beyond. These advancements, in conjunction with the IAB feature and the interplay of sidelink communications, directional unicast transmissions, and multicasting, have the potential to revolutionize mission-critical communications in the 6G era. This holds promise for enhancing system efficiency, optimizing content dissemination, and enabling successful public protection and disaster relief efforts in the future.
"Engineering",
"Computer Science",
"Environmental Science"
] |
Banking infrastructure and the Paycheck Protection Program during the Covid-19 pandemic
ABSTRACT In response to the Covid-19 pandemic, the US federal government distributed US$800 billion in Paycheck Protection Program (PPP) loans to small businesses to preserve employment. Since PPP funding was transmitted through private banks, the characteristics of the regional banking market may have unevenly affected the programme’s reach. This paper examines how variations in market concentration and the presence of community banks contributed to PPP disbursement in US counties. It finds that greater regional banking market concentration correlates with fewer PPP loans, but this negative relationship is mitigated by a greater presence of community banks in highly concentrated markets.
INTRODUCTION
This paper examines how regional banking sector characteristics affected the disbursement of federally guaranteed loans to small businesses, the Paycheck Protection Program (PPP), during the Covid-19 pandemic. The programme is 'the most ambitious and creative fiscal policy response to the Pandemic Recession' (Hubbard & Strain, 2020, p. 1), designed to preserve employment in response to stay-in-shelter orders in the United States. One of its most creative features is that the federal government used the existing private banking infrastructure to distribute over US$800 billion to millions of small businesses in a short period.
Researchers consider the financial subsystem as one of the important determinants of how well regions withstand and recover from an economic shock (Martin & Sunley, 2020). They conjecture that a financial sector promotes regional economic resilience by providing necessary credit for households and businesses to ease a cash crunch. Nonetheless, empirical evidence for the role of the financial sector in economic resilience has been rare in regional studies. The finance literature, however, has discovered that characteristics in regional banking markets contribute to small businesses' loan access during economic recessions. This paper uses findings from the finance literature and tests how PPP's distributional outcomes were determined by multidimensional characteristics of the regional banking sector. This paper takes advantage of the unique opportunity offered by the Covid-19-driven economic shock to study the role of financial sectors. Unlike endogenous economic crises, the pandemic drove unexpected and sudden disruptions to small business operations, while structural changes to the banking sector were kept minimal. Furthermore, while exogenous shocks driven by natural disasters are geographically limited, the impacts of the pandemic were global, which allows one to observe the impacts of the shock in all regions.
Using data on 3100 US counties, the paper finds significant effects of banking market concentration and the presence of community banks on the disbursement of PPP loans. Specifically, higher market concentration correlated with fewer PPP loans per business, but a strong presence of community banks mitigated this negative association. In highly concentrated markets, in particular, a greater presence of community banks even increased the number of PPP loans.
This paper contributes to the literature as follows. First, it contributes to the emerging, but limited, literature on the role of the regional financial subsystem in building economic resilience during and after economic crises. Second, it contributes to the recent finance literature that considers both market concentration and the types of players in the market. Finally, the paper contributes to a growing literature on PPP where the structural environment of a regional banking sector has been overlooked. This paper directly deals with the structural factors using traditional measures of pre-pandemic characteristics of a regional banking sector to make it comparable with previous studies.
Community banks and relationship lending
The discussion on banks' behaviour in small business lending centres on types of lenders: community banks and national banks. In the literature, community banks are often used interchangeably with small banks or local banks. Asset size often defines community banks, but the asset threshold varies in the literature from US$1 billion to US$10 billion at the charter level. Local banks are commonly defined as banks owned or operating within a limited geographical area or by the share of deposits that stay in the local market (Cortés, 2014). To avoid varying definitions and identify community banks more systematically, the Federal Deposit Insurance Corporation (FDIC) provides several standards to define community banks using indexed asset thresholds, geographical footprint, business plan and number of branches (FDIC, 2020). At the minimum, community banks are 'small' in asset size and 'local' in the geographical scope of business operation. 1 Community banks are deemed to play a vital role in regional economies. They locally acquire deposits and devote a large share of their resources to local businesses (Rogers, 2012; Strahan, 2008). The theoretical underpinning between community banks and small business loans is relationship lending. The conventional theory posits that big national banks use quantifiable, verifiable and comparable 'hard information' to assess loan risks. Thus, these 'transaction lenders' tend to make loans to already-established firms with externally audited financial statements. Small businesses, however, are often 'opaque' in that they lack an audited financial statement. This is where community banks come into play. They exploit 'soft' information, such as the bank's intimate knowledge of business owners' reputations as well as local economic conditions. Evidence shows that those 'relationship lenders' tend to lend to geographically close small businesses, suggesting that spatial proximity facilitates the transmission of soft information (Cotugno et al., 2013; Hakenes et al., 2015; Granja et al., 2018).
Nevertheless, it is the big national banks (assets of more than US$10 billion) that made 58% of the US$645 billion small business loans in 2019 (US Small Business Administration (SBA), 2020b). They actively pursue small business clients using various lending technologies and risk management systems to compensate for lacking hard information (Berger & Black, 2011;Berger et al., 2014;De la Torre et al., 2010). Thus, banks' characteristics alone do not fully explain their lending behaviour because the finance literature shows that the overall banking market environment also shapes their behaviour. For instance, the same community bank may behave differently when they face competition and when they have market power.
Market concentration
The concentration of market power has received considerable attention because the US banking market has been concentrated in a smaller number of big national banks over the years. The FDIC reported a 68% decline in the number of commercial banks between 1986 and 2019 (Brown, 2019). Assets have become concentrated as well. In 2020, the 12 largest banks held 60% of all domestic assets. Regional markets follow the same pattern: 78% of all regional banking markets were highly concentrated in 2017 (Meyer, 2018).
Policymakers are concerned with how the structural changes impact small businesses' loan access. The traditional view posits that market concentration creates an unfavourable environment for small businesses where banks with fewer competitors charge higher interest rates and make fewer loans to small businesses (Berger & Hannan, 1989;Berger et al., 2004;Cetorelli & Strahan, 2004;Hannan & Berger, 1991). This is particularly concerning because, unlike their large counterparts, small businesses heavily rely on commercial loans with no access to the capital market.
Challenging this traditional view, the 'investment theory', however, argues that market concentration benefits small businesses. It suggests that market power enables banks to invest their resources in long-term relationships with small businesses, whereas slim margins from a competitive environment disincentivize banks to develop such relationships (Francis et al., 2008). Furthermore, the investment theory postulates that banks with market power can use economies of scale to diversify risks that may arise from small business loans.
Both views assume a linear relationship between market power and small business loan supply. Recent studies, however, show that the market structure does not necessarily determine banks' lending behaviour. For instance, banks that came to hold bigger market power by acquiring another bank do not always reduce small business loans. When an acquirer bank specializes in small business lending before a merger, it increased, rather than decreased, small business loans even after the merger (Avery & Samolyk, 2004;Peek & Rosengren, 1998;Strahan & Weston, 1998). A similar pattern has been found in Italy (Presbitero & Zazzaro, 2011), Germany (Elsas, 2005) and Mexico (Canales & Nanda, 2012).
Credit access in times of crises
Martin and Sunley (2020) suggest regional subsystems that determine economic resilience: business, finance, governance and the labour market. During economic crises, the impact of economic shocks on businesses depends on the regional financial subsystem. Nonetheless, studies have not directly addressed the role of financial subsystems in times of crisis, but two strands of research may guide one to understand the link. Banks' ability to supply small business loans diminishes during economic downturns because economic contraction stresses not only the financial conditions of businesses but also banks' financial health. According to this view, known as 'the impairment of the bank-credit channel', banks financially damaged by economic shocks significantly reduced small business loan supply and increased loan interest rates in the post-crisis period, compared with unaffected banks (Adams & Amel, 2005; Chava & Purnanandam, 2011).
Nonetheless, the loan accessibility of businesses depends on a couple of other factors. First, banks favour their relationship borrowers. Bolton et al. (2016) propose a theoretical model to predict the small business loan supply during an economic downturn based on the behavioural differences between relationship- and transaction-lenders. They suggest that firms with low cash are prepared to accept higher interest rates in normal times, expecting a continued relationship with their bank during a credit crunch. In a recession, relationship lenders offer better rates to their long-time clients to prevent them from defaulting, in their own interest (Cotugno et al., 2013; Sette & Gobbi, 2015).
In addition to a bank-firm relationship, the financial literature also highlights the impact of the larger structural market environment on crisis lending. Evidence shows that greater market concentration leads to diminished credit flow with a higher interest rate to small businesses after the 2008 economic recession (Chen et al., 2017). However, Cubillas and Suárez (2018) counterargue that banks with market power may increase, not decrease, credit supply with a higher interest rate (Degryse et al., 2018; Hasan et al., 2019; Zhao & Jones-Evans, 2017). Cash-strapped borrowers that had a relationship with a failed bank need to find an alternative lender quickly during an economic downturn. Thus, surviving banks with increased market power supply cash at a higher interest rate to desperate non-relationship borrowers. The mixed evidence in the literature may stem from different behavioural assumptions and institutional settings in different countries. To this date, no clear consensus has emerged.
Another line of research on small business crisis lending comes from the natural disaster literature. Unlike endogenous economic crises, natural disasters are exogenous external shocks that are less likely to affect banks' financial health and more likely to damage local small businesses. Most studies find that geographically close local banks increase loans to local businesses after natural disasters, which in turn leads to better economic outcomes in the region (Cortés, 2014; Ivanov et al., 2020; Koetter et al., 2020). In addition, prior relationships between lenders and borrowers help ease the lending restrictions after a natural disaster (Berg & Schrader, 2012).
In summary, previous studies help one to understand the behaviours of lenders and borrowers during endogenous economic downturns and after local disasters. Nevertheless, how those studies can help predict outcomes of PPP is unclear because of the unique characteristics of government-backed PPP 'loans' and the nature of the Covid-19 pandemic that is different from previous shocks. The details of PPP design are important to understand in predicting how regional banking infrastructure contributed to the disbursement of PPP funds.
THE PAYCHECK PROTECTION PROGRAM (PPP) DURING THE COVID-19 PANDEMIC
On 13 March 2020, a national emergency was declared concerning the spread of Covid-19. Aggressive policy actions ensued. On 19 March, the State of California issued a shelter-in-place order to preserve public health and safety from Covid-19. By 6 April, 42 states and Washington, DC ordered non-essential businesses to close temporarily. Within a week after the national emergency declaration, the number of initial unemployment insurance benefits claims jumped from 251,416 to 2.9 million. It further increased to 3 million and 6 million claims in the subsequent weeks (US Department of Labour). 2 Congress responded to the negative economic impacts by enacting the Coronavirus Aid, Relief, and Economic Security (CARES) Act. The Act included PPP designed to prevent large-scale unemployment by aiding small businesses with payroll, rent or other direct operating costs. 3 The first draw of the programme began on 3 April and ended on 8 August 2020. 4 The second draw was implemented in 2021 between January and May. In total, PPP made 12 million loans and distributed US $800 billion to small businesses through 5467 financial institutions (SBA, 2021).
Important design features of PPP make PPP 'loans' distinctive from commercial loans. PPP 'loans' subsidize operating revenue losses of small businesses during the pandemic. The loan amount is based on pre-pandemic payroll costs. The interest rate is fixed at 1% with no collateral, personal guarantee or credit score requirements. The loans are forgivable if employees and wages are maintained, loans are spent on payroll costs and eligible expenses, and at least 60% of the amount is spent on payroll costs. These features of PPP eliminate firm-level heterogeneity in capital structure and financial health that determine the underwriting conditions of commercial loans. The PPP 'loans' can be characterized as conditional federal grants.
One of the innovative features of PPP is the use of the existing private banking infrastructure to distribute PPP funds. All existing SBA-certified lenders were eligible to receive delegated authority 'to speedily process PPP loans', according to the US Department of the Treasury. 5 Lenders charge fees from 1% to 5% of the loan amount depending on the loan size. Since all loans are guaranteed by the SBA, lenders are incentivized to participate in PPP to make fee revenues with minimal loan risks.
A growing number of microlevel studies finds that an existing bank-firm relationship significantly increased the likelihood of PPP loan approval because banks helped alleviate their clients' insufficient information about the programme (Amiram & Rabetti, 2020; Bartik et al., 2020; Granja et al., 2020; Humphries et al., 2020; James et al., 2020; Li & Strahan, 2020). The evidence is consistent with Bolton et al. (2016), in which banks prioritize their relationship clients during an economic downturn because they have an economic interest in the long-term survival of their borrowers. A few studies in the early stage of PPP implementation suggested that PPP was ineffective in preserving employment (Autor et al., 2020; Bartik et al., 2020; Chetty et al., 2020; Granja et al., 2020). Yet these studies use surveys or administrative data from private payroll-processing firms with questionable sample representativeness. More recent evidence shows that PPP helped small businesses (Bartik et al., 2020; Cororaton & Rosen, 2021; Hubbard & Strain, 2020) and regional economies (Barrios et al., 2020; Doniger & Kay, 2021; Faulkender et al., 2020; James et al., 2020; Li & Strahan, 2020; Mitchell, 2020).
Nevertheless, the impact of the larger structural environment of the regional banking sector on the distribution of PPP loans has not been well understood. Existing studies mainly focus on microlevel analyses using loan-level data and overlook the structural factor in region-level analyses. The finance literature demonstrates that the market structure is regionally heterogeneous and shapes banks' lending behaviour towards small businesses. Furthermore, the recent literature suggests that it is not only the market structure but also the types of players in the market that determine small business credit access. Thus, how federal emergency grants channel through the regional banking infrastructure to ease small businesses' cash flow would depend on multidimensional characteristics of the regional banking sector. No prior studies have directly examined this important aspect. To fill the gap, the paper provides county-level analyses to examine how the regional banking market's structural characteristics contribute to the programme's reach.
Although PPP 'loans' differ from commercial loans in extraordinary circumstances, we may predict how market characteristics affect PPP loan disbursement from the previous literature. First, market concentration may be unfavourable to small businesses. When banks enjoy market power, they might prefer clients with a bigger loan to extract bigger fees for themselves since the PPP loan amount is predetermined by pre-Covid operating expenses. On the contrary, in a competitive market, banks pursue clients to earn more fee revenues with minimal loan risks, which would result in a larger number of PPP loans.
Second, community banks have a stronger incentive to actively participate in PPP. For big banks, PPP fees are only a small fraction of their overall revenues, whereas for smaller banks, the fees can be a substantial fraction of their overall revenues. In the early stage of the implementation of PPP, National Public Radio reported a Maryland-based small community bank making one year's worth of loans in just 10 days by participating in PPP. Studies on crisis-lending to small businesses point out that during an economic crisis or a natural disaster, small and local community banks redirect resources to help local and small businesses (Degryse et al., 2018;Hasan et al., 2019;Zhao & Jones-Evans, 2017). Thus, we would expect that a larger presence of community banks would increase the number of PPP loans in the region.
DATA AND METHOD
Cross-sectional data consisting of 3100 US counties in all 50 states and the District of Columbia are used. The data do not include 149 counties with no full-service bank office in 2020. The data were culled from several sources. One of the major sources is the PPP Loan Level Data published by the SBA. Information on the 5.2 million loans from the first draw of the programme, between April and August 2020, is used. The second draw distributed in 2021 was excluded because the main interest of the paper is PPP's immediate reach to small businesses in response to the pandemic. The number of approved loans is aggregated by county using businesses' zip code information provided by the SBA.
The key variables that capture dimensions of regional banking market characteristics were constructed by using the Summary of Deposit (30 March 2020) and the Community Banking Study Reference Data (2020), both of which came from the FDIC. Counties' economic and industry characteristics were collected from the Bureau of Economic Analysis (BEA) and the US Bureau of Labour Statistics (US Census). 6 Demographic variables were from the US Census; Covid-19-related statistics were from The New York Times (2020). For detailed data descriptions, sources and summary statistics, see Appendix A in the supplemental data online.
Dependent variable
The dependent variable is the county's number of approved PPP loans per 100 businesses between April and August 2020. The number of loans, rather than the loan amount, was used for the following reason: the PPP loan amount is based on the pre-Covid 12-month average payroll costs, with a 1% fixed interest rate. In the PPP context, where speedy programme reach is the goal, the loan count matters more than the predetermined loan amount.
The PPP loan data were obtained from the SBA's PPP Loan Level Data. The released files include 5.2 million loan records but do not provide borrowers' county location, which was identified using the reported zip code. The crosswalk file from the Department of Housing and Urban Development was used to match zip codes to county locations. Observations unmatched due to entry errors were manually searched and entered into the data using other identifying information from the business address. The number of loans was then aggregated by county and divided by the number of business establishments. The business establishment data were collected from the Quarterly Census of Employment and Wages (QCEW) for the first quarter of 2020, the closest period to the national emergency declaration in March 2020.
Key independent variables: the structure of the regional banking sector
The first key independent variable is banking market concentration, measured by the Herfindahl-Hirschman index (HHI). For county i, with banks j = 1, ..., k and $S_{ij}$ the share of bank j's deposits in county i's total deposits, the market concentration is defined as

$$\text{HHI}_i = 100 \sum_{j=1}^{k} S_{ij}^{2},$$

so the maximum value is 100 if one bank controls 100% of the deposits in i. The index was computed using the Summary of Deposits (SOD) for June 2020 from the FDIC. The second key independent variable is the presence of community banks (CBratio). The traditional HHI measure treats all banks equally (Berger et al., 2004), but the nature of market competition may differ depending on the types of players in the market, for example, big national banks versus community banks. Thus, the level of community bank presence is included to capture these different dynamics, taking the ratio of the number of community bank branches to all branches.
When defining community banks, the FDIC's community bank designation, made by asset size, geographical footprint, business plan and the number of branches, was followed (FDIC, 2020). The FDIC excludes banks with no loans or no core deposits, with foreign assets of more than 10% of total assets, or with more than 50% of assets in specialty banking (e.g., credit card specialists). The asset threshold in 2019 was US$1.65 billion. Banks with total assets greater than the threshold can also be designated as community banks depending on financial ratios, the number of branches, the geographical scope of business operation and other criteria. In 2019, the FDIC identified 4750 community banks, accounting for 91.8% of all bank organizations.
As a next step, the total number of all bank branches in each county for the first quarter of 2020 was counted using SOD. Community bank branches were then identified and the ratio of the number of FDIC-designated community banks' office branches to the number of all bank branches in the market was taken. The ratio captures the fraction of branches operated by community banks in each county. The values range from 0 to 1. Higher values indicate a greater presence of community banks.
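To make the construction of these two market measures concrete, the following is a minimal pandas sketch under assumed column names; the real SOD and community-bank reference files are structured differently, so the toy records and names here are hypothetical stand-ins.

```python
import pandas as pd

# Hypothetical SOD-style records: one row per branch office.
sod = pd.DataFrame({
    "county":    ["A", "A", "A", "B", "B"],
    "bank":      ["X", "X", "Y", "X", "Z"],
    "deposits":  [50.0, 30.0, 20.0, 70.0, 30.0],
    "community": [False, False, True, False, True],  # FDIC designation
})

# HHI_i = 100 * sum_j S_ij^2, with S_ij the deposit share (a proportion),
# so a one-bank county scores 100.
dep = sod.groupby(["county", "bank"])["deposits"].sum()
share = dep / dep.groupby(level="county").transform("sum")
hhi = (100 * share.pow(2)).groupby(level="county").sum()

# CBratio: fraction of branches operated by community banks.
cbratio = sod.groupby("county")["community"].mean()

print(hhi.round(1))   # A: 68.0, B: 58.0
print(cbratio)        # A: 0.333..., B: 0.5
```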
Note that the FDIC measures the presence of community banks in two ways: one based on office branches and the other on deposits (FDIC, 2020). The former was used because banks' available deposits do not determine PPP loan amounts, since PPP is SBA-backed. Further, it took only a few weeks to make 5.2 million loans during the first draw of the programme in 2020 (SBA, 2020a), indicating the importance of available branch offices in channelling federal government funding to local businesses. Thus, the branch-based measure is more appropriate for capturing the presence of community banks in the context of PPP.
The third characteristic of the regional banking sector is the number of full-service bank branches per business. Market concentration (HHI) and community bank presence (CBratio) characterize the structure of the county's banking market, but they do not capture the density of bank branches, that is, how many bank branches are accessible to small businesses. The total number of full-service bank branches per 1000 businesses, regardless of community bank status, was therefore used in the empirical model.
Control variables
Several control variables are included, such as per capita income and total population. Since the economic damage was induced by the Covid-19 pandemic, the number of cumulative confirmed Covid-19 cases by the end of March 2020 is controlled for. Since the economic impacts differed by type of business, it is also necessary to control for counties' industry structure. Industries are categorized by the North American Industry Classification System (NAICS); the percentages of jobs in the goods-producing industries (manufacturing, mining and construction), leisure and hospitality, and trade (retail and wholesale) are controlled for. All continuous variables are log-transformed. An indicator variable for metropolitan areas and state dummy variables are included in all models.
Empirical model
The main estimator is the least absolute deviation (LAD) regression model. While the standard ordinary least squares (OLS) regression model minimizes the sum of squared errors and estimates conditional mean functions, the quantile regression (QR) model asymmetrically weights absolute residuals to estimate conditional median functions. It also estimates a full range of other conditional quantile functions without strict parametric assumptions and allows more robust, efficient and accurate estimates than OLS (Koenker & Bassett, 1978). The quantile regression model is

$$y_i = \alpha_q + \beta_{1q}\,\text{HHI}_i + \beta_{2q}\,\text{CBratio}_i + \beta_{3q}(\text{HHI}_i \times \text{CBratio}_i) + x_i'\lambda_q + \sum_{j=1}^{51} S_j + \varepsilon_i,$$

where $y_i$ is the number of PPP loans in county i; $\alpha_q$ is an intercept; $x_i'\lambda_q$ is the vector of control variables and their coefficients at the q-th quantile; $\sum_{j=1}^{51} S_j$ denotes state fixed effects; and $\varepsilon_i$ is the error term. The key variables of interest are (1) the degree of banking market concentration ($\text{HHI}_i$); (2) the degree of community bank presence ($\text{CBratio}_i$); and (3) their interaction term ($\text{HHI}_i \times \text{CBratio}_i$). The current literature is ambiguous on whether market concentration decreases or increases small business lending, so either $\beta_1 < 0$ or $\beta_1 > 0$.
Community banks are considered to increase small business lending, especially in disaster cases, so $\beta_2 > 0$. The impact of HHI or CBratio depends on the value of the other variable: the marginal impact of $\text{HHI}_i$ is $\beta_1 + \beta_3\,\text{CBratio}_i$ and that of $\text{CBratio}_i$ is $\beta_2 + \beta_3\,\text{HHI}_i$. Table 1 presents the results. The first two columns show OLS estimates; the QR estimates are presented in columns (3) to (7) for the 0.1, 0.25, 0.5, 0.75 and 0.9 quantiles. QR at the median (q = 0.5) serves as the baseline, referred to here as the LAD model.
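To illustrate the estimation strategy, the following is a minimal sketch of the LAD/QR specification using statsmodels. The data frame, coefficients and column names below are synthetic stand-ins, not the paper's actual data or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the county-level data; real column names differ.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "hhi": rng.uniform(0, 4.6, n),        # log-scale concentration
    "cbratio": rng.uniform(0, 0.69, n),   # log-scale community bank ratio
    "branch_density": rng.uniform(0, 3, n),
    "state": rng.choice(["s1", "s2", "s3"], n),
})
df["ppp_loans"] = (40 - 2 * df["hhi"] - 5 * df["cbratio"]
                   + 3 * df["hhi"] * df["cbratio"]
                   + df["branch_density"] + rng.normal(0, 2, n))

# hhi * cbratio expands to both main effects plus their interaction;
# C(state) adds state fixed effects. q = 0.5 is the LAD baseline.
formula = "ppp_loans ~ hhi * cbratio + branch_density + C(state)"
for q in [0.1, 0.25, 0.5, 0.75, 0.9]:
    res = smf.quantreg(formula, df).fit(q=q)
    print(q, res.params[["hhi", "cbratio", "hhi:cbratio"]].round(2))
```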
RESULTS
Among the three characteristics of the regional banking market, the density of bank branch offices (Branch Density) is statistically significant with similar magnitudes in all specifications and at all quantiles. The coefficients hover around 1, indicating that a 1% increase in branch density is associated with roughly one additional PPP loan per 100 businesses.
Column (1) does not include the interaction term between HHI and CBratio. Consistent with the traditional view of market concentration, the negative and statistically significant coefficient of the HHI index indicates that greater banking market concentration is correlated with fewer PPP loans per 100 businesses. The coefficient of CBratio is positive, as expected, but is not statistically significant.
When the interaction term between HHI and CBratio is added in column (2), the coefficient of CBratio becomes negative and statistically significant, while the coefficient of HHI remains negative and statistically significant with double the magnitude. The negative coefficient of CBratio is the opposite of the theoretical expectation, but its interpretation must account for the positive and significant interaction term between HHI and CBratio. This pattern of significant effects for HHI, CBratio and their interaction term holds at almost all quantiles; the only exception is the coefficient of CBratio at q = 0.1, which is significant only at the 10% level.
Considerable differences in coefficient size are observed between the OLS estimates (column 2) and the LAD estimates (column 5). The OLS estimates appear inflated: the OLS coefficient for HHI is 1.5 times larger than the LAD estimate, and those for CBratio and the interaction term are more than twice as large.
In the QR models, the magnitude of the HHI coefficient is similar across quantiles (around 2.0-2.5), although at the highest quantile (q = 0.9) the effect of HHI is substantially larger (around 3.5). For CBratio, the coefficients vary across quantiles: the magnitude is smaller at lower quantiles and larger at higher ones, ranging from −5.953 at q = 0.1 to −21.878 at q = 0.9. The interaction effect between HHI and CBratio also becomes substantially larger at higher quantiles.
Marginal effects are computed from the LAD model (q = 0.5). The marginal effect of HHI ranges from −2.038 to −0.251 as the (log-transformed) CBratio runs from 0 to 0.693, suggesting that the negative effect of market concentration diminishes with a greater presence of community banks. The interaction effect is presented in Figure 1: the x-axis indicates the degree of banking market concentration, and the y-axis is the predicted number of PPP loans per 100 businesses. The three lines show the effect of HHI on the number of PPP loans at three levels of CBratio: a smaller (CBratio q = 0.25), median (CBratio q = 0.5) and larger (CBratio q = 0.75) share of community bank branches.
The three downward slopes indicate that greater market concentration is associated with fewer PPP loans, supporting the traditional view. Nevertheless, the negative effect of HHI is most pronounced at the lower CBratio (Q1, long-dashed line). At the median level of the community bank ratio (Q2, solid line), the slope is less steep than the long-dashed line, and at the third quartile of the community bank ratio (Q3, short-dashed line) the slope is flatter still. In sum, the findings support the traditional view that greater market concentration decreases small business loans, but the negative effect is diminished by a greater presence of community banks in the county. Figure 2 shows how the effect of market concentration (HHI) changes at different values of community bank presence (CBratio); its y-axis indicates the marginal effect of HHI on the number of PPP loans. At the minimum value of CBratio (no community bank presence), a 1% increase in HHI decreases the count by about two loans per 100 businesses. As the community bank ratio increases, the negative effect becomes smaller and the line moves upward.
At CBratio > 0.55, the coefficient of HHI is no longer statistically significant: when community bank branches make up more than 73% of all bank branches in county i, the degree of market concentration has no impact on the number of PPP loans. Such counties make up slightly more than half of the sample.
The marginal effect of CBratio ranges from −3.483 to 3.941 once the interaction with HHI is accounted for. Figure 3 demonstrates the average marginal effects of CBratio at different levels of HHI.
The effect of CBratio depends on the degree of market concentration. In competitive markets, the average effect of CBratio is negative and statistically significant until the HHI value reaches approximately 2.15, implying that in a highly competitive market a greater presence of community banks significantly reduces PPP loans; such markets make up less than 0.35% of all counties. CBratio has no statistically significant impact on the number of PPP loans around the middle level of market concentration, which includes approximately 68% of the counties. In highly concentrated markets where HHI > 3.6, CBratio significantly increases the number of PPP loans; these markets comprise 32% of the counties. In sum, the evidence suggests that the structural characteristics of a regional banking sector unevenly affect the number of PPP loans transmitted from the federal government to small businesses. Market concentration reduces the number of PPP loans, but the negative impact is suppressed by a greater presence of community banks in the region. This negative effect of market concentration is expected and consistent with the traditional view (Berger et al., 2004; Cetorelli & Strahan, 2004) and the prediction in Bolton et al. (2016): in a competitive market, banks compete for PPP clients and processing fees, whereas in a concentrated market, banks with market power can prioritize bigger loans for bigger fee revenues.
However, in such concentrated markets, a greater presence of community banks mitigates the negative impact of market concentration on the number of PPP loans: even with market power, community banks are more likely to lend to geographically closer local businesses than national banks, consistent with the previous literature showing that spatial proximity between banks and firms increases small business lending in times of economic crisis and natural disaster (Cortés, 2014; Degryse et al., 2018; Hasan et al., 2019; Ivanov et al., 2020; Koetter et al., 2020; Zhao & Jones-Evans, 2017). This is why the mitigating role of community banks appears only in highly concentrated markets and not in competitive markets, where all banks competitively go after clients.
The insignificant coefficient of the metropolitan indicator suggests no discernible effect. A larger population is associated with fewer PPP loans, and the magnitude tends to increase with the quantile. Covid-19 infection rates have no statistically significant impact, consistent with previous studies (Granja et al., 2020; James et al., 2020). James et al. (2020) point out that the PPP loan application rate was highest, while approval rates were lowest, in the areas most heavily impacted by Covid-19, implying that banks perceive PPP loans in heavily impacted areas as risky. Thus, more confirmed Covid-19 cases are not necessarily associated with more PPP loans.
The last three variables are the percentages of jobs in three NAICS categories, included to account for the county's industry mix: goods-producing industries, leisure and hospitality, and trade. No robust impact is found across all quantiles. For lower quantiles, a bigger share of jobs in trade is associated with more PPP loans; for higher quantiles, a bigger share of jobs in hospitality and the goods-producing industries is associated with fewer PPP loans. In the LAD model, a bigger share of hospitality jobs leads to fewer PPP loans, while a bigger share of trade jobs leads to more PPP loans.
Results with alternative variables and specifications
Table 2 presents sensitivity analyses conducted in three different ways. First, an alternative measure of market concentration was used: instead of the baseline index based on local deposit shares, an alternative HHI based on the share of bank branches is used in column (2) of Table 2.
The baseline estimates (the LAD estimates in Table 1) are reported in column (1) for comparison. Using the alternative market concentration measure does not statistically or substantively change the baseline results.
Second, additional demographic variables were added to the model because they may affect the county's business profile and the degree of Covid-19's impact: educational attainment (the percentage of the population aged 25 and older with a bachelor's degree), the share of the population aged 65 and over, the percentage of the population under 18, and the shares of the non-Hispanic white, non-Hispanic black and Hispanic populations. The results are shown in column (3): the coefficients of the key variables become slightly smaller, but the general pattern stays the same. Third, the number of PPP loans per 1000 population was used instead of the number of PPP loans per 100 businesses; the results remain similar to the baseline estimates.
CONCLUSIONS
This paper examines how the existing market characteristics of the regional financial subsystem contribute to emergency credit access for small businesses. In particular, it examines how market concentration and the presence of community banks in the regional market determine the disbursement of PPP loans designed to ease the negative economic impacts of the Covid-19 pandemic. The analysis shows an interplay between different dimensions of financial market characteristics. Currently, 75% of counties have a highly concentrated market by the Department of Justice's standards. The findings show that the negative impact of HHI on small business loans exists, but community banks play a critical role in suppressing it; their moderating effect is especially pronounced in highly concentrated markets, where a greater community bank presence significantly increases the number of PPP loans. The paper thus provides nuanced evidence that goes beyond a simple understanding of the role that community banks play in regional economies.
This paper contributes to the literature on regional economic resilience by providing empirical evidence on how multidimensional regional financial market characteristics influence credit access for small businesses in an economic crisis. More than three-quarters of county-level financial markets in the US are highly concentrated and mostly 'stuck' (Meyer, 2018). The findings suggest that in those concentrated markets the presence of community banks is particularly important for small businesses, although recent developments in financial technology may provide substitute credit access for small businesses in markets with less competition (Erel & Liebersohn, 2020; Hannan, 2003).
Finally, the paper demonstrates the importance of the channels through which federal policies are implemented. The federal government's fiscal and monetary policies significantly improve regional economic resilience after an economic shock, while the impacts of state and local government policies are limited (Wolman et al., 2017). Studies have shown that PPP helped mitigate the negative impacts of shelter-in-place orders due to Covid-19, which suggests that regional economic outcomes may depend on heterogeneous regional banking market characteristics. Thus, it is imperative to be aware not only of what federal assistance is implemented but also of how it is transmitted to its target. This paper has limitations and raises future research questions. First, the interpretations of the empirical results are limited to correlations because the cross-sectional analysis makes it difficult to identify causality, although we can reasonably rule out reverse causation and a third factor affecting both sides of the equation. It would be useful to examine whether similar correlations can be found in other parts of the world. Second, existing studies report small, short-run effects of PPP in dampening unemployment (Autor et al., 2020; Chetty et al., 2020; Faulkender et al., 2020; Granja et al., 2020; Hubbard & Strain, 2020; Li & Strahan, 2020). It remains to be seen how the financial market structure and community banks ultimately affected regional economies through PPP loans, such as jobs and business survival, after the pandemic.
| 8,834 | 2022-05-03T00:00:00.000 | ["Business", "Economics"] |
Color image compression based on spatial and magnitude signal decomposition
Received Dec 21, 2020; Revised Mar 3, 2021; Accepted Mar 15, 2021
In this paper, a simple color image compression system is proposed based on image signal decomposition. The RGB color bands are converted to the less correlated YUV color model, and the pixel value (magnitude) in each band is decomposed into two values: most and least significant. Because the most significant value (MSV) is strongly affected by even simple modifications, an adaptive lossless image compression scheme is proposed for it, using bit plane (BP) slicing, delta pulse code modulation (delta PCM), and adaptive quadtree (QT) partitioning followed by an adaptive shift encoder. On the other hand, a lossy compression scheme, based on an adaptive, error-bounded coding system using DCT, is introduced to handle the least significant value (LSV). The performance of the developed compression system was analyzed and compared with the universal JPEG standard; the results of applying the proposed system indicate that its performance is comparable to or better than that of the JPEG standard.
INTRODUCTION
Compression is a key mechanism in signal processing and has great significance because huge amounts of data are commonly transferred over the communication channels of a network [1]. Image compression (IC) is one of the techniques that lies under image processing; it has many applications and plays an important role in the efficient transmission and storage of images. IC is a method used for reducing the size of a digital image in order to minimize the amount of space required to store it [1]. The two fundamental elements of compression are redundancy reduction and irrelevancy reduction. Redundancy reduction aims to remove duplication from the signal source (image/video), while irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the human visual system (HVS) [2], [3]. According to the differences between the original (uncompressed) image and the reconstructed (decompressed) image, image compression is classified into lossless (exact retrieval) and lossy (approximate retrieval) [4]-[7]. Signal decomposition is the extraction and separation of signal components from composite signals, which should preferably be related to semantic units; one important role of signal decomposition is improving the performance of compression algorithms [8]. IC techniques are still experiencing many improvements under the discipline of transform coding. Transform coding schemes achieve higher compression ratios for lossy compression, but suffer from some artifacts at high compression ratios [9]. The discrete cosine transform (DCT) [10] has advantages in its potentially smaller dimension, better processing time, and compatibility with encoded data [11]; its selection as the standard for the Joint Photographic Experts Group (JPEG) is one of the major reasons for its popularity [12]. The DCT helps to separate the image parts with respect to their contribution to visual quality (i.e., high-, middle-, and low-frequency components) [11], [13]. JPEG 2000 is an image coding algorithm; it is a modified variant of JPEG that uses the discrete wavelet transform (DWT) instead of the DCT [10].
The main application of wavelet theory lies in the design of filters for sub-band coding. The basic concept behind the wavelet transform is the hierarchical decomposition of the input image signal into a series of successively lower resolution reference signals and their associated detail signals [14]. Bit plane (BP) slicing is a simple and fast technique that highlights the contribution made by a specific bit, where each bit plane is a binary image [15]. It is a separation technique in which the image is sliced into different binary planes or layers according to bit position, efficiently analyzing the relative importance of each bit of the image [16], [17]. Al-Mahmood and Al-Rubaye [16] combined bit plane slicing and adaptive predictive coding for lossless compression of natural and medical images; they utilized the spatial domain efficiently after discarding the lowest-order bits, exploiting only the highest-order bits, where the most significant bit corresponds to the last layer (layer 7) and used adaptive predictive coding, while the other layers used run-length coding. Albahadily et al. [18] presented lossless grayscale image compression using the bit plane technique and modified run-length encoding (RLE). Al-Timimi [19] introduced a hybrid technique for lossless compression of natural and medical images, based on integrating bit plane slicing and the wavelet transform along with a mixed polynomial of linear and nonlinear bases. Many research groups have developed different image-coding schemes and tried to modify them to further reduce the bit rate [20].
The main problem is the need to reduce the size of the digital image in order to minimize: i) the amount of space required to store it; and ii) the cost and time required to transfer it over the internet. Generally, the spatial domain refers to the aggregate of pixels composing an image, so a digital color image consists of 3 spatial bands (red, green, and blue), with each band pixel represented using 8 bits (1 byte). Therefore, this paper's target is to improve the image compression system using spatial signal magnitude (value) decomposition and transform coding supported by an efficient entropy encoder, where each image pixel is decomposed into two values: most and least significant. According to the importance of the most significant value (MSV), which is influenced by any simple modification, an adaptive lossless image compression system has been proposed based on BP slicing, delta pulse code modulation, and adaptive quadtree (QT) partitioning followed by an adaptive shift encoder. On the other hand, a lossy compression system has been designed to handle the least significant value (LSV), based on an adaptive, error-bounded, DCT-based compression scheme. The main contribution of this work is thus an image compression system that utilizes signal decomposition, composed of two subsystems: a lossless one that compresses the MSV and a lossy one that compresses the LSV. The rest of the paper is structured as follows: Section 2 explains the proposed compression system and method; the system is tested using some commonly used images and the test results are discussed in Section 3; finally, Section 4 concludes the contribution of this paper.
THE PROPOSED SYSTEM AND METHOD
The proposed compression system consists of three main consecutive steps; the layout of the proposed compression system is illustrated in Figure 1.
Lossless MSV encoder (LMSVE)
In this section, the LMSVE receives the MSV 2-dimensional arrays for the Y, U, and V bands, as illustrated in Figure 2. The LMSVE consists of the following processes:
Value separation
The task of this process is to isolate the MSV into two matrices, positive and negative (PMSV and NMSV, respectively), to facilitate the coding process; if no negative values occur, the NMSV matrix is discarded. The constructed matrices are passed to the next process (BP slicing).
Bit plane slicing (BPS)
The minimum number of bits required to code 2^n distinct quantities is n, with each bit represented in a layer, so bit layers 0 to n−1 represent 2^n distinct positive quantities. For the adopted layered representation of negative values down to −2^m, m bit layers are used, with the m-th layer representing the sign bit. The first step applied in this process is number base conversion: each positive or negative decimal value is converted to its binary equivalent. Each bit is then split into its corresponding layer. When a layer contains no 1's (all its content is 0's), it is marked as empty to avoid encoding it and to gain storage space.
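As a concrete illustration of BPS for positive values, the following is a minimal numpy sketch that slices a band into binary layers and flags empty ones; the function name and demo values are illustrative, not taken from the paper.

```python
import numpy as np

def bit_planes(band: np.ndarray, n_bits: int = 8):
    """Slice a non-negative integer band into n_bits binary layers.

    Layer k holds bit k of every pixel; a layer of all zeros is
    flagged as empty so the encoder can skip it."""
    planes = [(band >> k) & 1 for k in range(n_bits)]
    empty = [not p.any() for p in planes]
    return planes, empty

# Example: a 2x2 block of 4-bit values.
blk = np.array([[5, 3], [0, 8]])
planes, empty = bit_planes(blk, n_bits=4)
print(planes[0])  # LSB plane: [[1 1] [0 0]]
print(empty)      # [False, False, False, False]
```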
Delta pulse code modulation (Delta PCM)
Delta PCM is a special case of differential pulse code modulation applied to sequences consisting of 0 and 1 values. It detects a change in value from zero to one and vice versa (edges). The benefit of this process is to thin out crowded bit planes and bring each closer to a sparse matrix; this increases the empty regions and facilitates the quadtree partitioning in the next process. The delta must be calculated in a zig-zag direction (inverted from line to line), as shown in Figure 3: Figure 3(a) illustrates the direction, while Figure 3(b) explains the delta PCM calculation for a 3x3 window. Delta PCM is applied to all non-blank planes passing through this process, and the number of 1's in each resulting delta bit plane is counted. A sketch of this step follows.
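The sketch below applies an edge-detecting delta over a binary plane scanned in serpentine order. The convention that the first scanned symbol is kept as-is is an assumption; the paper does not state how the first element is handled.

```python
import numpy as np

def delta_pcm(plane: np.ndarray) -> np.ndarray:
    """Delta PCM over a binary plane, scanned in a zig-zag
    (line-inverted) order; a 1 marks a 0<->1 transition."""
    h, w = plane.shape
    # Serpentine scan: even rows left-to-right, odd rows right-to-left.
    seq = np.concatenate([plane[r] if r % 2 == 0 else plane[r][::-1]
                          for r in range(h)])
    delta = np.empty_like(seq)
    delta[0] = seq[0]                # first symbol kept as-is (assumed)
    delta[1:] = seq[1:] ^ seq[:-1]   # XOR flags value changes (edges)
    # Undo the scan to put the delta bits back into plane shape.
    out = delta.reshape(h, w)
    out[1::2] = out[1::2, ::-1]
    return out

plane = np.array([[0, 0, 1], [1, 1, 0], [0, 0, 0]])
print(delta_pcm(plane))  # [[0 0 1] [0 1 1] [1 0 0]]
```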
Adaptive quadtree (QT)
It is well known that the purpose of using a quadtree in compression is to isolate empty areas in digital images and avoid encoding and storing them. To be useful, the QT must start with a block size appropriate to the size of the empty areas in the image, to avoid continuous division in the case of small areas or missing them in the case of large areas. Equations (3) and (4) regulate the initial block size selected for the QT partitioning; the regulation is done using the rate of 1's occurring in the entered delta bitplane. The inputs to this process are the delta bitplanes, each accompanied by the sum of its non-zero values (the count of 1's) from the previous process. The outputs are the buffer of non-empty (2x2) blocks (i.e., series of 1 or 0 bits) and the QT partitioning sequence for each entered delta bitplane.
Here, x and y are the row and column indices of the entered delta bitplane; h and w are its height and width; B_S is the initial block size used in the QT; and the real increment takes values in {0.2, 0.3}.
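Since only the general behaviour of equations (3) and (4) is described (the initial block size is tuned by the rate of 1's), the following is a minimal sketch under that assumption: the threshold rule and candidate block sizes are illustrative stand-ins, not the paper's exact formulas.

```python
import numpy as np

def initial_block_size(delta_plane: np.ndarray) -> int:
    """Pick a starting QT block size from the rate of 1's: sparser
    planes get larger starting blocks (assumed thresholds)."""
    rate = delta_plane.mean()
    for size in (64, 32, 16, 8, 4):
        if rate < 1.0 / size:
            return size
    return 2

def quadtree(plane, y, x, size, blocks, seq):
    """Recursive partitioning: emit 1 for non-empty nodes and 0 for
    empty ones; non-empty 2x2 leaves go to the block buffer."""
    blk = plane[y:y + size, x:x + size]
    if not blk.any():
        seq.append(0)
        return
    seq.append(1)
    if size == 2:
        blocks.append(blk.flatten())
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree(plane, y + dy, x + dx, half, blocks, seq)

plane = np.zeros((8, 8), dtype=int)
plane[0, 0] = 1
blocks, seq = [], []
quadtree(plane, 0, 0, min(initial_block_size(plane), 8), blocks, seq)
print(seq)     # [1, 1, 1, 0, 0, 0, 0, 0, 0]
print(blocks)  # [array([1, 0, 0, 0])]
```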
Adaptive shift encoder
This process performs the final step of the lossless MSV encoder (LMSVE) by encoding the data resulting from the previous step (buffers and partitioning sequences). An adaptive shift optimizer must run before the shift encoding itself. The following steps summarize the shift coding optimizer and the adaptive shift coding process (steps 1, 2 and 3 belong to the optimizer):
− Step 1: convert each adjacent 4 bits from the buffer to their decimal equivalent, apply this to all bits in the buffer, and store the results in a 1-D integer array NUM(); since each stored block is non-empty, values fall in the range [1, 15].
− Step 2: compute the histogram of NUM() and apply histogram packing, so that the new histogram Ph() includes only the non-zero occurrence counts.
− Step 3: compute partial sums over the upper range of the packed histogram (e.g., Ph(12) + Ph(13) + Ph(14) in equation (9), and sums beginning Ph(9) + Ph(10) + Ph(11) and Ph(10) + Ph(11) in the following equations), then compute the total required bits (Tb) for each of the candidate shift coding combinations.
− Step 4: the number of bits NB (codeword length) required to store each value from NUM() is determined relative to the selected shift coding sequence; Figure 4 illustrates, via equation (18), how many bits are needed to encode each value Val under the four shift coding schemes adopted in this paper.
Steps 1 to 4 are applied to the buffer, while the QT partitioning sequence is compressed using traditional shift coding after collecting each adjacent 8 bits.
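Because the paper's four candidate combinations are only partially specified here, the sketch below assumes a generic two-part shift code (a short codeword plus an all-ones escape followed by a longer codeword) and simply searches for the cheapest configuration; the candidate set and cost rule are illustrative assumptions.

```python
import numpy as np

def total_bits(values, n1, n2):
    """Cost of a two-part shift code: values below the escape symbol
    (2**n1 - 1) take n1 bits; the rest take n1 + n2 bits (assumed)."""
    escape = (1 << n1) - 1
    values = np.asarray(values)
    return int(np.where(values < escape, n1, n1 + n2).sum())

def optimize_shift(values, max_bits=4):
    """Pick the (n1, n2) pair minimizing the encoded size Tb."""
    candidates = [(n1, max_bits) for n1 in range(1, max_bits + 1)]
    return min(candidates, key=lambda c: total_bits(values, *c))

nums = [1, 1, 2, 3, 1, 14, 2, 1]   # 4-bit groups from the buffer
print(optimize_shift(nums))        # (2, 4): 24 bits total here
```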
Lossy LSV encoder (LLSVE)
In this section, the LLSVE receives the LSV 2-dimensional arrays for the Y, U, and V bands, as illustrated in Figure 5. The LLSVE consists of the following processes:
Band partitioning
This process divides the received LSV 2-D array into non-overlapping blocks of size n x n (BLK nxn), so that the following processes are performed at the block level.
DCT transform
For each block, apply the well-known DCT [21] to separate the block into parts according to their importance, i.e., high-, middle-, and low-frequency components.
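A minimal sketch of the forward and inverse 2-D DCT on a block, using scipy's separable 1-D transform; the orthonormal normalization is one common choice and is assumed here.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block: np.ndarray) -> np.ndarray:
    """Orthonormal 2-D DCT-II applied over rows then columns."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coef: np.ndarray) -> np.ndarray:
    return idct(idct(coef, axis=0, norm="ortho"), axis=1, norm="ortho")

blk = np.arange(16, dtype=float).reshape(4, 4)
c = dct2(blk)
assert np.allclose(idct2(c), blk)  # the transform is invertible
print(c[0, 0])                     # DC coefficient = block mean * N = 30.0
```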
Quantization
This process increases the compression gain by eliminating insignificant data. Due to the distinct importance of the DCT coefficients, non-uniform scalar quantization is applied according to equations (21) and (22), whose quantities are: the DCT coefficients of the block whose upper-left corner starts at row i and column j; the corresponding quantized DCT coefficients; the block size n; the quantization matrix entry relative to each DCT block coefficient; the quantization value q of the DC coefficient, set to 2n; the quantization value q0 of the AC coefficients, q0 ∈ {2, 3}; and the scaling factor α ∈ {0.1, 0.12, ..., 0.2}.
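The exact quantization matrix of equations (21) and (22) is not fully specified here, so this sketch assumes a step size that grows linearly with frequency through the scaling factor α, with the DC step fixed to 2n as stated in the text; the ramp Q = q0(1 + α(i + j)) is an assumption, not the paper's formula.

```python
import numpy as np

def quantize(C: np.ndarray, n: int, q0: int = 2, alpha: float = 0.1):
    """Non-uniform scalar quantization of an n x n DCT block."""
    i, j = np.indices((n, n))
    Q = q0 * (1.0 + alpha * (i + j))  # assumed AC step ramp
    Q[0, 0] = 2 * n                   # DC quantization value, as stated
    return np.round(C / Q).astype(int), Q

C = np.zeros((4, 4))
C[0, 0], C[0, 1] = 30.0, 5.0
Cq, Q = quantize(C, n=4)
print(Cq[0, 0], Cq[0, 1])  # 4 (30 / 8) and 2 (5 / 2.2)
```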
Insulation window
It is known that the large, effective DCT coefficients are concentrated in the upper-left quarter of the block, so to improve the shift encoder and increase the compression gain, the large-value coefficients are moved to their own buffer. This is done by opening a window of size m x m, where m is an integer in the range [10..16] chosen relative to the block size n, starting at the upper-left corner of the block; each quantized DCT coefficient within the window that is greater than a predefined value S is isolated into the buffer. All quantized DCT coefficient blocks pass through the insulation window according to the steps of algorithm (1) in Figure 6.
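A minimal sketch of the insulation window: coefficients in the top-left m x m region whose magnitude exceeds S move to a value buffer, with a mask recording their positions for recombination. The function name and demo values are illustrative; using the magnitude for the threshold test is an assumption.

```python
import numpy as np

def insulate(Cq: np.ndarray, m: int, S: int):
    """Isolate large quantized coefficients inside the top-left m x m
    window, leaving zeros plus a recombination mask behind."""
    big = np.abs(Cq[:m, :m]) > S
    vals = Cq[:m, :m][big].copy()   # Vbuf-style value buffer
    mask = np.zeros_like(Cq, dtype=np.uint8)
    mask[:m, :m] = big              # Vmask-style position record
    out = Cq.copy()
    out[:m, :m][big] = 0            # cleared from the main buffer
    return out, vals, mask

Cq = np.zeros((16, 16), dtype=int)
Cq[0, 0], Cq[0, 1], Cq[5, 5] = 40, 9, 12
out, vals, mask = insulate(Cq, m=10, S=10)
print(vals)       # [40 12]
print(out[0, 0])  # 0
```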
Zigzag arrangement
This process converts the DCT coefficient blocks from 2-D arrays into a one-dimensional buffer to increase the benefit of spatial correlation, as illustrated in Figure 7.
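The standard zigzag scan, which reads the block along anti-diagonals so low-frequency coefficients come first, can be sketched as follows; this is the conventional JPEG-style ordering, assumed to match Figure 7.

```python
import numpy as np

def zigzag(block: np.ndarray) -> np.ndarray:
    """Read an n x n block along anti-diagonals, alternating direction,
    so low-frequency coefficients lead the 1-D buffer."""
    n = block.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  -p[1] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[i, j] for i, j in order])

blk = np.arange(16).reshape(4, 4)
print(zigzag(blk))  # [ 0  1  4  8  5  2  3  6  9 12 13 10  7 11 14 15]
```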
Map to positive
After the highest values have been separated out, the 1-D buffer of quantized DCT coefficients passes to this process, which converts all the numbers to positive values to avoid coding complexity, according to equation (23) [24], [25], where Xi is the i-th element value registered in the buffer.
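Equation (23) is not reproduced above, so the sketch below uses one common even/odd folding of signed integers onto non-negative ones; the paper's exact mapping may differ.

```python
def to_positive(x: int) -> int:
    """Fold signed values onto non-negative integers (even/odd mapping;
    an assumed stand-in for equation (23))."""
    return 2 * x if x >= 0 else -2 * x - 1

def from_positive(y: int) -> int:
    """Inverse mapping used by the decoder."""
    return y // 2 if y % 2 == 0 else -(y + 1) // 2

# Round-trip check over a small signed range.
assert all(from_positive(to_positive(x)) == x for x in range(-5, 6))
```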
Shift encoder
This process receives three buffers: Vbuf(), which contains the highest quantized DCT coefficients; their mask buffer Vmask(), needed to recombine the values; and buf''(), the lowest quantized DCT coefficients after conversion to positive. Owing to this separation, the shift encoder processes each buffer separately to increase the compression gain, and the output for each buffer is appended to the compressed data.
Decoding system
This system reconstructs the decompressed image. The compressed data from the LLSVE, appended to that of the LMSVE, is passed to a decoding system consisting of two sub-systems: one for the LMSVE compressed data and one for the LLSVE compressed data. Each decoding subsystem consists of the same processes proposed in the corresponding encoding subsystem, but in reverse order. The data resulting from the two decoding subsystems are then recombined to reconstruct the new Y, U, and V bands, which are transformed back to the RGB color model to produce the decompressed image.
RESULTS AND DISCUSSION
The proposed system was implemented in the Embarcadero RAD Studio 2010 (Delphi 2010) programming language. Tests were conducted on a Lenovo laptop (Intel(R) Core(TM) i5-3337U CPU, 1.8 GHz, 4 GB RAM) running Windows 8, using the well-known Lena and Baboon image samples (size 256x256 pixels, color depth 24 bits), shown in Figure 8. An image compression algorithm should attain a trade-off between compression ratio and image quality [26], so the peak signal-to-noise ratio (PSNR, measured in dB) was used to assess the difference between the reconstructed and original images, and the compression ratio (CR) was used to describe the compression gain. Table 1 lists the considered control parameters (names and default values), selected after comprehensive tests. Table 2 illustrates the effect of Stp on the number of bytes required to encode the LMSVE output (NBM) for all tested images; it shows that increasing Stp relatively decreases NBM in most cases. Table 3 illustrates the effect of Stp, q0 and α on the number of bytes required to encode the LLSVE output (NBL) for the Baboon image when n = 64. The Stp effects on NBM in Table 2 are combined with NBL in Table 3 to compute the CR of the overall compression system. Table 3 shows that increasing the quantization factors (q0 and α) increases the CR and decreases the PSNR of the retrieved image. Table 4 illustrates the effect of n, Stp and α on CR and PSNR for the Lena image with q0 = 2. Figure 9 shows the effect of n and Stp on CR and PSNR for the Baboon test image, with q0 = 2 and α = 0.01.
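For reference, the two headline evaluation measures used in these tables can be computed as in the minimal sketch below; the helper names are illustrative, and CR is taken as the raw-to-compressed size ratio.

```python
import numpy as np

def psnr(orig: np.ndarray, recon: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((orig.astype(float) - recon.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def compression_ratio(raw_bytes: int, compressed_bytes: int) -> float:
    """CR = uncompressed size / compressed size."""
    return raw_bytes / compressed_bytes

# A 256x256, 24-bit image occupies 256*256*3 bytes uncompressed.
print(compression_ratio(256 * 256 * 3, 16_384))  # 12.0
```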
From Figure 9, we conclude that when n = 32, for different Stp values, the compression ratio is close to lossless and the PSNR of the reconstructed image is above acceptable levels (37.33-37.51 dB), as the effectiveness of the LMSVE dominates that of the LLSVE. When n increases to 64, Stp becomes more active, producing a noticeable increase in the compression ratio while the PSNR of the retrieved image decreases slightly but remains at very acceptable levels (33.04-33.49 dB). Finally, when n = 128, the influence of the LLSVE begins to overwhelm that of the LMSVE, producing an inverse relationship, with a clear difference, between the PSNR of the decompressed image and the CR: the CR increases significantly while the PSNR clearly decreases, especially when Stp = 130 or 150. The results of the proposed compression system are compared with the universal JPEG standard in terms of compression ratio and visual quality; Table 5 presents this comparison for the Lena test image.
CONCLUSION
From the tests conducted on the proposed system, the following remarks emerge. The system is progressive, taking into account the importance of the most significant pixel values (MSV): the MSV takes a small number of bits after encoding and is encoded without error, while the least significant pixel values (LSV), which require a large number of bits, are handled with lossy coding. Increasing the control parameter Stp (the cut-point bit) relatively decreases the number of bits required to encode the MSV in most cases, as shown in Table 2. The control parameter n (the block partitioning size), tested using a non-overlapping scheme, plays an important role in the proposed compression system: n = 32 biases the system toward the LMSVE, while n = 128 biases it toward the LLSVE; n = 64 represents the balancing mode between the two subsystems, producing very encouraging values of CR and PSNR, as shown in Table 4 and Figure 9. In future work, the entropy encoder could be developed further to increase the compression ratio, or multi-stage transforms could be used.
| 4,683 | 2021-10-01T00:00:00.000 | ["Computer Science"] |
The self as the locus of morality: A comparison between Charles Taylor and George Herbert Mead's theories of the moral constitution of the self
This paper provides a critical comparison of two leading exponents of the relationship between morality and selfhood: Charles Taylor and George Herbert Mead. Specifically, it seeks to provide an assessment of the contribution each approach is able to make to a social theory of morality that has the self at its heart. Ultimately, it is argued that Taylor's phenomenological account neglects the significance of interaction and social relations in his conceptualisation of the relationship between morality and self, which undermines the capacity of his framework to explain how moral understandings and dialogic moral subjectivity develop in a world of shared meaning. I then argue that Mead's pragmatist interactionist approach overcomes many of the flaws in Taylor's framework, and offers a grounded conceptualisation of the relationship between self and morality that is able to provide a basis for a properly social account of moral subjectivity.
| INTRODUCTION
This paper provides a critical comparison of two leading exponents of the relationship between morality and selfhood: Charles Taylor and George Herbert Mead. Despite their stature, and despite considerable overlap between their work, comparison between the two is surprisingly absent. As well as seeking to redress this lack, this paper provides a critical evaluation of the extent to which the work of Taylor and Mead respectively can offer a workable theory of the relationship between selfhood and morality. Both Taylor and Mead pave the way for the argument that a social conception of self is necessary to understanding the significance of morality to people's lives. This is an argument that is increasingly taken up in contemporary social theories of morality (e.g., Abbott, 2020; Hookway, 2017; Morgan, 2016), meaning that a comparison between Mead and Taylor is particularly timely. The comparison in this paper thus seeks to provide an assessment of which approach offers the soundest basis for a social theory of morality that has the self at its heart.
The lack of comparison thus far perhaps reflects Taylor's own neglect of Mead, who is barely mentioned throughout Taylor's work, even in Sources of the Self (1989), which covers themes analogous to those found in Mead's oeuvre (Joas, 2000). Where Mead is mentioned, it is by way of passing critique (Taylor, 1989, 1995, 2016). Taylor's critiques of Mead have been neither expounded nor challenged in much depth, something that this paper also seeks to address. While the critiques are hardly resounding, they reveal important differences between the two approaches to morality and selfhood. I argue that the key differences between the two approaches lie in the relative significance accorded to the social, specifically social interaction, in the formation and enactment of moral selfhood. More significantly, I maintain that while Taylor's exploration of the intellectual history of Western moral sources is exemplary, the way he hinges his argument on a 'transcendental' phenomenological account of identity (1989, p. 32), rather than an interactional social ontology, has problematic consequences both for how Taylor understands the relationship between morality and selfhood, and for the use of his work in a social theory of this relationship. While Calhoun (1991, p. 233) argues that Taylor's work does nonetheless 'offer extremely valuable guidelines and first steps to this potential sociological enterprise', I argue that Taylor's neglect of social interaction in his discussions of the self, morality, and identity (interminably social phenomena as they are) leads to conceptualisations of the relationship between these phenomena that are problematic.
I begin by setting out Taylor's contribution to understanding the moral self. Taylor's moral theory has perhaps not received the attention in social theory that some feel it deserves (Calhoun, 1991), and so the first part of this paper will attempt to set out what Taylor's wide-ranging arguments are and how they relate to a social theory of morality. The second section will then explore the limitations of Taylor's framework. Specifically, I argue that Taylor's phenomenological account provides an overly intellectualist depiction of identity and neglects the fundamental significance of interaction and social relations in his conceptualisation of the relationship between morality and selfhood. As well as meaning that his framework is not adept at explaining how moral understandings develop and are enacted, I argue that this neglect means Taylor is unable to sufficiently account for the intersubjective development of dialogic moral subjectivity within a world of shared meaning, something that he takes to be definitive of his moral theory. In the final section, I address Taylor's criticisms of Mead. I argue these criticisms are misplaced, but I also aim to show that Mead in fact provides a more workable argument for the kind of individuated dialogic moral subjectivity that Taylor himself is keen to preserve. This leads me to argue that a Meadian approach overcomes many of the flaws in Taylor's framework and offers a grounded conceptualisation of the relationship between self and morality that is able to provide the basis for a properly social account of moral subjectivity.
| TAYLOR AND THE MORALITY OF SELFHOOD
Sources of the Self (1989; hereafter referred to as Sources) provides Taylor's preeminent contribution to moral philosophy. It is in Sources that Taylor seeks to redraw the significance of selfhood to philosophic questions of morality. A work of the grandness of Sources inevitably delivers a number of significant contributions; however, two principal aims can be identified. The first is to show 'how deeply flawed any account of human personhood must be which tries to address identity separately from moral subjectivity' (Calhoun, 1991, p. 233), and the second is to develop a neo-hermeneutic account of the making of modern identity, which fuses an argument for seeing intellectual historicity as being integral to the formation of the modern self with Taylor's own conception of 'philosophical anthropology' (described as 'the study of the basic categories in which man and his behaviour is to be described and explained' [Taylor, 1964, p. 4]). Taylor's (1989, p. 3) first task is thus to demonstrate that '[s]elfhood and the good, or in another way selfhood and morality, turn out to be inextricably intertwined themes'. His intention is to re-centre the significance of personhood in moral thought, an aim motivated by what he sees as the necessary task of broadening the horizons of moral philosophy beyond its proceduralist hegemony, which is restrictively concerned with 'what is right to do rather than what it is good to be' (Taylor, 1989, p. 3). This 'cramped and truncated view' has tended to ignore the 'dimension of our moral consciousness and beliefs altogether and has even seemed to dismiss them as confused and irrelevant' (Taylor, 1989, pp. 3 and 4). Contrary to this, Taylor maintains that a more complete conception of morality, and indeed a 'thicker' depiction of human agency, requires us to consider what underlies our conceptions of what matters to us and of what makes our lives meaningful. These are issues of moral concern, in a broadened sense of the term, which involve what Taylor (1989, p. 4) refers to as 'strong evaluations' in that they entail deeply moralised 'discriminations of right or wrong, better or worse'. Such strong evaluations provide the means through which we give meaningful expression to our lives and through which we understand our identity, without which understanding ourselves and our place within the world would be impossible.
Taylor here makes the link that he sees as integral to conceptualising how selfhood is inextricably tied to fundamentally moral sources, because he argues that these strong evaluations are necessarily tied to frameworks of the good. We 'cannot help', as Taylor (1989, p. 59) puts it, drawing on strongly qualified evaluations both in the course of giving meaning to our lives and in the doing of social life, in 'deliberating, judging situations, deciding how you feel about people', and these evaluations are themselves based on 'hypergoods', which provide the moral background of our strong evaluations (Taylor, 1989, p. 63). Hypergoods are 'constitutive goods' in that they provide the foundational framework upon which our more personal and everyday evaluations are made; they 'provide the standpoint from which these must be weighed, judged, decided about' (Taylor, 1989, pp. 93 and 63). Taylor (1989, p. 63) takes the notion that 'all humans are to be treated equally with respect' as an example of a hypergood in modern society. While not universally espoused or enacted, it provides an overarching framework through which we understand and articulate notions of obligation to others and often questions of what it is good to be. Such frameworks 'provide the background, explicit or implicit, for our moral judgements' and frame the strong evaluations we necessarily make as we interpret our world and make sense of who we are and how we should be (Taylor, 1989, p. 26).
Taylor's argument up to this point is that (1) strong evaluations are an inherent facet of human social life, (2) we necessarily draw strong evaluations as we make sense of our lives, (3) these strong evaluations are necessarily tied to frameworks of the good, and (4) therefore, we necessarily draw on moral frameworks in the construction and articulation of our own self-understanding and in the course of doing social life. Taylor (1989, p. 32) then seeks to connect these points through his 'transcendental' 'phenomenological account of identity', which is phenomenological in the sense that it aims to explore 'how we actually make sense of our lives, and to draw the limits of the conceivable from our knowledge of what we actually do when we do so', and argues that it would be inconceivable for us to be able to make sense of our lives outside the bounds of moral frameworks. Taylor argues that '[m]y identity is defined by the commitments and identifications which provide the frame or horizon within which I can try and determine from case to case what is good, or valuable, or what ought to be done, or what I endorse or oppose', that 'an identity is something that one ought to be true to, can fail to uphold', and thus further still that '[m]y self-definition is understood as an answer to the question Who I am' (Taylor, 1989, pp. 27, 30 and 35). But for Taylor (1989, pp. 33 and 35), answering this question, which he conceptualises as being 'essential to human personhood', 'finds its original sense in the interchange of speakers', because these 'webs of interlocution' pass on the frameworks through which we are able to form and articulate this strong sense of identity. There is a definite social element to the argument: strong identification of self relies on being able to utilise the linguistic resources into which we are necessarily situated, and the human universe is so awash with terms of strong evaluation that moral frameworks form an integral and inescapable aspect of the linguistic resources in which we are situated as selves (Abend, 2014). But Taylor's overall intention is to tie his phenomenological account of identity in with a deeper ontological argument about how personhood and human agency are necessarily bound up with moral frameworks. Taylor explicitly seeks to move beyond what he sees as the weaker hypothesis that it is 'contingently true' that humans are socialised into moral understandings, towards the 'stronger' hypothesis that there is a moral dimension to human agency (Taylor, 1989, p. 22), which must necessarily 'appear to itself against a background of strong value' (Smith, 2002, p. 92). Taylor's (1989, p. 27) thesis is that 'living within such strongly qualified horizons is constitutive of human agency', to the extent that 'stepping outside these limits would be tantamount to stepping outside what we would recognize as integral, that is, undamaged human personhood'. He argues that his 'discussion of identity indicates […] that it belongs to the class of the inescapable, i.e., that it belongs to human agency to exist in a space of questions about strongly valued goods' (1989, p. 31), that '[w]hat this brings to light is the essential link between identity and a kind of orientation. To know who you are is to be orientated in moral space', and that 'this orientation, once attained, defines where you answer from, hence your identity' (1989, pp. 28 and 29). 'But then what emerges from all this is that we think of this moral orientation as essential to being a human interlocutor […] To understand our predicament in terms of finding or losing orientation in moral space is to take the space which our frameworks seek to define as ontologically basic' (1989, p. 29).
Taylor's argument thus becomes that the linguistically mediated moral frameworks in which we necessarily reside not only express our identity, but also express and articulate an orientation towards the good that he sees as a basic facet of the 'ontology of the human': moral frameworks provide the 'background picture' which allows us to articulate 'our moral and spiritual intuitions' (1989, pp. 5 and 8). Taylor's point is that our base-level moral reactions towards death and suffering 'are almost like instincts', and that these intuitions are manifested in the broadest moral frameworks of 'all human societies' (1989, p. 4). Overarching moral frameworks express basic moral intuitions, while at the same time providing the culturally specific renderings and linguistic resources through which this ontological orientation is expressed; the cultural history of our moral world provides the linguistic means through which we 'make sense' of and 'articulate these intuitions' (Taylor, 1989, pp. 30 and 8). This point represents an important tension in Taylor's work, which is to match up a notion of basic ontological moral orientations with the argument that the expression of such orientations is culturally and historically specific.
Identity is the medium through which Taylor attempts to resolve this tension. He uses it to encapsulate how features that he takes to be fundamental to human agency (including articulating questions of who we see ourselves as being, and being oriented within moral space) are linked with broad historically shaped moral frameworks, which provide the linguistic resources through which we interpret our self-understanding in terms of our identity. Taylor's argument that moral selfhood is given expression within historically oriented frameworks of the good thus leads him to attempt to fuse his ontological account of moral personhood with a neo-hermeneutic account of intellectual history, which he sees as setting the horizons within which the modern moral identity is understood (Calhoun, 1991).
The second part of Sources, then, takes us on a tour de force exploration of the intellectual history of Western thought, examining in depth how significant facets of the Western canon have conceptualised personhood, and how this has moulded the frameworks through which modern identity and moral selfhood are understood beyond the ivory towers of intellectualism. Taylor's extensive discussions of those he sees as being particularly significant to modern conceptions of selfhood (from Plato, Augustine, Descartes, Kant, and Locke, to the thought of Utilitarianism, Nietzsche, and Romanticism, and up to contemporary espousals of 'naturalism') are too detailed to be dealt with in depth here. In general terms, Taylor breaks down his argument into three major movements in thought that he sees as being of particular significance to the moral terms through which identity is understood in modern society (Hittinger, 1990).
The first is the instantiation of 'radical' reflexiveness (Taylor, 1989, p. 131), which involves an understanding of inwardness 'as a basic ontological property like "having" arms and legs' (Hittinger, 1990, p. 114). According to Taylor, such inwardness was introduced into Western discourse initially through the work of Augustine, who sought to foster an approach to self-understanding that moved beyond simply being an object of one's own consideration, and into a deeper condition of attempting to experience our own experience (Calhoun, 1991). Through Descartes and Locke, this developed through the modern era into a stronger reification of the objectification of self, which ultimately led to what Taylor (1999) elsewhere depicts as the erroneous modernist view of monological consciousness, through which the disembodied rational reasoning of the autonomous subject came to be seen as both the hallmark of personhood and the condition of moral subjectivity.
The 'second major aspect of the modern identity' is what Taylor (1989, p. 211) refers to as the 'affirmation of ordinary life' as the primary arena for understanding and cultivating the self in relation to the good. From the Reformation, 'the locus of the good life' was dislocated from 'higher activities', such as 'the supreme importance for politics' for Aristotle or the supposed 'citizen ethic' and 'aristocratic ethics of honour' that Taylor identifies in early modern Europe, and relocated into aspects of human life concerned with labour, marriage, and family. Rather than being 'outranked' by 'higher activities', such facets of ordinary life became the proper locus of a good existence (1989, pp. 212 and 213). The third major feature of modern identity, Taylor argues, was engendered by the 'expressivist turn' of 19th-century philosophy and literature, which instilled the assumption that 'authentic' selfhood necessitates the discovery and articulation of our inner nature (Hittinger, 1990). The expressivist turn embedded the notion 'that each individual is different and original, and that this originality determines how he or she ought to live', and instilled 'the obligation on each of us to live up to our originality', something which Taylor sees as being 'one of the cornerstones of modern culture' (1989, pp. 375 and 376).
These are the three major sources of the modern identity that Taylor identifies, and the intellectual historicity of these combined sources provides the hermeneutic horizons of how the modern self is articulated and understood in moral terms. While Taylor is quick to acknowledge the 'gains' (1989, p. 61) that have been made in our moral understanding of the world and ourselves through these terms, he is deeply critical of their consequences for our modern moral outlook, which he takes to be broadly individualist, disenchanted, and couched in emotivism. In often quite moralistic and nostalgic terms, Taylor (1989, p. 508) argues that we live in a world defined by 'the loss of substance, the increasing thinness of ties and shallowness of things… A society of self-fulfillers, whose affiliations are more and more seen as revocable'.
The sources of the modern identity, as they developed vis-à-vis Enlightenment thought, have also had consequences for how morality has been conceptualised intellectually, which Taylor argues has resulted in modern understandings of morality being framed in a number of incongruous ways. He maintains that Enlightenment thought framed moral questions in terms of rationalism while discounting cultural historicities, intuitions, and moral aspects of selfhood as necessary grounds on which moral issues are located and understood. This Enlightenment stance has firstly led to modern understandings of morality being framed by what Taylor refers to as 'naturalism'. This is something of a catch-all phrase used by Taylor to depict a modernist tendency towards 'scientistic reductionism' (Frisina, 2002, p. 17), which encompasses the 'radical reduction' of the significance of meaning to human lives (Taylor, 1989, p. 19). Taylor argues that the naturalist outlook renders the 'issue of meaning a pseudo-question' (1989, p. 19), and assumes that the layers of 'moral, social and religious meaning that appear to constitute human agency are really something else, something that is only properly understood from the point of view developed by modern natural science' (Smith, 2002, pp. 6 and 7). However, similarly to MacIntyre (1985), Taylor argues that the Enlightenment attempt to abstract morality from the cultural and spiritual meanings, intuitions, and identities upon which it is in fact based has led to moral issues being seen as intractable in the rationalist terms set by modernist epistemologies themselves. This intractability has led, in Taylor's eyes, to the coterminous perception, also common in modern society, that morality is either beyond the bounds of what can be systematically understood or instead reflects little more than emotivism.
Taylor thus sees the proliferation of naturalism as circumscribing the epistemic horizons of how morality and selfhood are understood in the modern world in several ways. He of course sees it as being instrumental in undermining the significance of spiritualism as a moral source. But he also argues that it is inculcated in the dominance of proceduralist outlooks in philosophy, which attempt to provide principles for guiding 'correct' moral action in abstraction from circumstance and subjective meaning (Hittinger, 1990). Furthermore, and almost conversely to the rationalist explication of procedures for moral action, Taylor sees the naturalist mindset as constituting the context in which morality is posited in terms of emotivism, or reduced altogether to a facet of cognitive behaviour, which, if accessible at all, is accessible to behavioural, psychological, and even neurological analysis alone.
In terms of selfhood, Taylor (1989) is deeply critical of the propensity of Enlightenment thought to characterise subjectivity in terms of detached rationality, which casts the significance of personal and cultural meaning as inferior, rather than as an inescapable facet of personhood. Taylor likewise sees the naturalist propensity to 'understand human beings as self-contained objects of scientific study' as endemic in what he quite unceremoniously lumps together as 'social scientific' approaches to selfhood (Calhoun, 1991, p. 234), in which selves are apparently understood as being socially 'introjected', or established only in relation to specific social situations, or simply as cognitive renderings (Taylor, 1995, p. 65).
It is this polymorphous combination of modernist depictions of morality and selfhood that has rendered the 'inextricably intertwined' nature of these concepts inexplicable to modern thought (Taylor, 1989, p. 3). What all of these approaches have thoroughly 'removed from the explanandum' is the meanings and the terms of strong evaluation, circumscribed by moral frameworks, through which people make sense of their lives (1989, p. 58). It is here that Taylor (1989, p. 510) thus links back to the earlier argument of the book, as he ties in his critique of naturalism with an affirmation of a moral theory that affords a place to historically-orientated and biographically-understood moral subjectivity, which is articulated by 'the subject through languages that resonate within him or her'. Taylor (1989, p. 509) argues that the proliferation of the modern sources of identity has resulted in a distinct 'problem of the loss of meaning in our culture'. Yet, as a result of this, and as a result of how the modern identity is understood in relation to these sources, he argues that the individual self becomes the necessary locus of moral understanding: 'We are now in an age in which a publicly accessible cosmic order of meanings is an impossibility. The only way we can explore the order in which we are set with an aim to defining moral sources is through this part of personal resonance' (1989, p. 512). What is thus needed is a moral theory in which meaning and 'making sense' of the world is taken to be ontologically basic, which leads to the articulation of one's identity in relation to linguistically-mediated frameworks as a necessary feature of personhood, but which thereby also allows the meaningfulness of moral sources to be understood and engaged with in a way that is 'inseparably indexed to a personal vision' (1989, p. 510). However, Taylor (1989, p. 72) argues, 'what I call the exploration of order through personal resonance' is inadequately considered by most philosophic perspectives, which, due to their proceduralist emphasis, are unable to consider 'the meaning things have for us' in serious terms.
Taylor thus posits a moral theory that centralises a place for efficacious moral subjectivity, which is able to articulate moral standpoints that reflect who we understand ourselves as being within a world of shared meaning (Taylor, 1989). This is something that Taylor sees both Mead's theory of the socially emergent self (Taylor, 1995) and Habermas's communicative ethics (Taylor, 1989) as being unable to facilitate. Similarly to Taylor (1989, p. 509), Habermas, who 'borrowed a great deal from George Herbert Mead', sees the self as being 'constituted by language, hence by exchange between agents'. However, Taylor (1989, p. 510) argues, without a phenomenological account of personhood that centres interpretiveness and articulation of meaning as foundational to subjectivity, in Habermas's (and by extension Mead's) explanation 'there is no coherent place left for an exploration of the order in which we are set as a locus of moral sources'. On Taylor's (1989, p. 510) account, this 'order is only accessible through personal, hence "subjective", resonance', in which the individual is able to-and necessarily does-articulate the meaning of 'moral sources outside the subject through languages that resonate within him or her'.
I will go on to set out why Taylor is incorrect in his diagnosis that the Meadian basis of Habermas's argument is unable to leave room for an articulating moral subjectivity. Further, I will argue that by basing its explanation on the social development of individuated subjectivity, rather than on a phenomenology of personhood, a Meadian approach is in fact much better placed to explain the emergence of such a moral subjectivity.
| CRITIQUING TAYLOR'S VIEW OF MORAL SELFHOOD
Critiques have been levelled against Taylor's theory of moral selfhood on a variety of grounds. Several commentators have questioned the claims made by Taylor that the relationship between moral sources and human subjectivity is ontologically essential (Frisina, 2002; Hittinger, 1990; Kerr, 2004; Smith, 2002). As Smith (2002, p. 117) has argued, from the basis of Taylor's phenomenological account of personhood and his account of the role moral sources play within human agency, 'it is not clear why moral sources must feature in an ontology of the human'. As discussed, Taylor bases his ontological claim that 'orientation in relation to the good is essential to being a functional human' upon a phenomenological account of personhood, which ultimately rests upon the argument that '[t]o lose this orientation, or not to have found it, is to not know who one is', and thus a person without such an orientation would necessarily be 'in the grips of an appalling identity crisis', and if 'the person doesn't suffer this absence of frameworks as a lack, isn't in other words in a crisis […] we should see such a person as deeply disturbed' (1989, pp. 42, 29 and 31). Taylor's resort to identity crisis, which in itself is neither empirical proof nor ontological truth, is telling of the fact that, within the framework he sets for himself, 'all Taylor can do is rely upon the phenomenological argument that we cannot imagine ourselves operating in the world without engaging in a continual evaluative process' (Frisina, 2002, p. 18). Even if we did accept the ontological stature of moral sources (something which is itself questionable), this 'would still not entitle us to say that human subjectivity is essentially constituted by moral sources' on the basis of Taylor's argument (Smith, 2002, p. 117).
There are analogous problems with Taylor's conceptualisation of identity. Taylor posits his concept of identity in reified terms, as being 'defined by the commitments and identifications' that allow me to determine 'what is good, or valuable, or what ought to be done', and as 'something that one ought to be true to' (1989, pp. 27 and 30). This is a much more concretised and much less socially dynamic picture of identity than most other dominant conceptualisations allow (Flanagan, 1990). Indeed, Taylor has been criticised for providing an overly intellectualist and overly moralised depiction of identity (Flanagan, 1990). Taylor's conceptualisation has been characterised as erroneously intellectualist in that it defines identity in terms of proverbially 'higher' intellectual and reflective faculties of the subject, notably in terms of the individual's capacity to articulate and live up to commitments that they take to be definitive of who they are (Flanagan, 1990). This relates to the latter charge that Taylor's conceptualisation of identity is overly moralistic. Taylor (1989, p. 28) explicitly defines identity through moral orientation, using his conceptualisation of identity as an 'essential link' connecting his ontology of the person with moral sources. It is hard not to read his concept of identity as being formulated in the way that it is specifically to achieve the ends of this ontological argument. Even if this were not the case, Taylor seems to exaggerate 'the role played by moral principles in constituting identity' (Smith, 2002, p. 95). Combined, the intellectualist and moralised conceptualisation of identity that Taylor presents has been accused of being decidedly 'top-down', in the sense that it assumes that identity is defined by our deliberations and articulations of who we are in relation to the values and commitments we hold (Rorty & Wong, 1990, p. 36).
Elsewhere, this is a line of critique that Taylor (2004, p. 73) himself is keen to make against the flaws of the 'modern epistemology'. Beyond his moral philosophy, Taylor's intellectual project has cohered around philosophic critiques that challenge deeply embedded Enlightenment notions of 'monological' consciousness (Taylor, 1999, 2004). In such accounts, Taylor fully advocates setting 'the primary locus of the agent's understanding in practice', which he takes to mean that 'much of our intelligent action in the world' reflects a partially articulated understanding of the social world into which we are habituated (Taylor, 1999, pp. 33 and 34). Here, Taylor (1999, p. 35) depicts firm and clear articulations based on reflective self-awareness as rare 'islands in the sea of our unformulated practical grasp of the world'. Several commentators have thus identified a disconnect between the subject presented in Taylor's Heideggerian-Wittgensteinian analytic arguments for centring human understanding in terms of practice, and the subject presented in his moral theory, which is orientated around the apprehending and articulation of moral sources in relation to an intimate discursive self-understanding (Kerr, 2004; Shapiro, 1986). The point here is that Taylor's account of the significance of moral sources to personhood 'is surprisingly cognitive and discursive', and provides little insight into inarticulate moral activity (Calhoun, 1991, pp. 261 and 262).
Furthermore, while Taylor argues that our articulations are crucial to understanding ourselves, a rounded consideration of how identities and moral selfhood are experienced and lived out in practice is left largely unexplored, neglected in favour of an extensive exploration of the (Western) intellectual history through which the parameters for understanding modern identity have been constituted. Joas (1996) argues that the content and course of this argument give the impression that Taylor was interested in experience only when it provides a springboard for his contemplation of intellectual history. Indeed, when compared alongside works that cover similar themes, such as William James's work on religious experience, Dewey's work on ethics, or Mead's interactional arguments on the self, it quickly becomes evident how sparse the references to experience and practice in Taylor's work are (Joas, 1996).
This highlights notable absences in Taylor's historiography, for while he ostensibly seeks to exhaustively explore the intellectual horizons that frame the modern identity, the intellectual history that Taylor describes makes only brief remarks about pragmatism. Joas (2000, p. 142) describes Taylor's 'downright spectacular' disregard of pragmatism not simply as a matter of thematic selectivity, but rather as a telling omission that represents a disregard of social and concrete experience in Taylor's moral theory, which is seemingly considered secondary to philosophic history in the constitution of modern moral life, and this has significant consequences for the systematic claims of Taylor's arguments.
Indeed, as Calhoun (1991, p. 260) reiterates, Sources 'is a book almost exclusively about those intellectual elites, written with no more than passing reference to some very important sociological factors and questions':

[Taylor] presents us with a history of the transformations producing the modern self written almost entirely through "great men"; he gives little attention to how or in what degree this process influenced, reflected, or was in tension with the lives and thought of women or other men, how it may have varied systematically by social context or position, or how it was shaped by broader patterns of social change. (Calhoun, 1991, p. 233)

Taylor (1989) of course acknowledges that intellectual history cannot be detached from sociocultural history, nor from economic and political change. However, he argues forcibly that the themes he identifies as being integral to modern identity are the distinct product of shifts in intellectual history. Claims such as '[i]t is hardly an exaggeration to say that it was Augustine who introduced the inwardness of radical reflexivity' (p. 131) or that an 'epoch-making' change occurred when 'Descartes situates the moral sources within us' (p. 143) are commonplace in Taylor's historiography, which suggests an overestimation of the role of philosophers in constructing the modern identity and what most people take to be moral in the modern world. While philosophic discourse has of course had a profound impact on how the self and morality are understood, Taylor often depicts shifts in moral frameworks as being much more dependent on the thought of philosophers than seems feasible, rather than seeing such frameworks and shifts as being tied to a complex and emergent nexus of social, cultural, political, and economic relations that extends far beyond philosophic treatise. Capitalism, for example, features only occasionally in Taylor's discussion of the sources of modern individualism, and as Calhoun (1991, p. 260) illuminates:

Taylor notes the role of the eighteenth-century novel, but not the rise and partial popularization of the university, the expansion of the reading public, the spread of new media, […] the introduction of democratic politics, the rise of state bureaucracies, the shrinking size of the family […] All of these unquestionably have played a role in the transformation of moral sources and the reconstitution of selfhood.
As well as neglecting social and cultural transformations outside of intellectual history that have shaped the frameworks of modern identities and moral understandings, Taylor is similarly indifferent to the significance of direct social relationships in moulding our identities and moral sensibilities on a personal level. He scarcely acknowledges the 'power of our strongest social relations' in defining our identity and in orienting and moulding our moral perspectives (Calhoun, 1991, p. 262). It seems quite plain that our moral understandings, identities, and actions reflect 'concrete, highly immediate, and even embodied sensitivity to how our actions fit into the relationships we most value', and also that such relationships 'become moral sources' in themselves (Calhoun, 1991, p. 262). Empirical research has illustrated that while family life is 'animated by and linked to wider notions of right and wrong' (Holdsworth & Morgan, 2007, p. 405), which are drawn from 'ideas about moral obligations derived from wider culture' (Finch, 1989, p. 143), interpretations of one's own responsibility and obligations towards one's family vary considerably in reflection of the circumstances and expectations of the familial relationships, with subjective interpretations of responsibilities and obligations developing and emerging interpersonally between family members, often in relation to unfolding situational contexts, which mould assessments of what the 'proper thing to do' is in practice (Abbott, 2020). Taylor's neglect of the significance of social relationships means that his conceptual framework is ill-equipped to explore how moral understandings and identities develop and how these are lived at the level of practice. This is of great significance because, as I will go on to argue alongside Mead, it is through interactions and social relationships that selves and identities develop, and also through these that the moral sources that Taylor emphasises enter into the understanding and experience of individuals. Taylor (1989, p. 35) is of course keen to emphasise that selves are constituted in language, and he acknowledges that '[a] self can never be described without reference to those who surround it'. However, where the social and intersubjective dimensions of selfhood and morality are hinted at, these are couched in philosophic exegesis rather than well-supported social explanation, meaning that the social processes through which the self emerges, the interactional and plural formation of identities, and the intersubjective, socially emergent, and contextually-bound nature of much of our moral action are found seriously wanting. Indeed, while ostensibly about the self, several have argued that Sources in fact provides only a limited theory of the self, primarily because the claims of the intersubjective emergence of the self and identity hinted at in Taylor's argument are insufficiently developed (Frisina, 2002; Joas, 2000; Smith, 2002).
Running through all of the critiques addressed here is the point that Taylor looks past the basic and concrete role that social interaction and social relations play in the relationship between morality and selfhood. Despite references to intersubjectivity and language, the frameworks of identity, personhood, and historical explanation through which Taylor's arguments are formed are surprisingly ill-equipped to explore the significance of intersubjective interaction to moral selfhood, and his attempts to establish an ontological basis for an articulating moral subjectivity through these terms neglect what is in fact basic, which is that even 'the most private and personal moral endeavour is based on judgments and sentiments that have been developed through social experience and spread by social contacts' (Hayes, 1918, p. 296). I argue next that the work of Mead provides much clearer direction in this regard.
| MEAD AND A SOCIAL THEORY OF MORAL SELFHOOD
In commenting on Taylor's aforementioned neglect of pragmatism, Joas (2000, p. 143) highlights that what is interesting about this omission is that '[h]e is not ignoring a tradition that threatens his convictions, but a school of thought which could offer support, indeed inspiration for his arguments' (see also Frisina, 2002). This point, I argue, applies most fervently to the omission of Mead, not least because Mead covers virtually identical themes to those that dominate Taylor's work: the self, language, and morality. Although Mead is considered to be the forefather of theories of the social emergence of the self, throughout his work Taylor 'refers to Mead only superficially and misleadingly' (Joas, 1997, p. xxi). In Sources, Taylor restricts his consideration of Mead to a footnote which claims 'Mead is too close to behaviourism and not aware of the constitutive role of language in the definition of self and relations' (Taylor, 1989, p. 525). Joas (1997, p. xxi) describes this as 'an odd characterization, to say the least, of the figure who is considered the inaugurator of the symbolic-interactionist tradition'.
Elsewhere, Taylor is tersely scathing of Mead. He depicts Mead's theory in the following way:

A person "becomes a self insofar as he can take [the] attitude of another towards himself as others act" [Mead, 1934, p. 171]. In the very impoverished behaviourist ontology which Mead allowed himself, this seemed to be a brilliant way to make room for something like reflexivity while remaining within the austere bounds of a scientific approach. But what we see here is something like a theory of introjection. My self is socially constituted, through the attitudes of others, as the "me". (Taylor, 1995, p. 64)

While Mead described himself as a 'behaviorist', this was prior to the term being equated with the kind of Skinnerian behaviourist psychology to which it now more generally refers (in fact, Mead (1934) vehemently opposed earlier renditions of such behaviourism in his extensive critique of Watsonism [Joas, 1996]). The argument below will show how this characterisation of Mead is misplaced, but it is important to register that Taylor's (1995, p. 65) real problem with Mead seems to be that Mead's theory does not yield a dialogic self: 'introjection… becomes necessary for Mead, because he doesn't have a place in his scheme for dialogical action, and he can't have this because the impoverished behavioral ontology only allows for organisms reacting to environments'. Taylor (1995, pp. 64 and 65) does not disagree with Mead that 'first definitions of ourselves are given by our parents and elders', but argues that the dialogic self that develops thereafter cannot be described in terms of 'an introjected identity and some unformed principle of spontaneity', which is how Taylor characterises Mead's 'me' and 'I' respectively.
Taylor goes on to soften his stance towards Mead somewhat in his recent work The Language Animal. But even here, while Taylor argues that Mead's 'self is not just an introjected dummy', he argues that Mead's challenge to monological consciousness is 'insufficiently radical' in that self-awareness is depicted as running 'alongside […] the internalization of the other's view and expectations of me', rather than seeing 'self-awareness as emerging out of a prior intersubjective take on things' (Taylor, 2016, p. 64). Here, Taylor returns to his earlier critique of Mead that '[t]aking the stance of the other is a monological act, one that is usually influenced by-or, at best, coordinated with-the other but still thoroughly mine' (1995, p. 65). Those more familiar with Mead's arguments would quickly recognise that the very basis of Mead's theory of the self is exactly what Taylor claims it is not, namely that we are practical and intersubjective before we are anything else, and that it is out of this 'practical intersubjectivity' that our self-awareness develops (Joas, 1997, p. 14).
Indeed, the most basic and well-known feature of Mead's (1934, p. 135) work is his argument that the capacities associated with selfhood are developed through interaction, through a 'process of social experience and activity'. While self-consciousness is underpinned by human capacities for reflexivity, its development relies upon the individual being able to assume 'a position of reacting in himself [sic]', in the sense that the individual can respond to themselves as an object of their own consideration (Mead, 1934, p. 194). Quite unlike Taylor's depiction of him, Mead (1934, p. 69) emphasised 'the critical importance of language in the development of human experience [which] lies in this fact that the stimulus is one that can react upon the speaking individual as it reacts upon the other'. Mead's (1912) argument is that the nascent understanding of shared meanings of gestures is integral to the development of self-consciousness because being able to respond to one's own action as the other responds to it provides the basis for the individual to experience their action as an object of their own subjectivity.
It is in this communicative process, as a child begins to recognise the meanings that her own actions carry in relation to the responses of others, that what Mead (1913) refers to as the 'I' emerges: the child develops a sense of herself as a subject who acts but who can also assume a perspective towards her action, having arrived at some degree of awareness of how her action will be received. Something that Taylor's depiction of Mead's 'I' misses-which has important consequences down the line-is that for Mead, the 'I' is the 'actor in the present tense', in the process of doing the acting, doing the monitoring of action, and, as the self develops, doing the reflecting upon oneself (Crossley, 2011, p. 94). Having assumed a position of being able to react to herself, the individual gradually develops a sense of the self as an object; firstly as an object for the consideration of others, and latterly as an object of her own reflective engagement (Mead, 1934).
During early stages, the child takes on, reflects upon herself, and acts in relation to the attitudes of specific others (notably care-givers) in specific interactional settings. Yet Mead (1934, p. 152) argues that 'self-consciousness in the full sense of the term' is attained as the individual is able to view themselves not just from the standpoint of specific others, but also from the perspective of the generalised other, understood abstractly as generalised expectations, which are carried forth into interaction. As the child's sphere of interaction increases, the sources of behavioural direction diversify from the specific authoritative voices of primary caregivers into a more generalised understanding of the standards of behaviour expected by the broadly construed communities of which she is part. It is here that the 'me' aspect of the self develops as the collected attitudes of others, which the individual can take towards herself and utilise in her engagement with herself as an object; she develops a me that is formed from interactionally absorbed attitudes, and it is the collected attitudes of others that form the content of the me that is reflected on as an object by the I (Mead, 1925).
It is important to recognise, however, that the development of the socialised self does not mean that the individual is merely 'introjected'. Firstly, Mead (1934, p. 140) is clear that this process provides the basis for a self that is able 'to converse with himself as he had communicated with others'. Secondly, individuals are socialised within a plurality of relationships and contexts, meaning that the attitudes of others that comprise the 'me' are diverse, and form a generalised other that is polyvocal and situationally variable, which is experienced and reflexively engaged with from hugely stratified standpoints, meaning that the self (and the instilled modes and attitudes towards conduct that it comprises) is constructed and engaged with from a multiplicity of complex and variable perspectives (Mead, 1934). Thirdly, the encountering of a plurality of attitudes leads to a plurality of reference points for the me, and consequently, in order 'for consistent behaviour to be at all possible, these different "me's" must be synthesized into a unitary self-image', which means the reflected-on me also begins to serve 'as an element of my emerging self-image' (Joas, 1997, p. 118). In relation to this, the development of the I as the thinking, acting, reflecting phase of subjectivity facilitates the capacity for critical and evaluative responses that allow us to assume a position that can be resistant to social pressure (Bottero, 2019). Joining a protest movement or becoming vegetarian, for example, are personal decisions that may run contrary to our habituation, and yet they are socially engendered positions arrived at through dialogue conducted via a differentiated, reflective and actuating I, with a me that is constituted through the plurality of collected attitudes of others it encounters.
Contrary to Taylor's critique, a vital aspect of Mead's argument is that the emergence of the self is productive of an 'intersubjectively mediated self-understanding', which allows the individual to reflexively engage with herself through internal dialogue, and to assume standpoints that she recognises as being her own (Habermas, 1995, p. 153). Indeed, one of the key advantages of Mead's (1925, 1934) theory lies in its capacity to explain individuation as being an inherent facet of the social constitution of the self, in which interactional engagement within a complex social context is productive of a simultaneously embedded and individuated subjectivity, which the individual is able to reflexively acknowledge and engage with in the course of her existence.
While Taylor (1989, p. 36) is intent on establishing and maintaining a place for articulating dialogic subjectivity, apart from arguing that 'achieving self-definition' occurs within 'webs of interlocution', his theory does little to expound the process through which the deep-seated sense of identity he depicts is arrived at. Mead describes how the process of individuation begins with the development of a reflexively capable mind through the taking of attitudes of others, which provide substance to the me, but which also initiate within the individual the capacity to respond to and converse with herself. The multitude of attitudes we encounter and continue to engage with in the course of our life populates the object of our self-consciousness with an expanding plurality of others and generalised perspectives, which we engage with and find our own voice in relation to (Crossley, 2011). And 'to the extent that this occurs, there arises an internal center for the self-steering of individually accountable conduct' (Habermas, 1995, p. 152). Mead's work thus allows us to account for the social emergence of a moral subjectivity that is thoroughly situated on the one hand, and individuated and reflexively engaged with by the actor in a way that is potentially transformative of their action and perspectives on the other.
This process begins with moral habituation. Similarly to Taylor, Mead (1925) takes normativity and moral evaluation to be an indispensable facet of social existence, which consequently plays an integral role in the emergence of the self. Indeed, in many ways, for Mead, emerging as a self means emerging as a 'moral' self, and he conceptualises the self and its emergence in normative terms: 'The individual possesses a self only in relation to the selves of the other members of his social group; and the structure of his self expresses or reflects the general behavior pattern of this social group to which he belongs' (Mead, 1934, p. 7). The process of emerging as a self from one's social surroundings embeds the individual within the normative expectations and values of this environment because, from an early age, our practical activity in the social world 'depends upon the internalization of the agencies that monitor behavior, which migrate, as it were, from without to within' (Habermas, 1995, p. 152; Mead, 1925).
Initially, these attitudes tend to be the specific attitudes of specific others, most often parents and caregivers, which are primarily taught by rote. However, as this process continues, the individual becomes able to internalise more complex attitudes taken towards her by others. Instructions given to children come loaded with 'moral evaluation and emotional intonations of approval or disapproval' that the child gradually comes to be able to comprehend in relation to her developing reflexive self (Burkitt, 2008, p. 59). She comes to recognise that if she lashes out or doesn't say thank-you she will be thought of as being bad or ungrateful. This engenders a new dimension of evaluative self-judgement, notably in the form of shame and pride, as the child comes to understand how she may be judged by others (Cooley, 1902).
The emergence of the self through socialisation thus situates the actor in the taken-for-granted 'background' understanding upon which their basic moral competency is founded, which allows the individual to engage with the ordinary normative parameters of their social world with an embodied habitude (Mead, 1925). However, emerging as a self in a morally-charged world leads not just to a banal habituation into normative convention, but also embeds us into deeply valued and evaluative terms and sentiments (Sayer, 2005). As children, we are not simply told how to act, nor does our social development entail the uncomplicated absorption of prescriptive rules to follow for all circumstances. We are also being taught through our interactional integration to evaluate ourselves, to take responsibility for and justify our actions and opinions, to form judgement on what it means to be a certain type of person, and to understand why certain things are personally and socially important (Burkitt, 2008). This engenders the development of moral capabilities that we take to be indicative of ordinary personhood, for example capacities of responsibility, accountability, and evaluative judgement.
While Mead (1925) argued that the internalisation of attitudes manifests in the emerging self as a pervasive mechanism of self-regulating social control, he also saw the individual's integration into patterns of normative conduct as providing the foundations on which more complex moral consciousness develops. Being able to assume the perspective of the generalised other is integral to this process, because the taking on and the assumption of moral attitudes (for example in terms of evaluating one's own perspectives and values, and the assumption of responsibility for one's own conduct), as well as the capacity for moral action, depend upon the individual being able to take the attitudes of the generalised other towards herself and her action, as well as towards others and their action (de Waal, 2008).
Contrary to a simplistic reading of the generalised other as referring to 'the attitudes of the whole community' (Mead, 1934, p. 154), Mead's deployment of the concept in fact functions on varying levels of generality and specification in relation to the situation at hand. This is clear from his exploration of the concept through the assumption of different roles within games, institutions, and in participation in politics, each of which entails engagement and negotiation with the generalised attitudes of various relevant 'communities' and 'subgroups', as well as assuming 'attitudes toward his behavior of those other individuals with whom he is involved in the given social situation' (Mead, 1934, pp. 155 and 156). Mead conceptualised the 'generalised other' as a facet of practical consciousness through which perceived attitudes of others germane to the immediate intersubjective context are engaged, as well as the medium through which our own courses of action and subjective positions are negotiated (Holdsworth & Morgan, 2007). The generalised other should thus not be seen as a single voice representing a static community, but rather as a plural and emergent process that functions between providing 'barely necessary cues' for action (Mead, 1913, p. 378) and providing a sounding-board for reflection, which allows us to negotiate the 'ongoing mixture of simultaneous values that individuals must navigate in day-to-day ethical decision-making' (Burr, 2009, p. 337).
An essential point to be made is that the moral judgements and evaluations that people routinely make, and the generalised and specific attitudes of others that are engaged with in this process, while inextricably socially-constituted, are engaged with from the perspective of an active and individuated reflective subject, the 'I' in Mead's terms. Holdsworth and Morgan's (2007) studies into decisions to move away from home revealed how their participants reflexively engaged with various generalised others (parents, friends, the neighbourhood) in relation to their own perspective on their own decisions, with an understanding of what is at stake and what action they want to take, which frequently produced action that ran contrary to their perception of the generalised attitudes of friends, family, or the community more broadly. While the moral decisions and positions that were construed reflect the deeply entangled relations within which they were formed and enacted, they were nonetheless arrived at and articulated by a moral subject who recognised these positions and decisions as being their own.
Through his conceptualisation of the individuated and dialogically capable self, Mead's theory is able to facilitate an explanation not just of moral habituation, but also of individual capacities for reflexive moral rumination about what we value and what our moral action should be, which in turn leads to the individual's articulation and enactment of standpoints that they recognise as being their own (Habermas, 1995). However, though such dialogical transformation is potentially efficacious to personal action, what is key for Mead is that the locus of moral transformation rests upon intersubjective interaction. His interactionist theory of the self firstly establishes the ground upon which the capacity for evaluative moral judgement develops within the social emergence of the self, and secondly locates the stimulation of such evaluative judgement in interaction, as being a product of intersubjective sociality, which frequently engenders junctures of action and conflicts of viewpoint that stimulate reflexive contemplation (Joas, 1990). While Taylor is keen to assert the primacy of intersubjectivity, his discussions of interaction are surprisingly meagre, which leads to his conceptualisations of moral articulation being more detached and intellective than some argue he intended (Calhoun, 1991).
Mead of course presents a view of the self that is strongly socially embedded. However, his theory holds that while self-consciousness is indeed initiated by taking the perspective of the other, the multitude of attitudes that the individual encounters leads to the development of a more complex form of subjectivity, in which the individual comes to form standpoints upon the social world that she recognises as being her own; standpoints that can be engaged with via internal dialogues from a position of self-conscious subjectivity, which can be efficacious to the course of the individual's action.
Explaining this phenomenon seems to be the motivation behind Taylor's attempts to explicate the relationship between selfhood and morality. But his attempt to do so through a phenomenological account of identity that neglects social relations and interaction leads him quite quickly to the recourse of asserting that addressing moral questions is an 'inescapable' facet of human agency, and that the notion of an identity not defined by some 'strongly valued preference is incoherent' (Taylor, 1989, pp. 32 and 30). Mead, however, is able to explain the assumption of autonomous moral positions as being arrived at through interactional relations, as they 'constitute a context wherein we develop the capacity to make decisions and act upon them, including decisions which deviate from social norms and resist social pressures' 2 (Crossley, 2006, p. 4). In this sense, the capacity to assume a moral position, to form judgements, and to justify positions that one recognises as being one's own, is something that is attained through the interactional emergence of a dialogic self.
| CONCLUSION
While Taylor's intention is to leave room for a dialogic moral subject within a world of shared meaning, his description of how this occurs, based as it is on his assumptions of a phenomenological account of identity, is found wanting. Contra Taylor, Mead allows us to see that the reason why moral evaluations and judgements are of significance to people's lives is not predicated on a transcendental view of identity, in which strong evaluations are necessitated in order to be able to address 'ontologically basic' questions of who we are. Instead, Mead depicts how moral consciousness develops socially in relation to moral habituation. Selfhood emerges within a social existence that embeds us into a world permeated by more or less formalised normative and evaluative expectations and judgements, which variably pervade practices and shared ways of understanding the world and interpreting behaviour, and thus become integral both to our participation in, and our understanding of, the social world of which we are part (Burkitt, 2008). And while this social emergence of self habituates us into the moral expectations and frames of evaluation within which it is formed, it also produces a self that relates both to itself and to others, and it is this that allows the development of a morally conscious self. A notable advantage of a Meadian approach thus resides in its capacity to simultaneously recognise the socially-constituted nature of moral consciousness and the situatedness of moral action, while providing a social explanation of individuals as being reflexively active and dialogically engaged with their own action and views, as well as the views and action of others, from the perspective of their own socially-entangled subjective self-understanding. What Mead's theory of the self offers us, and what I argue is an undervalued contribution of his work (Abbott, 2020), is a properly social account of how individuated reflective moral consciousness develops through the intersubjective constitution of the self.
2 'Our identity is what allows us to define what is important to us and what is not.'
Practical Demonstrations – Key to Efficient Explanations of Radioactivity to Pupils and Students
The radioactivity workshop and hands-on demonstrations in the Milan Čopič Nuclear Training Centre enable us to effectively transfer some basic information about radioactivity, radiation and radiation effects to our young visitors. This activity has been well accepted and praised by teachers, who are aware of the subject's importance for education. Keywords— radioactivity, demonstrations, workshop, pupils, students.
I. INTRODUCTION
The Milan Čopič Nuclear Training Centre (ICJT), which is part of the Jožef Stefan Institute (JSI), Ljubljana, was founded in 1989 to support the training of Krško NPP workers. Since then, a number of courses for control room staff and other technical personnel have been prepared and implemented. Courses are also intended for members of technical support organisations, authorities and experts employed by Krško NPP subcontractors. ICJT also organises different courses and events in cooperation with international organisations or agencies like the IAEA, ENS or the EC.
Soon after the successful conclusion of the initial courses, the decision was made to expand our activities. At that time, public opinion in Slovenia was still heavily influenced by the Chernobyl accident, and there were debates in the media and among politicians about the danger of nuclear energy and about the necessity to close the Krško NPP. Explanations and clarifications related to the safety of our plant provided by nuclear professionals were originally targeted at decision makers, and less at opinion makers. The information was presented and distributed within limited circles, partly due to the limited interest of the majority of the media in what was called, at that time, the "biased" opinion of nuclear experts.
Our aim was not to join those discussions, but to approach the general public and to contribute to general opinion on a long-term basis. Since we were aware that the discussion on nuclear energy would continue in the following years, we decided to establish a nuclear technology information centre with a permanent exhibition on nuclear technology. The vision was to become a reliable and respected source of knowledge about nuclear technologies for the general public. Since we had a free basement at our premises, we were able to commission a big lecture room and an exhibition with posters and a few mock-ups (Fig. 1) without huge investments or lasting construction works.
At the beginning (in the mid-nineties), emphasis was given to Krško NPP technology and operation, but later a part related to radioactive waste management was added to the exhibition. In the last decade, the exhibition was complemented with an overview of nuclear fusion technology research. From the very beginning of the information centre's operation, our most numerous and regular visitors have been pupils and students from primary and secondary schools in Slovenia. In addition, other groups visit our centre: groups of university students, teachers, members of different professional associations, firefighters, groups of retirees, etc. Annually, more than 150 groups and more than 6500 visitors come to our information centre. Altogether, more than 3,500 school groups and more than 180,000 pupils, students, teachers and other persons have visited our information centre since 1993 [1].
At the beginning of the information centre's operation, the exhibition was usually a short addition to the lecture for our visitors. Posters were prepared to support the lectures with some additional data or visual material, and to provide explanations of some concepts from physics or engineering which are important for understanding NPP operation. At that time, we discovered that explanations of the basic concepts of radioactivity and ionising radiation had de facto disappeared from school programmes. They were either pushed in the schedule somewhere to the end of the school year, in parallel with final exams, like a filler, or were considered optional, leaving the decision on presenting these contents to the individual teacher.
It was also obvious that the majority of teachers were not competent to speak about these subjects and were prone to avoid them. Radioactivity used to be one of the subjects discussed in Physics classes, but was later added to Chemistry classes. This might have worked in the "old" times, but after the Chernobyl accident, radioactivity and ionising radiation were considered a result of reactor operation and, considering the consequences of the accident, also extremely dangerous. The other problem was that just a few schools had any equipment that could be used for classroom demonstrations, and even if they had the equipment, teachers did not know how to use it properly.
What we have learned is that if we want to effectively transfer messages to our visitors, especially pupils and students, and if we want them to become active subjects in debates and in the decision process related to nuclear energy in Slovenia, we have to provide them with basic information about radioactivity, radiation and radiation effects on human beings. This knowledge should serve as a basis for the evaluation and judgement of problems and questions that must be resolved if we want to continue living with nuclear energy in the near future.
We felt that adding to or expanding the existing lectures would not be productive, so we decided to add some hands-on experiments and to prepare a small radioactivity workshop with practical demonstrations of ionising radiation properties, natural background radiation and radon. We also considered the idea of preparing hands-on experiments for all our demonstrations, but this would have been more costly, and we also had to comply with the limited time that participants spend at our site. Therefore, we came to the conclusion that the most effective approach would be to combine hands-on experiments at the exhibition with practical demonstrations in the radioactivity workshop, and to complement the demonstrations with an explanation of the physical background.
A. Main Goals
There are two main goals behind the hands-on experiments and demonstrations: first, we want to inform visitors that radioactivity is something natural and present everywhere in our environment, and second, we want to convince them that we know how to protect ourselves from excessive radiation.
The first message is the most important, since awareness of natural sources of radiation and the resulting exposure is essential for understanding and discussing the effects of ionising radiation in general. This is important not only when the operation of existing (or planned) nuclear and radiation facilities is discussed and evaluated, but even more so when accidents occur, like the last one in Fukushima.
The importance of the second goal is related to the acceptance of all radioactive sources, including nuclear or radiation facilities.
B. Approach and Design [2]
When we were designing the exhibits and demonstrations, our wish was to make them simple and attractive. Therefore, we decided to limit ourselves to basic information, which could be given in a short time (a few seconds!), without extensive explanation. Hands-on experiments should be self-explanatory (the minimal necessary information should be written on the exhibit!), and visitors should be involved in the experiment (they should "discover" the information).
All instruments used should be very sensitive and must be equipped with a loud acoustic indication (instrument read-outs are of minor importance), and only the count rate (counts per second or counts per minute) is given if some measurement result is needed. Abstract quantities, like dose, should be avoided. Old-fashioned detectors with end-window GM tubes and analogue displays satisfy these requirements and are used for the demonstrations. For larger (and "noisier") groups of visitors, we use a small web camera to project the instrument read-outs onto a big TV screen.
For the hands-on experiments we use modern handheld instruments with sensitive pancake GM tubes and big digital displays as rate meters, also equipped with acoustic and optical (flashing light) indication.
C. Hands-on Experiments
Two exhibits were originally prepared, one directly related to the demonstration of radioactivity, and the other more of a "teaser". The first exhibit is the radioactivity carousel, where different samples from the environment (fertiliser, potassium chloride, a watch with luminous dials, uranium glass, welding rods, a gas mantle, and an empty place for the background) are fastened on a round plate (Fig. 2). Visitors turn the plate and observe the instrument response. Finding the most radioactive sample is the usual game that visitors play with the carousel, but they nevertheless also remember the other samples. The carousel has also been copied (with our permission) as an applet on the Nucleonica.com site [3].
The second exhibit is just an instrument positioned above a table. Visitors can "measure" their own items to check whether they are radioactive. As could be expected, nowadays the instrument is mostly used for checking mobile phones (Fig. 3).
Recently, we have also added two new historical exhibits: the first is a therapeutic 226Ra source from the beginning of the twentieth century (Fig. 4), the oldest artificial radioactive source in Slovenia, which was donated by the Faculty of Medicine, Ljubljana. The second consists of samples of uranium compounds (Fig. 5) prepared during the pilot production of yellow cake in the 1980s at the Jožef Stefan Institute, Ljubljana.
D. Radioactivity Workshop [2]
Demonstrations in the radioactivity workshop are intended to familiarise visitors with the basic properties of radiation and the principles of protection. The workshop takes place in a separate classroom with benches and a demonstration table with the measurement equipment. The walls of the classroom are covered with posters presenting some basic facts about radiation in support of the lecturer's explanations. Three separate measurement stations were installed, each with a ratemeter and a sensitive end-window GM tube on a carrier which assures better visibility. The stations are dedicated to experiments with α, β and γ radiation.
Absorbers were prepared for each station: paper, cardboard (1 mm), and aluminium foil (10 μm) for α radiation; cardboard (1 mm), aluminium plates (0.5 mm), and acrylic glass plates (5 mm) for β radiation; and half-value layers of lead (10 mm), steel (16 mm), aluminium (45 mm) and concrete (70 mm) for γ radiation. Special rulers with visible marks were also prepared for demonstrating the range of α radiation in air and the dose-to-distance dependency for γ radiation.
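For reference, the two relations these props demonstrate can be written down compactly (a sketch in our own notation; the half-value-layer symbol d_{1/2} is not used on the exhibits themselves):

I(x) = I_0 \cdot 2^{-x/d_{1/2}}

\dot{D}(r) \propto 1/r^2

The first relation says that each additional half-value layer halves the count rate, so stacking two lead layers (20 mm in total) should cut the reading from the γ source to roughly one quarter; the second, the inverse-square law, is what the distance ruler makes visible, with the dose rate dropping to about a quarter when the distance from the source is doubled (absorption in air neglected).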
For demonstrations we use pure sources: 210Po as the α source, 90Sr as the β source, and 60Co (with a β shield) as the γ source. It is important not to use mixed sources, to avoid additional unconvincing explanations and justifications. All sources are school sources below the exemption level; the low activity is compensated for by sensitive detectors. We also use additional check sources: an old watch with luminous dials, similar to the watch on the radioactivity carousel, and Th-enriched tungsten welding rods.
With this equipment we can demonstrate the basic properties of and differences between α, β, and γ radiation, and also convince visitors that we know how to measure radiation and how to protect ourselves from it.
There is also one less sophisticated item in the classroom: an ordinary vacuum cleaner. We use it to demonstrate the presence of radon progeny in the air. Due to the elevated concentration of radon in the exhibition hall (it is in the basement of ICJT), it is possible to collect a significant activity of radon progeny on a plain kitchen filter within a few minutes of pumping with the vacuum cleaner. The collected activity is usually much higher than the activity of the α source, which is always a huge surprise for visitors. There is an even more "unusual" experimental gadget in the workshop: we use a simple toy balloon for the collection of radon progeny. After inflation, the surface of the balloon becomes charged (some help with a woollen rag is beneficial) and successfully collects radon progeny. Again, the high radon concentration is helpful, and the collection of radon progeny requires no more than five to ten minutes in the worst case. A recent addition to our workshop is a small cloud chamber, which we use in combination with a web camera and TV screen. Visible traces of charged particles in the chamber on the big TV screen are an illustrative support of our explanations about the interaction of radiation with matter.
III. RESPONSE FROM THE VISITORS
Almost all visitors of our Milan Čopič Nuclear Training Centre visit the exhibition and the radioactivity workshop. Most of them are teenagers, but we also have younger and many older visitors, even retirees. The response from almost all visitors is positive, and many of them have also expressed satisfaction at seeing "how things really work".
Our most faithful visitors are teachers who return every year with new groups of pupils or students. Teachers of Physics and Chemistry especially appreciate the radioactivity workshop as an addition to their regular classes. Most of them learned about radioactivity at university, but have no time or possibility to lecture on and demonstrate radioactivity to their students. For other teachers (Biology, Math, Geography, History…) and students this is usually the first opportunity to observe radiation "at work".
In the radioactivity workshop, the demonstration of radon progeny is usually the most attractive and surprising. The visitors' reaction is almost always emotional, which also ensures that it will stay in their memory. The other demonstrations in the workshop are generally intriguing for students with a more analytical mind and for those who already have some previous knowledge about radioactivity.
From time to time, and for specially interested groups, usually students from secondary schools, we prepare more advanced workshops where we perform actual measurements, not only demonstrations. Although this is more demanding and time-consuming, it is also more rewarding.
IV. CONCLUSIONS
A number of teachers in elementary and secondary schools are returning visitors, who consider our workshop and demonstrations a valuable addition to their lectures.
We can confirm that "A Picture is Worth a Thousand Words and an Experiment is Worth Fifty Slides" [4]. We have been very successful and efficient in the transfer of knowledge about radioactivity, radiation, and radiation effects to our visitors.
Probably one of the most important messages our visitors receive is related to the existence of natural background radiation and radon.
"Education",
"Physics",
"Environmental Science"
] |
Delivery of magnetic resonance-guided single-fraction stereotactic lung radiotherapy
Highlights
• MR-guidance enables high-precision single-fraction lung SABR delivery.
• Breath-hold gating resulted in a mean tracked GTVt coverage of 99.6% during beam-on.
• On-table plan adaptation improved PTV coverage, but had little impact on GTV doses.
• Improved techniques are needed to allow for consistent MR-tracking of small tumors.
Introduction
Stereotactic ablative radiotherapy (SABR) is the guideline-recommended treatment for medically inoperable early-stage non-small cell lung cancer (NSCLC) [1,2]. SABR can also improve survival in patients with oligometastatic disease [3]. Various dose fractionation schedules have been reported, and a biologically effective dose (BED10) of ≥100 Gy has been recommended for primary lung tumors [4].
Delivery of SABR in a single fraction is a potentially more convenient approach for patients, and the safety and efficacy of single-fraction SABR has been demonstrated for both early-stage NSCLC and pulmonary metastases [5][6][7][8][9]. However, clinical use of single-fraction SABR does not appear to be widespread, in part due to concerns about the accuracy of SABR delivery. One approach to improve accuracy is to use internal fiducial markers as a surrogate for x-ray based gating or tumor tracking [10]. However, the implantation of fiducials is not without risks, especially in elderly and frail patients [10][11][12]. Approaches for tracking lung tumors without using fiducials have also been developed, but their reliability depends on tumor size and density [13,14]. Fast delivery of single-fraction lung SABR can be performed using flattening-filter-free (FFF) volumetric modulated arc therapy (VMAT), using an internal target volume (ITV) approach [15]. However, active motion monitoring is desirable, as both 4-dimensional (4D) computed tomography (CT) and cone-beam CT (CBCT) may underestimate tumor motion during lung SABR [16].
Magnetic resonance (MR-)guided radiotherapy may facilitate single-fraction treatments as it permits SABR delivery under continuous image guidance [17]. Real-time MR-guidance circumvents the need for implanted markers, and allows for a more accurate assessment of respiratory-induced tumor motion when compared to the use of a pre-treatment 4DCT [18]. In addition, the use of gated delivery and daily on-table plan adaptation can allow for both optimization of target coverage and reduction in organ at risk (OAR) doses [19][20][21][22]. We report on our early experience with treating lung tumors in a single fraction, using the so-called stereotactic MR-guided adaptive radiation therapy (SMART) approach.
Introduction of single-fraction SMART
Single-fraction SABR of lung tumors has been an option in our departmental protocol since the safety and efficacy of this approach was reported in a prospective study [23]. Since late 2018, suitable patients with lung tumors were evaluated for single-fraction SMART on the MRIdian MR-linac (ViewRay Inc., USA). Patients were eligible if they fulfilled eligibility criteria used in the Radiation Therapy Oncology Group (RTOG) 0915 study, namely a tumor located ≥2 cm from the proximal bronchial tree and measuring ≤5 cm [23]. In addition, SMART was considered when delivery was technically challenging, for example if tumors were mobile and/or when clinicians were concerned about single-fraction delivery when using an ITV approach. This retrospective analysis was approved by the institutional ethics committee.
Treatment simulation and delivery were performed on the MR-linac, which has been in use in our institution since April 2018. The MR-linac incorporates a 0.35 T MR scanner and a linear accelerator delivering 6 MV FFF photons at a dose rate of 630 MU/min. The dose rate of our previous MRIdian Cobalt-60 system was considered unsuitable for single-fraction lung SABR due to long treatment times. The simulation and delivery procedures have been described previously [19]. Briefly, a 3-dimensional (3D) MR scan was first acquired during a 17-s breath-hold. Subsequently, tumor motion was sequentially observed in all 3 planes using MR cine imaging with audio coaching, during normal respiration and in both quiet inspiratory and expiratory breath-holds. The patterns of tumor motion and position were observed visually in order to identify an optimal phase for gated delivery. The phase chosen depended on tumor visibility, distance to the chest wall, as well as breath-hold reproducibility and tolerance, with most patients finally treated in shallow inspiration. Finally, real-time tumor tracking was evaluated in a sagittal MR plane, generally through the middle of the tumor volume, using a slice thickness of 5 mm, but occasionally 7 mm. Tracking of a sagittal tumor outline was performed using the proprietary deformable image registration software. Briefly, the system acquired a series of preview MR cine images, from which it selected a reference (key) frame that best matched the sagittal 3DMR plane chosen for tracking. The tracking algorithm then automatically deformed the gating contour from the key frame to each acquired MR cine image at 4 frames per second [24]. Tracking performance was then assessed visually by a clinician and physicist present at the console.
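The key-frame selection step described above can be sketched in a few lines; the MRIdian tracking software is proprietary, so this is only an illustration of the general idea, with normalized cross-correlation standing in for the (unpublished) similarity measure and all names purely hypothetical:

```python
import numpy as np

def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two 2D images (higher = more similar)."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def select_key_frame(preview_frames: list[np.ndarray],
                     reference_plane: np.ndarray) -> int:
    """Return the index of the preview cine frame that best matches
    the sagittal 3DMR plane chosen for tracking."""
    scores = [normalized_cross_correlation(f, reference_plane)
              for f in preview_frames]
    return int(np.argmax(scores))
```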
After MR-simulation, a breath-hold planning CT scan was acquired for purposes of dose calculation, and for verifying tumor size and shape. After rigid co-registration of the CT to the planning 3DMR scan, the gross tumor volume (GTV) was contoured by a clinician on the breath-hold CT scan, before the same clinician contoured the GTV on the corresponding breath-hold 3DMR scan. Any deviations in volume or shape observed between GTV contours on CT versus MR were reviewed by a second clinician, and a consensus was reached. Following delineation of the GTV and OARs on the 3DMR scan, a planning target volume (PTV) was created by adding an isotropic margin of 5 mm to the GTV. A step-and-shoot intensity modulated radiotherapy (IMRT) plan was then created in the MRIdian system, using a Monte Carlo algorithm with a dose calculation grid size of 2 mm, and 1% statistical uncertainty. Electron density maps were derived from planning CT scans, which were deformably registered to the respective 3DMR scans during offline and on-table adaptive planning. The accuracy of this deformable image registration, which accounted for potential differences in breath-holds between CT and MR images, was assessed by the radiation therapist and/or physicist. The magnetic field was taken into account for both the fluence optimization and final dose calculation of all plans [25][26][27].
On the day of treatment, a new breath-hold 3DMR scan was acquired in treatment position, using the same respiratory instructions as used for simulation. After rigid fusion to the GTV on the baseline MR, OAR contours were deformably propagated to the MR-of-the-day, and edited as needed. GTV contours were modified by the clinician present only if this was considered necessary after visual assessment. The baseline plan was recalculated on the anatomy of the day, producing the so-called "predicted" plan. Hereafter, the IMRT plan was reoptimized based on the (adapted) GTV and OARs, using the same beam setup and optimization objectives as in offline planning. The planning objective was to deliver a prescription dose (PD) of 34 Gy to 95% of the PTV (V34Gy ≥ 95%; V47.6Gy ≤ 1 cm³), while maintaining compliance with OAR constraints used in the RTOG 0915 study [23]. Clinicians then selected either the on-table reoptimized plan, or the baseline plan for delivery [19,26].
On-table plan quality assurance (QA) was performed using an independent Monte Carlo dose calculation engine available with the MRIdian online adaptive workflow. Treatment delivery was performed during breath-holds, with continuous visualization of the tracked GTV (GTVt) in a sagittal MR plane, acquired at 4 frames per second. The beam was automatically turned off when a pre-specified maximum proportion of the GTVt, the so-called threshold-region of interest percentage (ROI%), was outside the gating window boundary. The gating window boundary was created by adding an isotropic margin of 3 mm to the breath-hold GTV. To facilitate patient breath-holds, both the GTVt and the gating window boundary were projected to the patient on an in-room monitor in real-time (Supplementary Fig. 1). Due to lengthy delivery times, treatment plans were divided into two equal parts delivering 17 Gy each, and a breath-hold 3DMR scan was repeated mid-treatment, with the option for plan re-adaptation. This approach also allowed for a short mid-treatment break should the patient require it.
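The gating rule described above can be summarized in a short sketch. This is a simplified illustration of the published behavior, not the vendor's implementation, and all function and variable names are hypothetical:

```python
import numpy as np

def beam_enabled(gtv_mask: np.ndarray, gating_mask: np.ndarray,
                 threshold_roi_pct: float) -> bool:
    """Gate the beam on a single MR cine frame.

    gtv_mask:          boolean pixel mask of the tracked GTV (GTVt)
    gating_mask:       boolean mask of the gating window (breath-hold
                       GTV + 3 mm isotropic margin)
    threshold_roi_pct: maximum tolerated percentage of the GTVt
                       outside the gating window, e.g. 10.0 initially
    """
    gtv_area = int(gtv_mask.sum())
    if gtv_area == 0:
        return False  # nothing tracked: keep the beam off
    outside = int((gtv_mask & ~gating_mask).sum())
    return 100.0 * outside / gtv_area <= threshold_roi_pct
```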
Patients
Between October 2018 and November 2019, 17 patients were evaluated using MR simulation for single-fraction SMART, and 10 were identified as being suitable for treatment. Seven patients were considered unsuitable for MR-SABR, for reasons including suboptimal GTV tracking due to adjacent blood vessels (n = 4) and limited visibility of a sub-centimeter tumor (n = 1). The average tumor diameter on CT images for these five simulation failures was 1.1 cm (range, 0.9-1.2 cm). Other reasons for deciding against single-fraction SMART were proximity to the chest wall (n = 1) and, in one patient, severe chronic obstructive pulmonary disease that made repeated breath-holds impossible. Of the MR-simulation failures, five patients subsequently underwent 1- or 3-fraction SABR delivered using an ITV-based approach on a conventional linear accelerator. Another patient received 3 fractions of 18 Gy on the MR-linac, and a wait-and-see approach was chosen for a patient with a small lung metastasis.
Image and outcome analysis of single-fraction SMART
The stored real-time MR cine images depicting the GTVt and gating window boundary in the sagittal plane were analyzed for each patient as described previously [17]. Briefly, the raw images were analyzed using ImageJ (v1.51i; National Institutes of Health, USA). In ImageJ, color thresholding was used to extract the areas encompassed by the GTVt, the gating window boundary, and both. The gating window boundary was isotropically expanded by 2 mm to recreate the PTV and measure the fraction of the GTVt inside the PTV during beam-on (GTVt coverage). Centroid GTVt positions were used for motion analysis. All data were saved in comma-separated values (CSV) file format and analyzed using MS Excel 2013 to estimate the GTVt coverage, breath-hold patterns, and duty cycle efficiency. The latter was defined as the percentage of effective gating treatment time, namely the "beam-on" frames divided by the total number of MR cine frames acquired during treatment, including frames acquired during gantry rotation and multileaf collimator (MLC) motion.
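A minimal sketch of this frame-wise analysis, assuming a CSV export with one row per MR cine frame; the column names are illustrative and not those of the actual export used in the study:

```python
import csv

def summarize_cine_frames(path: str) -> dict:
    """Estimate duty cycle efficiency and mean beam-on GTVt coverage
    from per-frame tracking data (illustrative column names)."""
    frames, beam_on_frames, coverage_sum = 0, 0, 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            frames += 1
            if row["beam_on"] == "1":
                beam_on_frames += 1
                # fraction of the tracked GTV area inside the PTV
                coverage_sum += float(row["gtvt_in_ptv_pct"])
    return {
        "duty_cycle_pct": 100.0 * beam_on_frames / frames,
        "mean_gtvt_coverage_pct": coverage_sum / max(beam_on_frames, 1),
    }
```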
Patients were followed for clinical outcomes, and clinical and imaging data were obtained from external institutions, when necessary. Toxicities were scored by at least two radiation oncologists, and graded using the Common Terminology Criteria for Adverse Events (CTCAE) version 5.0 [28].
Treatment characteristics
Duration of a full single-fraction SMART session, as measured from the patient entering the changing room to the end of delivery, was a median of 120 min (range, 74-185 min). Nine patients completed treatment as scheduled, and reported no discomfort other than fatigue and mild musculoskeletal complaints immediately after completion of breath-hold SABR. The tenth patient developed back pain during a lengthy treatment session, and after receiving a dose of 25 Gy, completed the treatment on a subsequent day.
On the day of treatment, only minimal re-contouring of the GTV was deemed necessary by clinicians. The average GTV variation versus baseline was +0.2 cm³ (range, 0.0-0.8 cm³), or 6.4% (0.0-16.7%). Clinicians selected the on-table reoptimized plan-of-the-day for delivery in all but one patient, in whom PTV coverage was slightly higher than prescribed with the baseline (or predicted) plan. Overall, on-table plan adaptation improved PTV coverage by the PD (V34Gy) from an average of 89.8% in predicted plans to 95.0% in reoptimized ones. This corresponded to increases in the biologically effective doses (BED10) delivered to 95% of the PTV (D95%) from an average of 142.7 Gy (range, 135.1-153.6 Gy) in predicted plans to 149.6 Gy in reoptimized ones. Doses delivered to the GTV were similar, with an average GTV D50% (median dose; BED10) of 223.5 Gy (193.8-248.0 Gy) and 224.7 Gy (195.6-244.3 Gy), respectively, in predicted and reoptimized plans.
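The BED values quoted here follow the standard linear-quadratic conversion with α/β = 10 Gy; the formula is not given in the text, but it reproduces the reported numbers, e.g. for the prescription dose of 34 Gy in a single fraction:

$$\mathrm{BED}_{10} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right) = 1 \times 34\,\mathrm{Gy} \times \left(1 + \frac{34}{10}\right) = 149.6\,\mathrm{Gy},$$

matching the average PTV D95% (BED10) of the reoptimized plans.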
On mid-treatment 3DMR scans, treatment plans were again reoptimized in seven patients, even though improvements in target coverage were minimal (data not shown). A minor chest wall (V22Gy) violation was observed in one predicted and one reoptimized plan, both during mid-treatment plan adaptation, and both were deemed acceptable by clinicians [29]. In another patient, the mid-treatment plan adaptation avoided a hot spot in the chest wall (predicted vs. reoptimized: chest wall Dmax 38.1 vs. 34.0 Gy; V22Gy 3.6 vs. 2.1 cm³). No other OAR violations were observed in any predicted or reoptimized plans.
Verification of single-fraction SABR delivery
A total of 7.4 h of MR cine imaging (105,951 frames) acquired during single-fraction SABR was analyzed (Table 1). SABR was delivered using an initial threshold-ROI% of 10% in all cases. In order to improve the duty cycle efficiency, the threshold-ROI% was increased during delivery to 15% in four patients, and to 20% in one patient. For the revised thresholds, the adequacy of tumor coverage was assessed visually on MR images. The GTVt area encompassed by the 3 mm gating window during beam-on averaged 95.4% (5th-95th percentile, 88.1-100.0%) for the 10 patients. The maximum proportion of the GTVt outside the gating window during beam-on exceeded the preset threshold-ROI% by no more than 0.1%. With use of a 5 mm PTV margin, the mean GTVt coverage by the PTV during beam-on averaged 99.6% (5th-95th percentile, 98.0-100.0%) for all patients. We observed variability in breathing-induced tumor motion, shown in Fig. 2 for the first five patients, but this did not affect GTVt coverage. Varying breath-hold patterns also resulted in a variable duty cycle efficiency, which averaged 51% (range, 34-85%) for all patients. The median treatment delivery duration was 39 min (28-66 min), and this included beam-off phases between breath-holds, as well as gantry rotation and MLC motion. Real-time MR images acquired during treatment of the first five patients are available as Supplementary Video 1.

Fig. 1. Breath-hold 3-dimensional (3D) magnetic resonance (MR) images of the first five patients treated with single-fraction lung stereotactic ablative radiotherapy using MR-guidance. The 3DMR scan is acquired on the MR-linac during a 17-second breath-hold, using a TrueFISP sequence with 1.6 mm × 1.6 mm × 3.0 mm resolution. Using the stereotactic MR-guided adaptive radiation therapy approach, one fraction of 34 Gy is delivered to the planning target volume (red), which is created by adding a 5 mm isotropic margin to the breath-hold gross tumor volume (purple).
Early clinical outcomes
Nine of the 10 patients treated were alive at the time of this report. One death occurred in a 75-year-old patient with a history of cardiovascular disease, who developed a fatal myocardial infarction 11 months following SABR to a peripheral lower lobe tumor (case 1). At a median follow-up of 5 months (range, 2-12 months), CTCAE grade ≥2 toxicities were as follows: one patient developed mild worsening of pre-existing exertional dyspnea 10 weeks following SMART, consistent with symptomatic radiation pneumonitis (CTCAE grade 2) on CT imaging. This patient did not require medical treatment. Another patient reported persistent fatigue (CTCAE grade 2) for a few weeks after SABR, with spontaneous recovery. No CTCAE grade 3-5 toxicities, and no local recurrences, have been observed.
Discussion
MR-guided single-fraction lung SABR delivered during repeated breath-holds was generally well tolerated by patients. SABR was delivered with a high level of precision, as the average beam-on GTVt coverage by the PTV in the sagittal plane was 99.6%. However, some small tumors (average diameter 1.1 cm) were found to be unsuitable for MR-based tracking using the software available at that time.
To the best of our knowledge, this is the first reported experience of single-fraction lung SABR using an MR-assisted approach. We had initial concerns about the feasibility of single-fraction breath-hold lung SABR on the MR-linac due to the long delivery times, as well as technical challenges such as the stability of patient positioning and the ability to treat small tumors eligible for single-fraction SABR. Our MR-guided approach is generally more complex, requiring longer delivery times than with FFF-VMAT [15,19,30]. Single-fraction SMART also involved additional mid-treatment simulation and plan assessment. In addition, the overall treatment time included discussions between members of the treatment team, all of whom needed to gain familiarity with the procedure. Longer on-table times may be acceptable when considering the resources spared with single-fraction treatments, and this may facilitate the scheduling of multiple SABR treatments between cycles of systemic therapy in oligometastatic patients [31]. However, further reductions in treatment times are needed in order to improve patient tolerance. We observed variability in the breath-hold patterns exhibited by patients, resulting in an average duty cycle efficiency of only 51%, although GTVt coverage was not impaired with the use of real-time MR guidance. Furthermore, only approximately 60% of patients who were assessed for this procedure ultimately underwent single-fraction SABR on the MR-linac, indicating that improved imaging and tracking software are required in order to allow for the treatment of small tumors in the range of 1 cm.
We acknowledge that respiratory gating, or tracking, can also be performed using both internal and external markers [32][33][34][35], or with template matching and triangulation of kV images for markerless breath-hold lung SABR [13]. Both 4DCT and 4D cone-beam CT underpredict lung tumor motion during radiotherapy [16], and variations such as baseline drifts and shifts suggest that an active approach including real-time monitoring may be preferred when treating mobile tumors [18]. The demands for positional accuracy may be particularly high in single-fraction lung SABR, where inaccuracies are not mitigated by delivery in multiple fractions. There is a role for real-time image guidance and adaptive planning [36], with video-assisted MR-guidance being an attractive solution as it does not require implanted fiducials, external surrogates, or additional radiation exposure [17]. Additional studies will be needed, however, to precisely quantify the accuracy of real-time MR-tracking of lung tumors [37,38]. We continuously assessed tracking performance visually, as the tracking algorithm could be compromised by image noise and artifacts. Furthermore, real-time monitoring was only performed in one sagittal plane, leading to a risk of undetected lateral movement, which may be suspected when the tracked tumor area appears to decrease, or when the system indicates a low correlation of the tracking algorithm. In such situations, an additional 3DMR scan was performed for intra-fractional positional verification. In addition, we applied a PTV margin that is larger than the boundary used for MR-gating, in order to account for the remaining positional uncertainties. Improvements in gating precision are desirable as this may increase confidence to reduce PTVs. In peripheral lung tumors, MR-guided breath-hold SABR was shown to result in PTVs measuring only 54% of those required with an ITV approach [20]. Reducing lung irradiation is important as indications for repeating SABR are becoming more common for patients with both metastases and primary lung cancers [39,40]. Our analysis suggests that on-table plan adaptation can improve PTV coverage, although the impact on GTV dose did not appear to be clinically relevant. Similar findings were observed for fractionated MR-guided SABR delivery for peripheral lung tumors [20]. Given the need for optimal techniques for single-fraction SABR, we continue to perform on-table plan adaptation, as the additional workload of re-contouring is limited with respect to the total duration of each session. However, future studies may reduce the mid-treatment procedures employed in our initial 10 patients. In addition, new clinical software for tumor tracking at 8 frames per second and with different deformable registration software options is now undergoing evaluation, and may improve system performance. Due to uncertainties in MR-based contouring of some lung tumors, we will continue to use the breath-hold planning CT scan in order to verify tumor size and shape. Future studies are needed to address additional challenges such as susceptibility and motion artefacts in the thorax [41,42].

Table 1. Details of magnetic resonance (MR)-guided single-fraction lung stereotactic ablative radiotherapy (SABR) delivery during repeated patient breath-holds. The beam is automatically turned off when a pre-specified proportion (the so-called threshold-region of interest percentage; ROI%) of the tracked gross tumor volume (GTVt) is outside the 3 mm gating window boundary. Although the breathing patterns (Fig. 2) and resulting duty cycle efficiency were variable, excellent GTVt coverage during beam-on was observed in all cases, using a planning target volume (PTV) margin of 5 mm. The SABR delivery session is the period during which patients are instructed to perform breath-holds, whereas the full stereotactic MR-guided adaptive radiation therapy (SMART) session reflects the entire in-room workflow, measured from the patient entering the changing room to the end of treatment delivery. * Case 10 required SMART delivery in two sessions due to patient discomfort, and the total duration of both sessions is reported.
Based on recent studies, single-fraction SABR is now a standard of care for medically inoperable patients with a peripheral stage I NSCLC. However, as local failure rates of 10% or higher have been reported after SABR for peripheral early-stage NSCLC [5,43,44], improvements in the delivery of radiotherapy remain desirable. The RTOG 0915 study demonstrated similar efficacy and toxicity of SABR delivered with 34 Gy in a single fraction, compared to 48 Gy in 4 fractions [5]. Similarly, no differences in tumor control or toxicity were seen for patients with medically inoperable stage I NSCLC in a study comparing 30 Gy in a single fraction versus 60 Gy in 3 fractions. The latter study also suggested that quality of life (QoL) measures of social functioning and dyspnea were better in the single-fraction arm, although QoL analyses were of exploratory nature [9]. Additional data will be forthcoming from a completed randomized phase II trial that evaluated single-fraction SABR for oligometastatic patients with 1-3 lung metastases, both in terms of clinical efficacy, as well as resource use and costs compared to SABR in 4 fractions [45].
In conclusion, single-fraction lung SABR using MR-guidance is feasible, and it allows for high-precision delivery. Improved imaging is needed to ensure tumor tracking in all patients who may be eligible for this approach, and faster workflows are needed to improve patient comfort and resource utilization.

Fig. 2. Breathing-induced craniocaudal tracked gross tumor volume (GTVt) motion on magnetic resonance (MR) imaging observed during single-fraction lung stereotactic ablative radiotherapy (SABR) for the first five patients. Beam-on is indicated by the red tracing when the GTVt is in the correct position. The green tracing indicates beam-off when a prespecified fraction of the GTVt is outside the gating window boundary. One patient (case 2) produced a shallow curve due to limited tumor mobility, resulting in a high duty cycle efficiency.
Declaration of Competing Interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: The Department of Radiation Oncology at the VU University Medical Center has funded research agreements with ViewRay Inc. and Varian Medical Systems. M.A.P. reports personal fees from ViewRay, Inc., outside the submitted work. C.J.A.H. and S.S. report personal fees from Varian Medical Systems, outside the submitted work. B.J.S. reports personal fees from ViewRay Inc. and Varian Medical Systems, outside the submitted work. T.F., J.R.S. and F.O.B.S. have nothing to disclose.
"Physics",
"Medicine"
] |
BIM-based preliminary estimation method considering the life cycle cost for decision-making in the early design phase
ABSTRACT Prediction of construction cost and life-cycle cost through preliminary estimate is very important for the economic decision-making in the early phase of building projects. However, the conventional preliminary estimate has a high error range and low reliability because it relies only on the basic information of projects. In addition, the consideration of the life-cycle cost of the building is insufficient, causing problems such as budget shortage, inaccurate budgeting, and life-cycle cost increase. In this study, we propose a method of preliminary estimate based on BIM below the level of detail 2 and actual construction cost data to support the decision-making in the early design phase. To verify the proposed method, a web-based prototype was implemented and applied to three test cases. The range of error rate for three test cases was 1.93–7.16% (average error rate: 5.18%). It satisfies the criteria (−30% ~ +50%) of the “American Association of Cost Engineering”. As a result, the feasibility of the proposed model was validated. It is expected to be utilized as useful information in the decision-making occurring in the early design phase, and this allows for a rapid economic review of the design alternatives, a reduced LCC, and a shortened decision-making time.
Introduction
Decision-making among the participants of a construction project has become a factor determining the success or failure of such projects because buildings have become larger and more complex. Particularly in the life cycle of a construction project, the planning and design phases have the most significant impact on the whole project (Kaplan and Norton 2005), and major decisions like the outline of a building and the selection of design alternatives are made in these stages. In addition, preliminary estimation in the early design phase determines not only the feasibility of the project but also the total project cost, and the government often considers the preliminary cost estimate the upper cost limit for the project (Azman, Abdul-Samad, and Ismail 2013). The estimation methods based on the previous studies require much time, and their use is limited due to the problems resulting from limited information, low data reliability, and poor estimation accuracy. To address these problems, estimation methods using building information modeling (BIM) have been studied of late. As most of the BIM-based preliminary estimation methods, however, are based on the BIM models, through which quantity calculation is possible as the design progresses, they pose limitations in supporting the decision-making in the planning and schematic design phases. In addition, the decision-making in the early design phase affects the construction costs as well as the costs incurred by the maintenance and dismantling of building structures. Therefore, decisions should be made considering the life cycle cost (LCC). The current decision-making process, however, makes it difficult to reflect the economic aspects of the entire life cycle, and the decision-making depends solely on the designer's experience, based on limited information (De Freitas and Delgado 2013). This has caused problems like exceeding the estimates due to design changes, increased maintenance costs, and budgeting difficulties.
To support the decision-making regarding the key elements of a construction project and the selection of design alternatives in the early design phase, the scope of this study was limited to BIM-based preliminary estimation in the early design phase.
In addition, this study proposes a method capable of predicting the LCC according to the design component alternatives as well as the existing construction costs, for accurate economic analysis. Here, LCC refers to the cost arising from the construction, maintenance, and dismantling and disposal phases. Towards this end, a database of actual cost data and cost estimation criteria was constructed and linked with the BIM model. Then a prototype was implemented for a comparison of the LCCs according to the design alternatives and the preliminary estimates of the entire building. The accuracy of the prototype was verified by comparing the preliminary estimates of three similar cases selected from actual construction cost data with the actual costs. In addition, the validity of the method of supporting the decision-making was confirmed based on the comparative analysis between the predicted LCCs for each design alternative.
Level of detail (LOD)
The BIM-based estimates differ in accuracy according to the BIM model details. The level of detail (LOD), which represents the degree of detail and the development steps of the BIM model, was first presented in 1976 in the field of computer graphics (Clark 1976). In setting up the LOD and conducting modeling in the course of a construction project, it is necessary to create a model that contains only the necessary information. If the LOD is set too high or too low, problems like overwork and rework, as well as a lack of information, will arise.
In the case of the 3D data model, a 1/100 scale is applied for the planning and basic design process and a 1/50 scale for the practical design stage (Properties 2009). In this case, fine details are difficult to express, so representations finer than 1/50 are covered with 2D details. If this method is used, however, differences in data occur due to objects missing from the 3D model, which causes problems in terms of the accuracy and reliability of BIM-based estimation. To solve this problem, the detail level of the BIM model should be raised. The actual building type and material information can be determined from LOD 2. This, however, not only wastes cost and time by causing unnecessary modeling, but is also difficult to apply to preliminary estimation because the detailed BIM model is constructed only at the detailed design stage. The LOD is generally divided into levels 1 to 4 and may be expressed in a maximum of five steps (Abualdenien and Borrmann 2018) (refer to Table 1). For the preliminary estimation in the planning stage at the beginning of the design, this study performed the estimation using a mid-level model between LOD 1 and LOD 2, including basic information like the project outline, area, volume, and spatial information.
Definition of preliminary estimation
The preliminary estimation conducted at the beginning of the construction project is an important tool for supporting the decision-making to determine the progress of the project (Barzandeh 2011). Estimation can be divided into budget estimation, estimation based on the design, and practice estimation based on the time of estimation. Estimation is also divided into preliminary estimation and detailed estimation based on the estimation contents by design phase. The methods used in preliminary estimation include the cost indices method, the cost capacity factor method, the factor estimation method, the parameter cost estimation method, etc. In preliminary estimation, however, different estimation results may be obtained and errors may occur due to the differences in the engineering experiences, perspectives, viewpoints, knowledge, agencies involved, estimation methods used, reliability of the collected information, and time of estimation (Wibowo and Wuryanti 2007).
Accuracy of estimation
Preliminary and detailed estimation have different purposes of use, required levels of accuracy, and accepted levels of errors according to each phase, and American Association of Cost Engineering (AACE) has developed a cost estimate classification system, as shown in Table 2 (AACE 2016).
Generally, the existing estimation methods allow for an error rate of +50% to −30% or +30% to −15% through Hamilton's approximate method, but it is difficult to use these error rates for actual construction. The preliminary estimate obtained at the initial stage of a construction project is the basis for establishing the project budget and is a very important factor for the feasibility and alternative reviews of the project, but it is difficult to accurately estimate the construction cost due to the lack of structured actual construction cost data. Moreover, initial estimates are re-estimated in 94.7% of cases (Ahn, Song, and Heo 2003). Therefore, it is necessary to construct a database for accurately estimating the construction cost and for reducing the frequency of rework by linking information according to the project progress.
Life cycle cost
The life cycle of a building involves all the processes involved in the life of the building, including planning and design, construction, maintenance, and dismantling and disposal. The total cost is defined as the LCC of the building. The LCC is divided into the planning and design cost, construction cost, operation and maintenance cost, and disposal cost.
(a) Planning and design cost: Refers to the expenses required for construction planning, field survey, paper acquisition, and environmental management before the building construction, and includes the design cost and the necessary design technology.
(b) Construction cost: Includes the costs incurred for contractor selection and construction contracts, and all the direct and indirect costs incurred for building construction, construction and site management, construction inspection, and construction support.
(c) Operation and maintenance cost: The expenses incurred for building maintenance, such as the personnel and maintenance expenses for maintaining the performance and functions of the building to enable it to be used from after its completion until its demolition, as well as the gas and water expenses.
(d) Waste disposal cost: Includes the costs incurred by dismantling the building and disposing of the resulting wastes, and the costs of environmental measures for preventing noise and dust during the dismantling of the building.
In this study, the costs incurred in the planning and design stage, which are not directly affected by the selection of design alternatives, were excluded. As shown in Figure 1, only the construction cost, repair and replacement cost, and dismantling and disposal cost, which differ by design alternative, were considered, while common expenses like the general management expenses and the labor cost incurred during construction were excluded. LCC prediction according to the design alternative was then carried out.
BIM based preliminary estimation
Studies related to BIM-based estimation have been carried out from various perspectives, and estimation methods that include the LCC rather than simple construction cost estimation have been studied of late. In this study, the existing research related to BIM-based project estimation is classified into four aspects.
(a) To support the performance of building design, Vaidya et al. (2009) proposed database construction methods for energy analysis and cost estimate automation based on commercial software. Leśniak and Zima (2018) developed a relational database for the cost estimation model. A total of 173 construction projects were databased, and a final database was constructed using the case-based reasoning method, by establishing 14 relationships among the estimation variables.
(b) For the studies on BIM modeling, Monteiro and Martins (2013) proposed a modeling guideline for accurate quantity calculation in BIM-based design, minimizing the limitations of BIM-based estimation. Wu, Wang, and Wang (2018) proposed a method of increasing the estimation accuracy through the use of an integrated model of extended three-dimensional analysis of building systems (ETABS) and BIM, to overcome the lack of accurate information on the BIM structure.
(c) In the review of the autocorrelation model by Cheung et al. (2012), the use of a BIM-based autocorrelation estimation scheme using Google's SketchUp in the early stages of the project was proposed. In a similar study, Lee, Kim, and Yu (2014) used the IFCXML file format to extract the information needed for estimation, and improved the accuracy and efficiency of the quotation work by performing building estimation based on BIM and ontology. In addition, domestic research has been carried out to improve the estimation accuracy by producing estimates based on specific processes and sites, rather than estimating whole buildings. For example, the study of Yeom and Kim (2014) confirmed the necessity of an estimation system specializing in high-rise buildings with large budgets, and developed an estimation system for exterior finishing work.
(d) In the case of LCC studies, as the need for an economic review not only of the construction cost but also of the whole LCC has increased, studies on BIM-based LCC prediction rather than simple construction cost estimation using BIM are actively being conducted. Whyte and Scott (2010) assisted the decision-making on design alternatives in the design process by suggesting ways of predicting the LCC of a building by defining and integrating model information through appropriate BIM tools. In addition to predicting the overall LCC, Cheng and Ma (2013) conducted research to help predict the cost incurred by building waste disposal through a BIM-based system, and to establish a waste treatment plan.
The current BIM-based estimation method, however, requires detailed modeling at a certain level because accurate automatic quantity calculation is considered a priority. As the design information is limited in the initial stage of the design, it is difficult to produce BIM-based estimates at that point. Also, even if the quantity is calculated through detailed and accurate modeling, a mistake may be made in the estimation if the cost information is not accurate. To increase the accuracy of the cost information, standard cost or actual cost information is required, but it is difficult to directly apply the standard cost model to an actual project. In the case of the actual performance cost, not only is data collection limited; it is also difficult to verify the accuracy of the information. In addition, as preliminary estimation itself deals mainly with the building's material and construction costs, there are insufficient studies on the LCC incurred by the maintenance and dismantling of buildings.
Concept: method of BIM-based preliminary estimation for decision-making
3.1. Performance data and cost calculation standard
3.1.1. Criteria for calculating the total construction cost estimate
In this study, among the various existing estimation methods, the cost index method, which estimates costs based on the historical construction cost data of similar projects performed in the past, was applied. The actual construction cost data, the most basic input for the estimate, were taken from the Public Procurement Service's "Analysis of construction cost by type of public building 2008-2014" (Public Procurement Service of KOREA 2014). The construction cost index, which is used to convert past costs to current costs, was developed by the Korea Institute of Civil Engineering and Building Technology and received Statistics Korea approval (General Statistics Approval No. 39701) (KICT 2018).
The public construction cost analysis data included six kinds of analysis data (refer to Table 3), and based on these, similar-building filtering for the total construction cost estimate was conducted.
The construction cost index covers the direct construction costs incurred for the materials, labor, equipment, etc. used for the construction, using data from the Bank of Korea's industry table and the producer price index (Ministry of Economy and Finance 2017). The Construction Cost Research Center announces the monthly trends in the construction cost index, with the producer price index set to 100 as of 2010 (refer to Table 4).
Life cycle cost estimation criteria for design alternatives
In this study, an attempt was made to support early-stage decision-making by predicting the LCC for each design alternative as well as estimating the total construction cost. Presented in this chapter are the criteria selected for estimating the LCC, based on the results of the public building cost analysis that was conducted (refer to Figure 1). The criteria are divided into the useful life, the repair cycle and yield, the waste treatment standard, and the real discount rate.
The useful life of a building is the duration of its usefulness, i.e., the period during which the building can deliver the expected effect according to its original use and usage method. To estimate the LCC of a building, its useful life should be utilized. In this study, the projected life expectancy of a building was estimated using "the standard service life and the useful life range table" (Ministry of Economy and Finance 2017). It is also important to know the accurate repair and replacement points to be able to predict the maintenance cost as part of the LCC by design alternative. To do this, the "Establishment Criteria for Long-Term Repair Plan" was used (Ministry of Land, Infrastructure and Transport 2018).
The LCC of a building also includes the cost of disposal of construction waste, so a criterion for the waste disposal cost is needed. For this, the waste receipt fee defined in the "Waste Public Treatment Facility Transfer Fee" announced by the Ministry of Environment (2016) was selected. In the case of construction waste, much of the waste is classified as general waste in the Waste Management Act, and the Ministry of Environment divides waste into objects to be landfilled and objects to be incinerated. The LCC is a future cost, so future values must be converted to their present values. In this study, the discount rate was used to convert a future value to its present value. The discount rate is divided into a nominal discount rate, which does not take the inflation rate into account, and a real discount rate, which does. The real discount rate is the nominal discount rate minus the inflation rate, based on the Bank of Korea's deposit interest rate and the consumer price inflation rate. The real discount rates calculated for 2008 to 2014 are summarized in Table 5; the average real discount rate for the 7 years is about 0.72%.
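As a small illustration of the present-value conversion described above (a sketch using the roughly 0.72% average real discount rate; not code from the study, and the example numbers are invented):

```python
def present_value(future_cost: float, years: float,
                  real_discount_rate: float = 0.0072) -> float:
    """Discount a future cost to its present value.

    PV = FV / (1 + r)**n, with r the real discount rate
    (nominal discount rate minus inflation)."""
    return future_cost / (1.0 + real_discount_rate) ** years

# e.g. a roof replacement costing 100,000 USD in 20 years
pv = present_value(100_000, 20)   # ~86,600 USD at r = 0.72%
```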
Preliminary estimation method for decision making
3.2.1. Preliminary estimation of the total construction cost
Based on the actual construction cost data and the criteria for estimating the construction cost, estimation was carried out for the initial design stage. Based on the cost index method, the projected cost is estimated by applying the construction cost index to the construction cost data for each type of public building in 2008-2013. The 2014 data were not used for the preliminary estimation.
In the classification of the actual construction cost data, the costs of construction, electricity, communication, and other expenses, as well as the construction cost per m², were analyzed. Next, the construction cost per unit area of the target construction was derived from general information like the construction scale, usage, structure, and floor area in the project outline. The construction cost per m² of the target construction was estimated by applying the construction cost index, as in Equations 1 and 2, and the total construction cost was calculated by substituting the total floor area.
$$\mathrm{TCC_{pA}} = \mathrm{SRCC_{pA}}(y) \times \mathrm{CI} \quad (1)$$
where, TCC_pA = target construction cost per area, SRCC_pA(y) = similar-results construction cost per area, CI = cost index (converting year-y costs to current costs), and y = year.
$$\mathrm{TCC} = \mathrm{TCC_{pA}} \times \mathrm{TCA} \quad (2)$$
where, TCC = target construction cost, TCC_pA = target construction cost per area, and TCA = target construction area.
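A minimal sketch of the cost index method in Equations 1 and 2, under the assumption that the index enters as the ratio of the current index to the index of the reference year; all numbers and names are illustrative:

```python
def estimate_total_cost(similar_cost_per_m2: float,
                        index_similar_year: float,
                        index_current: float,
                        target_area_m2: float) -> float:
    """Cost index method (Eqs. 1-2): scale a similar project's unit
    cost to current prices, then multiply by the target floor area."""
    target_cost_per_m2 = similar_cost_per_m2 * (index_current /
                                                index_similar_year)
    return target_cost_per_m2 * target_area_m2

# e.g. a similar 2010 project at 1,200 USD/m2, index 100 -> 115,
# target gross floor area 8,000 m2
total = estimate_total_cost(1200, 100, 115, 8000)  # 11.04 million USD
```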
Life cycle cost estimation by design alternative
To support the decision-making at the early design stage, not only the overall construction cost but also price information that can be compared across design alternatives is needed. The LCC scope is limited to the construction, maintenance, and dismantling and disposal phases, excluding the planning and design stage, in order to compare the costs incurred as a result of the decision. Table 6 defines the cost data and the criteria for estimating the LCC for each step. The LCC is calculated by summing up the construction cost, maintenance cost, and dismantling and disposal cost according to the detailed work type of the design alternative, as shown in Table 6 (refer to Equation 3).
$$\mathrm{LCC} = \mathrm{CTC} + \mathrm{MC} + \mathrm{DDC} \quad (3)$$
where, LCC = life cycle cost, CTC = construction type cost, MC = maintenance cost, and DDC = decommissioning and disposal cost. The cost of the construction phase is estimated based on the construction cost and the construction area according to the detailed work of the design alternative. For estimating the construction cost according to the design alternative, the construction cost for each detail type of similar performance data was used. In Equation 4, the unit price of construction work per unit area is calculated using the construction cost per unit area and the construction cost index, and this unit price is then multiplied by the construction area, as shown in Equation 5, to obtain the total construction cost.
$$\mathrm{CTC_{pA}} = \mathrm{SRCTC_{pA}}(y) \times \mathrm{CI} \quad (4)$$
where, CTC_pA = construction type cost per area, SRCTC_pA(y) = similar-results construction type cost per area, CI = cost index, and y = year.
$$\mathrm{CTC} = \mathrm{CTC_{pA}} \times \mathrm{CTA} \quad (5)$$
where, CTC = construction type cost, CTC_pA = construction type cost per area, and CTA = construction type area.
In the maintenance of a building, repair and replacement can occur, depending on the design alternatives, during the life of the building. Therefore, the maintenance cost is obtained as the sum of the repair and replacement costs, as shown in Equation 6. The equations for calculating the repair and replacement costs are given in Equations S1 and S2.
$$\mathrm{MC} = \mathrm{RC} + \mathrm{R_pC} \quad (6)$$
where, MC = maintenance cost, RC = repair cost, and R_pC = replacement cost.
In the dismantling and disposal phase, the sum of the cost of dismantling the building and the cost of disposing of the waste after dismantling is calculated, as shown in Equation 7. The equations for calculating the dismantling and disposal costs are given in Equations S3 and S4.

$$\mathrm{DDC} = \mathrm{D_eC} + \mathrm{D_iC} \quad (7)$$
where, DDC = decommissioning and disposal cost, D_eC = decommissioning cost, and D_iC = disposal cost.

Table 6. Cost data and criteria for calculating the LCC by step.
Step: Calculation standard
Construction: construction cost by detail type; construction cost index
Maintenance: construction cost by detail type; construction cost index; criteria for establishing long-term repair plans; designated waste public treatment facility bring-in commission; construction standard production unit; real discount rate
Dismantling: construction standard production unit; designated waste public treatment facility bring-in commission; real discount rate
4. Implementing the decision support method in the early design stage
4.1. Overview of the BIM-based preliminary estimation system
Figure 2 shows the BIM-based preliminary estimation process.
• The BIM model information is linked with the BIM-based project preliminary estimate prototype to extract the basic information of the project outline and BIM model from the repository server.
• A database of the performance cost and reference data is built, and the preliminary estimation algorithm is defined by applying the calculation formulas.
• A connection with the database is established based on the extracted basic information, through system linkage.
• The similar cost and standard data are retrieved and applied to the algorithm to implement the BIM-based preliminary estimation system.
Unlike the existing BIM-based estimation methods, the BIM-based estimation method in this study supports the decision-making by providing the designer with the LCC according to the design alternative as well as the total construction cost, producing an estimate from the mass model at the initial design stage.
Establishment of a database for performance data and cost
The structure and contents of the database include the basic project information for similar-performance-data matching, and the actual construction cost and the construction cost index for the preliminary estimation. The actual construction cost data include the construction, machinery, electricity, communication, and engineering (civil engineering) construction costs, and the construction cost for each detail type. In addition, they include the criteria for establishing a long-term repair plan for predicting the LCC according to the detailed work type, and cost estimation criteria like the waste treatment standard, standard production unit, and real discount rate. To take advantage of these data, the database was built using Oracle SQL Developer, which is currently the most common and reliable database management system (DBMS). Figure 3 shows the process of database construction through Oracle SQL Developer. Figure 3(a) shows the table tree constituting the database; the tables, each composed of a data header and data, contain the performance cost and the cost estimation criteria through which similar performance data are matched.
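A minimal sketch of the similar-performance-data matching against such a database; sqlite3 stands in for the Oracle DBMS used in the study, and the table and column names are assumptions, not the actual schema:

```python
import sqlite3

def find_similar_projects(db_path: str, usage: str, structure: str,
                          floors_above: int, region: str):
    """Filter historical projects whose basic attributes match the
    target project outline (illustrative schema)."""
    con = sqlite3.connect(db_path)
    cur = con.execute(
        """SELECT project_id, year, cost_per_m2
           FROM actual_costs
           WHERE usage = ? AND structure = ?
             AND floors_above = ? AND region = ?""",
        (usage, structure, floors_above, region),
    )
    rows = cur.fetchall()
    con.close()
    return rows
```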
Implementation of the BIM-based preliminary estimation system
The preliminary estimation system first captures the client's requirements and then applies them to the BIM model. The generated model is saved to the repository server, and the data needed for the estimation are parsed and delivered to the prototype system. The delivered data and the design alternative input data are then processed through the estimation algorithm linked to the database.
For the implementation of the prototype system, a preliminary estimation algorithm was defined so that estimates can be produced based on the basic information of the project and the designer's design alternatives (refer to Figure 4). Based on the client's requirements and the BIM model information, the designer selects the basic information of the project and the alternatives for each building part. The selected design contents are linked with the database to derive similar performance data. The derived data are passed through the present-value method to convert the past cost data to current costs. The actual construction cost and the standardized data are then converted into LCC estimates.
The preliminary estimate algorithm was coded using Oracle SQL Developer to associate it with the database. Figure 5 shows the algorithm coding screen, where (a) is a worksheet window in which an equation is coded and (b) is a window showing the coding result.
JAVA-based Eclipse JSP was used to implement a Web-based user interface (UI) in conjunction with the built database and algorithm (refer to Figure 6). The prototype system UI is divided into a UI for the preliminary estimation of the overall construction cost (refer to Figure S1) and a UI for forecasting the LCC by design alternative (refer to Figure S2).
BIM-based decision-making process
Through the implemented BIM-based project estimation system, the decision-making process in the early stage of designing a construction project is defined as shown in Figure 7. This supports decisions on the basic elements of the project and on the detailed work in the planning and planning-design stages. Figure 7(a) shows the process of determining the basic elements of a construction project. This process sets a proper construction cost according to the client's requirements and determines the basic elements, such as the purpose of the project, the number of floors, and the area, region, and structure. It then extracts the construction costs of similar projects in connection with the database and applies the estimation algorithm to derive the total construction cost. Thereafter, it performs an economic review by comparing the cost estimate with the proper construction cost. If the cost estimate exceeds the proper construction cost, or if a problem arises, the basic elements should be corrected so that the cost estimate no longer exceeds the proper construction cost; a sketch of this review loop follows.
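A hedged sketch of the Figure 7(a) review loop, with the estimation and correction steps passed in as functions standing in for the system's database-linked algorithm:

```python
def economic_review(estimate_fn, revise_fn, elements, proper_cost,
                    max_rounds=10):
    """Repeat estimate -> compare -> correct until the estimate
    no longer exceeds the proper construction cost."""
    for _ in range(max_rounds):
        estimate = estimate_fn(elements)
        if estimate <= proper_cost:
            return elements, estimate      # accepted configuration
        elements = revise_fn(elements)     # correct the basic elements
    raise RuntimeError("no acceptable configuration within the round limit")
```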
The process of determining the alternatives for the materials and detailed works is shown in Figure 7(b). First, it defines the client's requirements, selects the building parts (e.g., the exterior wall, interior wall, ceiling, floor, and stairs), and enters the design alternatives for the materials and detailed works for each part. Next, it extracts the LCC estimation criteria for each design alternative and the construction cost for each detailed work in connection with the database, and applies the estimation algorithms to predict the LCC for each design alternative. This process should be performed at least twice so that alternatives can be compared. Lastly, through this comparison, it determines whether the derived alternative complies with the client's requirements, and then selects the final design alternative.
5. Verification of the decision-making support method in the early design stage
5.1. Preliminary estimation-based support for decision-making
Selection of verification targets and BIM modeling
The preliminary estimation of the total construction cost implemented in the prototype is tested to determine whether it can provide accurate estimation data for decision-making in the early design phase. For accurate verification, the preliminary estimates produced for completed construction cases were compared with the actual construction costs. The 2018 average KRW:USD exchange rate of 1,115.70 was applied for currency conversion in this study (Ministry of Economy and Finance 2019). Among the projects included in the data on "Analysis of 2014 Construction Expenses Classified by Public Facility," three projects with similar use, size, structure, and location data were selected as verification targets. As shown in Table 7, the three projects were reinforced concrete buildings with five stories above ground and one underground level, used as public office facilities and located in Gyeonggi-do, Daegu, and Incheon, respectively.
Each project building was modeled using Revit from Autodesk, one of the most widely used BIM tools, to conduct BIM-based project estimation through selected project data. The modeling of the building progressed to the mass model level of LOD1-LOD2, which is generally conducted at the planning and design stage (refer to Table 8).
Results of the decision-making support method based on the preliminary estimation
To verify the BIM-based estimation system, three BIM models were built and estimated. Figure 8 shows a screenshot of the application of the BIM model of the Kimpo City project in Test 1. The two other projects were processed in the same way. Table 9 compares the cost estimated by the system with the actual construction cost of each project and shows the error rate. The errors from the three tests were 6.46, 7.16, and 1.93%, respectively; the error range was 1.93-7.16%, and the average error rate was 5.18%. This implies that the BIM-based estimation method in this study has high accuracy: it has an error rate of less than 10%, well within the −30% to +50% accuracy range specified by AACE for feasibility analysis.
Figure 8. Preliminary estimation result screen.
Table 9. Results and error rates of the preliminary estimation method to be verified.
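The reported statistics follow directly from the Table 9 error values:

```python
errors = [6.46, 7.16, 1.93]                 # % error per test, from Table 9
print(min(errors), "-", max(errors))        # range: 1.93-7.16%
print(round(sum(errors) / len(errors), 2))  # mean: 5.18%
```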
5.2. Verification of the decision-making support method through life cycle cost forecasting
5.2.1. Life cycle cost estimation guideline verification target and calculation criteria
Verification of the proposed LCC prediction method was conducted based on the model of Test 1 among the three projects that were previously modeled. In accordance with the decision-making process in this paper, some outer walls of the BIM model were selected, and the type of exterior material and the detailed work alternatives were set. Table 10 summarizes the calculation criteria for the alternative LCCs.
Test 1 is a reinforced-concrete structure with a life expectancy of 40 years according to the "basic contents training schedule," and the cost that forms the basis of the calculation is defined for each alternative, as shown in Table 11.
First of all, the construction cost by detail type is extracted from the project cost according to the basic elements and design alternatives (i.e., the actual construction cost) and is converted to the current cost through the construction cost index. In the case of the dismantling cost, the actual data on existing projects are insufficient, so the dismantling method and cost for each work type are applied using the standards of construction estimation. Finally, the waste disposal cost is defined as the average value of the costs specified in the "designated waste public treatment facility import fee" announced by the Ministry of Environment. Table 12 shows the number of repair and replacement cycles occurring in the maintenance phase, determined by applying the repair cycle and repair rate specified in the "Criteria for the Establishment of a Long-Term Repair Plan" among the estimation criteria defined in Table 10.

In this chapter, the BIM-based preliminary estimation system was used to estimate the LCC for each design alternative for validation. Figure 9 shows the results screen produced by selecting masonry, as in design alternative 1, and inputting the corresponding calculation criteria into the system. The results screen for the tile design of alternative 2 is shown in Figure 10. Table 13 summarizes the LCC forecasts for each design alternative by stage, including the construction, repair, replacement, dismantling, and disposal costs. For design alternative 1, the construction cost accounts for more than 84% of the total LCC, and the proportions of the maintenance, dismantling, and disposal costs are small (i.e., 15.57%). For design alternative 2, the construction cost accounts for only about 10.10% of the total LCC, and the maintenance, dismantling, and disposal costs account for the majority of the remainder (i.e., 89.67%). A comparison shows that design alternative 1 has a higher construction cost and lower maintenance, dismantling, and disposal costs, while design alternative 2 has a higher overall LCC. The LCC predictions therefore indicate that it is necessary to consider not only the construction cost but also the whole LCC when examining the economics of a material at the initial design stage. Based on the verification results presented in this chapter, the validity of the decision-making method through LCC prediction was confirmed, and it can be concluded that the designer and the client can make decisions based on the LCC in the early design stage. A simplified sketch of the LCC build-up described above follows.
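The sketch below illustrates the LCC build-up under stated assumptions: construction cost at year 0, repairs at fixed cycles over a 40-year life, and dismantling plus disposal at end of life, all discounted at a real rate. The cycle lengths, costs, and rate are placeholders, not the Table 10-12 criteria.

```python
def present_value(cost, year, real_rate):
    return cost / (1 + real_rate) ** year

def life_cycle_cost(construction, repairs, dismantling, disposal,
                    life=40, real_rate=0.022):
    """Sum the discounted stage costs; `repairs` is a list of
    (repair_cost, cycle_years) pairs applied over the service life."""
    lcc = construction                       # incurred at year 0
    for cost, cycle in repairs:
        for year in range(cycle, life, cycle):
            lcc += present_value(cost, year, real_rate)
    lcc += present_value(dismantling + disposal, life, real_rate)
    return lcc

# Comparing two alternatives, analogous to design alternatives 1 and 2:
alt1 = life_cycle_cost(100.0, [(4.0, 10)], 3.0, 1.5)  # costly build, few repairs
alt2 = life_cycle_cost(80.0, [(9.0, 5)], 3.0, 1.5)    # cheap build, frequent repairs
print(alt1, alt2)
```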
Conclusion
This study was carried out to support decision-making in the early design phase through BIM-based preliminary estimation. As the decisions made in the early design phase have the most significant impact over the entire project period, it is very important to consider not only the construction cost but also the costs incurred in the post-construction stages in the decision-making process. In this regard, previous studies related to BIM-based preliminary estimation and the LCC were analyzed to derive their limitations and to confirm the research directions and points of differentiation.
The findings of this study can be summarized as follows. The analysis of previous studies revealed that research on BIM-based preliminary estimation makes estimates based on the automatic quantification function of BIM, so the higher the LOD of the model, the more accurate the estimate. As the progress of the design is limited in the early design phase, however, application cases are rarely available, and research on the LCC is insufficient. Therefore, it is necessary to consider the LCCs of the design alternatives as well as the preliminary estimate of the total construction cost to support decision-making in the early design phase.
The performance data and estimation criteria were defined, and calculation methods were proposed to support decision-making through accurate preliminary estimation and LCC prediction. The preliminary estimation was based on the actual data of existing projects and the building cost index, while the criteria for establishing the service life and the long-term repair plan, the designated waste public treatment facility bring-in fee, the standards of construction estimation, and the real discount rate were additionally applied in the LCC estimation.
To realise the decision-making support method in the early design phase, a prototype was implemented and the process was defined based on the information of the BIM model. The prototype, which consists of a database and an estimation algorithm, enabled preliminary estimation and LCC prediction from the BIM model via a Web-based portal system.
Verification of the decision-making support system was conducted to confirm the accuracy of the preliminary estimation method and the validity of the decision-making support method according to the LCC. The results revealed that the proposed preliminary estimation method has high accuracy and that a decision-making process considering the LCC is possible. The decision-making support method in the early design phase proposed in this study provides the LCC based on accurate preliminary estimation and design alternatives, and supports decision-making by building designers and owners. Therefore, it is expected to serve as useful information in the decision-making occurring in the early design phase, allowing a rapid economic review of the design alternatives, a reduced LCC, and a shortened decision-making time.
This study has limitations, however, in that it is difficult to make an estimate for a building type that is not found in the constructed database, because the estimates are based on the database of performance data and estimation criteria. In addition, the estimation criteria were examined based on the standards suggested by the government, but their limited classification and scope pose difficulties in estimating the LCC. For future work, there is a need to research the LCC estimation criteria and the construction of big data, which can proceed through the collection, management, and analysis of performance data. | 8,514.8 | 2020-05-21T00:00:00.000 | [
"Engineering"
] |
The Role of Discounting in Energy Policy Investments
For informing future energy policy decisions, it is essential to choose the correct social discount rate (SDR) for ex-ante economic evaluations. Generally, costs and benefits, both economic and environmental, are weighted through a single constant discount rate. This leads to excessive discounting of the present value of cash flows that are progressively more distant in time. Evaluating energy projects through constant discount rates would mean underestimating their environmental externalities. This study intends to characterize environmental-economic discounting models calibrated for energy investments, distinguishing between intra- and inter-generational projects. In both cases, the idea is to use two discounting rates: an economic rate to assess financial components and an ecological rate to weight environmental effects. For intra-generational projects, the dual discount rates are assumed to be constant over time. For inter-generational projects, the model is time-declining, to give greater weight to environmental damages and benefits in the long term. Our discounting approaches are based on Ramsey's growth model and Gollier's ecological discounting model; the latter is expressed as a function of an index capable of describing the performance of a country's energy systems. With regard to the models we propose, the novelty lies in the calibration of the "environmental quality" parameter. Regarding the model for long-term projects, another innovation concerns the analysis of the risk components linked to economic variables: the growth rate of consumption is modelled as a stochastic variable. The defined models were implemented to determine discount rates for both Italy and China. In both cases, the estimated discount rates are lower than those suggested by governments. This means that the use of dual discounting approaches can guide policymakers towards sustainable investment in line with UN climate neutrality objectives.
Introduction
Nowadays, energy policies are a key governmental instrument for achieving economic, environmental, and social objectives, encouraging sustainable development, providing environmental protection, and containing greenhouse gas (GHG) emissions [1]. In this respect, the path to energy transition, increasingly advocated by governments, is driven by investment programmes whose effects often manifest themselves in the long term; these include energy infrastructure and the pricing of environmental externalities such as carbon emissions [2]. Thus, choosing more sustainable investments means making intertemporal decisions. Such choices involve trade-offs between benefits and costs that occur at different times [3]. It follows that a critical issue in environmental and resource economics is the choice of the social discount rate (SDR), as it significantly influences the outcome of cost-benefit tests [4,5]. A social discount rate reflects a society's relative assessment of well-being today versus well-being in the future [6]. The SDR allows the costs and benefits that an investment generates over time to be weighted so as to make them economically comparable. It is therefore a fundamental parameter for expressing an opinion on the economic performance of an investment project whenever the analysis is conducted from the point of view of a public operator or of the community [7].
Choosing an appropriate social discount rate is crucial for cost-benefit analysis. Too high a social discount rate could preclude the realization of many public projects that are desirable for society in terms of their extra-financial repercussions. Conversely, setting an SDR that is too low would risk steering investment decisions towards economically inefficient investments. Furthermore, a relatively high social discount rate gives less weight to the benefit and cost streams that occur at progressively more distant times, favouring projects whose benefits occur at the beginning of the analysis period [8].
The choice of social discount rate affects both the ex-ante decision that allows the testing of whether a specific public sector project deserves funding, and the ex-post evaluation of its performance [9].
The issue of discounting is also crucial for energy efficiency projects, where investors must weigh higher initial costs against future energy savings [10]. Two aspects of energy projects need to be addressed: firstly, these are investments that have multiple extra-financial effects on the community, so their effectiveness is more social than specifically financial in nature; secondly, the time perspective is very long for some initiatives [11]; see, among others, the European Green Deal projects, with targets for 2050 [12], or energy transition programmes to curb global warming, whose effects last for centuries [13].
To guide the decision-making process towards efficient investments that respect the defined programmatic guidelines, the analysis must attribute a greater 'value' to the extra-financial effects that intervention initiatives generate for the community. According to Kula and Evans [14], in a moment of strong environmental stress like the present one, environmental effects should be discounted separately and differently from economic impacts. In particular, the challenge today is to fix the discount rate for environmental effects at a level that achieves either a rate of natural capital depletion that maximises the utility of consumption of current and future generations, or the preservation of natural capital. One cannot assume a common discount rate for both natural and man-made capital, since natural capital is finite while man-made capital is not; there should therefore be two discount rates. The two rates can coincide only if the demand for ecosystem goods and services does not exceed the ecosystem's regenerative ability.
The aim of this paper is to propose an innovative economic-environmental (or dual) discounting approach in which environmental externalities are weighted at a different and lower rate than that used for strictly financial cash flows. This is possible because the social welfare function (SWF), from which the social discount rate derives, is no longer only a function of consumption, and therefore of economic parameters, but also of environmental quality. With this research, we want to define a dual discounting specification for energy projects. Specialising the discounting rate according to the investment sector can lead to a fairer and more equitable allocation of resources [11,15]; specifically, to consider the performance of the energy systems of individual countries, the variable "environmental quality" is defined as a function of the Energy Transition Index (ETI) [16].
In addition, we distinguish between intra-generational energy projects (those with short-term effects) and inter-generational energy projects (those with long-term effects). In the first case, we define a dual discounting approach based on time-constant environmental and economic discount rates. In the second case, both discount rates, environmental and economic, follow a time-declining structure, since using constant discount rates for projects with long-term implications would excessively contract the present value of progressively more distant costs and benefits.
The remainder of this paper is organised as follows: Section 2 reviews the relevant literature; Section 3 defines the theoretical framework of the two environmental-economic discounting models; Section 4 implements the models to estimate constant and declining discount rates with reference to both the Italian and Chinese economies; Section 5 concludes and discusses energy policy implications.
Literature Review
The social discount rate (SDR) plays a critical role in cost-benefit analysis (CBA). The SDR allows the comparison of socio-economic costs and benefits, expressed in monetary terms, in order to make a judgement on the efficiency of a project, programme, or policy [17]. This judgement is summarised by performance indicators such as the economic net present value (ENPV), a measure of an investment's marginal utility for 'present' society [18,19]:

ENPV = \sum_{t=0}^{n} (B_t - C_t)/(1 + SDR)^t    (1)

in which B_t and C_t represent, respectively, the benefits and costs arising at time t, and 1/(1 + SDR)^t is the discount factor. Equation (1) shows that as the discount rate increases, the present value of net benefits decreases the more distant they are from the time of valuation.
The effect of the contraction of the present value of cash flows is a crucial issue when the objects of analysis are long-lived projects, whose effects extend for at least 30-40 years and therefore involve more than one generation [6]. In the valuation of intergenerational projects, such as those with environmental impacts, the choice of appropriate discount rate involves the additional challenge of taking intergenerational equity into account [9,20]. This is one of the main reasons why there is still no consensus on the discount rate to be used in valuations. The issue becomes even more complex when environmental effects make large contributions and mainly occur in the long run.
The literature review shows that the most widely used approach to estimating the discount rate is the social rate of time preference (SRTP) [21,22]. According to this approach, the social welfare function (SWF) depends on the utility U(c) of income or consumption c alone:

SWF = \int_0^{\infty} U(c_t) \, e^{-\rho t} \, dt    (2)

Here U(c_t) represents the utility that society derives from public and private per capita consumption at time t; e^{-\rho t} is the discount factor that weights the incremental utility resulting from an additional unit of consumption at time t; and ρ is the rate at which future utility is discounted, also called the pure rate of time preference. To determine the discount rate that society should apply to incremental consumption, it is first necessary to estimate the discount factor by maximising the SWF. If W denotes the integral in (2), then the derivative of W with respect to consumption in period t represents the discount factor and can be interpreted as the social present value of an incremental unit of consumption in period t [21]. The social discount rate is equal to the proportional rate of decrease of this discount factor over time. In other words, this parameter, the SRTP, is the rate at which the value of a small increment of consumption falls as time passes. It can be shown that the SRTP is a function of two components [9]. The first is ρ, the pure time preference rate (or utility discount rate), which reflects the importance that society attaches to the welfare of the current generation relative to that of future generations. The second is the product of the elasticity of the marginal utility of consumption η and the growth rate of per capita consumption g; this product captures the fact that an additional unit of consumption has a lower utility value for a richer future generation than for the current one [8]. The resulting formula,

SRTP = \rho + \eta \cdot g    (3)

also known as the Ramsey formula, depends only on economic parameters and is time-constant, i.e., it yields a constant discount rate throughout the analysis period. Therefore, according to some authors, this approach fails to properly consider environmental externalities, which often occur in the long term. In this regard, Emmerling et al. [23] argue that the climate goals of the Paris Agreement (2015) can only be achieved by employing very low discount rates, such as the one estimated by Stern [24]. Similarly, van den Bijgaart et al. [25] and van der Ploeg and Rezai [26], using analytical integrated assessment models (IAMs), show that the discount rate is a key factor in the social cost of carbon. Gollier [27] proposes an extension of the Ramsey formula for projects with long-term effects, e.g., investments in climate change mitigation that reduce greenhouse gas emissions.
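Before turning to that extension, the Ramsey rule (3) can be checked compactly under assumed isoelastic utility and constant growth (standard assumptions, though not stated explicitly above):

```latex
% Sketch assuming isoelastic utility U(c) = c^{1-\eta}/(1-\eta), so that
% U'(c) = c^{-\eta}, and exponential consumption growth c_t = c_0 e^{g t}.
\mathrm{SRTP}
  = -\frac{d}{dt}\,\ln\!\left(e^{-\rho t}\,U'(c_t)\right)
  = -\frac{d}{dt}\left(-\rho t - \eta \ln c_0 - \eta g t\right)
  = \rho + \eta g
```

Gollier's extension perturbs this constant rule with a variance (precautionary) term, as follows.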
The assumption is that the consumption level in the SWF is uncertain and that fluctuations in consumption growth are distributed independently and normally. Under these assumptions, (3) becomes

SDR = \rho + \eta \cdot \mu_g - 0.5 \cdot \eta^2 \cdot \sigma_g^2    (4)

where \mu_g and \sigma_g^2 are, respectively, the mean and variance of the consumption growth rate. The term 0.5 \cdot \eta^2 \cdot \sigma_g^2 is the precautionary term and indicates the planner's intention to save more now in favour of future benefits; it summarises the uncertainty of the growth rate of consumption and reduces the value of the discount rate [18,27]. Luo et al. [3] demonstrate that non-diversifiable idiosyncratic risk reduces the discount rate and increases the present value of the uncertain future benefits of projects.
Other scholars suggest the use of dual discounting approaches, whereby environmental components are weighted at a lower "ecological" rate than the "economic" rate used to assess strictly financial costs and revenues [14,28-31]. The economic net present value (ENPV) is then given by the sum of two discounted streams:

ENPV = \sum_{t=0}^{n} F_t/(1 + r_c)^t + \sum_{t=0}^{n} E_t/(1 + r_q)^t    (5)

where F_t and E_t indicate, respectively, the annual economic cash flows and the net environmental benefits at time t; r_c represents the consumption (or economic) discount rate; and r_q is the environmental quality discount rate (or, more simply, the environmental discount rate), with r_q < r_c. In other words, the environmental and social damages and benefits generated by the project, once transformed into monetary terms, are discounted using r_q, while the economic benefits and costs are assessed through r_c [18,20,27]. The formulas for estimating r_c and r_q are derived in the following section via Formulas (7) and (8).
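A toy numerical illustration of (5), with made-up flows and rates, shows the practical effect: discounting the environmental benefits at the lower ecological rate raises the ENPV relative to single-rate discounting.

```python
# Toy dual-discounting example for Eq. (5); all figures are invented.
F = [-100, 10, 10, 10, 10, 10]    # financial net flows, years 0-5
E = [0, 8, 8, 8, 8, 8]            # monetized environmental benefits
rc, rq = 0.05, 0.02               # economic and ecological rates, rq < rc

enpv_dual = sum(f / (1 + rc) ** t + e / (1 + rq) ** t
                for t, (f, e) in enumerate(zip(F, E)))
enpv_single = sum((f + e) / (1 + rc) ** t
                  for t, (f, e) in enumerate(zip(F, E)))
print(round(enpv_dual, 1), round(enpv_single, 1))  # dual > single-rate value
```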
Another branch of the literature proposes the use of time-declining discount rates to give more weight to distant project effects than is the case when using time-constant discount rates [32][33][34][35][36]. Two methods are used to estimate the declining discount rate (DDR): the expected net present value approach and the consumption-based approach. For both, the theoretical assumption is to include an uncertainty factor in the time-structure of the discount rate. In the ENPV approach, the same discount rate is modelled as an uncertain parameter, while in the consumption-based approach, the uncertainty concerns the growth rate of consumption which appears in the Ramsey formula.
With reference to the first approach, Weitzman [6] shows that estimating the ENPV with an uncertain but constant discount rate is equivalent to computing the net present value (NPV) with a certain but decreasing 'certainty equivalent' rate that reaches its minimum possible value at time t = ∞. Thus, if the discount rate is modelled as a stochastic variable, we can first estimate the certainty-equivalent discount factor and then the corresponding certainty-equivalent discount rate, understood as the rate of progression of the expected discount factor from t to t + 1.
According to Gollier's consumption-based approach [18,27,29], the absence of a sufficiently large dataset covering the long-run growth process of the economy implies that the parameters \mu_g and \sigma_g of (4) can be treated as uncertain. It is then assumed that the log of consumption follows a Brownian motion with trend \mu(\theta) and volatility \sigma(\theta), where the parameter \theta is uncertain at time 0. These assumptions transform the Ramsey rule into a time-declining function.
Weitzman's [6,32] findings guided the UK and France to adopt discount rates with a declining structure for projects with long-term consequences [37,38]. The U.S. Environmental Protection Agency [39] has also followed suit.
Finally, recent studies analyse the need to use a specific discount rate for environmental sectors and services. Baumgärtner et al. [31] show that ecosystem services should be discounted at significantly lower rates than those used to weight consumer goods. Vazquez-Lavín et al. [40], with reference to projects aimed at preserving biodiversity in marine protected areas in Chile, estimate a declining SDR for eco-system services. Muñoz-Torrecillas et al. [41] estimate an SDR to be employed in the appraisal of afforestation projects in the United States.
With specific reference to the energy sector, Steinbach and Staniaszek [42], Kubiak [10], and Poudineh and Penyalver [2] offer a review of social discount rates for energy transition policies and their implications for decision-making. Foltyn-Zarychta et al. [11] consider employing a lower discount rate than that suggested by the government, as energy policy planning horizons are generally very long. The US Department of Energy (DOE) evaluates a rate of 3% for energy conservation and RES projects. The estimate is based on long-term Treasury bonds, averaged over a 12-year period [42].
Table 1 summarises the main literature studies concerning approaches to estimating the discount rate.

Table 1. Main approaches in the literature: dual discounting approaches [14,18,28,30]; specific discount rates per investment sector/area of intervention, namely energy systems [11,42], applications across different investment sectors [15], GHG emissions [23], ecosystem services [31,40], and afforestation projects [41].

Considering the framework outlined, this research intends to characterise new approaches for estimating the SDR for use in economic evaluations of energy interventions and policies. As the literature review shows, there is a lack of studies proposing both constant and declining dual models specifically for the economic evaluation of energy projects. Thus, building on the existing literature, we define a new discounting model in which environmental quality is described as a function of an energy transition index. Specifically, we define: (i) a constant-dual discounting model for intra-generational energy projects, whose effects can be assessed over a thirty-year period; here, the environmental and economic discount rates are constant over time; (ii) a declining-dual discounting model for inter-generational investments, i.e., those with appreciable effects over the long run; here, both the environmental and the economic discount rates have a time-declining structure, which is possible because the modelling takes macroeconomic risks into account.
Modelling the Social Discount Rate for Energy Policies
In this section, we characterise discounting models that can fairly account for the environmental impacts of energy policies, both short-and long-term. Section 3.1 focuses on the model for estimating discount rates for intra-generational energy projects, i.e., investments whose impacts occur over a period of at most thirty years. Section 3.2 defines the discounting model for energy projects with long-term effects for which inter-generational equity issues need to be considered.
Both models use a discount rate for environmental externalities that is lower than the rate used to weight the strictly economic components. This is because the mathematical structure of the discount rate is a function not only of consumption but also of environmental quality. The latter is, for the first time, expressed as a function of the Energy Transition Index (ETI), to orient decision-making towards investments increasingly in line with climate neutrality goals.
The model for energy intra-generational projects proposes the use of time-constant rates. This is legitimate as the contraction effects on the present value of cash flows are acceptable for time intervals limited to 20-30 years. Instead, in the case of investments with long-run effects, inter-generational equity issues are addressed by using rates with a declining structure over time. Otherwise, long-term environmental damage and benefits would be underestimated, or not considered at all in the analysis.
A New Discounting Model for Energy Intra-Generational Projects
Our approach to discounting the effects of intra-generational projects in the energy field is based on Ramsey's growth model [47] and Gollier's ecological discounting model [29].
Gollier [29,51] proposes discounting the environmental components of an investment at a rate r_q that is different from and lower than the rate r_c used to weight the strictly financial effects. To derive rates for discounting different costs and benefits at different time horizons, it is necessary to consider a representative agent consuming two goods whose availability evolves stochastically over time. This is done by extending Ramsey's rule (Equation (3)), taking into account the degree of substitutability between the two goods and the uncertainty surrounding economic and environmental growth. The rate at which environmental impacts should be discounted is in general different from the rate at which monetary benefits should be discounted. It can be shown that, under certainty and Cobb-Douglas preferences, the difference between the economic and ecological discount rates equals the difference between the economic and ecological growth rates.
More specifically, it is assumed that the utility function U_t depends on environmental quality q_t as well as on consumption c_t, i.e., U_t = U(c_t, q_t). In addition, since the environment tends to deteriorate over time, an incremental improvement in environmental quality will be more valuable to future generations than to current ones. Assuming that consumption is a partial substitute for environmental quality, economic growth has a positive impact on the ecological discount rate, potentially offsetting the effect of environmental deterioration. If the substitutability is limited, the effect of environmental deterioration dominates that of economic growth. This leads to a low ecological discount rate that allows environmental assets to be preserved.
Based on the assumptions introduced, the inter-temporal SWF becomes the integral of the utilities derived from both consumption c_t and environmental quality q_t:

SWF = \int_0^{\infty} U(c_t, q_t) \, e^{-\rho t} \, dt    (6)

To derive the economic discount rate and the environmental discount rate, we assume that environmental quality is a deterministic function of economic performance: q_t = f(c_t). Common sense might suggest that environmental quality is a decreasing function of GDP per capita, but this is much debated in scientific circles. For this reason, it is permissible to assume the monotone relationship q_t = c_t^{\delta}, where \delta can be either positive or negative. If we assume that c_t follows a geometric Brownian motion, we obtain an analytical solution for r_c and r_q. Without reproducing the analytical derivation of the formulae, for which we refer to Gollier [29], differentiating U(c_t, q_t) with respect to consumption c_t yields the function (7) describing the economic discount rate r_c, while differentiating U(c_t, q_t) with respect to environmental quality q_t yields the ecological discount rate function r_q in (8). Equations (7) and (8) show how r_c and r_q depend on: (i) socio-economic parameters, namely the time preference rate \rho, the aversion to income inequality \eta_1, the growth rate of consumption g_1, and the uncertainty of the consumption growth rate \sigma_{11} (in terms of the mean square deviation of the variable); (ii) environmental variables, namely the degree of environmental risk aversion \eta_2 and the elasticity \delta of environmental quality to changes in the growth rate of consumption g_1. The estimation of each parameter is detailed at the end of this section.
The aim of this research is to propose discount rates that adequately account for the costs and benefits of energy investments. The main novelty of the model is therefore the modelling of environmental quality q_t, which for the first time is defined as a function of the Energy Transition Index (ETI). The index, estimated by the World Economic Forum (WEF), provides a framework to compare and support countries in their energy transition needs, considering their current energy system performance and the readiness of their macroeconomic, social, and regulatory environment for transition. The index, which summarises 40 different indicators, is currently available for 114 countries. The scores show that while 92 countries have raised their scores over the last 10 years, only 10% of countries have made consistent gains, which are necessary to achieve climate targets for the next decade.
According to the World Economic Forum report 'Fostering Effective Energy Transition' [16], even as countries continue their progress in the clean energy transition, it becomes necessary to embed the transition in economic, political, and social practices to ensure irreversible progress. For this reason, it is essential to introduce into the mathematical structure of the discount rates a variable that tracks the progress of countries on the path to energy transition. This introduces an acceptance criterion that can guide decision-making towards projects in keeping with the climate neutrality goals to be achieved by 2030 and 2050.
Defining q_t = f(ETI), we can derive the value of the elasticity \delta of environmental quality to changes in the growth rate of consumption as follows. Let c_1 be the GDP per capita of a country and c_2 its corresponding ETI score. The slope of the regression line correlating GDP per capita and the ETI corresponds to the value of \delta. It follows that a different definition of environmental quality may allow the model to be adapted to the assessment of project categories other than energy projects.
In the following, the approaches used to estimate the parameters appearing in (7) and (8) are defined.
With reference to the socio-economic variables, the time preference rate \rho is the sum of: (i) l, which coincides with the average mortality rate for a country, since individuals tend to discount future utility according to the probability of being alive at the time of the decision; and (ii) r, the pure time preference rate, which reflects the irrational behaviour of individuals in making choices about the distribution of resources over time and is generally between 0 and 0.5% [49,50].
The elasticity \eta_1 of the marginal utility of consumption represents the percentage change in marginal utility resulting from a unit change in consumption [51]. It is a measure of aversion to income inequality, and it is estimated using the formula proposed by both Stern [52] and Cowell et al. [53]:

\eta_1 = \log(1 - t) / \log(1 - T/Y)    (9)

where t is the marginal tax rate and T/Y the average tax rate. The growth rate of consumption g_1 expresses the degree of wealth in society and is generally set at the average growth rate of a country's GDP per capita [46,48].
Finally, a further environmental parameter is \eta_2, the degree of environmental risk aversion. It can be expressed through Equation (10) as a function of the share of consumption expenditure \eta^* allocated to environmental quality, with 10% < \eta^* < 50% [29,54,55].
A New Discounting Model for Energy Inter-Generational Projects
To give the "right" weight to the environmental effects of energy projects and policies in the long run, a dual and declining discounting approach is proposed. In other words, the two discount rate functions, economic and environmental, defined in the previous section are given a structure that declines over time.
This is achieved by considering macroeconomic risk, i.e., by assuming that the growth rate of consumption g_1 is a risky variable. To do this, we first analyse the variable's trend over time, then define the probability distribution that best approximates the historical data. From the probability distribution of g_1 thus obtained, we derive the probability distributions of the unknowns r_c and r_q, and from these the values of the economic and environmental discount rates for each of the n years of the analysis period. The next step is to move from the two uncertain and constant discount rates r_c and r_q, which coincide with the expected values of the probability distributions obtained, to certain but decreasing 'certainty equivalent' rates. This is possible using the expected net present value (ENPV) approach, according to which assessing the ENPV with an uncertain but constant discount rate is equivalent to evaluating the NPV with a certain but declining 'certainty equivalent' rate that reaches its minimum value at time t = ∞ [33]. To move from the uncertain and constant discount rates to the certain but decreasing certainty-equivalent rates, it is first necessary to compute the economic and environmental discount factors E_c(P_t) and E_q(P_t), and then r_{ct} and r_{qt}:

r_{ct} = E_c(P_t)/E_c(P_{t+1}) - 1    (11)

r_{qt} = E_q(P_t)/E_q(P_{t+1}) - 1    (12)

In (11), E_c(P_t) is calculated using the following formula:

E_c(P_t) = \sum_{i=1}^{m} p_{rci} \cdot e^{-r_{ci} t}    (13)

where r_{ci} is the value of the i-th economic discount rate resulting from the probability distribution of r_c; p_{rci} is the probability of the i-th value of r_c; and m is the number of discretization intervals of the probability distributions of r_c and r_q. In (12),

E_q(P_t) = \sum_{i=1}^{m} p_{rqi} \cdot e^{-r_{qi} t}    (14)

in which r_{qi} is the value of the i-th environmental discount rate, derived from the probability distribution of r_q, and p_{rqi} is the probability of the i-th value of r_q.
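A minimal numerical sketch of (11)-(14) follows, using an illustrative three-point rate distribution rather than the paper's fitted distributions; it shows the certainty-equivalent rate declining towards the lowest scenario.

```python
import numpy as np

rates = np.array([0.01, 0.03, 0.07])   # illustrative r_ci scenarios
probs = np.array([0.25, 0.50, 0.25])   # their probabilities p_rci

def expected_discount_factor(t):
    """E(P_t) as in (13)/(14): probability-weighted discount factors."""
    return np.sum(probs * np.exp(-rates * t))

for t in (1, 30, 100, 300):
    # Certainty-equivalent forward rate as in (11)/(12).
    ce_rate = expected_discount_factor(t) / expected_discount_factor(t + 1) - 1
    print(t, round(100 * ce_rate, 2), "%")   # declines towards about 1%
```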
Application: Estimation of SDRs for Italy and China
The approaches described in Sections 3.1 and 3.2 are implemented below to estimate discount rates for intra- and inter-generational energy projects for two very different economies: Italy and China. This demonstrates that: (i) the model can be applied to any territorial context; and (ii) different social, economic, and environmental conditions lead to significantly dissimilar results.
Estimation of Constant and Dual Discount Rates for Italy and China
In the following we detail the estimation of the socio-economic and environmental parameters in (7) and (8).
The time preference rate \rho is a function of the mortality-based discount rate l and the pure time preference rate r. The first parameter, l, corresponds to the time-averaged mortality rate of the country. Since this rate undergoes only small variations over time, it is considered appropriate to use data from the last 30 years. l is estimated using mortality rates for the period 1991-2020, given by ISTAT for Italy and by the World Bank for China. Table 2 shows the annual mortality rates and the resulting averages. The result for Italy is l = 1.00%, in line with the estimates obtained by Percoco [46] and Florio and Sirtori [48]. For China, l = 0.68%; this lower value compared to Italy reflects lower mortality rates over the 30-year period.
The pure time preference rate r is positive and reflects the irrational behaviour of individuals in making choices about the distribution of resources over time. As suggested by both Pearce and Ulph [49] and Evans and Kula [50], 0 < r < 0.5% and is assumed to be 0.3%. It follows that: ρ Italy = 1.00% + 0.3% = 1.30%; ρ China = 0.68% + 0.3% = 0.98%.
By implementing (9) we calculate the elasticity η1 of the marginal utility of consumption. Using the data of the marginal t and average T/Y individual income tax rates given by the Organization for Economic Cooperation and Development Countries (OECD), we assess log(1 − t), log(1 − T/Y), and the corresponding ratio. Processing returns a value of η1 = 1.34 for Italy.
The analysis of average and marginal tax rates by income bracket in China instead gives a value of \eta_1 = 1.14 (source: https://taxsummaries.pwc.com/peoples-republic-of-china/individual/taxes-on-personal-income, accessed 10 July 2021).
In summary, the estimations return \eta_1 = 1.34 for Italy and \eta_1 = 1.14 for China. These estimates are consistent with known values from the literature, where the social values approach leads to 1 < \eta < 2.
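As a quick numerical check of Eq. (9), with placeholder tax rates chosen only for illustration (the OECD inputs used in the paper are not reproduced here):

```python
import math

def eta1(marginal_rate, average_rate):
    """Eq. (9): elasticity of the marginal utility of consumption."""
    return math.log(1 - marginal_rate) / math.log(1 - average_rate)

print(round(eta1(0.43, 0.34), 2))   # ~1.35 with these illustrative rates
```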
From the analysis of the trend of per capita GDP growth rate of the two countries, g1 is estimated for Italy by averaging data over the last forty years, while for China the evaluation is carried out based on data over the last sixty years.
As for the estimation of the two environmental parameters, the value of \eta_2 is derived from (10), assuming \eta^* = 30%, in line with Hoel and Sterner [54], Sterner and Persson [55], and Gollier [29]. It follows that \eta_2 = 1.15 for Italy and \eta_2 = 1.06 for China. \delta expresses the sensitivity of environmental quality q, expressed through the ETI, to changes in consumption c, the latter being related to GDP per capita. For 115 countries, the 2021 index values are regressed against GDP per capita in the same year. Figure 1 gives the results of the ETI-GDP per capita regression analysis, from which \delta = 0.23. Table 3 gives the values obtained for each parameter as well as the estimated r_c and r_q for Italy and China.
Estimation of Declining and Dual Discount Rates for Italy and China
To estimate time-declining rct and rqt discount rates for energy projects with intergenerational effects, the reference is the approach defined in Section 3.2. Also, in this case, estimations are carried out with reference to both the Italian and Chinese economies.
g_1 is estimated based on the growth rate of GDP per capita, in accordance with the literature [46]. As anticipated in Section 4.1, we consider it consistent to select data for the last forty years, i.e., from 1981 to 2020. Data from earlier periods reflect historical and economic contexts that can no longer be linked to either the current or the foreseeable future economic, social, and cultural context of the country.
We identify the probability distribution that most closely approximates the historical series in order to predict the values of the growth rate of consumption; in this case it is the Weibull curve, chosen on the basis of the Anderson-Darling test. The expected values of the GDP growth rate are then predicted through Monte Carlo analysis calibrated on 10,000 random trials, carried out using the Oracle Crystal Ball software. Once the probability distribution of the consumption growth rate g_1 is defined, the probability distributions of the economic discount rate r_c and the ecological discount rate r_q are obtained by implementing (7) and (8). Table 4 shows the values of the statistical indices for the Monte Carlo simulation. The calculations indicate that g_1 takes values between −8.56% and 4.82%, with a standard error of the mean of 0.02% after 10,000 simulations; r_c and r_q take values between −10.71% and 7.67%, and between −4.08% and 3.99%, respectively. In both cases, the standard error of the mean is acceptable, at 0.02% and 0.01% respectively after 10,000 trials. Since negative discount rates have no economic significance, only positive values are considered in the definition of the declining structure of the two rates. The analysis interval chosen for China is 1960-2020, over which the GDP growth rate tends to be steadily increasing. In this case, the Anderson-Darling test showed that the curve that best approximates the historical data is the logistic curve.
Again, the likely values of the GDP growth rate are predicted through Monte Carlo analysis based on 10,000 random trials. Table 5 shows the values of the statistical indices for the forecast: g_1 takes values between −18.96% and 40.58%, with a standard error of the mean of 0.06% after 10,000 simulations. For the simulations of r_c and r_q, the standard error is likewise acceptable, at 0.07% for the first variable and 0.02% for the second. Again, only positive values of the two discount rates are considered. This assumption is acceptable because the probability of a positive discount rate r_c is 95.06%, and the probability that the discount rate r_q is greater than 0 is 95.96%.
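The simulation step can be sketched as below. This is a hedged illustration only: a simple Ramsey-type linear mapping stands in for Gollier's two-good formulas (7)-(8), and the Weibull parameters are placeholders rather than the Anderson-Darling-fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder Weibull fit for the consumption growth rate, not the paper's.
g = rng.weibull(2.0, 10_000) * 0.03 - 0.005

# Stand-in mapping r_c = rho + eta1 * g, with rho and eta1 from Section 4.1.
r_c = 0.013 + 1.34 * g
r_c = r_c[r_c > 0]                      # negative rates have no economic meaning

print(r_c.mean())                       # mean of the simulated distribution
print(r_c.std() / np.sqrt(r_c.size))    # standard error of the mean
```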
The probability distributions of r_c and r_q obtained are first discretized into 100 intervals. Then, for each of the two distributions, we estimate the probability of occurrence of the average rate of each interval. Given the set of values to be associated with the discount rates r_c and r_q and their probabilities, the certainty-equivalent discount factors E_c(P_t) and E_q(P_t) are estimated using Formulas (13) and (14). Finally, applying (11) and (12) at each instant t yields the time sequences of the declining economic discount rate r_{ct} and the declining ecological discount rate r_{qt}. These are declining functions over the time horizon, assumed to be 300 years. Figures 2 and 3 illustrate the term structures of the economic and environmental discount rates for Italy and China, respectively.
Results and Discussion
As Table 3 indicates, the values of the discount rates to be used in the analysis of intra-generational energy projects for Italy are significantly lower than those obtained for China. In fact, r_c and r_q for Italy are 2.7% and 1.8% respectively, while for China r_c is 9.8% and r_q is 4.0%. It should be noted that the difference between the environmental and economic discount rates for China is marked; by contrast, in the Italian case, the values of the two discount rates are much closer to each other.
The implementation of the discounting model for energy inter-generational projects leads to the following results. For Italy: • The economic discount rate function r_{ct} starts from an initial value of 3.4% and reaches 0.7% after 300 years, thus decreasing by about 2.6 percentage points. • The environmental discount rate r_{qt}, on the other hand, takes on significantly smaller values than r_{ct}, starting from 1.92% and reaching 0.18% after 300 years. • The average economic discount rate over the first 30 years is about 3.0%, which coincides with the discount rate suggested by the European Commission [56]. • The average environmental discount rate over the first 20 years is 1.8%, highlighting how, from the beginning of the assessment, more weight is given to the damages and benefits that the investment generates for the environment.
For China: • The economic discount rate function r_{ct} starts from 12.90% and reaches 5.36% after 300 years. • The environmental discount rate r_{qt} lies well below r_{ct}, with an initial value of 4.54% and a final value at t = 300 years of 1.01%. • The average economic discount rate over the first 30 years is about 10.2%, slightly higher than the 9.0% discount rate suggested by the Asian Development Bank [57] for economic analysis. • The average environmental discount rate over the first 30 years is 4.0%.
Figures 4 and 5 show the step functions (solid lines) that approximate the declining economic and ecological discount rate functions (dashed lines) for Italy. For practical purposes, it is useful to approximate the declining function with a step function; in other words, it is permissible to use the same discount rate value over a thirty-year period of the analysis, an interval within which the effect of present value contraction on cash flows can be considered acceptable [37-39]. Figures 6 and 7 show the corresponding step functions of r_{ct} and r_{qt} for China.
Figure 4. Step structure of the economic discount rate r_{ct} for Italy.
Figure 5. Step structure of the environmental discount rate r_{qt} for Italy.
The results indicate that the use of two different rates for discounting strictly financial and extra-financial components allows greater weight to be given to environmental damages and benefits, thus orienting the decision-making process towards more sustainable investment choices.
Figure 6. Step structure of the economic discount rate r_{ct} for China.
Figure 7. Step structure of the environmental discount rate r_{qt} for China.
It is interesting to note that the two discount rate functions for China start from higher initial values than those for Italy but decline much more rapidly after the early years of the analysis period. The higher initial value is mainly due to China's higher GDP growth rate compared to Italy's. The faster decline in the term structures of the discount rates, however, is linked to China's 'worse' environmental condition. Indeed, as shown by its lower Energy Transition Index (ETI) value, more weight should be given to the environmental impacts of energy projects in China from the early years of the assessment, so as to prioritise investment choices in line with the sustainability and climate neutrality objectives to be achieved in the coming decades.
Conclusions
Energy transition policies aim to respond to economic, social, and environmental challenges. It is therefore essential to steer the decision-making process towards policy initiatives that ensure a balance between socio-economic benefits and costs. In this context, the choice of discount rate becomes central to comparing policy strategies and investment programmes, and also to determining the speed with which an energy transition policy should be delivered to reach decarbonisation targets within the defined timeframe [2].
Thus, the discount rate affects the final judgement on the efficiency of the investment policy or project. However, there is still no unanimity in the literature as to what value of the discount rate should be used in analyses, or how it should be estimated. The question becomes even more controversial when a very long-term perspective is adopted.
With this research, we propose an innovative approach for discounting energy investments, distinguishing between intra-generational and inter-generational projects.
In the first case, a constant and dual discounting approach is characterised. The discount rate used to discount the environmental components is lower than the discount rate used to weight the strictly financial contributions. However, since the effects of these projects are felt over a period of thirty years at the most, both discount rates are assumed to be time-constant.
For projects with inter-generational environmental effects, a dual and time-declining econometric model is defined to give greater weight to long-term environmental components that would be underestimated using constant rates.
For both models, the main novelty is that environmental quality is defined as a function of the Energy Transition Index (ETI). It is considered essential to introduce into the mathematical structure of the SDR a variable that reflects the progress of countries on the path towards energy transition. In other words, a discount rate defined in this way orients decision-makers towards projects that are in line with the 2030 and 2050 climate neutrality goals. In addition, the dual and declining approach also takes macroeconomic risk into account, as the growth rate of consumption is modelled as a stochastic variable.
The defined models were implemented to estimate discount rates for both Italy and China. The results obtained show that: (i) in the case of the dual and constant approach for both Italy and China, the environmental discount rate has smaller values than the economic discount rate; (ii) in the case of the dual and declining approach, the two functions of the discount rate-economic and environmental-for China start from higher initial values than for Italy, but decline much faster from the beginning of the analysis period. The higher initial value is mainly due to the higher values of GDP growth rate for China compared to Italy. However, the application demonstrates how China's 'worse' environmental condition leads to a more rapid decline in the term-structures of the discount rates.
While the model is relatively easy to implement, for some countries it may be difficult to find the data needed to estimate each parameter of the model. In addition, estimates of discount rates need to be periodically updated. The application shows, firstly, how different discount rates can be in relation to socio-economic context. Secondly, it is clear how the use of estimated discount rates can favour more sustainable investment choices in line with UN climate neutrality objectives. The decision-making effects on energy policy investments are therefore evident and extremely important; evaluating the economic feasibility of energy projects using dual, and possibly even time-declining approaches, means attributing greater weight to extra-financial damages and benefits. On the contrary, by using the social discount rates provided by governments, which are generally unique and constant over time, policymakers would orient their choices towards investments with higher initial financial returns, without considering the short and long-term repercussions on the environment. Finally, research perspectives may include the implementation of the model for other countries in order to provide a larger database of environmental and economic discount rates, as well as the adaptation of the model to other sectors of intervention. | 9,892.4 | 2021-09-23T00:00:00.000 | [
"Economics"
] |
Salivary Factors that Maintain the Normal Oral Commensal Microflora
The oral microbiome is one of the most stable ecosystems in the body, and yet the reasons for this are still unclear. As well as being stable, it is also highly diverse, which can be ascribed to the variety of niches available in the mouth. Previous studies have focused on the microflora in disease (either caries or periodontitis), and only recently have they considered factors that maintain the normal microflora. This has led to the perception that the microflora proliferates in nutrient-rich periods during oral processing of foods and drinks and starves in between times. In this review, evidence is presented which shows that the normal flora are maintained on a diet of salivary factors, including urea, lactate, and the products of salivary protein degradation. These factors are actively secreted by salivary glands, which suggests they are important in maintaining normal commensals in the mouth. In addition, the immobilization of SIgA in the mucosal pellicle indicates a mechanism to retain certain bacteria that does not rely on bacteria-centric mechanisms such as adhesins. Examination of the salivary metabolome makes it clear that protein degradation is a key nutrient source and that the availability of free amino acids increases resistance to environmental stresses.
Introduction
The common perception of bacteria in the mouth is that they reside there because of the available warmth, moisture, and protection, and that they take advantage of the regular input of nutrients from food whilst providing little or no benefit to the host. At best, their contribution to oral health appears to be exclusion of pathogenic bacteria by maintaining a commensal population of bacteria and fungi. Possibly this view of oral microbes has been driven by research investigating the causes of dental caries. However, more recent studies have examined the oral microbiome in normal healthy (caries-free/treated-caries) subjects, the influence of non-sugar aspects of diet (De Filippis et al. 2014), and other nutritional sources (Jakubovics 2015; Gardner et al. 2019), which paint a different picture in which the host actively promotes the growth of certain bacteria by providing them with suitable nutrients to maintain growth. A major benefit of the oral microbiome to whole-body physiology has already been described: the nitrite-producing bacteria on the tongue contribute to nitric oxide production and the lowering of blood pressure (Webb et al. 2008). There are likely to be others as more studies explore the oral metabolome in relation to whole-body health. Clearly, if there is a benefit to whole-body health then the body should nurture the oral microbiome. If true, this could explain the recent concept of "resilience" (Rosier et al. 2018), the ability of the oral microbiome to resist pressure to change, from antibiotic treatment or overgrowth of one species, into a dysbiotic state often associated with disease. Crucial to the process of maintaining oral commensals is saliva. Previously, most studies have described the anti-microbial properties of saliva as bacteriostatic with some bactericidal properties, which it clearly has, but this paper will also review the evidence that saliva has bacterial growth-promoting properties. Broadly speaking, the growth-promoting properties can be split into three main sections: nutrients, attachment, and environment.
Nutrients
Most of the nutrients for oral bacteria are specifically added and are not merely leakage from the serum compartment. Saliva is formed by an active process of ion secretion into the lumen of the gland, creating an osmotic gradient (Thaysen et al. 1954) which draws water through from the interstitial space. Most ions and metabolites are transported by specific channels into saliva. Proteins are synthesized in the glands and added mostly by a separate mechanism of storage granule release dependent on cyclic adenosine monophosphate (cAMP) signaling (Castle and Castle 1998); as a consequence, few serum proteins are found in saliva collected directly from the duct. In contrast, whole mouth saliva contains some serum proteins derived from a serum transudate leaking around teeth (via gingival crevicular fluid). In a recent comparison of metabolites in parotid saliva, whole mouth saliva, and plasma (Gardner et al. 2019), urea concentrations were greater in parotid saliva than in whole mouth saliva or plasma, implying the active transport of urea into parotid saliva, presumably by the urea transporters (UT-A and UT-B), although their presence has not yet been confirmed in salivary glands. In our study, urea was one of the few components to decrease in whole mouth saliva relative to parotid saliva, suggesting its uptake and use by bacteria. Urea is the most abundant (non-protein) nutrient in saliva (Fig. 1), used by bacteria such as Streptococcus salivarius, Actinomyces naeslundii, and Haemophilus, apparently through their expression of urease (Chen et al. 1996), an enzyme that converts urea to ammonia and carbon dioxide. Whilst the production of ammonia in plaque would help to neutralize lactic acid in caries lesions (Gordan et al. 2010), a recent review concluded there was no beneficial effect on caries (Zaura and Twetman 2019). To further understand the metabolism of urea by oral bacteria, 13C-labeled urea was added to an expectorated whole mouth saliva sample and incubated for 1 h (Carpenter, unpublished data). The sample was then analysed by 13C nuclear magnetic resonance (NMR), which permits the tracking of the added urea. Surprisingly, urea was seen to be first converted into ammonium carbamate and then to formate and propionate (see Fig. 2 and Appendix 1 for spectra). Although conversion of urea to ammonium carbamate has been described before, even by urease (Mobley et al. 1995), it is then assumed to degrade into ammonia and carbon dioxide. Indeed, this reaction is so reliable that it is the basis of the urea breath test for Helicobacter pylori infections of the gut (Megraud and Lehours 2007). If urease activity were present in the mouth, it would compromise the urea breath test. A more logical explanation is that the ammonium carbamate is converted to formate and not ammonia. This is interesting as it could account for the large amounts of formate in saliva and the lack of efficacy of urea in preventing caries. The present results do not exclude the possibility of urease action, and whether ammonia is produced may depend on the amount of urea added. Clearly more work is required to substantiate this new idea and to delineate which bacteria convert urea to formate and/or which convert it to ammonia.

Figure 2. 13C-labeled urea was added to whole mouth saliva and incubated for 1 h at 37°C. 13C nuclear magnetic resonance analysis revealed peaks assigned to ammonium carbamate and formate. In addition, propionate and acetate were detected, of which only acetate was detected in the unlabeled control sample, owing to the natural abundance of the 13C acetate isoform. The presence of ammonium carbamate and formate suggests urease is not active in reducing urea to ammonia. It is unclear how labeled propionate appeared or why formate is not further reduced to carbon dioxide by formate dehydrogenase (dotted box).
Resting whole mouth saliva, which is present when there is no food in the mouth, has very low levels of sugars/carbohydrates. Typically, parotid saliva has around 20 to 100 µmol/L glucose (Andersson et al. 1998), but glucose becomes undetectable in resting whole mouth saliva, presumably because the bacteria rapidly utilize it via the Embden-Meyerhof-Parnas (EMP) pathway (Fig. 1). The greatest source of carbohydrate is food itself, which can still be detected in saliva 20 min after consumption, although it is usually cleared from the mouth after 1 h. Thus, most of the time, bacteria in the mouth are utilizing intrinsic nutrients in saliva as their substrates (Jakubovics 2015). So if the commensal bacteria are not utilizing glucose to any great extent, what nutrients do they use? The metabolomic analysis of whole mouth saliva indicates that the proteolytic degradation of salivary proteins fuels many bacteria (Fig. 1). The abundance of free amino acids in whole mouth saliva (Syrjanen et al. 1990) contrasts with their almost complete absence in sterile saliva collected from the gland (Gardner et al. 2019). Their degradation via 5-aminopentanoate to acetate and propionate (Cleaver et al. 2019) probably accounts for the most abundant metabolites in saliva. Some amino acids, such as proline, appear not to be utilized, as proline remains one of the most abundant in saliva (Santos et al. 2020), whereas lysine, glycine, glutamate, and arginine are further utilized. The arginine deiminase system (ADS) hydrolyses arginine to create citrulline and ammonia; the ammonia is beneficial to the host by neutralizing lactic acid in carious lesions. This pathway has become prominent as some dental products now contain arginine as an additive. A recent study found a reduction in sucrose metabolism when subjects used an arginine-containing toothpaste, which was associated with an altered salivary microflora, but not altered plaque (Koopman et al. 2017). Although arginine is being added to toothpastes, it is interesting to note that saliva already contains many free amino acids, including arginine (Syrjanen et al. 1990), from proteolysis of salivary and cellular proteins by bacterial and mammalian proteases (Vitorino et al. 2009).
In addition to the amino acids, the sugars linked to the proteins are also utilized; many bacteria contain sialidases (McDonald et al. 2016) and other glycosidases, and the liberated sugars can be fed into the glycolytic EMP pathway to form pyruvate and formate. The close association of bacteria in biofilms permits the complete degradation of salivary glycoproteins, as no single bacterium contains all the necessary enzymes (Wickström et al. 2009). Mucins are often cited as being important nutritional additives for oral bacterial culture systems, presumably due to their high sugar content, but in fact most salivary proteins are glycosylated to some degree (Carpenter et al. 1996); indeed, the basic proline-rich proteins, agglutinin, and SIgA carry the same O-linked glycans as mucins (Cross and Ruhl 2018).
The active secretion of nutrients into saliva is perhaps the best evidence of positive selection of microbes in the mouth, and the best characterized is the nitrate/nitrite system (Hezel and Weitzberg 2015). In this system, salivary glands actively transport nitrate from the blood, via the sialin transporter, and deliver it into saliva (Qu et al. 2016). Bacteria within the mouth, including Rothia and Veillonella, then convert the nitrate to nitrite, which can be converted to nitric oxide when the nitrite reaches the acidity of the stomach. Several studies have shown salivary nitrate to correlate with lowered caries risk (Doel et al. 2004), and longer supplementation with nitrate appeared to alter the microbiome, suggesting some degree of utilization (Burleigh et al. 2019). Other important nutrients include lactate, bicarbonate, and vitamins. The role of lactate appears central to the food networks that permit the high diversity of bacteria in the mouth (Jakubovics 2015). Food networks describe how lactate producers co-exist with lactate consumers in multi-species biofilms, thus permitting a larger variety of bacteria to co-exist through beneficial exchange. In low-sugar/carbohydrate environments most lactate is delivered by saliva derived from plasma; the active salivary gland secretion of lactate (as opposed to leakage) again suggests host selection of bacteria. Bicarbonate is another essential nutrient used by many bacteria such as Streptococcus anginosus (Matsumoto et al. 2019) and Porphyromonas gingivalis (Supuran and Capasso 2017), and it is actively secreted by salivary glands as part of the fluid secretion mechanism, particularly for the mucin-secreting sublingual and minor glands (Lee et al. 2012). Bicarbonate could also form a food network, although it is less well studied than lactate: some bacteria are bicarbonate consumers whereas others are bicarbonate producers. P. gingivalis expresses carbonic anhydrase, which forms bicarbonate by the hydrolysis of carbon dioxide (Supuran and Capasso 2017). As well as propagating certain bacteria by supplying certain nutrients, saliva also limits the availability of other key nutrients. For example, there are very low levels of cobalamin (vitamin B12) in saliva, an important nutrient for some bacteria, particularly P. gingivalis. As well as not transporting any into saliva from serum, saliva also contains vitamin-binding proteins such as transcobalamin, which strongly binds cobalamin and prevents its use by bacteria. This chelation of nutrients is similar to lactoferrin for iron or haem: salivary glands secrete iron-free lactoferrin, which avidly binds iron and thus prevents bacterial utilisation. This could be interpreted as the body wishing to keep certain bacteria quiescent and is an important mechanism in resilience. As shown by oral diseases, the availability of alternative nutrient sources such as serum (for periodontitis) or plant-based sugars (for caries) encourages pathogenic traits in bacteria. Overall, the nutrient needs of bacteria are varied but can be completely supplied by saliva, although only through the coordinated actions of bacteria. If most of the nutrient needs can be supplied by saliva, mostly through proteolytic degradation, a central question is why there are so many saccharolytic bacteria in the mouth; one possible explanation is specific attachment.
Attachment
Most research concerning attachment has focused on mechanisms by which bacteria bind teeth, in order to understand the dental caries process. Selected salivary proteins bind the enamel surfaces, forming what is termed the "acquired enamel pellicle"; the bacteria then bind these proteins through adhesins expressed on pili projecting from the surface of the bacteria (Cross and Ruhl 2018). The adhesins are usually lectin-like molecules which bind the glycans attached to the salivary proteins (Bensing et al. 2016). These glycans are either N- or O-linked to the peptide backbone and often terminate in sialic acid. Several bacteria make sialidases to remove and utilize sialic acid (McDonald et al. 2016) and to gain access to galactose, fucose, and mannose glycans for either nutrition or attachment (Wong et al. 2018). Although there is some specificity of bacteria for certain glycans, many salivary proteins express the same glycans. For example, the Tn antigen (GalB1-3GalNAc) is an O-glycan frequently found on the salivary mucins MUC5B and MUC7 (Chaudhury et al. 2016), but this antigen is also present on many of the basic proline-rich proteins (Carpenter and Proctor 1999), which are the most abundant group of proteins in parotid saliva. Most previous research gives the impression that it is the bacteria which bind to the salivary proteins to avoid being swept away by the nearly constant flow of saliva. But is there any evidence that salivary proteins actively promote the adherence of specific bacteria? One possible mechanism involves secretory IgA (SIgA). This antibody is often cited as preventing colonization, as it agglutinates bacteria in solution due to its dimeric arrangement (Fig. 3). Recently it has been shown that SIgA also forms part of the mucus attached to mucosal surfaces of the mouth (the mucosal pellicle) (Gibbins et al. 2014). The SIgA binds the salivary mucins (Biesbrock et al. 1991) via mucin-mucin interactions (Gibbins et al. 2015) in solution and then binds the cell membrane mucin MUC1 (Ployon et al. 2016). Even though not all of the salivary SIgA binds to the mucosa, it does concentrate to high levels, forming an immune reservoir. By doing so, SIgA would aid colonization of mucosal surfaces by bacteria that SIgA is reactive against. It is known that SIgA binds many oral bacteria, such as S. mitis, S. oralis, and S. mutans, using shared epitopes (Cole et al. 1999). A role of mucosal-bound SIgA in determining commensal bacteria has been demonstrated in the gut (Donaldson et al. 2018). It is possible, then, that SIgA influences which bacteria are present in the mouth by specifically binding them. This would be particularly important for the streptococcal species that grow best in high-sugar environments. It would be interesting to investigate whether bacteria bind epithelial cells in babies, since at birth SIgA is relatively scarce (Seidel et al. 2001) but develops over the first year with increased exposure to bacteria. In general, there are no changes in SIgA availability or epitope recognition with ageing.
Environment
The mouth has the greatest variety of bacteria compared to other sites of the body, probably because of the variety of niches available. Most of the mucosal and dental surfaces will be covered in bacteria fed by saliva and by occasional nutrients from foods, using aerobic respiration. In an in vitro model (saliva-inoculated hydroxyapatite discs cultured in sterilised saliva), aerobic conditions mimicked salivary metabolites, with acetate, propionate, and formate being the most abundant, whereas the same cultures under anaerobic conditions led to a loss of glycine and lactate production and an increase in ethanol production (Cleaver et al. 2019). In addition to the aerobic sites there are a number of anaerobic sites: in crevices on the tongue supplied by saliva, or within plaque, either supragingival, fed by saliva, or sub-gingival pockets fed by the gingival crevicular fluid, a serum filtrate. Salivary metabolomics is dominated by the aerobic metabolism of streptococci degrading salivary glycoproteins, except when sugars become abundant following ingestion of food. In contrast, tongue and dental plaque metabolomics indicate anaerobic activity, particularly when protected within a biofilm structure. Each of these sites will have a very different metabolic and genetic composition. Presumably the body would prefer most bacteria to remain aerobic, as most disease is associated with anaerobic biofilms. Although saliva does contain many anti-bacterial proteins and enzyme systems (e.g., peroxidase), it is not as anti-bacterial as other sites such as the eye or the lungs (Lloyd-Price et al. 2016), which again supports the concept that the body propagates the oral microbiome rather than opposing colonization.

Figure 3. Secretory IgA (SIgA) complexes with salivary mucins (MUC5B and MUC7) before binding to the epithelial membrane-bound mucin MUC1 to form the salivary mucosal pellicle. Secretory IgA can then mediate binding of bacteria (red rods and circles), helping them to adhere to epithelial cells. The mucin hydrogel-like properties of the mucosal pellicle allow concentration of bacterial products, allowing quorum sensing and food networks that enhance their growth. As the epithelial surface is constantly sloughing, thick biofilms do not occur as they do in plaque around teeth. (Not drawn to scale.)
One aspect of the environment that has not been studied extensively is the effect of age. Several factors affect the supply of nutrients to the mouth, as outlined in Gardner et al. 2018: the amount of exercise, nutrition, and dental status will all affect the salivary metabolome. In addition, the number of pockets or crevices on the tongue and around teeth will increase with age. A large study indicated several metabolomic changes with increasing periodontal disease (Liebsch et al. 2019), most notably the increased production of phenylacetate. Salivary amino acids have also been shown to alter with ageing (Tanaka et al. 2010). All of these factors are likely to alter many aspects of the salivary metabolome, but at present few studies have been completed on healthy individuals whilst controlling for all the variables listed above that may confound the results.
Discussion
If the body does promote the colonization of the mouth by any of the mechanisms outlined above, this would increase the resilience of oral bacteria by providing alternative sources of nutrition and increased residence time in the mouth. The conversion of urea to formate and propionate would allow the bacteria to extract energy from the process, whereas no adenosine triphosphate (ATP) is formed during the conversion of urea to ammonia and carbon dioxide (Burne and Marquis 2000). In addition, the presence of free amino acids in saliva also increases the ability to resist stresses (osmotic, smoking, and heat) in solution, since many amino acids can buffer pH, osmotic, and redox changes by themselves. Some amino acids, if taken up by the bacteria, also confer resistance to osmotic and oxidative stress (Christgen and Becker 2019). The ability of the microflora to recover from antibiotic use is a hallmark of a healthy oral microbiome. So could these mechanisms be used when a dysbiotic state exists in the mouth? Most of the ideas outlined are only relevant to mucosal-bound bacteria. As a group they are distinct from those at other sites in the mouth (O'Donnell et al. 2015) but may equal the number of bacteria present in plaque. These mechanisms are unlikely to apply to the pathogenic bacteria on or around teeth, because these are special niches fed by different nutrient sources (serum or diet), often protected from saliva by extracellular matrices or by existing within pockets. Presumably removing this nutrient source should reduce the pathogenic bacteria, which is easier to achieve for diet-fed than for serum-fed biofilms. It seems unlikely that the nutrients identified in Figure 1 would affect plaque microbiology, as shown by an arginine supplementation study which altered the salivary microbiome and metabolome but not the plaque microbiome (Koopman et al. 2017). Another interesting implication is that these prebiotics could affect taste. Arginine supplementation has been shown to affect taste (Melis and Barbarossa 2017), dietary protein is associated with differences in the oral microbiome (De Filippis et al. 2014), and bacteria-derived D- and L-amino acids are known to bind taste receptors (Lee et al. 2017). As protein degradation accounts for most of the metabolites found in resting whole mouth saliva, it is likely an important factor in determining the composition of the oral flora, and factors involved in this process may be useful prebiotics. Based on Figure 1, one obvious but missing factor is lipoate. Lipoic acid is an essential cofactor for several of the amino acid pathways and is pivotal to the virulence of Staphylococcus aureus (Zorzoli et al. 2016), but despite having a clear NMR signature it could not be detected in any samples in our studies. Its absence may suggest it is the rate-limiting step in many bacteria, and it may thus form a useful prebiotic to shift a dysbiotic microbiome back toward normal metabolism. Interestingly, lipoic acid as an oral treatment for burning mouth (Femiano and Scully 2002) has had some success, although no studies to date have examined changes to the oral microbiome.
In summary, the nutrient supply by saliva suggests a deliberate attempt to maintain certain bacteria and exclude others. This propagation is aided by the selective absorption of bacteria onto mucosal surfaces by the immobilization of SIgA into the mucosal pellicle.
Author Contributions
G.H. Carpenter contributed to conception, design, and data analysis, and drafted the manuscript. The author gave final approval and agrees to be accountable for all aspects of the work.
"Biology"
] |
Deducing high-accuracy protein contact-maps from a triplet of coevolutionary matrices through deep residual convolutional networks
The topology of protein folds can be specified by inter-residue contact-maps, and accurate contact-map prediction can assist ab initio structure folding. We developed TripletRes to deduce protein contact-maps from discretized distance profiles by end-to-end training of deep residual neural networks. Compared to previous approaches, the major advantage of TripletRes lies in its ability to learn and directly fuse a triplet of coevolutionary matrices extracted from whole-genome and metagenome databases, thereby minimizing information loss during contact model training. TripletRes was tested on a large set of 245 non-homologous proteins from the CASP 11&12 and CAMEO experiments and outperformed other top methods from CASP12 by at least 58.4% on the CASP 11&12 targets and 44.4% on the CAMEO targets in top-L long-range contact precision. On the 31 FM targets from the latest CASP13 challenge, TripletRes achieved the highest precision (71.6%) for top-L/5 long-range contact predictions. It was also shown that a simple re-training of the TripletRes model with more proteins can lead to further improvement, with precisions comparable to state-of-the-art methods developed after CASP13. These results demonstrate a novel and efficient approach to extend the power of deep convolutional networks for high-accuracy medium- and long-range protein contact-map prediction starting from primary sequences, which is critical for constructing 3D structures of proteins that lack homologous templates in the PDB library.
Introduction
Protein structure prediction remains an important unsolved problem in computational biology, with the major challenge being distant-homology modeling (or ab initio structure prediction) [1,2]. Recent CASP experiments have witnessed encouraging progress in protein contact prediction, which has been proven helpful for improving the accuracy and success rate of modeling distant-homology protein targets [3][4][5][6].
The idea of developing sequence-based contact-map prediction to assist ab initio protein structure prediction is, however, not new, and can be traced back to at least 25 years ago [7,8]. In general, methods for sequence-based protein contact-map prediction can be classified into two categories: coevolution analysis methods (CAMs) and machine learning methods (MLMs). In CAMs, the predictors try to predict inter-residue contacts by analyzing evolutionary correlations of the target residue pairs from multiple sequence alignments (MSAs), under the assumption that correlated mutations in evolution usually correspond to spatial contacts of residue pairs. The CAMs can be further divided into local and global approaches. The local approaches use correlation coefficients, e.g., mutual information [9] and covariance [10], to predict contacts; these approaches are "local" because they predict the contact between two residue positions regardless of other positions. In contrast, the global approaches, also called direct coupling analysis (DCA) methods, consider effects from other positions to better quantify the strength of the direct relationship between two residue positions. DCA models demonstrated a significant advantage over the local approaches, and essentially re-stimulated the interest of the protein structure prediction field in contact-map prediction. However, the success of most DCA methods [11][12][13][14][15][16] is still limited for proteins with few sequence homologs, because a shallow MSA significantly reduces the accuracy of DCA in deriving the inherent correlated mutations. In addition, DCA models only capture linear relationships between residues in MSA data (S1 Text), while residue-residue relationships in proteins are inherently non-linear.
As a more general approach, MLMs intend to learn inter-residue contacts from sequential information and coevolution analysis features, with supervised machine learning models trained on known structures from the PDB. Early attempts utilized support vector machines (SVMs) [17,18], random forests (RFs) [13,19], artificial neural networks (NNs) [20][21][22][23], etc., to model the complex relationships between residues. Recently, great improvements have been achieved by the application of convolutional neural networks (CNNs) in several predictors, including DNCON2 [24], DeepContact [25], and RaptorX-Contact [26]. Most of these predictors were, however, trained on the final contact-map confidence scores [24][25][26], which may suffer coevolutionary information loss in data post-processing. In a recent study, we proposed ResPRE [27], which directly utilized the ridge-regularized precision matrices calculated from raw alignments without the post-processing used in regular coevolution analysis features. Although it uses the evolutionary matrix as the only input feature, the performance of ResPRE was comparable to many state-of-the-art methods that combine additional one-dimensional features, such as solvent accessibility, predicted secondary structure, and physicochemical properties. Despite this success, ResPRE still bears several shortcomings. First, ResPRE lacks consideration of multiple coevolutionary matrices as features, which could provide complementary information. Second, it was trained under the supervision of binary protein contact-maps that lack continuous inter-residue distance information. Finally, the coevolution features were derived from a somewhat simplified HHblits [28] MSA collection procedure, which did not always include sufficient homologous sequences for meaningful precision matrix generation.
In this work, we propose a new deep learning architecture, TripletRes, built on a residual neural network protocol [29] to integrate a triplet of coevolutionary matrix features, from pseudolikelihood maximization of the Potts model, the precision matrix, and the covariance matrix, for high-accuracy contact-map prediction (Fig 1). The model was trained on a non-redundant subset of sequences with known PDB structures, supervised by discretized inter-residue distance-maps in order to capture the inherent distance information between residues, where a previously introduced deep MSA generation protocol [30] was employed to derive the coevolutionary matrices. The benchmark results on the public CASP and CAMEO targets, along with the community-wide blind tests in the CASP13 experiment, show that the new approach is capable of creating contact-maps with high precision. Although TripletRes does not outperform the state-of-the-art methods trained after CASP13, its precision is higher than that of previous methods based on the same training set up to CASP13. An improvement of 9.2% in mean precision can be further observed with an augmented training set after CASP13. Thus, TripletRes provides an alternative approach to protein contact-map prediction using multiple coevolution ensembles and is capable of achieving performance comparable to other leading methods. The TripletRes server is available at https://zhanglab.ccmb.med.umich.edu/TripletRes/.
Results
To examine the contact prediction pipelines, we collected two independent sets of test targets, including 50 non-redundant free-modeling (FM) domains from the CASP11 and CASP12 and 195 non-redundant targets assigned as hard by CAMEO [31]. TripletRes was trained on 7,671 non-redundant domains collected from SCOPe-2.07 (downloaded in March 2018) [32]. Here, non-redundancy is defined by setting the maximum pairwise sequence identity to 30%. Detailed procedures to obtain the training and testing datasets are described in S2 Text.
Overall performance of TripletRes
Following the CASP criterion [4], two residues are defined as in contact if the Euclidean distance between their Cβ atoms (or Cα in the case of glycine) is below 8.0 Å. In this study, the accuracies, or mean precisions, of the top L/10, L/5, L/2, and L medium-range (12 ≤ |i−j| ≤ 23) and long-range (|i−j| ≥ 24) contacts are evaluated, where i and j are the sequential indexes of the pair of considered residues and L is the sequence length of the target. We focus on the performance on FM targets (or hard targets in CAMEO) and on long-range contacts for evaluation, since this metric is most relevant for assisting the prediction of the tertiary structure of non-homologous proteins [6,33].

Figure 1. Starting from the MSA generated for the query sequence, three L×L×441 feature matrices (also called tensors) are computed for the three sets of coevolutionary features (PRE, PLM, COV). Here, L is the length of the query sequence, while 441 = 21×21 is the combination of all 21 amino acid types (including the gap) for two positions in the MSA. Each tensor is input to a separate ResNet, where the first layer reduces the number of feature channels from 441 to 64, followed by instance normalization and 24 consecutive residual blocks, to obtain an L×L×64 tensor. Details of a residual block are shown in the inset on the right-hand side. The three tensors from the three ResNets are concatenated into an L×L×192 tensor, which is fed into a final ResNet. In this ResNet, the first layer again reduces the feature channels, from 192 to 64, followed by instance normalization and 24 residual blocks, to obtain an L×L×64 tensor, which is further reduced to L×L×12. Finally, a softmax layer is used to scale the values in the tensor between 0 and 1 and to make the values for each pixel (i.e., residue pair) sum to one. Since a protein contact/distance map is symmetric, TripletRes averages the corresponding softmax outputs of residue pairs (i,j) and (j,i) to obtain the final L×L×12 distance-map prediction, where 12 is the number of distance bins. The contact-map is obtained by summing up the first 4 distance bins.
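To make the fusion architecture concrete, the following is a minimal PyTorch sketch of the design described in Fig 1; it is a simplified illustration under stated assumptions, not the authors' implementation. The block count is reduced from 24 to 2 per branch to keep the example short, and details the caption does not specify (kernel size, activation placement) are assumptions.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """One residual block: two 3x3 convolutions with instance normalization."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))

def branch(in_ch=441, ch=64, n_blocks=2):  # the paper uses 24 blocks per branch
    """Channel reduction + instance norm + a stack of residual blocks."""
    return nn.Sequential(nn.Conv2d(in_ch, ch, 1), nn.InstanceNorm2d(ch),
                         *[ResBlock(ch) for _ in range(n_blocks)])

class TripletResSketch(nn.Module):
    def __init__(self, n_bins=12):
        super().__init__()
        self.cov, self.pre, self.plm = branch(), branch(), branch()
        self.fuse = nn.Sequential(branch(in_ch=3 * 64),
                                  nn.Conv2d(64, n_bins, 1))

    def forward(self, cov, pre, plm):                     # each: B x 441 x L x L
        x = torch.cat([self.cov(cov), self.pre(pre), self.plm(plm)], dim=1)
        probs = torch.softmax(self.fuse(x), dim=1)        # B x 12 x L x L
        probs = 0.5 * (probs + probs.transpose(-1, -2))   # average (i,j) and (j,i)
        contact = probs[:, :4].sum(dim=1)                 # sum of 4 shortest bins
        return probs, contact
```

A forward pass on three random tensors of shape (1, 441, L, L) reproduces the L×L×12 distance-bin output and the contact-map derived from it.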
Table 1 summarizes the overall performance of long-range contact prediction on the two test datasets by TripletRes, compared with five state-of-the-art methods that are available for free download and were run with default settings (see S3 Text for an introduction of the control methods). The results show that TripletRes creates contact models with a higher accuracy than the control methods in all separation ranges for both test datasets. For example, on the 50 FM CASP targets, the average precision of the long-range top L/10, L/5, L/2, and L predicted contacts by TripletRes is 55.1%, 53.2%, 57.1%, and 58.4% higher, respectively, than the precision achieved by DeepContact, the most accurate third-party program in this comparison, corresponding to statistically significant p-values of 4.1e-08, 2.5e-07, 4.4e-10, and 1.1e-11 in the Student's t-test. Notably, TripletRes only uses coevolutionary features, which are a subset of the diverse features employed by DeepContact. The better performance is thus probably due to the more effective integration of raw coevolutionary information in the TripletRes neural-network training.
TripletRes also outperforms ResPRE, an in-house program previously trained on the precision matrix [27], by a large margin. The long-range top-L precision of TripletRes is 36.9% higher than that of ResPRE, with a p-value of 1.9e-07, on the 50 FM targets. ResPRE achieved a significantly higher precision on CAMEO than on the FM dataset, but its precision is still lower than that of TripletRes. For example, the mean precision of the top-L long-range contacts by TripletRes is 12.6% higher than that of ResPRE on the CAMEO targets. Given that both programs utilized the same precision matrix feature, the superiority of TripletRes is mainly attributed to the integration of the triplet of coevolutionary features. In addition, as examined in detail below, the supervision by distance predictions and the new deep MSA construction also helped improve the accuracy of the TripletRes models.
The proposed TripletRes pipeline's performance could be overestimated, since more data have been used compared to the methods from CASP11&12. To reduce this bias, we have ensured that the maximum pairwise sequence identity between the training and test sets is 30%. In addition, we have adjusted the control methods by replacing their MSAs with DeepMSA; S1 Table presents the performance of TripletRes and the control methods after these adjustments. The use of DeepMSA improves all control methods, including ResPRE, for which the top-L precision increases from 33.9% to 42.9% on the 50 CASP FM targets. Nevertheless, TripletRes still takes the lead over the control methods, and its top-L precisions on the CASP and CAMEO targets are 28.8% and 27.9% higher than those of the best third-party program, DeepContact.
Feature extraction based on raw potentials outperforms that with postprocessing
Feature extraction is essential for all machine-learning based modeling approaches. To quantitatively examine the effectiveness of the feature extraction strategy and the contribution of different feature types in TripletRes, we compare in Fig 2A-2C the performance of two feature extraction strategies, based on the three component features from covariance (COV), precision (PRE), and pseudolikelihood maximization (PLM) analyses (see "Coevolutionary feature extraction" in Methods and Materials), respectively. The first feature extraction strategy, used by TripletRes, takes the raw coevolution potentials as input features, while the second strategy, commonly employed in many state-of-the-art predictors [22,24,25,34], employs a specific post-processing procedure as described in Supplementary Eqs A and B in S4 Text. Since the traditional coevolutionary features can also be used to predict contacts directly without supervised training, we list their performance as baselines (see dotted lines in Fig 2A-2C). Here, a total of 767 sequences were randomly selected from the 7,671 non-redundant SCOPe proteins as the validation set, while the remaining 6,904 sequences were used as the training set for the feature extraction strategy selection in TripletRes. All experiments were performed by keeping the other elements (e.g., MSA generation, neural network structure, and its hyper-parameters) fixed. It can be observed from Fig 2A-2C that the new feature extraction strategy achieves a better contact prediction performance than the traditional feature extraction for all three considered matrix features. The highest mean precisions of the new feature extraction strategy for long-range top-L/5 contact prediction are 84.2%, 87.5%, and 88.6%, respectively, for the COV, PRE, and PLM features. If the post-processed features of Eqs A and B in S4 Text are used instead, the mean precisions are reduced to 66.8%, 80.8%, and 81.6%, representing precision drops of 20.7%, 7.7%, and 7.9%, respectively, compared to the TripletRes feature extraction strategy. On the other hand, the mean precisions of both feature extraction strategies are consistently higher than the baselines throughout the training epochs, indicating the necessity of supervised training.
One reason for the performance degradation with the post-processing approach is that the potential scores for different types of residue pairs are treated equally, and the sign of these potential scores is thus completely ignored in Eq A in S4 Text when the post-processed coevolutionary features are fed to the supervised models. In contrast, the approach in TripletRes keeps the detailed scores of the different residue-pair types from the coevolutionary analyses for each residue pair, which allows the deep residual neural networks to automatically learn the inter-residue interactions based not only on the spatial information but also on the residue-pair-specific scores of the different residue-pair types, whereas traditional supervised machine learning models can only learn the spatial information of each residue pair during training.
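For reference, the following sketch illustrates the kind of post-processing contrasted above. Since Eqs A and B in S4 Text are not reproduced in this section, the Frobenius norm followed by average-product correction below is an assumed representative form of such post-processing, not a transcription of those equations. It makes the information loss visible: a 21×21 block of signed couplings per residue pair is collapsed into a single non-negative score.

```python
import numpy as np

def postprocess(raw):
    """raw: L x L x 21 x 21 signed coupling tensor -> single L x L score map."""
    # Collapse each residue-pair block to one number; the sign and the
    # residue-type detail of the couplings are discarded at this step.
    score = np.sqrt((raw[:, :, :20, :20] ** 2).sum(axis=(2, 3)))  # gap excluded
    np.fill_diagonal(score, 0.0)
    # Average-product correction (APC) to suppress background/phylogenetic bias.
    row = score.mean(axis=1, keepdims=True)
    col = score.mean(axis=0, keepdims=True)
    return score - row * col / score.mean()
```

TripletRes skips this collapse and feeds the full L×L×441 tensors to the network, which is the design choice the preceding paragraph argues for.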
Ensembling different component features improves contact-map prediction
Compared to ResPRE [27], a major new development in TripletRes is the integration of multiple coevolutionary feature extractions. To examine the efficiency of the ensembled feature collection for contact prediction, Fig 2D presents the performance of the models trained on each individual feature and on their ensemble. In general, the COV-based model has the lowest precision among the three individual feature models, probably due to the translational noise in the covariance matrix [27]. The performance of the two DCA-based features, PRE and PLM, is comparable, and both consistently outperform COV by a large margin. TripletRes ensembles the three features and can thus obtain more comprehensive coevolutionary information from the deep MSAs. As a result, the ensemble model has a higher precision than all models built from the component features, demonstrating the effectiveness of multiple feature integration.
To perform a critical analysis of the individual features' contributions, S1 Fig compares the precision of TripletRes against feature sets with particular features excluded, on the validation set. For both top-L/5 and top-L precision, excluding the PLM feature gives the lowest values during the training process, indicating that the PLM feature makes significant contributions. Interestingly, the feature sets without the PRE or COV feature and the full feature ensemble seem to be indistinguishable in top-L/5 precision. For top-L precision, however, the full TripletRes with the triplet feature ensemble stands out, achieving a precision of 68.2%, higher than the precisions of 67.3%, 67.5%, and 67.4% without the COV, PRE, and PLM features, respectively. Surprisingly, COV and PRE seem to have similar contributions to the TripletRes model, even though the model using only the PRE feature was shown above to significantly outperform the model using only the COV feature (Fig 2). The reason could be that COV and PLM are two different kinds of coevolutionary features, i.e., local and global, providing complementary information when ensembled by TripletRes. In other words, all considered features make contributions, and the combination of all three features generates the most robust contact models.
In CASP13, in addition to TripletRes_CASP13 that used an ensemble of PLM, COV and PRE features, the individual raw PLM and COV features have also been utilized by AlphaFold [35] and DMP [36], respectively. The inverse of the covariance matrix, i.e., the PRE feature (with a different derivative) has also been considered by trRosetta [37] afterward. Thus, the introduction of the concept of multiple raw coevolutionary feature ensemble should help improve individual methods and push the boundary of inter-residue contact/distance prediction.
Loss function with continuous distances outperforms that with binary contacts
The choice of loss function plays an important role in the training of neural networks, because it determines the performance metric of the model during training. The most commonly used loss function for contact-map prediction is the binary cross-entropy loss, which encodes each residue pair with 2 states (in contact or not in contact). Typically, with a single distance threshold of 8 Å, such a loss function does not encode detailed distance information; e.g., residue pairs separated by 9 Å are treated the same as those separated by 22 Å. Alternatively, recent methods [34,35,37] have considered predicting discretized distance distribution matrices rather than contact-maps, mostly for assisting 3D structure prediction. However, whether incorporating distance training could improve contact-map prediction accuracy remained unstudied. Inspired by those works, the loss function in TripletRes (Eq 6 in Methods and Materials) considers a discrete representation of each residue pair's distance information. We then systematically evaluated the impact of adding distance information during training on the accuracy of contact-map prediction. Fig 2E compares the long-range top-L/5 and top-L precisions between TripletRes programs trained with Eq 6 and with a binary cross-entropy loss (see Eq A in S5 Text) on the CASP FM and CAMEO hard targets, respectively. It can be observed that incorporating continuous distance information in training leads to improvements in contact-map prediction, even though the contact-maps are not directly optimized. For example, the distance information in the loss function improves the top-L/5 precision from 68.7% to 71.4%, and from 74.1% to 75.6%, for the CASP set and the CAMEO set, respectively, corresponding to p-values of 2.1e-02 and 7.9e-03 in the Student's t-test. Interestingly, when more top-ranked contacts are considered (i.e., top-L), the p-values become more significant, decreasing to 1.6e-04 and 2.0e-07 on the two datasets, respectively, which suggests the distance information has a stronger effect on improving the precision when more contacts are evaluated. Protein structure prediction methods can thus benefit more from TripletRes trained with the discrete distance loss function, because more predicted contacts can be reliably used as restraints for protein folding.
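The contrast between the two loss formulations can be sketched as follows. Since Eq 6 and its exact bin edges are not reproduced in this section, the 12 bins below (with 4 bins under the 8 Å contact cutoff, matching Fig 1) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Assumed upper bin edges in Angstroms; 11 edges define 12 bins, and the
# first 4 bins cover distances below the 8 A contact cutoff.
EDGES = torch.tensor([2., 4., 6., 8., 10., 12., 14., 16., 18., 20., 22.])

def distance_bin_loss(probs, true_dist):
    """Cross-entropy against discretized inter-residue distances.
    probs: B x 12 x L x L softmax output; true_dist: B x L x L (Angstroms)."""
    labels = torch.bucketize(true_dist, EDGES)        # bin index 0..11
    return F.nll_loss(torch.log(probs + 1e-9), labels)

def binary_contact_loss(contact, true_dist, cutoff=8.0):
    """The binary alternative: a pair is 'in contact' iff below the cutoff."""
    return F.binary_cross_entropy(contact, (true_dist < cutoff).float())
```

Training with `distance_bin_loss` and then summing the first four bin probabilities recovers a contact-map, which is how the distance-supervised model is compared against the binary baseline above.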
To analyze in detail the effect of the discrete distance loss function on different fold types, S2 Fig presents the comparison of long-range top-L/5 and top-L precisions with the different loss functions across fold types, with median and mean precisions marked by solid and dashed lines, respectively. Structures from the 195-target CAMEO set and the CASP 11&12 set were classified into 63 alpha proteins, 24 beta proteins, and 157 alpha&beta (alpha+beta and alpha/beta) proteins. For all three fold types, consistent improvements can be observed with the distance loss function for all evaluation indexes. For example, for long-range top-L predicted contacts, training with the discrete distance loss function achieves precisions of 34.9%, 49.0%, and 54.0% for alpha, beta, and alpha&beta folds, slightly higher than the baselines, corresponding to p-values of 4.0e-03, 2.0e-02, and 9.4e-08, respectively. Among the three fold types, alpha proteins have the lowest mean top-L/5 and top-L precision regardless of the loss function type; this may be due to the fact that contact patterns, including hydrogen bonds, between alpha-helical segments are not as evident as those between beta-strand elements in proteins.
Deep MSA search helps create more comprehensive coevolutionary information
TripletRes utilizes MSAs as its only input, and the quality of the MSAs is thus essential to the final contact prediction models. It is worth noting that the TripletRes model is trained on features extracted from MSAs generated by HHblits, whereas a deeper MSA generated from multiple databases is used for test proteins (see "MSA generation" in Methods and Materials). We expect this strategy to reduce over-fitting between the training and test proteins.
To examine the impact of different MSA collections on the contact models, Fig 3 shows a comparison of TripletRes models with and without deep MSAs on the test proteins from the CASP FM targets (Fig 3A) and the CAMEO hard targets (Fig 3D). Here, dashed lines mark the mean precision of the long-range top-L prediction on each dataset. For the CASP FM targets, the usage of deep MSAs during testing significantly improves the mean precision of TripletRes from 40.0% to 46.4%, with a p-value of 1.9e-05 in the Student's t-test, where 35 of the 50 FM targets (70%) achieve a higher precision with deep MSAs while only 8 targets (16%) do so when the HHblits MSAs are used. The same trend can be observed on the CAMEO targets, where the p-value of the improvement in long-range top-L precision is 1.7e-06. This difference is mainly due to the higher number of homologous sequences collected by the deep MSA search protocol, which allows the extraction of more reliable coevolutionary information. For example, the average number of effective sequences (Neff, calculated by Eq 1 in Methods and Materials) of the MSAs generated by deep MSA search is 85.4, which is 34.3% higher than that obtained by HHblits on the CASP FM targets (63.6).
In Fig 3B, 3C, 3E and 3F, we select two illustrative cases from the CASP and CAMEO datasets, respectively. The example in Fig 3B is from the third domain of CASP12 target T0896, with the experimental structure presented in Fig 3C; here HHblits collects a relatively shallow MSA with Neff = 0.94, which resulted in only 39 true positives among the 162 long-range top-L contact predictions. The deep MSA search increased the Neff value to 3.78, with which the number of true contacts increases to 73, 87.2% higher than with the HHblits MSA. The second illustrative case, from the CAMEO set, is shown in Fig 3E and 3F.
Performance of TripletRes for blind prediction in CASP13
An early version of TripletRes, denoted TripletRes_CASP13, participated in the 13th CASP experiment for inter-residue contact prediction [6,35]. It was ranked among the top two methods based on the mean precision score (http://www.predictioncenter.org/casp13/zscores_rrc.cgi), together with RaptorX-Contact, which was also ranked as the top method in previous CASPs. In Table 2, we list a summary of the average results by TripletRes and TripletRes_CASP13, along with three other top CASP13 predictors: RaptorX-Contact, DMP, and ZHOU-Contact. For the long-range top-L/5 contacts on the 31 FM targets, TripletRes_CASP13 achieved a mean precision of 64.6%, while the mean precisions of RaptorX-Contact, DMP, and ZHOU-Contact are 69.4%, 60.2%, and 58.3%, respectively. TripletRes, however, achieves the highest precision of 71.6% for the long-range top-L/5 contacts. Here, TripletRes and TripletRes_CASP13 are based on the same input MSAs, and the only difference between them is that TripletRes utilizes a new loss function (Eqs 6 and 7 in Methods and Materials) to integrate distance profiles into contact-maps, while TripletRes_CASP13 used a binary cross-entropy loss function (Eq A in S5 Text). These data demonstrate the validity of the distance-supervised training strategy.
In a recent study, trRosetta [37] reported an alternative MSA construction approach, performing HHblits and hmmsearch searches through a much larger proprietary database with ~7 billion sequences. In comparison, the Metaclust database used by DeepMSA has only 424 million sequences. Unfortunately, both the scripts and the database used in the trRosetta MSA construction are unavailable, preventing us from testing DeepMSA on the same database. Nonetheless, we observed that the top-L/10, L/5, L/2, and L precisions could be boosted to 84.1%, 78.4%, 62.0%, and 47.1%, respectively, by simply feeding the TripletRes model with the pre-generated MSAs downloaded from the trRosetta [37] website. The average Neff value of the trRosetta-generated MSAs is 82.18, which is 2.6 times higher than that of DeepMSA. These data confirm again the impact of the size of the sequence database on contact prediction models.
In Fig 4, we present an example, the first domain of CASP13 target T0957s1, which is a contact-dependent growth inhibition toxin-immunity protein (PDB ID: 6cp8) with an α+β fold and 108 residues. TripletRes collected a deep MSA with Neff = 6.7, significantly higher than the Neff value (1.3) obtained by HHblits. This resulted in a mean precision of 86.4% for the top-L/5 long-range contact predictions, compared to 40.9% by RaptorX-Contact, 36.4% by DMP, and 54.5% by ZHOU-Contact. TripletRes also performed better than its CASP13 version (precision 77.3%), benefiting from the distance information used during training. As shown in Fig 4B and 4D, RaptorX-Contact and ZHOU-Contact failed to hit any long-range contacts in Region 1, a critical loop-loop contact region. DMP, on the other hand, was not able to cover the contacts in Region 2 that are important for packing the core structure of the two helices against the central beta-sheet (Fig 4C). TripletRes covers both regions, marked in yellow and magenta in Fig 4E, respectively. Among the top-L/5 correctly predicted long-range contacts, 94.7% have a distance profile with a probability peak at <8 Å, and nearly 74% of the residue pairs have an accumulated probability >80% in the region below 15 Å, indicating a high confidence of contact prediction for these residue pairs based on the distance profile.
Note that both TripletRes_CASP13 and TripletRes were trained on the same training set collected before CASP13. To examine the impact of the size of the training dataset on the proposed framework, we re-trained the TripletRes model with a dataset newly collected after CASP13, containing 26,151 PDB sequences, and performed the evaluation on a test set containing 37 sequences (S2 Text); the re-trained model is termed TripletRes (Post-CASP13). S2 Table lists the overall performance of TripletRes (Post-CASP13) in comparison with TripletRes and trRosetta, considering that trRosetta is the representative method for predicting inter-residue geometric terms for protein folding after the CASP13 season. DeepMSA was employed to generate MSAs for the test set because of its availability, and all control methods share the same MSAs. S2 Table shows that the performance can be considerably improved by the simple employment of a larger training set. TripletRes (Post-CASP13) achieves a top-L/5 precision of 76.2% on the 37 test sequences, 9.2% higher than that of TripletRes, with a p-value of 1.7e-03. Such differences in performance with different amounts of training data demonstrate the importance of the available dataset when training the model. Compared to TripletRes, trRosetta has a slightly higher precision at all cutoffs; the difference is, however, statistically insignificant, with p-values of 0.68, 0.59, 0.15, and 0.20 for the top-L/10, L/5, L/2, and L precisions, respectively. It is noted that the higher contact accuracy of trRosetta is mainly attributed to various auxiliary prediction tasks such as orientation prediction, while for TripletRes the improvements mainly come from the ensemble of multiple coevolutionary features. In this sense, the proposed TripletRes method should be considered complementary to trRosetta.
Apart from trRosetta and DMP discussed above, AlphaFold [35] also performs contact/distance prediction by predicting discretized distance bins. While AlphaFold did not participate in the contact prediction category of CASP, its top-L long-range contact precision reportedly reached 46.1% [35], higher than what TripletRes achieved in CASP13. In the MSA generation step, AlphaFold performs a routine HHblits search through the standard Uniclust database, which is equivalent to Stage 1 of our three-stage DeepMSA approach. The input features of AlphaFold are mainly derived from PLM, which is only a subset of our triplet features. Given its simple MSA and feature design, part of the advantage of AlphaFold over TripletRes lies in the complexity of its neural network architecture. Since the DeepMind team has access to computational resources unattainable for most academic groups, it can train a neural network with 220 residual blocks. In comparison, due to resource limits, TripletRes could only be trained with 24 residual blocks for each of its three ResNet branches (corresponding to the three sets of input features) and another 24 residual blocks for fusing the three branches. Meanwhile, iterative 3D model construction and contact prediction procedures can further improve the contact prediction accuracy, since the process of 3D structure construction can help filter out physically impractical contacts.
The re-training of the TripletRes (Post-CASP13) model took up to 30 days on 4 Nvidia P100 GPUs from the public XSEDE Comet Cluster [39] due to the heavy I/O loads of pre-calculated feature data. However, the running time during the test should be theoretically comparable with regular methods, e.g., AlphaFold or RaptorX-Contact. The full 3-stage DeepMSA pipeline, on average, takes 1.32 hours [30] on its benchmark set. After that, the majority of the time would be spent on the calculation of the PLM matrix. To the best of our knowledge, the CCMpred program utilized by TripletRes to calculate the PLM matrix is one of the most efficient programs in the field.
Conclusion
Protein contact-map prediction has been critical to assist protein folding in the form of spatial constraints. This work presented a new deep learning method for high-accuracy contact prediction by learning from raw coevolutionary features extracted with deep multiple sequence alignments. The method was tested on FM domains in CASP11-13 and hard targets from CAMEO experiments, which demonstrated the effectiveness of the proposed method.
Several factors were found to contribute to the success of the TripletRes pipeline. First, coupling deep residual convolutional networks directly with raw coevolutionary matrices can result in better performance than feeding neural networks with the post-processed features. Second, a triplet of coevolutionary features, from covariance matrix, inverse covariance matrix and the inverse Potts model approximated by pseudolikelihood maximization, are ensembled in TripletRes by a set of four neural networks constructed with residual blocks. This feature ensemble strategy was found to enable more accurate prediction than using the three sets of features individually. Third, including more discrete distance information into the network training was proven to be beneficial to the contact-map prediction compared to binary contact training, although the contact-map models are binary on their own. This is largely because the distance-based loss function enables the learning of detailed spatial features specified by the sequence profiles. Finally, a hierarchical sequence searching protocol was proposed to obtain deeper MSAs, which impact the performance of the final model prediction. A significant improvement of contact prediction precision can be achieved through MSAs generated by searching an enlarged protein sequence database. These data underscore the impact of the volume of the sequence database on contact/distance prediction. The studies extending the DeepMSA pipeline to utilize the enlarged databases are in progress.
It is worth noting that the major goal of contact-map prediction is to assist ab initio 3D structure construction, where significant effort has been made along this line over the past decades [8,33,40-42]. Although recent progress in the field has shown an advantage of distance predictions [34,35], contact maps provide reliable information on short-distance residue-residue interactions that is critical for specifying the global topology of the protein fold. In fact, our results showed that most of the accurately predicted distances in TripletRes are still on residue pairs with a short distance below 9-10 Å, which is part of the reason that motivated our idea of distance-supervised learning in TripletRes. In addition, the development of feature extraction for protein contact-map prediction contributes directly to the prediction of other forms of long-range residue-residue interactions. Therefore, with the development of new approaches and consistent improvement of model accuracy, advanced sequence-based contact-map prediction will continue to be an important driving force for template-free structure prediction in the field.
Methods and materials
TripletRes is a deep-learning based contact-map prediction method consisting of three consecutive steps (Fig 1). It first creates a deep MSA and extracts three coevolutionary matrix features. Next, the feature sets are fed into three sets of deep ResNets and trained in an end-to-end fashion. Finally, a symmetric distance-histogram probability matrix is created and binarized into the contact-map prediction.
MSA generation
To help offset overfitting effects, TripletRes creates MSAs using different strategies for training and test protein sequences. For training proteins, MSAs are created by HHblits, with an E-value threshold of 0.001 and a minimum sequence coverage of 40%, searching through the Uniclust30 (2017_10) [43] database with 3 iterations.
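Concretely, the training-set search corresponds to a single HHblits invocation. The sketch below is a hypothetical reconstruction: the database path and file names are placeholders, not the authors' exact setup, though the flags used (-n, -e, -cov) are standard HHblits options matching the quoted parameters.

```python
import subprocess

def build_training_msa(query_fasta: str, out_a3m: str,
                       db: str = "uniclust30_2017_10/uniclust30_2017_10") -> None:
    """Run HHblits with the parameters quoted in the text:
    3 iterations, E-value 0.001, minimum sequence coverage 40%."""
    subprocess.run(
        ["hhblits",
         "-i", query_fasta,   # query sequence
         "-oa3m", out_a3m,    # output alignment in A3M format
         "-d", db,            # Uniclust30 (2017_10) database prefix (placeholder path)
         "-n", "3",           # number of search iterations
         "-e", "0.001",       # E-value inclusion threshold
         "-cov", "40"],       # minimum coverage with the query (%)
        check=True,
    )
```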
For test proteins, the DeepMSA pipeline [30] was utilized to generate MSAs. The initial MSA is also created by HHblits, followed by additional search stages if needed. If the Neff value of the initial MSA is lower than a given threshold (128, decided by trial and error), a second stage is performed using jackhmmer [44] through UniRef90 (release 2017_12) [45]. Here, Neff measures the number of effective sequences in the MSA and is defined as

$$N_{\mathrm{eff}} = \frac{1}{\sqrt{L}}\sum_{m=1}^{N}\frac{1}{\sum_{n=1}^{N} I[S_{m,n} \geq 0.8]}$$

where L is the length of the query sequence, N is the total number of sequences in the MSA, and $I[S_{m,n} \geq 0.8] = 1$ if the sequence identity $S_{m,n}$ between sequences m and n is over 0.8, and 0 otherwise. To assist the MSA concatenation, the jackhmmer hits are converted into an HHblits-format sequence database, against which a second HHblits search is performed. In case Neff is still below 128, a third stage is performed by hmmsearch [44] through MetaClust (2017_05) [46], where the final MSA is pooled from all stages (see S3 Fig for the whole MSA construction pipeline).
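As a minimal sketch of the Neff definition above (ignoring the memory optimizations a production implementation would need), the down-weighting of redundant sequences can be computed as:

```python
import numpy as np

def neff(msa: np.ndarray, identity_cutoff: float = 0.8) -> float:
    """Normalized number of effective sequences of an MSA.

    msa: (N, L) integer array of residue codes (gaps included).
    Each sequence is down-weighted by the number of sequences
    (including itself) that share >= 80% identity with it.
    """
    n, length = msa.shape
    # pairwise fraction of identical columns between all sequence pairs
    identity = (msa[:, None, :] == msa[None, :, :]).mean(axis=2)  # (N, N)
    # number of sequences within the identity cutoff of each sequence
    cluster_size = (identity >= identity_cutoff).sum(axis=1)      # (N,)
    return float((1.0 / cluster_size).sum() / np.sqrt(length))
```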
Coevolutionary feature extraction
Three sets of coevolutionary features are extracted from the deep MSAs. First, the covariance (COV) feature measures the marginal dependency between different sequence positions and is calculated as

$$\mathrm{COV}_{i,j}(a,b) = f_{i,j}(a,b) - f_i(a)\, f_j(b)$$

where $f_i(a)$ is the frequency of residue type a at position i of the MSA, and $f_{i,j}(a,b)$ is the co-occurrence frequency of residue types a and b at positions i and j. The COV feature captures marginal correlations among variables, which contain transitive correlations. The negative inverse of the covariance matrix, i.e., the precision matrix, can be interpreted as the mean-field approximation of the Potts model [12] and can thus capture direct couplings. In this work, a ridge-regularized precision matrix (PRE), Θ, is estimated by minimizing the regularized negative log-likelihood function [27,47]

$$\hat{\Theta} = \operatorname*{arg\,min}_{\Theta}\left\{ \mathrm{tr}(S\Theta) - \log|\Theta| + R(\Theta) \right\}$$

where the first two terms are the negative log-likelihood of Θ assuming that the data follow a multivariate Gaussian distribution with sample covariance matrix S; tr(SΘ) is the trace of the matrix SΘ; log|Θ| is the log-determinant of Θ; and $R(\Theta) = \rho \sum_{i,j} \lVert \Theta_{i,j} \rVert_2^2$ is the regularization term to avoid over-fitting, with $\rho = e^{-6}$ a positive regularization hyper-parameter.
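A minimal sketch of these two features follows. The COV computation mirrors the formula above; the PRE sketch substitutes a simple diagonal-loading matrix inverse for the full regularized likelihood minimization used in the paper, so it is only a rough stand-in, and sequence weighting and pseudocounts are omitted.

```python
import numpy as np

def cov_feature(msa: np.ndarray, n_states: int = 21) -> np.ndarray:
    """COV feature: cov[i, j, a, b] = f_ij(a, b) - f_i(a) * f_j(b).

    msa: (N, L) integer array with residue codes in [0, 20].
    Returns an (L, L, 21, 21) array.
    """
    n, length = msa.shape
    one_hot = np.eye(n_states)[msa]                    # (N, L, 21)
    f_i = one_hot.mean(axis=0)                         # (L, 21)
    # pairwise co-occurrence frequencies f_ij(a, b)
    f_ij = np.einsum("nia,njb->ijab", one_hot, one_hot) / n
    return f_ij - np.einsum("ia,jb->ijab", f_i, f_i)

def pre_feature(cov: np.ndarray, rho: float = np.exp(-6)) -> np.ndarray:
    """Rough PRE sketch: invert the (21L x 21L) covariance matrix after
    adding rho to the diagonal, instead of minimizing the regularized
    negative log-likelihood as in the paper."""
    length = cov.shape[0]
    flat = cov.transpose(0, 2, 1, 3).reshape(length * 21, length * 21)
    theta = np.linalg.inv(flat + rho * np.eye(length * 21))
    return theta.reshape(length, 21, length, 21).transpose(0, 2, 1, 3)
```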
The last feature, first introduced by plmConv [48], is the raw coupling-parameter matrix of the inverse Potts model approximated by PLM. Instead of assuming the data follow a multivariate Gaussian distribution, PLM approximates the probability of a sequence under the Potts model as

$$P(\sigma^{(m)}) \approx \prod_{l=1}^{L} P\!\left(\sigma_l = \sigma_l^{(m)} \,\middle|\, \sigma_{\setminus l} = \sigma_{\setminus l}^{(m)}\right)$$

where $P(\sigma^{(m)})$ is the probability model for the m-th sequence in the MSA, and the conditional probability of the l-th position is

$$P\!\left(\sigma_l = \sigma_l^{(m)} \,\middle|\, \sigma_{\setminus l} = \sigma_{\setminus l}^{(m)}\right) = \frac{\exp\!\big(h_l(\sigma_l^{(m)}) + \sum_{k \neq l} J_{l,k}(\sigma_l^{(m)}, \sigma_k^{(m)})\big)}{\sum_{q=1}^{21} \exp\!\big(h_l(q) + \sum_{k \neq l} J_{l,k}(q, \sigma_k^{(m)})\big)}$$

where h and J are the single-site and coupling parameters, respectively. In TripletRes, the raw coupling-parameter matrix J is used as the PLM feature. Thus, each feature is represented by a 21L × 21L matrix for a protein sequence of L amino acids. The entries of the 21 × 21 sub-matrix of each amino-acid pair are the descriptors, which are fed into a convolutional transformation implemented as a fully convolutional neural network with residual architecture (Fig 1).
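Assuming the features are stored as (L, L, 21, 21) arrays as in the sketches above, the rearrangement into the channel layout that a 2D convolutional network expects can be written as:

```python
import numpy as np

def to_channels(feature: np.ndarray) -> np.ndarray:
    """Rearrange an (L, L, 21, 21) coevolutionary feature into a
    (441, L, L) tensor: the 21 x 21 sub-matrix of each residue pair
    becomes the channel axis of the convolutional input."""
    length = feature.shape[0]
    return feature.reshape(length, length, 441).transpose(2, 0, 1)
```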
Deep neural-network modeling
TripletRes implements residual neural networks (ResNets) [29] as the deep learning model. Compared to traditional convolutional networks, ResNets add an identity mapping of the input to the output of each block, which enables the efficient training of extremely deep neural networks such as the one used in TripletRes. As illustrated in Fig 1, the neural network structure of TripletRes has four sets of residual blocks, three of which are connected to the input layer for feature extraction. Each of the three ResNets has 24 basic blocks and learns layered features from its specific input. After transforming each input feature into a feature map of 64 channels, we concatenate the transformed features along the channel dimension and employ another deep ResNet containing 24 residual blocks to learn the fused information from the three features.
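The following PyTorch sketch illustrates the four-ResNet layout described above. The normalization layers, kernel sizes, and output head are assumptions for illustration, not the authors' released code:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """2D residual block: two convolutions plus an identity shortcut."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.body(x) + x)

class TripletNet(nn.Module):
    """One 24-block branch per feature (COV, PRE, PLM), each mapped to
    64 channels, concatenated, then fused by another 24-block ResNet."""
    def __init__(self, in_channels: int = 441, n_bins: int = 12):
        super().__init__()
        def branch():
            return nn.Sequential(nn.Conv2d(in_channels, 64, 1),
                                 *[BasicBlock(64) for _ in range(24)])
        self.branches = nn.ModuleList([branch() for _ in range(3)])
        self.fuse = nn.Sequential(nn.Conv2d(3 * 64, 64, 1),
                                  *[BasicBlock(64) for _ in range(24)],
                                  nn.Conv2d(64, n_bins, 1))

    def forward(self, cov, pre, plm):
        maps = [b(x) for b, x in zip(self.branches, (cov, pre, plm))]
        logits = self.fuse(torch.cat(maps, dim=1))
        logits = 0.5 * (logits + logits.transpose(-1, -2))  # symmetrize
        return logits.softmax(dim=1)  # per-pair distance-bin probabilities
```

The 1x1 convolutions that map each 441-channel input to 64 channels keep the three branches independent until the concatenation, which corresponds to the feature-ensembling strategy described in the Conclusion.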
The activation function of the last layer is a softmax, which outputs the probability of each residue pair belonging to specific distance bins. Here, the residue-residue distance is split into 10 intervals spanning 5-15 Å, with two additional bins representing distances below 5 Å and above 15 Å, respectively, for 12 bins in total. The whole set of deep ResNets is trained under the supervision of the maximum likelihood of the prediction, where the loss function is the negative log-likelihood summed over all residue pairs of the training proteins:

$$\mathcal{L} = -\sum_{t=1}^{T}\sum_{k=1}^{12} y_t^k \log p_t^k$$

Here, T is the total number of residue pairs in the training set; $y_t^k = 1$ if the distance of the t-th residue pair in the native structure falls into the k-th distance interval, and $y_t^k = 0$ otherwise; $p_t^k$ is the predicted probability that the distance of the t-th residue pair falls into the k-th distance interval.
The probability $P_t$ of the t-th residue pair forming a contact is the sum over the first 4 distance bins (i.e., distances below 8 Å):

$$P_t = \sum_{k=1}^{4} p_t^k$$

The training process uses dropout with a rate of 0.2 to avoid over-fitting. We use Adam [49], an adaptive stochastic gradient descent algorithm, to optimize the loss function. TripletRes implements the deep ResNets in PyTorch [50] and was trained on the Extreme Science and Engineering Discovery Environment (XSEDE) [39].
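The binning, the distance-based loss, and the contact-probability readout can be sketched as follows; the bin edges follow the text (12 bins), while the exact loss reduction used during training is an assumption:

```python
import torch
import torch.nn.functional as F

# Bin edges quoted in the text: <5 A, ten 1-A intervals from 5 to 15 A,
# and >15 A, i.e., 12 bins in total.
EDGES = torch.arange(5.0, 15.5, 1.0)  # 5, 6, ..., 15 (11 boundaries)

def distance_bins(dist: torch.Tensor) -> torch.Tensor:
    """Map native residue-pair distances (L, L) to bin indices in [0, 11]."""
    return torch.bucketize(dist, EDGES)

def distance_loss(pred: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood summed over residue pairs; pred is the
    (12, L, L) distance-bin probability map output by the network."""
    target = distance_bins(dist)                       # (L, L)
    log_p = torch.log(pred.clamp_min(1e-8)).unsqueeze(0)
    return F.nll_loss(log_p, target.unsqueeze(0), reduction="sum")

def contact_probability(pred: torch.Tensor) -> torch.Tensor:
    """Contact probability = sum of the first four bins (< 8 A)."""
    return pred[:4].sum(dim=0)
```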
"Computer Science"
] |
Elliptical Multi-Orbit Circumnavigation Control of UAVs in Three-Dimensional Space Depending on Angle Information Only
To analyze the circumnavigation tracking problem in complex three-dimensional space, we propose in this paper a UAV group circumnavigation control strategy in which the circumnavigation orbit is an ellipse whose size can be adjusted arbitrarily; at the same time, the UAV group can be assigned to multiple orbits for tracking. The UAVs have only the angle information of the target; the position of the target is obtained from this angle information using the proposed three-dimensional estimator, from which an ideal relative-velocity equation is established. By constructing the error dynamic equation between the actual and ideal relative velocities, the circumnavigation problem in three-dimensional space is transformed into a velocity tracking problem. Since UAVs are easily disturbed by external factors during flight, sliding mode control is used to improve the robustness of the system. Finally, the effectiveness of the control law and its robustness to unexpected disturbances are verified by simulation.
Introduction
In the past decade, the problem of UAV control has attracted more and more attention [1][2][3]. Since the multi-agent control problem has received continuous attention [4,5], multi-UAV control is also a hot area of research [6][7][8]. The multi-UAV circumnavigation control problem is a field of application for multi-UAV control, which refers to a group of UAVs surrounding the target in a prescribed formation to perform tasks such as monitoring [9], tracking and rounding up [10].
The circumnavigation problem has gradually developed from a single agent circumnavigating a static target [11-13], to a single agent circumnavigating a dynamic target [14-16], and up to the current setting of multiple agents circumnavigating a dynamic target [4,17,18]. Ref. [19] analyzed the circumnavigation problem in a two-dimensional plane, assuming that each agent has a fixed velocity. Ref. [20] used a self-propelled particle system to analyze the circumnavigation problem, but it required the target to be static. Most of the initial research assumed that each agent can accurately know the position information of the target [17,21], but this is difficult to achieve in practice. To overcome this drawback, researchers have proposed the use of distance measurements to obtain the position of the target. In [22], by measuring the distance between the agent and the target, backstepping control is used to analyze the circumnavigation problem. In [23], a sliding mode control method based on distance measurement is designed to enable the UAV to achieve circumnavigation without using GPS. In practice, distance measurement requires high-accuracy and expensive sensors, but UAVs are small and have limited payload capacity, so this approach is not realistic for UAVs. In contrast, angle measurement is relatively easy to implement: UAVs only need to carry cameras or other small sensors to measure angles.
In [24], the kinematic model of an underwater vehicle is combined with its dynamic model, and a cascade-based distributed circumnavigation control system is proposed that can realize circumnavigation of static and dynamic targets. In [25], navigation control is realized by using the angle information between adjacent UAVs and between each UAV and the target. Based on local information about the target, ref. [26] uses an estimation method to estimate the target's location and thereby achieve circumnavigation, and this method does not need to consider the initial state of the agent. Refs. [27,28] propose a wireless speed measurement method to keep a multi-agent formation in the desired configuration. Ref. [29] proposes a swarm control strategy for fixed-wing UAVs using multi-objective reinforcement learning. Taking an underwater robot as the research object, ref. [30] uses fast non-singular terminal sliding-mode technology to provide faster convergence for fully-actuated underwater robot trajectory tracking control, while using adaptive techniques to remove the requirement of knowing the system parameters.
However, the above studies were all conducted in the two-dimensional plane, whereas UAVs operate in complex three-dimensional space, where the circumnavigation problem is clearly more challenging to analyze. It should also be noted that the formations proposed in these works are generally circular, which limits their practical application. In this paper, we set the formation to be elliptical and allow the UAVs to be deployed on different orbits. The main contributions of this paper are summarized as follows: (1) A circumnavigation control law in three-dimensional space using only angle information is proposed, and an estimation method is used to obtain the position information of the target. In this way, the limitation of requiring both target position information and angle information at the same time is eliminated. (2) The circumnavigation trajectory is set as an ellipse instead of being limited to a circular trajectory, and the semi-major and semi-minor axes of the ellipse can be set arbitrarily. At the same time, UAVs can be deployed on multiple orbits by setting different coefficients. (3) Using the dynamic equation of the UAV, the three-dimensional position estimator, and the adjustable elliptical orbit, the ideal relative velocity equation is designed; by constructing the error dynamics between the ideal and actual relative velocities, the circumnavigation control problem is transformed into a relative-velocity tracking problem. At the same time, by adopting sliding mode control, the robustness of the system is greatly improved, and the stability of the system is proved by the Lyapunov method.
Define the Desired Angle
In three-dimensional space, UAV i moves with speed υ_i, and a space coordinate system is established with UAV i as the origin, as shown in Figures 1 and 2. The target is denoted by h, and its projection point on the horizontal plane of i is denoted by h'. The distance between i and h' is denoted by l_i, and l_d is the desired radius, that is, the desired distance between UAV i and the target projection point h'. Definition 1. δ_i is the angle between ih and ih'; γ_i is the angle formed by ih' and the positive direction of the X-axis; θ_i is the pitch angle of i; and α_i is the heading angle of i. Next, we make the following assumption: Assumption 1. Every UAV can measure the angles in Definition 1, and the communication between UAVs is realized under a cyclic directed graph, i.e., UAV i + 1 can obtain the information of UAV i; at the same time, there is no interference in the communication between UAVs. As marked in Figure 1, κ_i denotes the unit vector pointing from i to h', κ̄_i denotes the unit vector in the XOY plane perpendicular to κ_i, and κ̃_i denotes the vector perpendicular to the XOY plane; together they form the orthogonal triad used in (1).
Multi-Orbit Circumnavigation
The multi-orbit circumnavigation is shown in Figure 3. For agent i, the requirement is that its actual distance to the target projection point, l_i(t), converges to τ_i l_d, where τ_i and l_d are positive real numbers; by choosing different τ_i, different circumnavigation radii are obtained. Therefore, in multi-orbit circumnavigation, different orbits are realized by multiplying the given basic circumnavigation radius l_d by the corresponding τ_i, so the orbits are related to each other as multiples of l_d.
Dynamic Model of UAVs
First, we make the following assumption: Assumption 2. In this paper, the quadrotor UAV is taken as the research object. The quadrotor UAV dynamic model in the inertial coordinate system can then be described as in [31], where m_i is the mass of the UAV; p_i = [x_i, y_i, z_i]^T is the position of the i-th UAV in the inertial coordinate system; and B_i = [ψ_i, θ_i, φ_i]^T collects the roll, pitch, and yaw angles of the UAV in the body coordinate system. The relationship between the inertial and body coordinate systems is given in Figure 4. F_i is the total thrust generated by the motors of the UAV, g is the acceleration of gravity, and d_i = [d_xi, d_yi, d_zi]^T represents the external disturbance acting on the system. To convert the coordinates of the UAV from the body frame to the inertial frame, the rotation matrix ϑ_i is used. After some transformations, the dynamics model can be written as in [32], equation (4), where T_i is a continuous variable representing the external input of UAV i, given by (5), and u_i = [u_xi, u_yi, u_zi]^T represents the components of the control force of UAV i along the three inertial axes. Combining (2), (4) and (5) yields the expression (6), from which it can be seen that by designing the control u_i = [u_xi, u_yi, u_zi]^T, the UAV can be controlled by applying the external input T_i and the attitude angles so as to realize circumnavigation control.
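The referenced display equations are not reproduced here, but the standard quadrotor translational dynamics that the paragraph describes (total thrust along the body z-axis, gravity, and a disturbance term) can be sketched as follows; the Euler-angle convention is an assumption:

```python
import numpy as np

def rotation_matrix(psi: float, theta: float, phi: float) -> np.ndarray:
    """Body-to-inertial rotation assuming a Z-Y-X (yaw-pitch-roll)
    sequence; the paper's exact convention is an assumption here."""
    cps, sps = np.cos(psi), np.sin(psi)
    cth, sth = np.cos(theta), np.sin(theta)
    cph, sph = np.cos(phi), np.sin(phi)
    rz = np.array([[cps, -sps, 0], [sps, cps, 0], [0, 0, 1]])
    ry = np.array([[cth, 0, sth], [0, 1, 0], [-sth, 0, cth]])
    rx = np.array([[1, 0, 0], [0, cph, -sph], [0, sph, cph]])
    return rz @ ry @ rx

def translational_accel(f_total, angles, m=1.0, g=9.8, d=np.zeros(3)):
    """Standard quadrotor translational dynamics:
    p_ddot = (F/m) * R(angles) @ e3 - g * e3 + d / m."""
    e3 = np.array([0.0, 0.0, 1.0])
    return (f_total / m) * rotation_matrix(*angles) @ e3 - g * e3 + d / m
```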
Control Objective
Given a group of n UAVs and a moving target h, the purpose of this paper is to design a control law that makes the UAV group circumnavigate the given target on elliptical orbits. Ultimately, the UAV group needs to meet the following conditions: (1) The UAV group should circumnavigate the target on multiple elliptical orbits, so the circumnavigation radius of each UAV is different. (2) During the circumnavigation process, the circumnavigation angular velocities of the UAVs should be consistent. (3) The angular spacing between adjacent UAVs should remain unchanged.
The above objectives can be written formally as in (7), where N = {1, ..., n}; ω_d is the desired circumnavigation angular velocity; β_d = 2π/n is the desired angular spacing between two adjacent UAVs; and n is the number of UAVs. This means that as t → +∞, the angular spacing between adjacent UAVs converges to a fixed value.
Circumnavigation Control
Since the coordinate position of the target is unknown, an estimator is needed to estimate it. We use the estimator given in (8), where k is a constant gain, p_t = [x_t, y_t, z_t]^T is the coordinate of the target, p̂_ti = [x̂_ti, ŷ_ti, ẑ_ti]^T is UAV i's estimate of the target coordinate, and I is the identity matrix. Through this three-dimensional position estimator, the position of the target can be estimated with only the angle information known. The desired elliptical circumnavigation radius is set as in (9), where a and b are the semi-major and semi-minor axes of the ellipse, respectively. By setting different values of a and b, the desired elliptical orbit radius l_d can be obtained; multiplying l_d by the constant τ_i gives different elliptical orbits, so that each UAV circles in its own orbit. It is worth noting that when a = b, the orbit is circular. Next, we write the dynamic equation in (4) in the form of (10). At the same time, the angles in Definition 1 can be represented by known information and estimated values as in (11)-(12), where ζ = 0 if ŷ_ti < y_i and ζ = 1 if ŷ_ti ≥ y_i.
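The estimator (8) and orbit (9) can be illustrated with the sketch below. The projection form of the estimator and the ellipse parameterization are common choices from the angle-only circumnavigation literature and are assumptions here, not the paper's verbatim equations:

```python
import numpy as np

def estimator_step(p_hat, p_i, bearing, k=1.0, dt=0.01):
    """One Euler step of a bearing-only target position estimator of the
    standard projection form: the estimate is corrected along the
    component orthogonal to the measured bearing.

    p_hat   : current estimate of the target position (3,)
    p_i     : UAV position (3,)
    bearing : unit vector from the UAV toward the target (3,)
    """
    proj = np.eye(3) - np.outer(bearing, bearing)   # I - kappa kappa^T
    p_hat_dot = -k * proj @ (p_hat - p_i)
    return p_hat + dt * p_hat_dot

def ellipse_radius(gamma, a, b):
    """One common parameterization of the desired elliptical radius as a
    function of the angle gamma (assumed, for illustration only):
    r(gamma) = a*b / sqrt((b*cos(gamma))**2 + (a*sin(gamma))**2)."""
    return a * b / np.sqrt((b * np.cos(gamma))**2 + (a * np.sin(gamma))**2)
```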
It can be seen that γ_i is discontinuous, but the derivative of γ_i with respect to time can be expressed in the form of (13), where l_i(t) is the distance between the UAV and the target projection point.
Proof. Suppose the motion of UAV i at time t is as shown in Figure 5, where the relationship between the angles γ_i and α_i is also marked. The velocity of i in the horizontal plane can be decomposed orthogonally into υ_ia and υ_ib; υ_ia can then be written as in (14), and the derivative of γ_i + (2π − α_i) as in (15). Combining (14) and (15) yields the result in (13). The proof is finished. Next, the ideal relative velocity is constructed from the orthogonal vectors in (1), as given in (16). σ_1i is defined as the difference between the actual and ideal relative positions of UAV i, and σ_2i as the difference between the actual and ideal relative velocities of UAV i; thus, we obtain (17). To improve the robustness of the system, sliding mode control is adopted. The sliding surface is designed as in (18), with χ > 0, using the exponential reaching law (19), with c_3 > 0 and c_4 > 0. We then obtain the control law (20). Theorem 1. Consider a group of UAVs with the nonlinear dynamics (4). If the three-dimensional position estimators are set as (8), the elliptical circumnavigation radius is set as (9), and the parameters c_1, c_2, c_3, c_4, and χ are all selected to be greater than 0, then the controller (20) makes the group of UAVs circle the target on multiple elliptical orbits.
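A sketch of the reaching-law portion of the controller, under the common assumption that the sliding surface (18) combines the position and velocity errors linearly:

```python
import numpy as np

def smc_reaching_term(sigma1, sigma2, chi=1.0, c3=5.0, c4=10.0):
    """Sliding-mode reaching term. The sliding surface is assumed to be
    s = chi * sigma1 + sigma2 (a common choice, not the paper's verbatim
    Eq. (18)), with the exponential reaching law
    s_dot = -c3 * s - c4 * sign(s). The full control law (20) would
    combine this term with the model-based feedforward part."""
    s = chi * np.asarray(sigma1) + np.asarray(sigma2)
    return -c3 * s - c4 * np.sign(s)
```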
Simulation Results
In this section, the performance of the estimator (8) and the control law (20) is verified by considering a group of five UAVs circling a moving target on multiple trajectories, where β_d = 2π/5. We set the mass of each UAV to m_i = 1 kg and the acceleration of gravity to g = 9.8 m/s². For the ideal relative velocity in (16), c_1 = 1 and c_2 = 5 were selected. The parameters in the controller (20) were selected as c_3 = 5, c_4 = 10, χ = 1.
Case 1: Target Moves in a Straight Line
The speed of the moving target in three-dimensional space is selected as ṗ_t = [2, 2, 4]^T. First, the estimator parameter is selected as k = 1, and the initial positions of the five UAVs are randomly generated. Figure 6 presents the simulation results for the convergence of the position estimator. Figure 7 shows the angular spacing between adjacent UAVs, and Figure 8 gives the circumnavigation control error: Figure 8a shows the distance between each UAV and the target, ||p_i − p_t||; since the circumnavigation track is elliptical, this distance changes continuously. Figure 8b shows ||p_i − p_t|| − τ_i l_d, that is, the error between the actual and desired distances from the UAV to the target. The three-dimensional simulation of the UAV circumnavigation is given in Figure 9. Since the UAV is easily disturbed by external factors during operation, the robustness of the control system is tested: at t = 70 s, the external disturbance of the third UAV is set to d_3 = [0, 30, 70]^T. Figure 10 shows the change in the angular spacing between the UAVs, and Figure 11 shows the circumnavigation control error, where Figure 11a is ||p_i − p_t|| and Figure 11b is ||p_i − p_t|| − τ_i l_d. It can be seen from these figures that, under disturbance, the sliding mode control quickly eliminates the influence of the disturbance and maintains the formation of the UAV group.
Case 2: Target Moves in a Curve
The speed of the moving target in three-dimensional space is selected as ṗ_t = [3cos(0.5t), 3sin(0.5t), t]^T.
The estimator parameter is selected as k = 1, and the initial positions of the five UAVs are randomly generated. Figure 12 presents the simulation results for the convergence of the position estimator. Figure 13 shows the angular spacing between adjacent UAVs, and Figure 14 gives the circumnavigation control error: Figure 14a shows the distance between each UAV and the target, ||p_i − p_t||, and Figure 14b shows ||p_i − p_t|| − τ_i l_d. The three-dimensional simulation of the UAV circumnavigation is given in Figure 15. The robustness of the system is verified again: at t = 70 s, the external disturbance of the fifth UAV is set to d_5 = [30, 10, 50]^T. Figure 16 shows the change in the angular spacing between the UAVs, and Figure 17 shows the circumnavigation control error, where Figure 17a is ||p_i − p_t|| and Figure 17b is ||p_i − p_t|| − τ_i l_d. These experiments again demonstrate the strong robustness of the sliding mode control adopted in this paper.
Conclusions
In this paper, the problem of circumnavigation control in three-dimensional space is studied: a group of UAVs is made to circle a target in an elliptical formation on multiple orbits. The target moves with curvilinear, variable speed, and the UAVs estimate its position from angle information only. The error dynamic equation is constructed from the ideal and actual relative velocities, and the circumnavigation control is transformed into a velocity tracking problem. To improve the robustness of the system, sliding mode control is used to design the control law. Finally, the effectiveness of the proposed control law is demonstrated by simulation, and disturbances are added during the simulations to verify the robustness of the sliding mode control.
"Engineering"
] |
Impact of sediment-seawater cation exchange on Himalayan chemical weathering fluxes
Continental-scale chemical weathering budgets are commonly assessed from the flux of dissolved elements carried by large rivers to the oceans. However, the interaction between sediments and seawater in estuaries can lead to additional cation exchange fluxes that have so far been very poorly constrained. We constrained the magnitude of cation exchange fluxes from the Ganges-Brahmaputra River system based on cation exchange capacity (CEC) measurements of riverine sediments. CEC values of sediments vary throughout the river water column as a result of hydrological sorting of minerals with depth, which controls grain sizes and surface area. The average CEC of the integrated sediment load of the Ganges-Brahmaputra is estimated at ca. 6.5 meq/100g. The cationic charge of sediments in the river is dominated by the divalent ions Ca (76%) and Mg (16%), followed by monovalent K (6%) and Na (2%), and the relative proportions of these ions are constant among all samples and both rivers. Assuming a total exchange of exchangeable Ca for marine Na yields a maximal additional flux of 28 × 10^9 mol/yr of calcium to the ocean, which represents an increase of ca. 6% over the actual riverine dissolved Ca flux. In the more likely event that only a fraction of the adsorbed riverine Ca is exchanged, not only for marine Na but also for Mg and K, estuarine cation exchange for the Ganga-Brahmaputra is responsible for an additional Ca flux of 23 × 10^9 mol/yr, while ca. 27 × 10^9 mol/yr of Na, 8 × 10^9 mol/yr of Mg and 4 × 10^9 mol/yr of K are re-adsorbed in the estuaries. This represents an additional riverine Ca flux to the ocean of 5% compared to the measured dissolved flux, while about 15% of the dissolved Na flux, 8% of the dissolved K flux and 4% of the Mg flux are reabsorbed by the sediments in the estuaries. The impact of estuarine sediment-seawater cation exchange appears to be limited when evaluated in the context of the long-term carbon cycle, and its main effect is the sequestration of a significant fraction of the riverine Na flux to the oceans. The limited exchange fluxes of the Ganges-Brahmaputra relate to the lower-than-average CEC of its sediment load, which does not counterbalance the high sediment flux to the oceans. This can be attributed to the nature of Himalayan river sediments, such as a low proportion of clays and organic matter.
Quantifying the weathering flux exported to the oceans is therefore crucial to assess the role of weathering in the global carbon cycle and to compare it to other mechanisms that control atmospheric CO2 content on geological time scales. It is also highly relevant to a broader understanding of oceanic geochemical cycles.
Modern continental weathering fluxes have largely been derived from the study of dissolved elements exported by rivers (Gaillardet et al., 1999). However, most of these fluxes do not account for elements delivered to the oceans through cation exchange when river sediments are transferred through estuaries towards the ocean. In the riverine environment, sediment surfaces are mainly occupied by adsorbed Ca2+, the dominant dissolved cation. When transferred to the oceans, the Ca2+ adsorbed on sediment surfaces is partially exchanged for Na+, Mg2+ and K+ (Sayles and Mangelsdorf, 1977), representing an additional source of Ca to the oceans and a potential sink for Na, Mg and K. For the Amazon, Sayles and Mangelsdorf (1979) estimated that cation exchange fluxes remained under 10% of the dissolved flux for the major elements Na, Mg, Ca and K. On a global scale, first-order estimates suggest that cation exchange can account for an extra Ca2+ flux to the ocean ranging from 5 to 20% of the riverine dissolved flux (Berner and Berner, 2012; Berner et al., 1983; Holland, 1978). Nevertheless, these exchange fluxes to the oceans have received little attention and are currently poorly constrained. Global estimates mainly rely on the upscaling of the Amazon data from Sayles and Mangelsdorf (1979), and the magnitude of these fluxes has so far not been assessed for other major river systems.

In an effort to refine the weathering budget of the Himalayan range and its implications for the long-term carbon cycle, we evaluate the exchange flux delivered to the oceans by the Ganga and Brahmaputra (G&B) Rivers. The G&B is the largest river system in terms of sediment export, with a flux of ca. 10^9 t/yr of sediments transported from the Himalayan range to the Bay of Bengal (RSP, 1996). The high sediment to dissolved load ratio of the G&B of ca. 11 (Galy and France-Lanord, 2001), more than double the world average (ca. 5, Milliman and Farnsworth (2011)), could potentially yield significant cation exchange fluxes that need to be properly quantified. Raymo and Ruddiman (1992) proposed that Himalayan weathering generated a major uptake of atmospheric carbon during the Neogene, potentially triggering the Cenozoic climate cooling. This suggestion was moderated by the observation that Himalayan silicates are mostly alkaline and therefore generate a flux of alkalinity linked to Na and K ions that cannot lead to the precipitation of carbonate in the marine environment (France-Lanord and Derry, 1997; Galy and France-Lanord, 1999). Nevertheless, cation exchange on sediment surfaces at the river-ocean transition can potentially exchange Na+ for Ca2+, strengthening subsequent carbonate precipitation. Earlier studies on the carbon budget of Himalayan weathering used a rough approximation of this process; in order to better evaluate the carbon budget of Himalayan silicate weathering, it is necessary to assess the importance of cation exchange fluxes based on the specific physico-chemical properties of G&B suspended sediments.
Sampling
Sediments used in this work were sampled at the mouths of the Ganga and Brahmaputra Rivers, as well as at their confluence, the Lower Meghna, in Bangladesh during the monsoon seasons between 2002 and 2010 (Figure 1). These sampling locations integrate all Himalayan tributaries and therefore cover the entire sediment flux exported by the G&B basin. Suspended sediments were sampled along depth profiles in the center of the active channel in order to capture the full variability of transported sediments. Bedload samples were dredged from the channel as well. Sediments were filtered at 0.2 µm within 24 h of sampling and freeze-dried back in the lab.

Contact of the sediments with anything other than river water was prevented to avoid biases in the composition of bound cations due to so-called "rinsing effects" (Sayles and Mangelsdorf, 1977). The major element composition of the sediments was determined by ICP-OES after LiBO2 fusion at SARM-CRPG (Nancy, France).
Cation Exchange Capacity determination
The cation exchange capacity (CEC) is defined as the amount of cations bound to mineral surface charges that can be reversibly exchanged. In this work, the cation exchange capacity was measured by displacing the adsorbed ions with cobalt-hexammine ("CoHex", Co(NH3)6^3+). CoHex is a stable organometallic compound that effectively displaces the major cations while keeping the pH of the sample constant (Ciesielski and Sterckeman, 1997; Orsini and Remy, 1976). To avoid carbonate dissolution during exchange, the CoHex solution was saturated with pure calcite (Dohrmann and Kaufhold, 2009). Between 1 and 2 g of sediment were reacted with 30 ml of calcite-saturated CoHex solution for 2 hours. After centrifugation, the remaining cobalt concentration in the supernatant was determined by UV absorbance measurements (Aran et al., 2008), which, by difference with the initial cobalt concentration of the solution, yields a first estimate of the total CEC of the sediments (CEC_UV). Additionally, the major cations (Ca2+, Mg2+, Na+, K+) released by the sediments during exchange were determined by atomic absorption spectrometry at SARM-CRPG on the same solution. The sum of the released cations provides a second determination of the total CEC of the sediments (CEC_Σcat). No systematic differences between CEC_UV and CEC_Σcat are observed (Figure 2), which indicates that no significant amounts of other cations are released during exchange or through mineral dissolution. Repeated measurements showed that the reproducibility of both determinations is better than 10%. Freeze-drying the sediment samples prior to CEC analysis did not affect their CEC behaviour, since splits of sediments conserved in river water until exchange and splits that were freeze-dried showed similar CEC values within uncertainty.
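As a hypothetical worked example of the two independent CEC determinations (cobalt depletion versus summed released cations), with illustrative numbers chosen to land near the ca. 6.5 meq/100g average reported below:

```python
def cec_uv(co_initial_meq: float, co_final_meq: float, sediment_g: float) -> float:
    """CEC from cobalt-hexammine depletion (CEC_UV), in meq/100 g."""
    return (co_initial_meq - co_final_meq) / sediment_g * 100.0

def cec_sum_cations(released_meq: dict, sediment_g: float) -> float:
    """CEC from the sum of released Ca, Mg, Na, K (CEC_sum), in meq/100 g."""
    return sum(released_meq.values()) / sediment_g * 100.0

# Illustrative values only, not measured data:
released = {"Ca": 0.080, "Mg": 0.017, "K": 0.006, "Na": 0.002}  # meq released
print(cec_uv(0.500, 0.395, 1.6))        # ~6.6 meq/100 g from Co depletion
print(cec_sum_cations(released, 1.6))   # ~6.6 meq/100 g from released cations
```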
Total cation exchange capacity
The CEC of river sediments in the Ganga, Brahmaputra and lower Meghna is reported in Table 1. The CEC of sediments is correlated with the sediment sampling depth: surface sediments generally have a higher CEC than coarse bedload sediments. This is further illustrated by the positive correlation between CEC and the Al/Si ratio of sediments (Figure 3). Al/Si is well correlated with grain size, which is controlled by hydraulic sorting of minerals within the water column (Lupker et al., 2011, 2012). The variable Al/Si ratio of sediments in the water column is, to first order, the result of binary mixing between Si-rich, coarse-grained quartz bottom sediments and Al-rich phyllosilicates and clays that are relatively enriched in surface sediments. Surface sediments also have a higher surface area favouring adsorption compared to bedload samples (Galy et al., 2008). Sediments from the Ganga show higher CECs for a given Al/Si ratio compared to sediments from the Brahmaputra. Ganga sediments also have a higher surface area (Galy et al., 2008), which can be attributed to a higher abundance of mixed-layer and smectite clays in Ganga sediments relative to the Brahmaputra (Heroy et al., 2003; Huyghe et al., 2011). The variable CEC of sediments in the water column and among river reaches can therefore be tentatively summarized as resulting from the mineralogical and grain-size control on the surface area of the sediments (Malcolm and Kennedy, 1970).

Nature of adsorbed cations

Figure 4 shows the molar fraction of each major cation adsorbed onto the sediments delivered to the Bay of Bengal. Ca2+ and Mg2+ are the dominant adsorbed cations in river water, with 76% and 16% of the total exchangeable cations, respectively. Na+ and K+ account for 1% and 7% of the total adsorbed species, respectively. However, in contrast to the total CEC, the nature of the exchangeable cations does not depend on the Al/Si ratio of the sediments and is constant among all samples. The partitioning of exchangeable cations bound to the riverine sediments is therefore not controlled by grain-size or mineralogical sorting in the water column. These exchangeable compositions are also very similar for Ganga, Brahmaputra and Lower Meghna sediments and for samples collected in different years.
The composition of sediment exchangeable cations is, to first order, imposed by the dissolved composition of the river water transporting these sediments. For the two most abundant adsorbed cations, the binary Ca/Mg exchange is commonly described by an exchange isotherm with an equilibrium constant K_v (Sayles and Mangelsdorf, 1979), such that

$$\frac{X_{\mathrm{Ca}}}{X_{\mathrm{Mg}}} = K_v \left(\frac{a_{\mathrm{Ca}}}{a_{\mathrm{Mg}}}\right)^{p}$$

where X_Ca and X_Mg are the fractions of adsorbed cations, a_Ca and a_Mg the cation activities in the river water, and p a constant. The chemical composition of the river water directly in contact with the sampled sediments was not systematically measured. However, the constant composition of exchangeable cations for sediments sampled in different seasons suggests that a first-order determination of K_v can be made using the average dissolved compositions of the Ganga, Brahmaputra and Lower Meghna (Galy and France-Lanord, 1999). The equilibrium constant K_v for sediments of the Ganga, Brahmaputra and lower Meghna is relatively similar (between 1.7 and 2 for p = 1), despite the use of average dissolved river water compositions that do not account for the compositional variability of these rivers (Galy and France-Lanord, 1999; Singh et al., 2005). Using a p value of 0.76, as found for Amazon sediments (Sayles and Mangelsdorf, 1979), the calculated K_v ranges from 2.1 to 2.5, also in agreement with the equilibrium constants found for the Amazon (Table 1). This shows that the behavior of Himalayan sediments with respect to cation exchange composition is very similar to that of the sediments transported by the Amazon. These similarities most probably stem from the first-order resemblance of the mineralogical compositions of both rivers' sediments (Garzanti et al., 2011; Martinelli et al., 1993).
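The isotherm above can be inverted to compute K_v from measured quantities. In this sketch the adsorbed fractions are those quoted in the text, while the dissolved activities are placeholders, not the measured river compositions:

```python
def kv(x_ca: float, x_mg: float, a_ca: float, a_mg: float, p: float = 1.0) -> float:
    """Solve the binary Ca-Mg exchange isotherm
    X_Ca / X_Mg = K_v * (a_Ca / a_Mg)**p for K_v."""
    return (x_ca / x_mg) / (a_ca / a_mg) ** p

# Adsorbed fractions from the text (0.76 and 0.16); the activity ratio
# below is a placeholder chosen for illustration only.
print(kv(0.76, 0.16, a_ca=2.4, a_mg=1.0, p=1.0))  # ~2.0, in the quoted 1.7-2 range
```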
Exchangeable flux to the Bay of Bengal
In order to derive the flux of exchangeable cations that can be delivered to the Bay of Bengal by Himalayan sediments, it is necessary to take into account the variability of the CEC of sediments with water depth. The average CEC of sediments exported to the Bay of Bengal can be constrained using the average Al/Si ratio of the sediments, owing to the linear correlation between CEC and Al/Si (Figure 3). The sequestrated flux is limited, and the Al/Si ratio of sediments in Bangladesh is close to that inferred for the Himalayan crust. The major immobile element content (Al, Si and Fe) of Brahmaputra sediments is very similar to that of Ganga sediments (Lupker, 2011), suggesting that the parent material has a very similar composition. Furthermore, the constricted morphology of the Brahmaputra floodplain does not favour high sedimentation fluxes in the floodplain. We therefore assume here that the average Al/Si of Brahmaputra sediments is very similar to that of Ganga sediments.
Using an Al/Si ratio of 0.23 (±0.01) yields an average total CEC of 8.0 (±0.9), 4.2 (±1.2) and 6.5 (±1.3) meq/100g for Ganga, Brahmaputra and lower Meghna sediments, respectively. The average lower Meghna CEC deduced from the regression through the analyzed sediments is very similar to the ca. 6.0 meq/100g that would be expected from the mixing of 550 × 10^6 t/yr of Ganga sediments and 590 × 10^6 t/yr of Brahmaputra sediments (RSP, 1996). For a combined Ganga and Brahmaputra sediment flux of 1.14 × 10^9 t/yr, the total exchange capacity of the sediments amounts to 74.1 (±14.8) × 10^12 meq/yr. The maximum exchangeable flux is reported in Table 2. During exchange with seawater, river sediments mainly lose Ca2+ to the ocean while adsorbing Mg2+, Na+ and K+ (Sayles and Mangelsdorf, 1977, 1979). Assuming a total exchange of Ca2+ (the dominant cation in riverine water) for Na+ (the dominant cation in seawater) during the transfer of sediments to the ocean yields a maximum exchange flux of 28 (±6) × 10^9 mol/yr of Ca2+ to the Indian Ocean, while 56 (±12) × 10^9 mol/yr of Na+ are adsorbed onto the sediments. These additional Ca and reduced Na fluxes to the ocean are not accounted for by modern dissolved riverine fluxes. However, Sayles and Mangelsdorf (1977, 1979) showed that only a fraction of the adsorbed Ca2+ is exchanged during prolonged contact of sediments and clays with seawater, and that these cations are exchanged not only for Na+ but also partially for Mg2+ and K+. In their experiments, the authors found that ca. 82% of the adsorbed riverine Ca2+ is exchanged for Na+, Mg2+ and K+ in respective molar proportions of 58%, 32% and 10%. A reasonable estimate of the effective exchanged flux in the G&B estuary can therefore be made based on the exchange behavior found for the Amazon. This estimation suggests that ca. 23 × 10^9 mol of Ca2+ are desorbed from the sediments in the Bay of Bengal, while 27 × 10^9 mol of Na+, 5 × 10^9 mol of K+, and 8 × 10^9 mol of Mg2+ are reabsorbed (Table 3). The main exchange reaction is therefore still the exchange of riverine Ca2+ for marine Na+, but non-negligible amounts of K+ and Mg2+ are fixed in the marine environment by the sediments.
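The flux arithmetic in this paragraph can be reproduced directly from the quoted quantities; the results match the 74.1 × 10^12 meq/yr capacity and the 28 and 56 × 10^9 mol/yr maximum-scenario fluxes within rounding:

```python
# Quantities quoted in the text:
SEDIMENT_FLUX_T = 1.14e9   # t/yr, combined Ganga + Brahmaputra sediment flux
CEC = 6.5                  # meq/100 g, flux-averaged CEC (= 65 eq/t)
X = {"Ca": 0.76, "Mg": 0.16, "K": 0.07, "Na": 0.01}  # adsorbed molar fractions
Z = {"Ca": 2, "Mg": 2, "K": 1, "Na": 1}              # cation charges (eq/mol)

total_eq = CEC * 10.0 * SEDIMENT_FLUX_T     # ~7.4e10 eq/yr (= 74 x 10^12 meq/yr)
mean_charge = sum(X[c] * Z[c] for c in X)   # ~1.92 eq per mole of adsorbed cation
total_mol = total_eq / mean_charge          # total moles of adsorbed cations

# Maximum scenario: all adsorbed Ca2+ exchanged for marine Na+,
# with two Na+ adsorbed per Ca2+ released (charge balance).
ca_out = X["Ca"] * total_mol                # ~2.9e10 mol/yr Ca2+ to the ocean
na_in = 2 * ca_out                          # ~5.9e10 mol/yr Na+ adsorbed
print(f"Ca released: {ca_out:.2g} mol/yr, Na adsorbed: {na_in:.2g} mol/yr")
```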
Comparison with Ganga-Brahmaputra dissolved fluxes
To evaluate the importance of cation exchange fluxes to the ocean, we compare the maximum and probable exchange fluxes derived above to the dissolved flux exported by the G&B. Galy and France-Lanord (1999) estimated that the G&B export annual molar fluxes of 183 × 10^9 mol Na+, 59 × 10^9 mol K+, 462 × 10^9 mol Ca2+ and 187 × 10^9 mol Mg2+. These estimates are close to the fluxes estimated from the GEMS/Water program (UNESCO) and show the dominance of the Ca2+ flux, largely derived from carbonate dissolution. Assuming a total replacement of adsorbed Ca2+ by seawater Na+, the maximum cation exchange flux would add 28 × 10^9 mol/yr of Ca2+ to, and remove 57 × 10^9 mol/yr of Na+ from, the dissolved flux. This would increase the riverine Ca2+ flux by ca. 6% and decrease the Na+ flux by 32% (Figure 5). However, as discussed earlier, total cation exchange is not expected, and a more probable exchange flux can be determined from the work of Sayles and Mangelsdorf (1979) on the Amazon. This more probable estimate suggests that the cation exchange flux represents an addition of 5% of the dissolved Ca2+ flux and a subtraction of 16% of the dissolved Na+ flux, 8% of the dissolved K+ flux and 4% of the dissolved Mg2+ flux (Table 3, Figure 5). The main effect of estuarine cation exchange for the Himalayan weathering budget is therefore a moderate but significant decrease of the overall Na+ flux to the Indian Ocean, since about one sixth of the riverine flux is reabsorbed. The increase in the riverine Ca2+ flux and the decreases in the K+ and Mg2+ fluxes remain limited.
Magnitude of cation exchange fluxes
The exchange fluxes of G&B sediments are in the the order of few percent of the riverine dissolved fluxes exported to the Bay of Bengal. Despite the fact that the G&B sediment flux is on the same order of that of the Amazon River (Milliman and Farnsworth, 2011), the cation exchange flux of the G&B appears lower by a factor 3 to 5, 25 depending on element, compared to that determined for the Amazon by Sayles and Mangelsdorf (1979). This difference can be attributed to the lower average CEC value of ca. 6 meq/100g of the G&B sediments compared to the ca. 22 meq/100g of Amazon sediments, (Sayles and Mangelsdorf (1979)) that compensates for the high sediment yield of the Himalayan system. The overall low CEC of G&B sediments also limits the relative importance of cation exchange on the dissolved fluxes. Even though the suspended to dissolved load ratio of the 30 G&B is almost 3 times higher than that of the Amazon River (ca. 4, Milliman and Farnsworth (2011)) the effect of cation exchange are comparable with an increase of ca. 4 to 5% of the Ca 2+ dissolved flux and a decrease of 4 to 8 % of the Mg 2+ and 6 to 8% of the K + dissolved fluxes (Sayles and Mangelsdorf, 1979). The effect of riverine Na + re-adsorption is more substantial with a decrease of ca. 16% for the G&B compared to the 6% determined for the Amazon, but this can mainly be attributed to the high dissolved Na flux of the Amazon. If a CEC value of 35 world average river sediments of 18 meq/100g is retained (Berner and Berner, 1996;Holland, 1978), the total riverine cation exchangeable flux would also be higher by a factor of ca. 3 and yield an additional Ca 2+ flux in Earth Surf. Dynam. Discuss., doi: 10.5194/esurf-2016-26, 2016 Manuscript under review for journal Earth Surf. Dynam. Published: 4 May 2016 c Author(s) 2016. CC-BY 3.0 License. excess of 15 to 18 % compared to the actual dissolved Ca 2+ flux. This difference highlights the importance of assessing the average CEC on a river-by-river basis.
The relatively low CEC values of G&B sediments can be linked to the dominance of physical erosion in the Himalayan system that does not favour the formation of high area clay minerals (smectite) and leads to the export 5 of clays dominated by illite and overall coarse-grained material with low surface areas (Galy et al. 2008). CEC exchange fluxes can be expected to scale with the magnitude of sediment fluxes, which means that the underestimation of modern dissolved chemical weathering fluxes is greatest in the most active areas, the ones that already have a greater contribution to the global dissolved load (West et al., 2005). However, it seems unlikely that this scaling is linear since active erosion processes does not necessarily favour high surface area mineral 10 formations and hence limit the overall CEC of exported sediments. We would therefore expect the CEC flux over dissolved flux ratio to decrease with increasing erosion or sediment yield. Accordingly, the relative importance of CEC fluxes compared to dissolved fluxes is probably limited for most large fluvial systems. Volcanic areas may be notable exceptions, as these areas are known to export sediments smectite-rich, high surface area clays, e.g. (Chen, 1978). 15
Effect of cation exchange on the long-term carbon budget of Himalayan erosion
The effect of continental weathering on the long-term carbon cycle is mainly dictated by dissolved fluxes derived from Ca-silicate weathering following the Ebelmen-Urey reaction (eq. 1), because it can directly lead to the precipitation of carbonate. This reaction stabilizes half of the alkalinity flux used to dissolve the initial silicates and releases the other half as CO2 to the ocean and atmosphere. Silicate-derived Mg fluxes are similarly efficient, as Mg is exchanged for Ca during oceanic crust alteration or consumed during Mg-rich calcite precipitation (Berner and Berner, 2012). Conversely, it is generally assumed that, on the long term, the uptake of CO2 by Na or K silicate weathering (eq. 3) is balanced by the CO2 released during the formation of new Na and K silicates on the seafloor through reverse weathering reactions (eq. 4) (MacKenzie and Garrels, 1966). In such a case, Na and K silicate weathering do not participate in the long-term carbon budget of continental erosion. Alternatively, cation exchange allows the exchange of Na+ or K+ for Ca2+ and may subsequently lead to CaCO3 precipitation and long-term C sequestration (eq. 5) (Berner, 2004; Berner et al., 1983; MacKenzie and Garrels, 1966; Michalopoulos and Aller, 1995).

Assuming the annual exchange fluxes discussed above (Table 3), 27 × 10^9 mol/yr of Na+ and 5 × 10^9 mol/yr of K+ would be exchanged for 16 × 10^9 mol/yr of Ca2+, which can ultimately precipitate as CaCO3. This is substantial but remains relatively marginal compared to the total flux of silicate-derived alkalinity of the Ganga-Brahmaputra, estimated at around 270 × 10^9 mol/yr (Galy and France-Lanord, 1999). 60 to 65% of this silicate alkalinity is balanced by Na+ and K+, which corresponds to 160 to 175 × 10^9 mol/yr of HCO3-. Therefore, about 10% of the alkalinity linked to Na-K silicate weathering could finally lead to carbonate precipitation through cation exchange. Hence, the total flux of silicate weathering derived alkalinity that can precipitate as CaCO3 is 55 to 62 × 10^9 mol/yr. This estimate remains highly speculative, since the extent and magnitude of reverse weathering reactions are currently poorly quantified.

These fluxes may be substantial but are still limited when compared to the ca. 300 × 10^9 mol/yr of C storage associated with the organic carbon burial flux of the modern Himalayan system (Galy et al., 2007), which remains the main forcing of the carbon cycle by Himalayan erosion. It should nevertheless be kept in mind that our estimates are formulated for the present-day Himalayan system. On longer time scales, the variability in both sediment (Goodbred and Kuehl, 2000) and weathering fluxes (Lupker et al., 2013) means that the relative importance of cation exchange fluxes in the global weathering budget has likely varied and should hence be treated carefully. Finally, it is worth mentioning that these estimates of the weathering impact on the carbon cycle do not take into account chemical weathering by sulfuric acid (Galy and France-Lanord, 1999; Turchyn et al., 2013), which is known to also contribute to the weathering budget of Himalayan erosion and counteracts long-term carbon sequestration (Calmels et al., 2007).
Conclusions
The Ganga-Brahmaputra is the largest single point source of sediment to the oceans, with an export of about 1 billion tons of sediment every year. The high average sediment concentration suggests that the cation exchange fluxes of this system may be significant, or at least need to be quantified, in order to derive robust weathering flux estimates. The flux of exchangeable cations has been quantified in this study based on CEC measurements of riverine sediments. These measurements show that the CEC of sediments is strongly variable within the water column, which is linked to sediment sorting effects and variable mineralogical composition with depth. Contrary to the total CEC, the nature of the adsorbed cations is remarkably constant among all samples, with a dominance of the divalent cations Ca2+ and Mg2+. The equilibrium constants between adsorbed cations and river water composition of the Ganga-Brahmaputra are also very close to those derived for sediments from the Amazon in a previous study.

Based on the sediment flux of the Ganga-Brahmaputra and assuming a total exchange of adsorbed riverine Ca2+ for marine Na+, we estimate that estuarine cation exchange could increase the dissolved Ca2+ flux to the ocean by 6% at most. More realistic estimates based on a partial exchange of riverine Ca2+ for marine Na+, Mg2+ and K+ yield an increased Ca2+ flux of ca. 5%, while the equivalent of 15% of the dissolved Na+ flux, 8% of the dissolved K+ flux and 4% of the Mg2+ flux are reabsorbed by the sediments in the estuaries. Estuarine sediment-seawater cation exchange is therefore mainly a riverine Na+ sink. In the context of the long-term carbon budget of Himalayan erosion, cation exchange increases the pool of Ca2+ that can participate in CaCO3 storage. This additional flux is however limited to ca. 10% of the Ca-Mg silicate derived flux. In spite of the very intense particle flux associated with the physical erosion of the Himalaya, the cation exchange process occurring in the estuarine zone does not significantly change the estimate of the impact of silicate weathering on long-term carbon sequestration. It is likely limited by the relatively coarse nature and low surface area of Himalayan sediments, which lead to an overall low CEC.

Table 2 and 3 captions: maximum and probable exchange fluxes (partial exchange of riverine Ca2+ for Mg2+, K+ and Na+) of G&B sediments, based on the exchange data of Sayles and Mangelsdorf (1977, 1979), compared to the total dissolved fluxes exported by the G&B as estimated by Galy and France-Lanord (1999).
Table 1 caption: adsorbed fractions X_K, X_Ca and X_Mg; the exchange coefficient for a binary Ca-Mg exchange in average Ganga, Brahmaputra and lower Meghna river water is given for p-exponent values of 1 and 0.76, as in Sayles and Mangelsdorf (1979); samples BR1027 and BR207 are average values of n = 7 replicates each.
"Environmental Science",
"Geology"
] |
Impact of short and long exposure to cafeteria diet on food intake and white adipose tissue lipolysis mediated by glucagon-like peptide 1 receptor
Introduction The modern food environment facilitates excessive calorie intake, a major driver of obesity. Glucagon-like peptide 1 (GLP1) is a neuroendocrine peptide that has been the basis for developing new pharmacotherapies against obesity. The GLP1 receptor (GLP1R) is expressed in central and peripheral tissues, and activation of GLP1R reduces food intake, increases the expression of thermogenic proteins in brown adipose tissue (BAT), and enhances lipolysis in white adipose tissue (WAT). Obesity decreases the efficiency of GLP1R agonists in reducing food intake and body weight. Still, whether palatable food intake before or during the early development of obesity reduces the effects of GLP1R agonists on food intake and adipose tissue metabolism remains undetermined. Further, whether GLP1R expressed in WAT contributes to these effects is unclear. Methods Food intake, expression of thermogenic BAT proteins, and WAT lipolysis were measured after central or peripheral administration of Exendin-4 (EX4), a GLP1R agonist, to mice under intermittent-short exposure to CAF diet (3 h/d for 8 days) or a longer-continuous exposure to CAF diet (24 h/d for 15 days). Ex-vivo lipolysis was measured after EX4 exposure to WAT samples from mice fed CAF or control diet for 12 weeks. Results During intermittent-short exposure to CAF diet (3 h/d for 8 days), third ventricle injection (ICV) and intra-peritoneal administration of EX4 reduced palatable food intake. Yet, during a longer-continuous exposure to CAF diet (24 h/d for 15 days), only ICV EX4 administration reduced food intake and body weight. However, this exposure to CAF diet blocked the increase in uncoupling protein 1 (UCP1) caused by ICV EX4 administration in mice fed control diet. Finally, GLP1R expression in WAT was minimal, and EX4 failed to increase lipolysis ex-vivo in WAT tissue samples from mice fed CAF or control diet for 12 weeks. Discussion Exposure to a CAF diet during the early stages of obesity reduces the effects of peripheral and central GLP1R agonists, and WAT does not express a functional GLP1 receptor. These data support that exposure to the obesogenic food environment, without the development or manifestation of obesity, can alter the response to GLP1R agonists.
Introduction
The modern food environment is obesogenic. Easy access to various palatable foods rich in fat and carbohydrates results in excess calorie intake, increasing the risk of obesity and related diseases (1). This excessive calorie intake is driven by eating regardless of hunger or satiety, a behavior called hedonic intake (2,3). Further, repeated palatable food intake alters the mechanisms regulating food intake, thereby increasing hedonic intake and body weight gain (4). In rodents, the obesogenic environment is usually modeled by exposing animals to diets enriched in a single macronutrient (i.e., fat or sucrose) (5). However, these diets do not model the hedonic intake driven by easy access to various palatable foods observed in the human obesogenic environment. The cafeteria (CAF) diet overcomes this limitation by using a rotating schedule of highly palatable human snacks and free access to standard rodent chow (6). The CAF diet causes hedonic intake and, over time, severe obesity (7,8). Thus, the CAF diet is a valuable model to study not only diet-induced obesity but also the influence of the obesogenic environment on the control of feeding behavior and metabolism.
Glucagon-like peptide 1 (GLP1) is an anorexigenic neuroendocrine peptide that has been the basis for developing pharmacotherapies against obesity (9). Endogenous GLP1 is released post-prandially by neuroendocrine L cells of the small and large intestines and by neurons in the nucleus of the solitary tract that express the pre-proglucagon peptide (PPG-NTS neurons) (10-12). In rodents, activating the GLP1 receptor (GLP1R) in peripheral tissues (i.e., vagal afferents, enteric neurons) or different brain regions can reduce food intake (12). Still, the reduction in food intake and the weight loss caused by activation of peripheral GLP1R do not require activation of PPG-NTS neurons (13,14). In rodents, peripheral administration of exendin-4 (EX4), a long-lasting GLP1R agonist that crosses the blood-brain barrier, activates GLP1R in several hypothalamic nuclei (15), including the paraventricular nucleus (PVN). In this brain region, pharmacological activation of GLP1R or chemogenetic activation of GLP1R-expressing neurons reduces the intake of standard rodent food and decreases operant responding for sucrose (16-19). However, whether activation of central GLP1R, including in the PVN, can regulate hedonic intake in an obesogenic environment similar to the human food environment remains unclear.
In rodents, diet-induced obesity or the availability of palatable food reduces the anorectic effect of GLP1R activation by peripheral EX4 administration (20, 21). However, whether the same results would be observed after activating central GLP1R by the intracerebroventricular administration of EX4 is unclear. Also, whether this apparent resistance to EX4 administration results from exposure to palatable food or is the consequence of weight gain after exposure to a palatable diet remains unknown. This distinction is relevant as palatable food intake can elicit behavioral and metabolic effects independent of severe weight gain in rodents (22-24). Overall, whether exposure to an obesogenic environment before the onset or during early stages of obesity can reduce the anorectic effects of the central or peripheral administration of GLP1R agonists remains unclear.
Metabolic effects, in addition to reduced food intake, might explain the body weight loss caused by GLP1R activation. Central and peripheral administration of EX4 enhances white adipose tissue (WAT) lipolysis (25,26), increases plasma free fatty acid (FFA) and triglyceride (TG) clearance, and promotes mitochondrial fatty acid oxidation in brown adipose tissue (BAT), WAT, and muscle (25, 27-30). Concordantly, central and peripheral administration of GLP1R agonists increases the expression of uncoupling protein 1 (UCP1) and other proteins involved in energy metabolism (i.e., peroxisome proliferator-activated receptor gamma coactivator 1-alpha or carnitine palmitoyltransferase I) in WAT and BAT, leading to increased thermogenesis (27,28,31). Yet, two questions need to be answered regarding the effects of GLP1R agonists on WAT. First, it remains controversial whether exposure to palatable foods before the onset of obesity can decrease the impact of central and peripheral EX4 administration on BAT and WAT, or if this effect is observed only in obese animals (25-27). Second, it is unclear whether the effects of GLP1R agonists on WAT metabolism are mediated by direct activation of GLP1R in WAT, as the expression of a functional GLP1R in this tissue remains controversial (32, 33).
We previously showed that obesity caused by long-term and continuous access to a CAF diet (24 h/d for 10 weeks) reduced the anorectic effects of peripheral EX4 administration (21). Here we aimed to understand how exposure to an obesogenic environment modeled by CAF diet could alter the impact of EX4 on food intake, weight loss, and WAT metabolism. First, we examined whether different schedules of access during early exposure to a CAF diet before the onset of obesity reduced the anorectic effects of GLP1R activation by ICV and intraperitoneal (IP) EX4 administration. Second, we examined whether EX4 increased WAT lipolysis by acting in this tissue, and if this effect was also reduced by long-term exposure to a CAF diet.
Animals
All experimental protocols were approved by the Institutional Animal Care and Use Committee at Pontificia Universidad Católica de Chile. Male C57BL/6J mice (originally obtained from Jackson Laboratories and bred at Pontificia Universidad Católica de Chile, 8-10 weeks old at the beginning of experiments) were used in all experiments. Mice were maintained on a 12:12 h light:dark cycle in a temperature-controlled room (20-24°C) and had free access to standard rodent food (i.e., chow) and water, except where noted. Mice were grouped or singly housed in clear solid-bottom cages with paper bedding (a 2:1 mixture of sterilized shredded filter paper and paper towels) supplemented with environment-enriching materials. Mice were euthanized by isoflurane (Baxter) overdose at the end of each experiment.
Drugs and injections
EX4 (#1933, Tocris Bioscience) and EX3 (Exendin-(9-39); #2081, Tocris Bioscience) were dissolved in sterile saline and stored at -20°C in single-use aliquots. Mice were acclimated to IP injections by receiving a daily IP injection (0.9% NaCl solution, 200 μL) for three consecutive days. Mice with ICV or PVN cannulae were acclimated to intra-cannula injections by receiving a single daily artificial cerebrospinal fluid (aCSF) injection, either ICV (0.5 μL) or intra-PVN (0.25 μL), respectively. All injections occurred within the last hour before lights off. After euthanasia, cannula placement was verified by histological methods (35). Data from mice with misplaced cannulae were excluded from the analyses.
Ex-vivo EX4 treatment of adipose tissue explants
Inguinal WAT (iWAT) and epididymal WAT (eWAT) depots were dissected into 50-100 mg pieces and incubated in DMEM culture medium supplemented with 10% fetal bovine serum and antibiotics (penicillin-streptomycin, #03-031-1B, Biological Industries) at 37°C and 5% CO2 for 24 h (the medium was changed twice during this period). Next, WAT explants were treated with vehicle (saline), 100 μM isoproterenol (ISO), or 2.5 nM EX4 for 24 h, followed by Krebs-Ringer HEPES buffer (KRBH) plus 4% fatty acid-free bovine serum albumin (BSA) for one hour. Then, we quantified the release of glycerol into the KRBH medium with a colorimetric glycerol assay (#F6428, Sigma). Glycerol release data were normalized to the protein concentration of the tissue explants.
Experiment 1. Chow and palatable food intake after EX4 IP administration during short-term intermittent exposure to CAF diet
To determine whether early exposure to CAF diet altered the anorectic effects of peripheral EX4 administration, we measured food intake in response to EX4 IP administration using a within-subjects design in mice with only chow access and then with short and intermittent exposure to CAF diet (Figure 1A). Mice with only chow access (n = 8) were acclimated to IP injections and then injected with vehicle or EX4 IP (10 μg/kg), randomized over days with 48 h between injections. Food intake was measured 3 h after each IP injection. Next, after a 10-day wash-out period without interventions, mice were acclimated to 4 palatable snacks for 3 days (3 h/d; milk chocolate, sugar cookies, cheese snacks, and potato chips; see Supplementary Table 1 for commercial names and detailed nutritional information) while receiving a single daily IP saline injection. Starting on the fourth day, mice received a vehicle or EX4 IP injection (10 μg/kg), randomized over days with 48 h between injections. After IP injections, mice had access to the palatable snacks for 3 h. Chow and palatable food intake were measured 3 h post-injection. Thus, including acclimation, mice had access to the CAF diet for a total of 5 days.
To determine whether the anorectic effects of EX4 IP administration at 10 μg/kg required activation of central GLP1 receptors, we measured food intake after co-administration of IP EX4 and ICV administration of the GLP1R antagonist EX3 (Figure 1D). A separate group of mice (n = 5) with only access to chow were prepared with an ICV cannula and acclimated for three consecutive days by receiving an ICV injection of aCSF 15 min before an IP saline injection. Starting on the fourth day, mice received a vehicle or EX3 ICV injection (1, 5, and 10 ng) 15 min before a vehicle or EX4 IP injection (10 μg/kg), with the combination of ICV and IP injections randomized over days with 48 h between injection days. Chow intake was measured 3 h after the EX4 IP injection.
FIGURE 1 Chow and palatable food intake after EX4 IP administration during short-term intermittent exposure to CAF diet. (A) Experimental design for panels B-C. Mice were exposed to palatable snacks for 3 h/d for 5 days, including acclimation and EX4 administration.
2.8 Experiment 2. Chow and palatable food intake after EX4 ICV or PVN administration during short-term exposure to CAF diet
To determine whether early exposure to CAF diet altered the anorectic effects of central EX4 administration, we measured food intake in response to EX4 ICV or PVN administration using a within-subjects design in mice with only chow access and then with short-term and intermittent exposure to a CAF diet, as described in experiment 1 (Figure 2A). Mice prepared with ICV (n = 10) or PVN cannulae (n = 6) were acclimated to intra-cannula injections and then injected with vehicle or EX4 ICV (10, 25, 100 ng) or into the PVN (3, 10, 30, 100 ng), with vehicle and EX4 doses randomized over days with 48 h between injections. These EX4 doses were previously shown to reduce intake after ICV or PVN administration (19,37). After a 10-day washout period without interventions, mice were acclimated to 4 palatable snacks as in experiment 1 (3 h/d for 3 days). Starting on the fourth day, mice were injected with vehicle or EX4 ICV or intra-PVN as before. After ICV or PVN injections, mice had access to palatable foods for 3 h. Chow and palatable food intake were measured 3 h post-injection. Thus, including acclimation, mice had access to the CAF diet for a total of 7-8 days. Two mice lost their ICV cannulae during acclimation to ICV injections; thus, the final sample size was 8 mice with ICV cannulae and 6 mice with PVN cannulae.
FIGURE 2 Chow and palatable food intake after EX4 ICV or PVN administration during short-term intermittent exposure to CAF diet. (A) Experimental design. Mice were exposed to palatable snacks for 3 h/d for 7-8 days (ICV: 7 days, PVN: 8 days), including acclimation and EX4 administration.
2.9 Experiment 3. Body weight, food intake, and WAT metabolism and protein expression after repeated EX4 ICV and IP administration during continuous exposure to CAF diet before induction of obesity
To determine whether longer and continuous exposure to CAF diet before the manifestation of obesity reduced the anorectic effects of EX4 ICV or IP administration, we measured food intake after ICV or IP EX4 injections in mice exposed continuously to CAF or control diet (i.e., 24 h/d chow access) for 15 days (Figure 3A). Mice prepared with (n = 20) or without (n = 30) an ICV cannula were singly housed and randomly assigned to CAF diet (ICV: n = 10, no surgery: n = 15) or control diet (ICV: n = 10, no surgery: n = 15) for 15 days. The CAF diet consisted of continuous access to 4 palatable snacks made for human consumption, randomly selected from 20 snacks (Supplementary Table 1), in addition to chow. The palatable snacks were changed every Monday, Wednesday, and Friday. After 15 days of the dietary intervention, mice were randomly assigned within each diet to receive either a daily EX4 ICV (100 ng, n = 12) or IP (10 μg/kg, n = 16) injection or their respective vehicles (aCSF for ICV, n = 8; saline for IP injections, n = 14) for 10 days while maintaining their respective diets. On the tenth day, mice were euthanized two hours after the last injection, and samples from the iWAT, eWAT, and BAT were collected to measure pHSL and UCP1 protein levels (Figure 4). The final sample size was 4-6 mice per group for ICV injections and 7-8 mice per group for IP injections. For mice with ICV cannulae, brains were processed to determine cannula placement; no mice were excluded due to misplaced cannulae.
2.10 Experiment 4. WAT lipolysis after EX4 ex-vivo in mice fed CAF or control diet for 12 weeks
To determine whether the WAT lipolysis caused by EX4 administration was mediated by GLP1R expressed in WAT and modulated by CAF diet, we examined ex-vivo lipolysis caused by EX4 in WAT tissue explants from mice fed CAF or control diet (Figure 5A). Mice (n = 26) were group-housed and fed CAF or control diet for 12 weeks (n = 13 per diet) using the CAF diet described in experiment 3. After completing the dietary intervention, mice were euthanized during the first half of the light cycle (07:00-12:00) with excess isoflurane, and the mediobasal hypothalamus, iWAT, eWAT, and mesenteric fat depots were collected. Samples from the hypothalamus were stored in RNAlater (#AM7021M, Thermo Fisher), and WAT depots were snap-frozen, stored at -80°C, and then used for qPCR analysis of GLP1R. Epididymal and inguinal WAT samples were also tested for ex-vivo effects of EX4 on lipolysis and expression of pHSL by Western blot. The final sample size was n = 11 and n = 12 for the CAF and control diet, respectively. One mouse per diet was removed from the study based on veterinary advice due to injuries, and one mouse fed CAF diet was excluded from the analysis because we failed to collect inguinal WAT.
FIGURE 3 Body weight and food intake after repeated EX4 ICV and IP administration during continuous exposure to CAF diet before induction of obesity.
Statistical analysis
Statistical analyses were performed using R v4.1.2. All data are presented as mean and SEM. Statistical significance was set at P < 0.05. Normality was examined by reviewing residual plots. For experiment 1, chow and snack intake (expressed as calories) were analyzed as separate endpoints with repeated measures ANOVA, with the dose of EX4 IP (or the combination of EX3 ICV and EX4 IP) as the independent variable and mice as the experimental subject. For experiment 2, chow and snack intake (expressed as calories) were analyzed as separate endpoints with repeated measures ANOVA, with the dose of EX4 ICV or intra-PVN as independent variables and mice as the experimental subject. For experiment 3, daily intake and change in body weight during CAF diet feeding were analyzed with a two-way ANOVA with diet (CAF vs. control) and route of administration (IP and ICV) as independent variables. Intake, change in body weight, and percent of eWAT relative to body weight were analyzed separately with a three-way ANOVA with the interaction between dietary treatment (CAF vs. control), route of administration (ICV and IP), and EX4 dose (vehicle vs. dose) as independent variables. Changes in protein expression were analyzed separately for each protein of interest and route of administration with a two-way ANOVA with dietary intervention (CAF vs. control) and EX4 dose as independent variables. For experiment 4, the effects of the CAF diet on change in body weight, percent adiposity, and expression of GLP1R were analyzed with unpaired Student's t-tests. Changes in glycerol release were analyzed separately for each dietary intervention (CAF vs. control) and WAT depot (inguinal and epididymal) with a repeated measures ANOVA with treatment (vehicle, EX4, and isoproterenol) as the independent variable and mice as the experimental subject. For all analyses, pairwise comparisons were done with estimated marginal means and adjusted with the false discovery rate.
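The authors ran these analyses in R; as a rough illustration only, a minimal Python sketch of a repeated-measures ANOVA followed by FDR-adjusted pairwise comparisons might look like the following. The data layout and column names are assumptions, and paired t-tests stand in for the estimated-marginal-means contrasts used in the paper.

```python
# Illustrative re-implementation of the analysis described above (not the
# authors' R code); column names and data layout are assumed.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

def rm_anova_with_fdr(df: pd.DataFrame) -> pd.DataFrame:
    """One-way repeated-measures ANOVA (EX4 dose as within-subject factor),
    followed by FDR-adjusted pairwise comparisons against vehicle."""
    # Omnibus test: intake ~ dose, with mouse as the repeated-measures subject
    anova = AnovaRM(df, depvar="intake_kcal", subject="mouse",
                    within=["dose"]).fit()
    print(anova)

    # Pairwise paired t-tests of each dose against vehicle
    wide = df.pivot(index="mouse", columns="dose", values="intake_kcal")
    doses = [d for d in wide.columns if d != "vehicle"]
    pvals = [stats.ttest_rel(wide["vehicle"], wide[d]).pvalue for d in doses]

    # Adjust with the Benjamini-Hochberg false discovery rate
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return pd.DataFrame({"dose": doses, "p_raw": pvals,
                         "p_fdr": p_adj, "significant": reject})
```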
FIGURE 4 Expression of pHSL and UCP1 after repeated EX4 ICV and IP administration during continuous exposure to CAF diet before induction of obesity.
Results
3.1 Experiment 1. EX4 IP administration reduced palatable food intake during short-term exposure to a CAF diet
After acclimation to IP injections (Figure 1B left), EX4 IP administration reduced chow intake by 60% compared to vehicle (P < 0.05, Figure 1B right). During acclimation to the CAF diet (3 h/d), mice progressively ate more palatable snacks compared to chow (Figure 1C left; interaction between diet and days, F(2,35) = 7.23, P < 0.01). After acclimation, EX4 IP reduced snack intake by 75% compared to vehicle (t(21) = 10.64, P < 0.01) without altering chow intake (P = 0.64, Figure 1C right). Body weight did not change during the experiment (t(14) = 1.58, P = 0.13). In a separate set of mice, the reduction of chow intake by EX4 IP administration was attenuated by approximately 50% by ICV pre-treatment with EX3 (Figure 1E). Together, these data indicate that EX4 IP administration can reduce short-term (3 h) intake of palatable snacks, an effect mediated by both peripheral and central GLP1 receptors.
3.3 Experiment 3. Repeated EX4 ICV, but not IP, administration reduces food intake, causes body weight loss, and alters expression and regulation of thermogenic proteins in WAT and BAT during long-term and continuous exposure to a CAF diet before the onset of obesity
In mice prepared to receive EX4 ICV or IP administration, exposure to a CAF diet for 15 days increased daily calorie intake (Figure 3B; ICV: t(18) = 5.39, P = 0.01; IP: t(28) = 3.79, P = 0.01) and caused a body weight gain of ~1 gram (Figure 3C; ICV: t(18) = 2.78, P = 0.01; IP: t(28) = 3.79, P = 0.01) compared to mice fed control diet. Still, diet did not affect the final body weight on day 15 (Figure 3D; ICV: t(18) = 1.82, P = 0.09; IP: t(28) = 0.49, P = 0.62). After 10 days of single daily EX4 ICV or IP injections, while mice maintained their respective diets, only EX4 ICV administration reduced total intake regardless of diet (Figure 3E; EX4: F(1,16) = 21.69, P < 0.01; interaction between diet and EX4: F(1,16) = 1.05, P = 0.32). The same effects were observed for body weight gain and eWAT mass, as only EX4 ICV administration caused weight loss (Figure 3F; main effect of EX4: F(1,16) = 72.18, P < 0.01; interaction between diet and EX4: F(1,16) = 1.74, P = 0.29) and reduced eWAT mass (Figure 3G; main effect of EX4: F(1,16) = 11.86, P < 0.01; interaction between diet and EX4: F(1,16) = 0.91, P = 0.35). EX4 IP did not affect weight gain or eWAT mass (Figures 3F, G; P > 0.05 for all effects). These data indicate that repeated EX4 ICV, but not IP, administration can reduce intake, body weight, and adiposity during continuous access to a CAF diet.
We next examined whether the route of EX4 administration (ICV vs. IP) altered the expression of proteins related to lipolysis and thermogenesis in WAT and BAT. We selected HSL, a key enzyme in triglyceride hydrolysis that is activated by phosphorylation (38), and UCP1, an essential protein for thermogenesis (39). EX4 ICV administration increased the pHSL/HSL protein ratio in mice fed either CAF or control diet, while EX4 IP had no effect regardless of dietary intervention (Figures 4A, B). Yet, in mice given EX4 ICV or IP administration, we found no differences in plasma free fatty acids regardless of diet (Supplementary Figure 1). However, EX4 ICV administration reduced plasma triglycerides and glucose in mice fed CAF diet (Supplementary Figures 1C, D). EX4 ICV, but not IP, administration also increased UCP1 in BAT in mice fed control but not CAF diet (Figures 4C, D). However, among mice that received vehicle IP injections, BAT UCP1 levels were higher in mice fed CAF diet than in those fed chow. Together, these data indicate that EX4 ICV administration increases pHSL in eWAT of mice fed control or CAF diet and increases UCP1 in BAT of mice fed control but not CAF diet.
3.4 Experiment 4. EX4 ex-vivo does not induce lipolysis in WAT explants of mice fed control or CAF diet
Mice fed a CAF diet for 12 weeks had a significantly larger body weight gain (ΔBW) and percent adiposity compared to mice fed control diet (Figures 5B, C; ΔBW: t(21) = 6.17, P < 0.01; % adiposity: t(21) = 9.52, P < 0.01). Expression of GLP1R in the hypothalamus was approximately 10-fold greater than in either the eWAT or iWAT depots (Figure 5D; F(2,21) = 21.11, P < 0.01), and GLP1R mRNA was not affected by diet (Figure 5E; F(1,10) = 0.89, P = 0.37). In an ex-vivo lipolysis assay with isoproterenol (ISO) as a positive control for lipolytic activation, EX4 failed to increase glycerol release in eWAT and iWAT explants from mice fed CAF or control diet, even though the CAF diet enhanced the lipolytic response to isoproterenol in eWAT (Figure 5E). EX4 ex-vivo also failed to increase pHSL expression in the explants from both WAT depots independent of diet. In contrast, ISO increased pHSL in all WAT depots without differences between diets (Figures 5F, G). Overall, consistent with the low levels of GLP1R expression in eWAT and iWAT regardless of diet, direct application of EX4 failed to increase lipolysis in eWAT and iWAT from mice fed either CAF or control diet.
Discussion
This study aimed to understand how exposure to an obesogenic food environment could alter the effects of EX4 on food intake, weight loss, and adipose tissue metabolism. We first tested whether different schedules of CAF diet before the onset of obesity could reduce the anorectic, lipolytic, and thermogenic effects of GLP1R activation by peripheral and central EX4 administration. Second, we tested whether ex-vivo EX4 would increase WAT lipolysis and whether long-term exposure to a CAF diet influenced this effect. Our data show that, while EX4 ICV and IP administration can reduce intake of palatable snacks during short-term exposure to a CAF diet (3 h/d for five to eight days), only EX4 ICV can reduce intake and body weight in mice with continuous (24 h/d for 15 days) access to a CAF diet. Finally, ex-vivo EX4 does not increase lipolysis in eWAT and iWAT explants from lean mice fed a control diet or obese mice fed a CAF diet for 12 weeks.
We show that EX4 ICV or IP administration, but not administration into the PVN, can reduce palatable food intake during intermittent short-term access to a CAF diet (Figures 1B, 2B). However, EX4 reduced chow intake regardless of the administration route. The anorectic effect of EX4 on chow was reduced by prior ICV administration of the GLP1R antagonist EX3 (Figure 1E). However, as EX3 was injected into the ventral third ventricle, its diffusion most likely reached only periventricular hypothalamic nuclei (40,41), while the anorectic actions of EX4 involve vagal-mediated effects and central effects that engage hypothalamic and extra-hypothalamic GLP1R (15,42). Thus, it is unlikely that EX3 administration could have blocked all central effects of EX4 in this study.
The effects of EX4 in the PVN on chow intake were smaller relative to both EX4 ICV and IP, which is consistent with the expected engagement of more GLP1R-expressing brain sites by ICV or IP EX4 administration. We previously showed that EX4 IP in Balb/c male mice reduced chow intake but could not block palatable food intake during short-term access (3 h/d for seven days) to a CAF diet (21). In the studies presented here, we used male C57BL/6J mice; thus, strain differences might account for the differences in sensitivity to EX4 IP administration. The lack of effect of EX4 into the PVN on intake of CAF diet was unexpected, as the PVN receives strong innervation from PPG-NTS neurons, and activation of GLP1R in this brain region can inhibit chow intake (16-18). Because ICV injection into the ventral 3rd ventricle would be expected to engage more brain sites than just the PVN, this result suggests that activation of GLP1R across several brain sites is necessary to reduce palatable food intake.
Although EX4 ICV or IP administration can reduce the intake of chow and palatable snacks during short-term intermittent access to a CAF diet, only repeated administration of the same ICV EX4 dose reduced intake during continuous and longer exposure to CAF diet for 15 days. We previously showed that exposing mice to a CAF diet for ten weeks caused obesity and blocked the anorexigenic effects of EX4 IP administration on the intake of palatable snacks (21). In this study, mice only had access to a CAF diet for two weeks, which caused hedonic intake (indicated by increased intake of palatable food relative to chow) and a small (~1 g) weight gain ( Figure 3C). Still, this exposure to the CAF diet failed to induce absolute differences in body weight ( Figure 3D), which suggests that mice were in the early stages of obesity development compared to mice fed a CAF diet for 12 weeks (Figure 5). Although a higher dose of EX4 IP in mice with continuous access to a CAF diet might have reduced intake and caused body weight loss, we highlight the contrasting results between the effects of EX4 IP during short-term exposure to CAF (3 h daily every other day for a total of 5 days including acclimation) and longer exposure to CAF (continuous access for a total of 25 days including acclimation). This result supports that chronic intake of palatable foods can alter the response to EX4 during the early stages of obesity, which might contribute to further increases in food intake and body weight over time.
It is possible that the extended period of the CAF diet could have also reduced the transport of EX4 into the central nervous system, which depends on passive mechanisms (43) as well as active endothelial transport dependent on GLP1 receptor activity (44, 45). However, whether tanycytes mediate EX4 transport into the brain has not been examined (46). Future studies should assess whether CAF diet feeding reduces EX4 transport across the blood-brain barrier.
Consistent with the differences in their effects on adiposity, EX4 ICV but not IP administration increased pHSL expression regardless of diet, and only EX4 ICV in chow-fed mice increased BAT UCP1 expression (Figure 4). The latter result is consistent with data showing that activation of GLP1R in the dorsomedial hypothalamus (a brain site likely reached by EX4 ICV administration into the third ventricle) can increase BAT thermogenesis (47). Our data show that CAF diet exposure blocked this effect, suggesting that the weight loss caused by EX4 ICV is primarily due to a reduction in intake rather than UCP1-mediated thermogenesis in BAT. However, confirming this hypothesis would require direct measurements of energy expenditure. In mice fed a CAF or control diet for 15 days, EX4 ICV reduced plasma TG and glycemia. However, EX4 ICV failed to reduce plasma FFA regardless of diet. In mice fed the control diet, this could suggest higher clearance of FFA from plasma due to increased UCP1 expression in BAT (25,27,28,31), but this effect was absent in mice fed CAF diet. Further examination of the effects of EX4 and CAF diet on WAT metabolism would be needed to support this hypothesis. Still, these effects were observed without the manifestation of obesity, suggesting they might reflect a metabolic dysfunction in WAT associated with inflammation caused by poor dietary quality (22, 23, 48), as mice fed CAF diet continued to consume more calories from palatable snacks even during EX4 injection.
We demonstrated that EX4 failed to increase lipolysis in eWAT or iWAT explants ex-vivo. This is consistent with the lack of effect on pHSL and with the low levels of GLP1R mRNA detected in these tissues compared to the hypothalamus. These data align with data from transgenic mice lacking GLP1R expression in WAT (49) and with the hypothesis that the effects of GLP1R activation on WAT metabolism are mediated by activation of the sympathetic nervous system (33). The lack of effect of EX4 on pHSL expression in eWAT or iWAT ex-vivo (Figure 5) is also consistent with the lack of effect of repeated EX4 IP on pHSL in either tissue, whereas EX4 ICV increased pHSL in both WAT depots regardless of diet. Together, these data support the conclusion that eWAT and iWAT do not express a functional GLP1R (33).
Our data have clinical implications. They support that exposure to the obesogenic environment, without the development or manifestation of obesity, is sufficient to alter the response to GLP1R agonists. Further, these data suggest that the anorectic effects of GLP1R activation also depend on the duration of exposure to the obesogenic environment. Together, these data highlight the need to consider the individual food environment and personal history in the success of anti-obesity interventions based on GLP1R agonists. Our data also suggest that central administration of GLP1R agonists is more effective in causing weight loss than peripheral administration, which might influence the pharmacological design of new GLP1R agonists in the coming years to favor their brain actions. Similarly, this study, and future ones, will help clarify which GLP1R-expressing tissues are essential for the anorectic effects of GLP1R agonists. Overall, our findings have implications for future therapeutic modifications of current GLP1 agonists, considering exposure history, administration route, and tissue specificity.
In conclusion, we showed that either ICV or peripheral EX4 administration reduced palatable food intake during intermittent and short exposure to a CAF diet, whereas EX4 administration into the PVN failed to reduce CAF diet intake. During continuous and longer exposure to a CAF diet over 15 days before the development of obesity, only central activation of GLP1R reduced intake and caused weight loss. Yet, the same duration of exposure to a CAF diet blocked the increase in BAT UCP1 caused by EX4 ICV administration. Finally, we demonstrated that GLP1R expression and activation in WAT are marginal in mice regardless of whether the animals are fed control or CAF diet. Overall, these results support the concept that exposure to the obesogenic food environment, without or before the development or manifestation of obesity, can alter the response to GLP1R agonists, and that the effects of GLP1R activation on WAT are not mediated by direct action of GLP1R agonists on this tissue, regardless of obesity.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The animal study was reviewed and approved by the Institutional Animal Care and Use Committee at Pontificia Universidad Católica de Chile.
"Biology",
"Medicine"
] |
Multifunctional liposomes co-encapsulating epigallocatechin-3-gallate (EGCG) and miRNA for atherosclerosis lesion elimination
Atherosclerosis (AS) is a chronic inflammatory disease characterized by lipid-accumulated plaques. Antioxidative, anti-inflammatory, and lipid-metabolism-promoting therapeutic strategies have been applied for atherosclerosis treatment. However, the therapeutic effect of any single method is limited. It is suggested that a combination of these strategies could help prevent the lipid accumulation caused by inflammation and oxidative stress while also promoting lipid efflux from atherosclerotic plaque, normalizing arteries to the maximum extent. Hence, a strategy involving a multifunctional liposome co-encapsulating the antioxidant and anti-inflammatory drug epigallocatechin-3-gallate (EGCG) and the lipid-efflux-promoting gene miR-223 was established. The system (lip@EGCG/miR-223) encapsulates miR-223 in the core of the liposomes, providing a protective effect for the gene drug. Moreover, lip@EGCG/miR-223 was small in size (91.28 ± 2.28 nm, as characterized by DLS), making it easier to target AS lesions, which have smaller vascular endothelial spaces. After being efficiently internalized into cells, lip@EGCG/miR-223 exhibited excellent antioxidant and anti-inflammatory effects in vitro by eliminating overproduced ROS and decreasing the levels of inflammatory cytokines (TNF-α, IL-1β, and MCP-1), effects attributable to EGCG. In addition, the lipid-efflux-promoting protein ABCA1 was upregulated upon treatment with lip@EGCG/miR-223. Through these two therapies, lip@EGCG/miR-223 could effectively inhibit the formation of foam cells, which are a main component of atherosclerotic plaques. In AS model mice, after intravenous (i.v.) administration, lip@EGCG/miR-223 effectively accumulated in atherosclerotic plaques, and the distribution of drug in the heart and aorta relative to the kidney was significantly increased compared with free drugs (the ratio was 6.27% for the free miR-223-treated group, increasing to 66.10% for the lip@EGCG/miR-223-treated group). By decreasing the inflammation level and lipid accumulation, the arterial vessels in AS were normalized, with fewer macrophages and less micro-angiogenesis, when treated with lip@EGCG/miR-223. Overall, this study demonstrated that lip@EGCG/miR-223 could be developed as a potential system for atherosclerosis treatment through combined antioxidant, anti-inflammatory, and lipid-efflux-promoting effects, providing a novel strategy for the safe and efficient management of atherosclerosis.
Introduction
Cardiovascular disease (CVD) is the leading cause of mortality worldwide.1 Atherosclerosis (AS) is a leading cause of myocardial infarction, stroke, peripheral artery disease, and other CVDs, and is characterized by the accumulation of lipids, cells, and extracellular matrix in the arterial intima.2 Since AS is a disease caused by lipid disorder, current clinical treatments against AS mostly focus on preventing the growth of atherosclerotic plaque by lowering the levels of low-density lipoprotein (LDL) and cholesterol.3 However, this treatment cannot eliminate the lipid plaques that have already formed.8,9 On the other hand, excess ROS production is a leading cause and a key feature of AS.10,11 As reported, ROS induce LDLs to oxidize and form ox-LDLs, which exhibit proinflammatory and toxic properties and aggravate endothelial damage.12 Therefore, a promising strategy for the management of AS and other CVDs is to suppress the inflammatory and oxidative environment while simultaneously eliminating the lipids in foam cells.
The natural compound epigallocatechin-3-gallate (EGCG) is the major catechin in green tea and accounts for 50-80% of the total catechins.13 EGCG has multiple phenolic hydroxyl groups that are easily oxidized, endowing it with high antioxidant activity and free-radical-scavenging capacity.14 EGCG also possesses high anti-inflammatory efficacy by inhibiting the secretion of proinflammatory cytokines (TNF-α, IL-1β, and MCP-1).15 Despite these advantages, EGCG has found limited application as an antioxidant and anti-inflammatory agent in vivo. The reasons are, first, that EGCG is not stable and is easily oxidized at high temperature and in neutral or alkaline solutions;16 and second, that the bioavailability of EGCG is low, owing to its poor absorption, rapid metabolism, and fast elimination in vivo.17 Therefore, it is critical to design an effective strategy to increase the stability and bioavailability of EGCG so that it can exert its biological activities efficiently.
Gene therapy has shown promise in the treatment of various diseases, such as tumors and cardiovascular diseases.18,19 Among candidate genes, miR-223 is a microRNA involved in the progression of AS; treatment with miR-223 can inhibit the formation of foam cells by increasing the level of the ATP-binding cassette transporter A1 (ABCA1), whose main function is to efflux cholesterol via the reverse cholesterol transport (RCT) pathway, thus slowing the progression of AS.20 Due to its large size and negative charge, it is difficult to introduce naked miR-223 into cells and produce therapeutic effects at specific sites. Therefore, developing safe and efficient in vivo gene-delivery systems has become a key challenge for clinical gene therapy.
Nano-drug delivery systems have been applied for AS treatment because of the enhanced permeability and retention (EPR) effect that develops during the formation of atherosclerotic lesions.21,22 Liposomes are a widely applied nano-drug delivery system with the advantages of high biocompatibility, excellent drug-loading capacity, and controllable size.23,24 Moreover, as nanoparticles composed of lipid molecules, they have a natural ability to spontaneously accumulate in AS, making them an ideal drug-delivery system for AS treatment.25 Liposomes are also an attractive platform for creating gene-delivery systems. However, most liposomes applied for gene delivery are cationic particles, whereby genes are adsorbed on the surface of the liposomes. The exposed cationic membranes can induce aggregation, increase cellular and systemic toxicity, and fall short of offering effective protection for genes.26
In this study, a novel lipid nano-drug-delivery system co-encapsulating EGCG and miR-223 was prepared by an improved thin film hydration method. It was intended that, through the dual effects of antioxidant and anti-inflammatory action as well as lipid metabolism promotion, the nano-drug delivery system could eliminate atherosclerotic lesions efficiently. Our experimental results proved that lip@EGCG/miR-223 could significantly alleviate the accumulation of lipid plaques at the site of atherosclerotic lesions and restore diseased blood vessels to a normal state.
Materials
The RAW 264.7 cell line was obtained from the Cell Bank of the Chinese Academy of Medical Sciences. miR-223, Cy5-miR-223, and the negative control miR-223 (NC) were purchased from GenePharma Co., Ltd (Shanghai, China). DMEM medium and trypsin were purchased from HyClone Laboratories Inc (Logan, UT, USA). Fetal bovine serum (FBS) was purchased from Gibco (Australia). Penicillin and streptomycin were obtained from Sigma-Aldrich (St Louis, MO, USA). Lipopolysaccharide (LPS) and oxidized low-density lipoprotein (ox-LDL) were purchased from Shanghai Yuanye Biotechnology Co., Ltd (Shanghai, China). DPPH radical and Oil Red O were purchased from Shanghai Aladdin Reagent Co., Ltd (Shanghai, China). Dichlorofluorescin diacetate (DCFH-DA) was obtained from MedChemExpress (New Jersey, USA). LysoTracker Green and Hoechst 33342 were purchased from Invitrogen (USA).
The sense strand sequence of miR-223 was UGUCAGUUUGUCAAAUACCCCA, and the antisense strand sequence was GGGUAUUUGACAAACUGACAU.
All other chemicals and reagents were of analytical grade.
Synthesis of lip@EGCG/miR-223
Liposomes were prepared by an improved thin film hydration method. First, 0.033 mg (1 OD) of miR-223 was dissolved in 200 μL deionized water. Next, 0.044 mg DOTAP (DOTAP : miR-223 = 1 : 0.75, mass ratio) was dissolved in 200 μL chloroform, 400 μL methanol was added, and the mixture was gently mixed to form a monophase. After 30 min of incubation at room temperature, 200 μL deionized water and 200 μL chloroform were added to form two separate phases. Upon centrifugation at 800 × g for 8 min at 5 °C, the organic phase containing the DOTAP-miR-223 complex was extracted. PI (0.117 mg), DSPE-PEG2k (0.167 mg), Chol (0.217 mg), and Tween-80 (0.167 mg) were added into the extracted organic phase. Then, 1 mL chloroform was added, and the organic phase was evaporated at 40 °C in a rotary evaporator for 20 min to form a film on the wall of the bottle. Next, EGCG solution (2 mg EGCG in 1 mL deionized water) was added into the bottle. The mixture was kept under vacuum in the rotary evaporator for another 30 min. The obtained solution was ultrasonicated in a water bath for 5 min, probe-ultrasonicated for 2 min (power: 25%, 3 s on/3 s off), and extruded through a 100 nm polycarbonate membrane 11 times to prepare lip@EGCG/miR-223 (a quick arithmetic check of the mass ratio follows below).
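As a trivial numeric check of the DOTAP : miR-223 mass ratio stated above (values taken directly from the protocol):

```python
# Verify the stated DOTAP : miR-223 mass ratio of 1 : 0.75
dotap_mg = 0.044   # mass of DOTAP used in the complexation step
mir223_mg = 0.033  # mass of miR-223 (1 OD)
print(mir223_mg / dotap_mg)  # 0.75 -> DOTAP : miR-223 = 1 : 0.75
```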
Characterization of lip@EGCG/miR-223
The surface morphology of lip@EGCG/miR-223 was examined by transmission electron microscopy (TEM, JEM-2100, JEOL, Japan). After the liposomes were dispersed in deionized water, the hydrodynamic diameter, PDI, and zeta potential were measured by dynamic light scattering (DLS, Zetasizer Nano ZS90, Malvern, UK).
Evaluation of the miR-223 and EGCG encapsulation efficiency
The miR-223 and EGCG encapsulation efficiency (EE) of the liposomes was determined by ultrafiltration of liposomes entrapping Cy5-miR-223 and EGCG with DEPC-treated water. The unencapsulated Cy5-miR-223 and EGCG were collected by ultrafiltration and quantified against standard curves. The fluorescence intensity of Cy5-miR-223 was measured with a fluorescence spectrophotometer (ex 650 nm, em 670 nm; RF-6000, Shimadzu, Japan). The absorbance of EGCG was measured with a UV spectrophotometer (UV-2600, Shimadzu, Japan) at a wavelength of 276 nm. The miR-223 and EGCG EEs were calculated using the following formula: EE (%) = (C_t − C_f)/C_t × 100, where C_f and C_t denote the mass of unencapsulated drug and total drug in the liposome preparation, respectively.
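A minimal sketch of the encapsulation-efficiency calculation above; the numeric values are illustrative placeholders, not the study's raw data.

```python
# EE (%) = (C_t - C_f) / C_t * 100, where C_f is the unencapsulated (free)
# drug recovered in the ultrafiltrate and C_t is the total drug loaded.
def encapsulation_efficiency(c_free: float, c_total: float) -> float:
    return (c_total - c_free) / c_total * 100.0

# Example: reproduces the order of magnitude reported for EGCG (~81%)
print(encapsulation_efficiency(c_free=0.378, c_total=2.0))  # ~81.1%
```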
Anti-RNase A degradation assay
Naked miR-223 (1 μg) and lip@EGCG/miR-223 (loaded with 1 μg miR-223) were separately added to RNase A solution (10 μg mL−1) at 37 °C for 30 min. Then, 5 μL EDTA solution (5 mM) was applied to terminate RNase A degradation at each set time point. Finally, 20 μL heparin sodium (0.8 mg mL−1) was added at 37 °C for 30 min to release the miR-223 from the lip@EGCG/miR-223. Samples from the different time points were analyzed according to the agarose gel retardation assay mentioned above.
DPPH scavenging capability
To evaluate the free-radical-scavenging capability, lip@EGCG/miR-223 with different concentrations of EGCG (0, 10, 20, 50, 100, and 200 μg mL−1) was dispersed in 2 mL of methanol, then 1 mL DPPH· solution (100 μg mL−1) was added and the mixture was incubated for 30 min at room temperature in the dark. Subsequently, the concentration of the residual free radicals was determined using a UV spectrophotometer at 517 nm, and the scavenging rate (I) was calculated as follows: I (%) = [1 − (A_i − A_j)/A_0] × 100, where A_0 is the absorbance of DPPH· without the test solution, A_j is the absorbance of the test solution alone, and A_i is the absorbance of DPPH· after adding the test solution.
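A minimal sketch of the DPPH scavenging-rate calculation above; the absorbance values are illustrative placeholders.

```python
# I (%) = [1 - (A_i - A_j) / A_0] * 100
# a0: absorbance of DPPH alone; ai: DPPH + sample; aj: sample alone
# (aj corrects for the sample's own absorbance at 517 nm)
def dpph_scavenging_rate(a0: float, ai: float, aj: float) -> float:
    return (1.0 - (ai - aj) / a0) * 100.0

print(dpph_scavenging_rate(a0=0.90, ai=0.35, aj=0.05))  # ~66.7%
```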
Cell culture
RAW 264.7 cells were cultured in DMEM medium containing 10% FBS and 1% penicillin-streptomycin, and kept at 37 °C with 5% CO2. The cells were subcultured 2-3 times a week until they reached 80% confluence.
Cellular uptake and lysosomal escape
To evaluate cellular uptake and lysosomal escape, Cy5-labeled miR-223 was applied. First, RAW 264.7 cells (2 × 10^5 cells) were seeded in a confocal dish and incubated for 12 h. The cells were then treated with free Cy5-miR-223 (160 nM) or lip@EGCG/Cy5-miR-223 (containing 160 nM miR-223) for 2, 4, or 6 h. The medium was discarded and the cells were washed with PBS three times. Then the cells were fixed with 4% paraformaldehyde for 10 min and stained with LysoTracker Green and Hoechst 33342 to label the lysosomes and cell nuclei, respectively. Finally, the cells were observed by confocal laser scanning microscopy (CLSM, TCS SP5, Leica, Wetzlar, Germany).
ROS levels in macrophages
RAW 264.7 cells at a density of 2 × 10^5 cells per well were seeded in a 12-well plate and incubated for 12 h. After the cells were treated with different concentrations of lip@EGCG/miR-223 (containing 10, 50, and 100 μg mL−1 EGCG) for 6 h, the cells were stimulated with LPS (20 μg mL−1) for 8 h. Cells treated with medium only were considered the negative control group, and cells not treated with lip@EGCG/miR-223 were considered the positive control group. Then the medium was discarded and the cells were washed three times with PBS. Subsequently, DCFH-DA (50 μM) diluted with serum-free DMEM was added and incubated for 30 min. After washing with PBS three times, the cells were immersed in PBS and observed by fluorescence microscopy (Primovert, Zeiss, Germany).
Investigation of the anti-inflammatory effect in vitro
RAW 264.7 cells were seeded in a 24-well plate at a density of 1 × 10^5 cells per well and incubated for 12 h. The negative control group was treated with fresh medium, and the other groups were stimulated with LPS (20 μg mL−1) for 8 h. Then the cells were treated with free EGCG (100 μg mL−1), lip@EGCG (containing 100 μg mL−1 EGCG), or lip@EGCG/miR-223 (containing 100 μg mL−1 EGCG) for 6 h; the cells without treatment were considered the positive control group. The supernatant of each experimental group was collected, and the typical inflammatory cytokines, including tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), and monocyte chemoattractant protein-1 (MCP-1), were determined with an ELISA kit (Beyotime, Shanghai, China), while the levels of total protein were quantified with a BCA protein assay kit (Beyotime, Shanghai, China).
2.12 Promoting foam cell lipid efflux in vitro
RAW 264.7 cells at a density of 2 × 10^5 cells per well were seeded in a 12-well plate and incubated for 12 h as above. The negative control group was treated with fresh medium, and the other groups were stimulated with ox-LDL (25 μg mL−1) for 48 h. Then, the cells were treated with free miR-223 (10 μg mL−1), lip@miR-223 (containing 10 μg mL−1 miR-223), or lip@EGCG/miR-223 (containing 10 μg mL−1 miR-223) for 8 h; the cells without treatment were considered the positive control group. After the medium was discarded, the cells were rinsed with PBS three times, fixed with 4% paraformaldehyde for 30 min at 4 °C, and washed with PBS three times again. Subsequently, the cells were stained with 0.3% Oil Red O (ORO) in 60% isopropanol for 30 min and observed by fluorescence microscopy. In addition, intracellular ORO was extracted with isopropanol, and the ORO concentration was determined by measuring the absorbance of the solutions at 492 nm via UV-visible spectrometry.
Expression of ABCA1 mRNA tested by RT-qPCR
The effect of miR-223 on ABCA1 expression was investigated by RT-qPCR. RAW 264.7 cells were seeded in a 12-well plate at a density of 2 × 10^5 cells per well and incubated for 12 h. The cells were treated with free miR-223, lip@miR-223, or lip@EGCG/miR-223 at the same miR-223 concentration (2 μg mL−1) for 6 h. The negative control group was treated with fresh medium. Then, the medicated medium was discarded and replaced with an equal volume of complete fresh medium, and the cells were further incubated for 42 h. Subsequently, Trizol reagent was added to extract the total RNA from the cells, and the RNA concentration was examined with a NanoDrop 1000 system (Thermo Scientific, USA). Next, 2 μg RNA was reverse transcribed into cDNA in a GeneAmp PCR System 9700 (Applied Biosystems, USA), and the cDNA concentration was examined with the NanoDrop 1000 system again. Finally, RT-qPCR was performed to amplify the cDNA in a Real Time PCR System 7500 (Applied Biosystems, USA), and mRNA expression levels were compared with the housekeeping gene GAPDH as the internal control. The relative quantity of mRNA was calculated from the average threshold cycle (Ct) by the delta-delta Ct (2^−ΔΔCt) method (a minimal computational sketch follows below).
Establishment of the AS mouse model and drug administration
First, the atherosclerosis (AS) mouse model was established by feeding the mice a high-fat diet (HFD) containing 20% fat, 20% sugar, and 1.25% cholesterol for 8 weeks. Then, the mice were divided into three groups and treated by intravenous injection with 5% glucose solution, a mixture of free miR-223 and EGCG, or lip@EGCG/miR-223 (miR-223 at 1 mg kg−1 and EGCG at 10 mg kg−1). Drugs were administered once every four days, for a total of five doses. After three weeks of treatment, all the mice were sacrificed for the subsequent experiments.
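A minimal sketch of the 2^−ΔΔCt relative quantification referenced in the RT-qPCR subsection above; the Ct values are illustrative placeholders, not the study's data.

```python
# Relative quantity = 2^-(dCt_treated - dCt_control),
# where dCt = mean Ct(target, e.g. ABCA1) - mean Ct(reference, GAPDH)
import statistics

def ddct_relative_expression(ct_target_treated, ct_ref_treated,
                             ct_target_control, ct_ref_control):
    d_ct_treated = statistics.mean(ct_target_treated) - statistics.mean(ct_ref_treated)
    d_ct_control = statistics.mean(ct_target_control) - statistics.mean(ct_ref_control)
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Example: a ~1.2-cycle earlier ABCA1 signal corresponds to ~2.3-fold expression
print(ddct_relative_expression([24.0, 24.1], [18.0, 18.1],
                               [25.2, 25.3], [18.0, 18.1]))
```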
Targeting effect of lip@EGCG/miR-223 in vivo
To test the targeting effect of lip@EGCG/miR-223 in the AS mouse model, Cy5-labeled miR-223 was applied for animal fluorescence imaging. The model mice were treated by intravenous injection with free Cy5-miR-223 or lip@EGCG/Cy5-miR-223. After 8 h, the mice were perfused transcardially with 30 mL PBS and 30 mL 4% paraformaldehyde under anesthesia, and the aorta and the primary organs, including the heart, liver, spleen, lung, and kidney, were isolated and imaged with a fluorescence imaging system (IVIS Spectrum, Waltham, MA, USA). Finally, the fluorescence results were quantified with ImageJ software.
Investigation of the lipid-efflux promotion in vivo
After the AS model mice were euthanized, the aorta was excised and the surrounding adipose tissue was removed. Subsequently, the aorta was fixed with 4% paraformaldehyde overnight. After being washed with PBS, the aorta was cut longitudinally and stained with 0.3% ORO for 1 h to quantify the plaque area. The aorta was then cut into 5 μm frozen sections with a freezing microtome and stained with ORO to observe the distribution of lipids in the aorta.
Histology and immunohistochemistry
After fixing in 4% paraformaldehyde overnight, paraffin-embedded aortic arch sections (5 μm thickness) were prepared. Antibodies against IL-1β and TNF-α were incubated with the sections to investigate the anti-inflammatory effect of lip@EGCG/miR-223; antibodies against CD68 were applied to explore the distribution of macrophages in the aortic arch areas, and antibodies against CD31 were applied to observe the morphology of the aortic vessels. All of the sections were stained with hematoxylin to observe the tissue structures. Images of the stained sections were acquired with a fluorescence microscope (Nikon Eclipse Ti-SR, Japan), and semiquantitative analysis of the histological images was performed with ImageJ software.
H&E staining
After the mice were euthanized, the main organs, including the heart, liver, spleen, lung, and kidney, were collected and made into paraffin sections. Then the sections were stained with hematoxylin-eosin (H&E) and observed by fluorescence microscopy to investigate the organ toxicity of lip@EGCG/miR-223.
Statistics
The data were expressed as the mean ± SD (standard deviation). The t-test was used for statistical analysis. A value of p < 0.05 was considered statistically significant, and a value of p < 0.01 was considered very significant.
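As a hedged illustration of the two-group comparison described above, a minimal Python sketch follows; the arrays are illustrative placeholders, not the study's measurements.

```python
# Unpaired t-test with the paper's significance thresholds
from scipy import stats

control = [0.82, 0.91, 0.78, 0.88, 0.85]
treated = [0.55, 0.61, 0.58, 0.52, 0.60]

t_stat, p_value = stats.ttest_ind(control, treated)
if p_value < 0.01:
    label = "very significant (p < 0.01)"
elif p_value < 0.05:
    label = "significant (p < 0.05)"
else:
    label = "not significant"
print(f"t = {t_stat:.2f}, p = {p_value:.4f}: {label}")
```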
Synthesis and characterization of lip@EGCG/miR-223
The preparation process of lip@EGCG/miR-223 is shown in Fig. 1. First, the positively charged cationic lipid DOTAP and the negatively charged miR-223 were bound together by electrostatic interaction, and DOTAP was used to extract miR-223 from the aqueous phase into the organic phase in the monophase, following the method described by Bligh and Dyer.27 The binding capacity of DOTAP and miR-223 was determined by agarose gel retardation assay. As shown in Fig. 2A, the intensity of the miR-223 band gradually decreased as the mass ratio of DOTAP to miR-223 increased.
The band disappeared completely when the ratio was 1 : 0.75, which signified that miR-223 was completely combined with DOTAP.
According to these results, we determined that the optimal mass ratio of DOTAP to miR-223 was 1 : 0.75 for the following experiments. Then the other lipid materials covering the outside (PI, DSPE-PEG2k, Chol, and Tween-80) were added into the organic phase containing DOTAP-miR-223. Following rotary evaporation, the EGCG solution was added and the liposomes were formed. The liposomes were extruded through a polycarbonate membrane to form nanoparticles with a uniform size. The nanoparticle size of lip@EGCG/miR-223 was 91.28 ± 2.28 nm, and the zeta potential was −36.21 ± 1.82 mV, as measured by DLS (Fig. 2B). These results suggested that the liposomes prepared in this way had a small size, making it easier to target AS lesions, which have smaller vascular endothelial spaces.28 The liposomes were also negatively charged, which ensured high stability and low cytotoxicity both in vitro and in vivo. The morphological characteristics were observed by TEM (Fig. 2C), showing that lip@EGCG/miR-223 was spherical in shape and uniformly distributed in size.
The miR-223-protective effect of lip@EGCG/miR-223 was investigated by agarose gel retardation assay. As shown in Fig. 2D, no bands could be observed when naked miR-223 and RNase A were co-incubated for 5 min, which suggested that miR-223 was easily degraded by RNase A when not encapsulated in the liposomes. The miR-223 band of lip@EGCG/miR-223 was still bright after incubation with RNase A for 6 h, demonstrating that lip@EGCG/miR-223 could effectively protect miR-223 from degradation by RNase A, which should make it possible for miR-223 to target tissues or cells precisely in vivo.
In addition, the encapsulation efficiency (EE) of lip@EGCG/miR-223 was calculated to be 81.1% for EGCG (1.622 mg mL−1) and 98.8% for miR-223 (0.033 mg mL−1). These results suggested that EGCG and miR-223 were successfully encapsulated into the liposomes with high encapsulation efficiency.
Antioxidant and anti-inflammatory effects of lip@EGCG/miR-223 in vitro
The overproduction of ROS leads to oxidative stress, which can induce tissue and cell injury that further initiates an inflammatory cycle and results in the amplification of oxidative stress.29 Therefore, we investigated the antioxidant effects of lip@EGCG/miR-223 at the solution level and the cellular level, respectively. To investigate the antioxidant ability of lip@EGCG/miR-223 in solution, the DPPH· scavenging assay was performed. DPPH· is a stable, nitrogen-centered free radical with a characteristic absorption at 517 nm. When an antioxidant is added to the DPPH· methanol solution, the violet DPPH radical is reduced to stable yellow DPPH molecules.30 The remaining DPPH radical was measured to determine the DPPH· scavenging capability. As shown in Fig. 3A, the color of the DPPH· methanol solution changed from violet to yellow as the concentration of EGCG in lip@EGCG/miR-223 increased. These results indicated that lip@EGCG/miR-223 exhibited an excellent DPPH-radical-scavenging capacity.
In addition, the generation of ROS in RAW 264.7 macrophages was detected by staining with DCFH-DA, a fluorescent probe used to detect intracellular ROS. As shown in Fig. 3B, the fluorescence intensity in the cells treated with LPS (positive control) was remarkably enhanced compared with the cells without treatment (negative control), which indicated the successful construction of the oxidative stress model. The fluorescence intensity in the cells treated with lip@EGCG/miR-223 gradually decreased as the concentration of EGCG increased. When the EGCG concentration increased to 100 μg mL−1, the fluorescence intensity was similar to that of the negative control group, which suggested that lip@EGCG/miR-223 effectively reduced the generation of ROS in macrophages. These results demonstrated that lip@EGCG/miR-223 had excellent antioxidant capacity in vitro.
Proinflammatory cytokines (such as TNF-α, IL-1β, and MCP-1) produced by immune cells induce the formation of atherosclerotic plaques.31 To investigate the anti-inflammatory properties of lip@EGCG/miR-223 in vitro, the levels of typical inflammatory cytokines were measured by ELISA, and the results are displayed in Fig. 3C. Compared with the negative control group, the levels of inflammatory factors in the positive control group were significantly increased, indicating that the inflammatory model was successfully constructed. In contrast, when treated with lip@EGCG or lip@EGCG/miR-223, the amounts of inflammatory factors decreased significantly. The relative levels of IL-1β, MCP-1, and TNF-α for lip@EGCG were 0.624, 0.880, and 0.312 compared with the positive control, which was also lower than those of the free-EGCG-treated groups. These results suggested that the anti-inflammatory effect was improved when EGCG was encapsulated in liposomes, which may be attributable to enhanced uptake by RAW 264.7 cells. The inflammatory factor levels for lip@EGCG/miR-223 were 0.562, 0.744, and 0.212, respectively, suggesting that co-loading with miR-223 yielded a better therapeutic effect against AS. All of the above results demonstrated that lip@EGCG/miR-223 had excellent anti-inflammatory effects in vitro, and that the effect was improved when EGCG was loaded in liposomes.
Investigation of the cellular uptake behavior, expression of miR-223, and lipid-efflux promotion in vitro
As reported, a low cellular uptake and lysosomal degradation are the main obstacles for the application of nanocarriers in gene Fig. 1 Schematic illustration of the mechanisms and preparation of lip@EGCG/miR-223, in which lip@EGCG/miR-223 was prepared by a thin film hydration method, and could be effectively delivered to the AS lesion site by an EPR effect.EGCG in the lip@EGCG/miR-223 can then reduce the level of ROS and the expression of inflammatory factors to exert antioxidant and anti-inflammatory effects.At the same time, lip@EGCG/miR-223 absorbed by macrophages can promote lipid efflux and eliminate lipid plaques through miR-223 expression.
drug delivery.32 The intracellular uptake and lysosomal escape capacity of lip@EGCG/miR-223 were therefore investigated in RAW 264.7 cells by CLSM. Cy5-labeled miR-223 was applied to trace the position of the miRNA in RAW 264.7 cells, and the nuclei and lysosomes were stained with Hoechst 33342 and LysoTracker Green, respectively. As shown in Fig. 4A, the red fluorescence of Cy5 increased in both the free Cy5-miR-223- and lip@EGCG/Cy5-miR-223-treated groups with increasing incubation time. However, at 4 and 6 h, the intensity of red fluorescence in the lip@EGCG/Cy5-miR-223 group was significantly higher than that in the free Cy5-miR-223 group (Fig. 4B), as analyzed by ImageJ software. The overlap coefficient of miR-223 and the lysosomes was also analyzed by ImageJ. As shown in Fig. 4C, for the free Cy5-miR-223-treated group, the overlap coefficient increased with experiment time, suggesting that the miRNAs were trapped in the lysosomes. The overlap coefficient in the lip@EGCG/Cy5-miR-223 group was relatively high at 2 h, but decreased at 4 and 6 h. This result indicated that lip@EGCG/Cy5-miR-223 was taken up by the lysosomal pathway and could escape from the lysosomes rapidly, which may be attributed to the cationic lipid material DOTAP. All of the results above suggest that lip@EGCG/miR-223 is an excellent carrier for miRNA delivery, which can not only deliver miRNAs into cells effectively, but also promote the escape of miRNAs from lysosomes to perform a gene-silencing effect in the cytoplasm. The expression of miR-223 target genes regulated by lip@EGCG/miR-223 was evaluated at both the mRNA and protein levels. First, the upregulation of ABCA1 mRNA after treatment with different drugs was determined by RT-qPCR analysis. As shown in Fig. 4D, the relative level of ABCA1 mRNA was 1.35 in the free miR-223 group compared with the negative control group, and it was significantly increased to 2.27 in the lip@EGCG/miR-223-treated cells, which had a similar effect as the positive control (Lipo/miR-223).
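As a computational aside, the channel-overlap analysis described above can be reproduced outside ImageJ. The sketch below computes the classic Manders overlap coefficient between two fluorescence channels; this specific metric is an assumption, since the text does not name which ImageJ colocalization measure was used, and the array names are illustrative:

```python
import numpy as np

def manders_overlap(red: np.ndarray, green: np.ndarray) -> float:
    """Manders overlap coefficient between two fluorescence channels.
    red, green: 2-D intensity arrays of the same shape (e.g., a Cy5-miR-223
    channel and a LysoTracker channel). Returns a value in [0, 1]."""
    r = red.astype(float).ravel()
    g = green.astype(float).ravel()
    return float(np.sum(r * g) / np.sqrt(np.sum(r**2) * np.sum(g**2)))
```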
Also, the upregulation of the ABCA1 protein after treatment with different drugs was determined by ELISA analysis. As shown in Fig. 4E, the relative level of ABCA1 protein was 1.39 in the free miR-223 group compared with the negative control group, and it was significantly increased to 2.76 in the lip@EGCG/miR-223-treated cells, which had a similar effect as the positive control (Lipo/miR-223). All of these results demonstrated that lip@EGCG/miR-223 could upregulate the expression of ABCA1, which is an important transporter for lipid efflux.33 Macrophages can phagocytose large amounts of ox-LDL to form foam cells and thus lead to an abnormal accumulation of lipids, which is an essential hallmark of atherosclerotic lesions.34 Eliminating lipids in foam cells has also been confirmed to be of great significance for atherosclerosis treatment. In this section, we thus examined the ability of lip@EGCG/miR-223 to prevent foam cell formation and lipid accumulation. As shown in Fig. 4F, considerable lipids were observed in cells incubated with ox-LDL, which was treated as the positive control group. These results indicated that a large number of foam cells were formed after ox-LDL induction. The free miR-223 group showed a limited ability to inhibit foam cell formation, whereas the lip@miR-223 group displayed a significant reduction in foam cells. Also, lip@EGCG/miR-223 had a similar foam-cell-inhibiting effect as lip@miR-223, suggesting that the co-delivery of EGCG and miR-223 had no influence on the treatment effect of miR-223. Collectively, lip@EGCG/miR-223 could promote lipid efflux by inhibiting the formation of foam cells due to the gene therapy effect of miR-223.
Targeting and pharmacodynamic investigations of lip@EGCG/miR-223 in vivo
As atherosclerosis progresses, the connection between endothelial cells is destroyed and neoangiogenesis occurs, making the vessels leaky and fragile and resulting in an EPR effect in AS lesions.35,36 Therefore, lip@EGCG/miR-223 could be passively targeted to AS lesions. In this section, the targeting and tissue distribution behavior of lip@EGCG/miR-223 was investigated in the AS mouse model. As illustrated in Fig. 5B, for the free Cy5-miR-223-treated group, the fluorescence was mainly distributed in the kidney and could not be observed in other organs. This suggested that the free miR-223 was excreted by the kidney and could not be delivered to the AS lesions to exert a treatment effect. For the lip@EGCG/miR-223-treated group, strong fluorescence intensities were still observed in the kidney of the AS model mice, while there was also an obvious fluorescence distribution in the heart and aorta area. The semi-quantitative results showed that the ratio of the fluorescence intensity of the heart and aorta to that of the kidney was 6.27% in the free Cy5-miR-223-treated group, while the ratio was 66.10% in the lip@EGCG/miR-223-treated group. These results showed that the distribution of miR-223 in the heart and aorta was significantly increased when it was encapsulated in lip@EGCG/miR-223, making it a promising targeted agent for atherosclerosis therapy.
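The semi-quantitative ratios quoted above (6.27% vs. 66.10%) are simple region-intensity ratios. A minimal sketch, assuming summed region-of-interest fluorescence intensities per organ as inputs (the exact quantification workflow is not specified in the text):

```python
def target_to_kidney_ratio(heart: float, aorta: float, kidney: float) -> float:
    """Relative fluorescence of the target region (heart + aorta) vs. the
    kidney, expressed in %. Inputs are summed ROI intensities (illustrative)."""
    return (heart + aorta) / kidney * 100.0
```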
Based on the promising results above, the treatment effects of lip@EGCG/miR-223 on AS were examined by an Oil red O (ORO) staining test to reveal the formation of atherosclerotic lipid plaques in the aorta. The AS model mice were administered glucose solution, free EGCG and miR-223, or lip@EGCG/miR-223, after which the aortas and aortic sections were stained with ORO. As shown in Fig. 5C, the glucose solution group showed large ORO-stained areas, suggesting that distinct lipid plaques were formed by the high-fat feeding. By contrast, the plaque area was slightly decreased in the free EGCG and miR-223 group, and significantly decreased in the lip@EGCG/miR-223 group. In addition, consistent with this result (Fig. 5E), compared with the glucose solution group and the free EGCG and miR-223 group, observation of the ORO-stained aortic cryosections also revealed the lowest ORO-stained areas in the lip@EGCG/miR-223 group. These results revealed that lip@EGCG/miR-223 had a strong inhibitory effect on the progression of atherosclerotic plaques by promoting lipid efflux. The platelet endothelial cell adhesion molecule-1 (CD31) is a molecule expressed in hematopoietic and immune cells and endothelial cells, and is involved in angiogenesis, platelet aggregation, and thrombosis, which are closely related to the occurrence of AS.37 Antibodies to CD31 were therefore applied here to detect the development of neovascularization and to observe the morphology of the aortic vessels. As shown in Fig. 5G, in the glucose-solution-treated group as well as the free EGCG- and miR-223-treated groups, the CD31 staining level was significantly higher than that in the lip@EGCG/miR-223-treated group. Also, compared with the lip@EGCG/miR-223-treated group, much more micro-angiogenesis was observed in the glucose-solution-treated group. These results proved that lip@EGCG/miR-223 could promote the normalization of vessels by inhibiting neovascularization.
3.5. Mechanisms of lip@EGCG/miR-223 in the treatment of AS in vivo

In order to reveal the mechanisms responsible for the in vivo anti-atherosclerotic activity of lip@EGCG/miR-223, histology and immunohistochemistry tests were performed. CD68 is the most reliable marker for macrophages, so anti-CD68 antibodies were applied to detect the accumulation of macrophages in aortic areas.38 As shown in Fig. 6A, compared with the glucose-solution-treated group as well as the free EGCG- and miR-223-treated groups, the group treated with lip@EGCG/miR-223 displayed the lowest level of foam cells (macrophages full of lipid material) in aortic vessels. This indicated that lip@EGCG/miR-223 could effectively reduce the accumulation of macrophages in aortic areas, which plays a critical role in the development of AS.
In addition, we investigated the anti-inflammatory effect of lip@EGCG/miR-223 in vivo by histochemical analysis of two typical inflammatory cytokines (IL-1β and TNF-α) in the aorta. As shown in Fig. 6B, the lowest expressions of IL-1β and TNF-α were observed in the lip@EGCG/miR-223 group, which suggested that lip@EGCG/miR-223 exhibited excellent anti-inflammatory activity. Since atherosclerosis has been widely reported as a chronic inflammatory disease,39 the outstanding
Fig. 5 Pharmacodynamic evaluation in vivo. (A) Establishment of the atherosclerosis model in ApoE−/− mice. (B) Fluorescence images of liposome accumulation in various organs 8 h post-injection and the relative ratio of fluorescence in the heart/aorta to the kidney. (C) ORO-stained images of the aortas. (D) Quantitation of lesion regions (oil-red area) in the aorta tissue. (E) ORO-stained images of aortic sections. The scale bar is 50 µm. (F) and (G) Images of aortic sections stained with antibody to CD31. The scale bar is 100 µm. (H) Quantitative analysis of the CD31-positive areas relative to the total arterial wall area using ImageJ. *p < 0.05, **p < 0.01, ***p < 0.001, n = 3.
Fig. 6 Histochemistry analyses of aortic sections after different treatments. The scale bar is 100 µm. (A) Images of aortic sections stained with antibody to CD68 and quantitative analysis of the CD68-positive areas relative to the total arterial wall area using ImageJ. (B) Images of aortic sections stained with antibodies to IL-1β and TNF-α, and quantitative analysis of the IL-1β- and TNF-α-positive areas relative to the total arterial wall area using ImageJ. *p < 0.05, **p < 0.01, ***p < 0.001, n = 3.
10⁵ cells per well and incubated for 12 h. The cells were treated with free miR-223, lip@miR-223, or lip@EGCG/miR-223 at the same concentration of miR-223 (2 μg mL−1) for 6 h. The negative control group was treated with fresh medium. Then the medicated medium was discarded and replaced with an equal volume of complete fresh medium, and the cells were further incubated for 42 h. After washing with PBS, the cells were collected and extracted with RIPA solution. The concentration of total protein was measured using a BCA protein assay kit, and the amount of ABCA1 protein was measured using a Mouse ABCA1 ELISA kit.

2.15. Animals

All animal care and experiments were conducted in line with the Guide for the Care and Use of Laboratory Animals proposed by the National Institutes of Health. All procedures and protocols were approved by the Institutional Animal Ethics Committee of Capital Medical University. Apolipoprotein E-deficient (ApoE−/−) mice (about 8 weeks old, 20 g) were purchased from the Animal Department of Capital Medical University (Beijing Laboratory Animal Center, Beijing, China).
Investigation of miR-223 expression was performed by ELISA. RAW 264.7 cells were seeded in a 24-well plate at a density of 1 ×
Flexible large-area ultrasound arrays for medical applications made using embossed polymer structures
With the huge progress in micro-electronics and artificial intelligence, the ultrasound probe has become the bottleneck in the further adoption of ultrasound beyond the clinical setting (e.g. home and monitoring applications). Today, ultrasound transducers have a small aperture, are bulky, contain lead and are expensive to fabricate. Furthermore, they are rigid, which limits their integration into flexible skin patches. New ways to fabricate flexible ultrasound patches have therefore attracted much attention recently. First prototypes typically use the same lead-containing piezoelectric materials, and are made using micro-assembly of rigid active components on plastic or rubber-like substrates. We present an ultrasound transducer-on-foil technology based on thermal embossing of a piezoelectric polymer. High-quality two-dimensional ultrasound images of a tissue-mimicking phantom are obtained. Mechanical flexibility and effective area scalability of the transducer are demonstrated by functional integration into an endoscope probe with a small radius of 3 mm and into a large-area (91.2 × 14 mm²) non-invasive blood pressure sensor.
Ultrasound offers real-time imaging of deep-lying tissues, organs, and blood flow in a safe and non-invasive way. It is the most widely used medical imaging modality in terms of the number of images created annually1-3. Where current ultrasound systems require pointing and positioning by a sonographer, patches of flexible and large-sized ultrasound arrays enable hands-free imaging and offer a solution for short- and long-term monitoring applications. With some notable exceptions4,5, most prototypes of ultrasound patches are typically made by micro-assembly of individual piezoelectric transducer materials onto a flexible or stretchable substrate6-10. The rigid islands contain the functional ultrasound transducers, while thin electrodes in between provide mechanical flexibility, allowing the patch to conform to non-planar surfaces. Spring-like metal interconnect lines and/or liquid metals can be used to provide stretchability. This approach suffers from a number of fundamental trade-offs. Firstly, typically less than 50% of the total area contains functional ultrasound transducers. This compromises the achievable image quality. Secondly, the use of PZT and PZT-based composites requires backing layers that are currently not included; this results in a relatively low bandwidth and thus axial (i.e., depth) resolution. Thirdly, the component-assembly fabrication technique makes scaling the arrays to larger sizes and higher densities prohibitively expensive. Whereas conventional ultrasound transducers nowadays consist of 100 to 10,000 elements, the number of elements in flexible and wearable ultrasound transducers published to date ranges from 10 to 256. Finally, the use of lead in these ultrasound transducers is to be avoided. We have developed an inherently flexible ultrasound transducer technology based on thin films of the biocompatible, lead-free piezoelectric polymer P(VDF-TrFE)11,12 to circumvent these shortcomings. By using thermal embossing in combination with foil lamination, we create densely packed (>64%) pillar structures that are 40 µm wide and ~80 µm high. Each pillar can be individually addressed or grouped, depending on the electrode layout. The pillar structure brings about two advantages. (1) It mechanically isolates neighboring elements. This strongly reduces acoustical crosstalk between neighboring acoustic elements, which is highly beneficial from a performance perspective13. (2) It increases the mechanical flexibility of the overall array14 (see Supplementary Information (SI) Note 1 on mechanical considerations). Lateral dimensions of the P(VDF-TrFE) pillars were chosen to avoid Lamb waves. By varying the height of the P(VDF-TrFE) pillars, we can tune the operating frequency. In this work, we focus on the 5-10 MHz range and specifically target the carotid artery15, but note that this frequency range is also used in a wide range of conventional (e.g., for the imaging of organs in the abdomen, the neck and breast, and of children)16 and endoscopic ultrasound applications (transesophageal, transrectal and transvaginal imaging)2. The resulting total thickness of our PillarWave™ ultrasound transducer is only 100 μm. With a value of 3.7 MRayl, the measured acoustic impedance of the P(VDF-TrFE) pillars is close to that of human tissue. As a result, the amplitude transmission coefficient from a bare piece of regular piezomaterial (35 MRayl) into tissue is 0.082, whereas for bare P(VDF-TrFE) on tissue the transmission coefficient is 0.58. Therefore, the bandwidths typical for medical imaging can
be reached without the need for matching layers or backings, and the ringing of the transducer will be very low. Integrated into a linear array containing 2 × 64 elements (element size 175 µm × 2.5 mm) with a transmit aperture surface area of 11.5 × 2.5 mm², pulse-echo efficiencies of 0.2, on par with competing approaches, and peak pressures just above 1 MPa are achieved (see Supplementary Table 1 and Note 2 for benchmarking). The imaging performance of the linear array is evaluated using a commercial tissue-mimicking ultrasound phantom. High-quality ultrasound images are obtained, from which important parameters such as the axial and lateral resolutions are extracted. The mechanical flexibility is shown by simple pulse-echo measurements of an array wrapped around a 6-mm endoscope. Finally, to demonstrate the large-area array fabrication possibilities of our manufacturing technology, an array transducer is fabricated with an aperture size of 9.1 × 1.4 cm², aimed at non-invasive ultrasonic blood pressure sensing on the carotid artery. Its performance is shown on a carotid phantom, as is the in vivo performance on a volunteer.
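The two transmission coefficients quoted above follow from the standard normal-incidence pressure (amplitude) transmission formula; a quick numerical check with the impedances given in the text:

```python
def pressure_transmission(z_source_mrayl: float, z_tissue_mrayl: float) -> float:
    """Normal-incidence amplitude transmission coefficient, T = 2*Z2 / (Z1 + Z2),
    for a wave going from a source medium (Z1) into tissue (Z2)."""
    return 2.0 * z_tissue_mrayl / (z_source_mrayl + z_tissue_mrayl)

print(round(pressure_transmission(35.0, 1.5), 3))  # 0.082 for a 35 MRayl piezoceramic
print(round(pressure_transmission(3.7, 1.5), 2))   # 0.58 for P(VDF-TrFE) at 3.7 MRayl
```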
Fabrication of P(VDF-TrFE) transducers on thin and flexible plastic substrates
In this work, we have used commercially available P(VDF-TrFE) as the piezoelectric material for the transducer. P(VDF), its copolymers (including P(VDF-TrFE)), and composites are well-known for their strong electroactive responses (piezoelectric, ferroelectric, pyroelectric), high dielectric breakdown strength and high dielectric constant11. This has resulted in numerous device prototypes, including hydrophones17, energy harvesting devices18,19, proximity sensors20, ferroelectric memories21, and actuators22-25. Recently, Qualcomm commercialized under-display fingerprint sensors using ultrasonic detection with P(VDF-TrFE)26. In this work, we specifically use the 80/20 TrFE copolymer of P(VDF), as this ratio gives the largest piezoelectric response for these materials.
A ca. 40-μm-thick P(VDF-TrFE) film was laminated on a polyimide substrate (thickness 14 μm) containing thin bottom electrodes. This P(VDF-TrFE) film is structured through hot embossing with a PDMS stamp, resulting in highly uniform pillars with a height of 70 μm on top of a residual P(VDF-TrFE) film of about 10 μm (Fig. 1a). The embossing step was followed by laminating a second P(VDF-TrFE) film of about 10 µm on top, providing a flat surface for the deposition of a patterned top electrode (Fig. 1b).
A cross section of the transducer is shown in Fig. 1c. Both corona and contact poling were used to electrically polarize the structured piezoelectric layer. Corona poling gave less electrical breakdown and higher d33 values compared to contact poling. To promote uniformity over large areas, we employ a corona procedure in which the sample constantly moves back and forth under a set of corona wires. The device fabrication is finished by the physical vapor deposition of a patterned common top electrode. Optionally, a parylene C encapsulation layer was used. All steps are performed on substrate sizes up to 15 × 15 cm² or larger (32 × 35 cm²) using glass as a temporary carrier.
The transducer properties strongly depend on the geometry of the structured piezoelectric layer, and can be tuned by both the thickness of the initially laminated P(VDF-TrFE) and the stamp design. The piezoelectric thickness (~80 µm) and the thicknesses of the substrate and encapsulation determine the resonance frequency. The kerf, i.e., the distance between the pillars, is ideally minimized through the stamp design, as this increases the active area and, therefore, the transmit and receive efficiency of the transducer. We have found that a kerf smaller than 10 µm compromises pattern reproducibility over larger areas as a result of the so-called wall collapse of the PDMS stamp27 (see also SI Note 3). We therefore investigated the pillar shape and found that rectangular structures are more sensitive to wall collapse than hexagons, due to the decreased rib length of the hexagonal pattern. Because of this, we have used hexagonal pillar arrays (40 µm diameter, 10 µm kerf) throughout this work, with a resulting active area of >64%. Further reduction of the kerf should be possible but would require additional process optimization.
The piezoelectric response is electrically measured using a Berlincourt setup, with which we consistently measured d33 values of 25-29 pC/N, on par with state-of-the-art P(VDF-TrFE) piezoelectric performance. This indicates that the embossing, lamination, and heating steps do not significantly degrade the piezoelectric response of the polymer material. The measured compressional wave speed of 2100 m/s was slightly lower than the typically reported value of 2400 m/s for P(VDF-TrFE)24. This leads to a better acoustic impedance match with tissue, resulting in a higher transmission coefficient and increased bandwidth, but also in a slightly lower piezoelectric coupling factor. Details of the fabrication are provided in the Methods section.
With a total thickness of around 100 μm, the transducer is inherently bendable due to the pillar structure of the array and the absence of ceramic layers (Fig. 1d). It can be bent to radii far below 1 mm. Figure 1e shows a transducer that is wrapped around a balloon catheter aimed at intravascular use. In Fig. 1f, an array transducer is directly attached to the human body, i.e., the neck, for carotid blood pressure measurements (see below for more information). The mechanical flexibility and low weight also make it comfortable to wear.
Characteristics of single-element transducers
As a first step, we characterized large circular single elements. By patterning the top and bottom electrodes in the form of circles with a diameter of 12 mm, we capture the collective response of over 52,000 pillars. Figure 2a shows the transmit efficiency, measured by scanning the produced acoustic field in water using a needle hydrophone and applying inverse wavefield extrapolation to obtain the produced acoustic field at the surface of the transducer28, as a function of frequency. Figure 2b shows the frequency response of the receive sensitivity as determined from the transmit transfer function using reciprocity theory and electrical impedance measurements29. The transmit transfer function showed a peak of 1.4 kPa/V at 8.9 MHz, whereas the receive transfer function showed a maximum of 67 µV/Pa.
The −6 dB frequency bandwidths were measured to be 65% and 64% for the transmit and receive transfer functions, respectively.
By scanning the pressure generated by applying voltage pulses with different magnitudes along the x-axis (lateral axis) and the y-axis (elevation axis), we obtain a good impression of the surface uniformity as well as the linearity of the response. Figure 2c shows the peak transmit efficiency measured at 8.9 MHz as a function of the location at the transducer surface. The surface-averaged peak transmit transfer was 1.4 kPa/V. The peak-to-peak efficiency variation over the total element area of 113 mm² was within ±12%, except for two local spots with lower efficiencies. These spots result from small air bubbles on the surface of the transducer during the actual characterization measurement. Long-range random variations come from process imperfections, yielding a spread in pillar geometry and/or poling efficiency. The circular ripple pattern in Fig. 2c is an artifact of the wavefield extrapolation algorithm that was used in combination with the finite transducer area28.
Modeling of transducer elements
To predict the acoustic device performance, modeling was performed using a modified KLM model30. A number of properties and input parameters were directly measured or taken from the literature: the density of P(VDF-TrFE), the film thickness, the compressional wave speed, and the active diameter. Using these parameters together with reported values of the coupling factor, the dielectric permittivity, the dielectric loss, and the mechanical loss, the model did not accurately describe the electrical and acoustical behavior of our transducers. We found that the large and frequency-dependent losses of P(VDF-TrFE) should be taken into account, in line with the results of refs. 31,32. For more details of the simulations as well as the experimental input parameters, including the derivation of their frequency response, we refer to Supplementary Information Note 3.
Figure 2a shows the modeled and measured transmit transfer functions in water. The modeled and experimental resonance frequencies were 8.6 and 8.8 MHz, respectively, whereas the modeled and experimental mean peak transmit transfers were 1.5 and 1.4 kPa/V. The modeled and experimental frequency bandwidths at the −6 dB level were 58% and 65%, respectively. A good match was obtained between the modeled and measured transfer functions. The measured transfer function averaged over the active surface area was, on average, 15% lower than the modeled transfer function. The obtained coupling factor (kt) was 0.185, the relative permittivity (K^S_33) was 6.5 − 0.0088·frequency (MHz), the dielectric loss tangent (tan(δe)) was 0.075 + 0.04·frequency (MHz), and the mechanical loss tangent (tan(δm)) was found to be 0.125. The pulse-echo insertion loss was calculated to be −20.4 dB. For more information, see Supplementary Information Note 3. Figures 2d, e show the modeled and measured magnitude and phase of the electrical impedance as a function of frequency for the conditions in which the transducer was in air or its front face was in contact with oil. Here, electrically isolating oil was used instead of water, since the electrical isolation of this early prototype was suboptimal. The acoustic impedance of the oil was similar to that of water. The shape of the electrical impedance curves is dominated by the high amplitude transmission coefficient (0.58) from P(VDF-TrFE) to water/oil, due to the low acoustic impedance of P(VDF-TrFE) (3.7 MRayl) relative to the acoustic impedance of water/oil (1.5 MRayl); the center frequency and transfer function of the measurements in oil are therefore considered representative. The higher acoustic attenuation in the oil does add some damping, but this is a very minor effect. Some of the effective properties of the P(VDF-TrFE)-based PillarWave™ transducer were determined based on the electrical impedance measurements in air using the fitting procedure described in the Methods section (and Supplementary Information Note 4). In the case of oil, the match between model and experiments was also good. Regarding the phase of the electrical impedance, the difference between the model and experimental results was <1°. With respect to the magnitude of the electrical impedance, the difference between the model and experimental results is <16%. This validated model was used to design the other transducers described next.
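For convenience, the fitted, frequency-dependent material parameters reported above can be encoded directly for use in a KLM-type simulation. A minimal sketch follows; the KLM model itself is not reproduced here, and the functional forms are exactly those quoted in the text:

```python
K_T = 0.185           # coupling factor (fitted value from the text)
TAN_DELTA_M = 0.125   # mechanical loss tangent (fitted, frequency-independent)

def relative_permittivity(f_mhz: float) -> float:
    """Clamped relative permittivity K^S_33 as fitted above."""
    return 6.5 - 0.0088 * f_mhz

def dielectric_loss_tangent(f_mhz: float) -> float:
    """Dielectric loss tangent tan(delta_e) as fitted above."""
    return 0.075 + 0.04 * f_mhz

# Example: parameter values near the measured resonance of 8.8 MHz.
print(relative_permittivity(8.8), dielectric_loss_tangent(8.8))
```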
Pulse-echo measurements while bent around a 6-mm endoscopic probe
Linear array transducers optimized for ultrasonic imaging of the carotid artery were fabricated using the manufacturing process described above. The array consists of separate parallel transmit and receive apertures with 64 elements each. Each aperture had a size of 11.5 × 2.5 mm², and the element pitch was 180 µm. Its performance was measured in water using a hydrophone setup. Figure 3a shows a picture of the slightly bent flexible array. Figure 3b shows a photograph of the 128-channel array integrated on a 6-mm ultrasound probe for endoscopic ultrasound (EUS). Figure 3c shows a pulse-echo measurement of the array while bent around the 6-mm EUS probe. The signal at 25 µs is an echo of a 2.5-mm-diameter metal cylinder, whereas the response at 35 µs originates from the edges of the water tank. Low voltages (<20 Vpp) were used. These measurements illustrate that the array continues to function in pulse-echo mode while strongly bent. Preliminary experiments show an effect of the curvature on the wavefield, and the design of an optimized EUS device will need to take this effect into account.
High-resolution imaging with the 128-element linear array transducer
Figure 4a shows the measured average transmit and receive transfer functions as a function of frequency using the 128-element linear array transducer described above and shown in Fig. 3a. In transmission, the center frequency of the array transducer was 8.2 MHz, and the frequency bandwidth was measured to be 78% at the −6 dB level. The peak transmit transfer was 1.3 kPa/V. In reception, a peak receive transfer was likewise measured (Fig. 4a). Figure 4c shows a typical ultrasonic B-mode image obtained using a tissue-mimicking phantom (040GSE, CIRS, Norfolk, Virginia, USA). Such phantoms provide an invaluable approach to the objective, quantitative evaluation of image quality characteristics. The image was obtained using the 128-element linear array with so-called plane wave compounding (of nine plane wave transmissions)33 in combination with delay-and-sum beamforming in reception34. The reflections of the nylon wires are clearly visible with smooth and sharp point-spread functions (PSFs). The imaging capabilities of PillarWave in Fig. 4c were quantified using the international standardized methods IEC 61391-1:2006 and IEC 61391-2:2006. For the nylon wire at [−8, 20] mm, the lateral width of the PSF was 0.63 and 1.3 mm at the −6 and −20 dB levels, respectively. The corresponding axial lengths (along the depth axis) of the PSF were 0.23 and 0.92 mm at the −6 and −20 dB levels, respectively. The echoes of the wires at [20, 30] mm that are typically used to determine the axial/lateral resolution are also clearly visible and well separated. The hyperechoic region at [−20, 30] mm is clearly visible, as is the 10 kPa elasticity target at [5, 15] mm. The image was obtained with a frame rate of 15 frames per second (fps); no averaging was applied. The frame rate was limited by the real-time processing implemented in the Verasonics Vantage machine. B-mode imaging was performed with frame rates up to 4 kHz with two angles used for the plane wave compounding. These results demonstrate the capability of the fabricated flexible array to produce high-quality real-time medical ultrasound B-mode images. Supplementary Data 1 shows a comparison between our flexible array and previously reported work. The comparison is based on a large number of geometrical (thickness, number of transducer elements, their pitch and density, etc.) and performance parameters (bandwidth, penetration depth, image resolution, etc.), as ultrasound arrays are notoriously difficult to compare.
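As background to the image formation used here, the sketch below outlines delay-and-sum beamforming of a single plane-wave transmission; compounding then simply sums or averages the images from several steering angles. This is a generic textbook implementation under simplifying assumptions (single sound speed, nearest-sample interpolation), not the Verasonics processing chain itself, and all parameter names are illustrative:

```python
import numpy as np

def das_plane_wave(rf, fs, c, pitch, xs, zs, angle=0.0):
    """Delay-and-sum beamforming of one plane-wave transmission.
    rf: (n_samples, n_elements) received RF data; fs: sampling rate [Hz];
    c: sound speed [m/s]; pitch: element pitch [m];
    xs, zs: 1-D arrays of image-grid coordinates [m]; angle: steering [rad]."""
    n_samples, n_el = rf.shape
    elem_x = (np.arange(n_el) - (n_el - 1) / 2) * pitch
    img = np.zeros((zs.size, xs.size))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            t_tx = (z * np.cos(angle) + x * np.sin(angle)) / c  # plane-wave transmit delay
            t_rx = np.sqrt(z**2 + (x - elem_x) ** 2) / c        # per-element receive delay
            idx = np.round((t_tx + t_rx) * fs).astype(int)
            valid = idx < n_samples                              # drop out-of-range samples
            img[iz, ix] = rf[idx[valid], np.nonzero(valid)[0]].sum()
    return img
```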
Large-aperture array transducer (91.2 × 14 mm²) for blood pressure monitoring applications
To demonstrate the scalability of our technology to large areas, an array with an exceptionally large active aperture of 91.2 × 14 mm² was realized (see Fig. 5a). The array consisted of four staggered rows of 32 transducer elements. Each transducer element had a size of 1.6 × 3.2 mm² and comprised approximately 2365 pillars. The element pitch was 2.8 and 3.6 mm in the lateral and elevation directions, respectively. The design of the array was optimized to make sure at least one combination of transmitter and receiver elements would be located optimally with respect to the carotid artery (i.e., the acoustic beam will cross-sect the center of the carotid), independent of the patient's head movement. The fabrication process steps are identical to those of the imaging array discussed earlier; only a different electrode design is required. This illustrates the versatility of our technology. As shown in Fig. 1f, the ca. 10-cm-long array conforms well to the shape of the human neck. This would not be possible with a rigid ultrasound transducer, which would suffer from poor acoustic coupling because its rigid surfaces cannot accommodate the curvilinear shape of a human neck without the use of excessive pressure.
Rows 1 and 3 were simultaneously excited in transmission, and row 2 was read out in parallel. The center frequency of the device was 8.2 MHz. The measurements shown in Fig. 5b indicate that 95% of the elements were working, with a 50% variation in peak transfer functions (transmit: 0.6-1.3 kPa/V; receive: 50-100 µV/Pa). The array was tested on a home-built in vitro carotid phantom (see Methods for more details)35. The carotid vessel wall was tracked by cross-correlating the echoes of the anterior and posterior walls over time. The resulting vessel diameters were subsequently converted into blood pressure waveforms using a calibration with a blood pressure sensor (see Fig. 5c) and an established conversion method6. A high correspondence between the measured and reference pressures was observed: the difference is less than 5%. Figure 5d shows in vivo data of the carotid artery. Clear echoes of the vessel walls are observed. The temporal variation, or pulsation, of the proximal and distal wall echoes resulting from the heart beating can clearly be discerned.
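The diameter-to-pressure conversion step can be made concrete with the widely used exponential pressure-area model. We assume this particular form here, since the text only cites an established method (ref. 6); the parameters p_d (diastolic pressure) and alpha (vessel stiffness) play exactly the roles described in the Fig. 5 caption:

```python
import numpy as np

def diameter_to_pressure(d, d_dia, p_dia, alpha):
    """Convert a vessel diameter waveform into a pressure waveform using the
    exponential pressure-area model (an assumed, commonly used form).
    d: diameter waveform [m]; d_dia: diastolic diameter [m];
    p_dia: diastolic pressure (from the reference sensor); alpha: fitted stiffness."""
    area_ratio = (np.asarray(d) / d_dia) ** 2   # lumen cross-section scales with d^2
    return p_dia * np.exp(alpha * (area_ratio - 1.0))
```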
Discussion
We reported a flexible transducer technology for wearable ultrasound applications that is applicable to large areas. The technology is based on microstructured thin films of P(VDF-TrFE) with a thickness of only ca. 80 micrometers. The pillar structure leads to low acoustical crosstalk. It furthermore increases the mechanical flexibility compared to an unstructured P(VDF-TrFE) film. The low acoustic impedance of P(VDF-TrFE) enables a large frequency bandwidth and a high axial resolution without the necessity of matching or backing layers. The potential of the technology is demonstrated in three applications, each showing a specific unique advantage of the PillarWave™ technology: an endoscopic ultrasound (EUS) transducer that remains functional while mechanically curved over a small radius of 3 mm, B-mode imaging of the carotid showing good spatial resolution, and a blood pressure sensor that illustrates the potential to scale to large areas. We point out that in all cases, the same fabrication process was used, with a single P(VDF-TrFE) pillar geometry. The size of the ultrasound elements is actually determined by the design of the top and bottom electrodes. This permits the simultaneous fabrication of multiple transducers with different designs on a single substrate, as long as these use equal pillar heights (i.e., operating frequencies). Our transducer technology is expected to scale over the full range of medically relevant ultrasound frequencies (1-60 MHz). Moreover, the manufacturing technology is based on a large-area fabrication process and avoids complicated assembly technology. Currently, the maximum substrate size is 32 × 35 cm²; however, it is expected that, similar to display production, substrate sizes of a few square meters may be used in the future. This could dramatically lower production costs and allow for a rapid, cost-effective route to mass-manufactured flexible large-area ultrasound arrays. Currently, we are in the process of integrating the polymer transducer technology reported here with a thin-film transistor backplane, with the aim to reduce the number of interconnections and, thus, the cost of the addressing electronics.
Our flexible large-area transducer technology is aimed at wearable ultrasound applications. Initially, these could be mainly inside clinics. However, applications of medical ultrasound outside of clinics are emerging, such as preventive examinations at the general practitioner, monitoring during (extreme) exercise, or monitoring in the home environment (e.g., of pregnant women). The combination of high performance, low cost, scalability, flexibility, and lead-free components makes this technology uniquely suited for these new applications.
Stamp fabrication and embossing process
The stamp master is created on a separate glass substrate by lithographically structuring SU-8 2050 with pillar structures of the desired shape (e.g., square, hexagon, and circle), pattern, and height. An inverse soft stamp of the pillar structure is made using PDMS (Sylgard 184, Dow Corning, Michigan, USA). The use of a soft stamp is generally favored over a hard stamp for large-area embossing36. Using a soft (PDMS) stamp, it was possible to gently release the stamp, starting from the edge and slowly working our way across the panel. When a hard (SU-8) stamp was used, release became problematic, even for small substrate sizes.
Transducer fabrication
The PillarWave™ transducer is fabricated through a series of patterned electrode deposition, lamination, and embossing steps. On a temporary glass carrier with a 14-µm-thick spin-on polyimide film (PI 2611, HD Microsystems, Neu-Isenburg, Germany), a bottom electrode of 500 nm MoCr-Al-MoCr is sputtered and structured lithographically. Subsequently, a 40 µm sheet of P(VDF-TrFE) (80/20 mol, PolyK, Philipsburg, USA) is laminated on the polyimide/electrode substrate. A short CF4/O2 plasma treatment directly prior to lamination was found to increase the adhesion strength to 200 mN/mm, which is sufficiently high to prevent delamination: this treatment provided long-lasting mechanical stability in all transducers studied. Next, the PDMS stamp is pressed into the P(VDF-TrFE) film, which is heated to 160 °C, just above the melting temperature of the P(VDF-TrFE). After releasing the PDMS stamp, a second P(VDF-TrFE) film is laminated on top of the arrays of P(VDF-TrFE) pillars. This second film is softened by heating so that it partly squeezes in between the pillars on the bottom substrate and provides a flat surface for the subsequent deposition of the electrode. Thereafter, the stack is annealed for 1 h at 140 °C37. The piezoelectric layer is polarized using corona poling (custom-built setup, 15 kV wires at 2 cm, no grid), after which a common top electrode (MoCr-Al-MoCr) is sputtered using a shadow mask. Finally, the transducer is mechanically de-bonded from the glass. A photograph of a 15 × 15 cm glass plate containing several arrays is shown in SI Note 6.
Characterization of discrete transducers
The piezoelectric d33 coefficient is measured using a Berlincourt setup (d33 PiezoMeter System, Piezotest, London, United Kingdom). The electrical impedance of each transducer was measured using a vector impedance meter (ZVRE, Rhode & Schwarz, Munich, Germany). The acoustic wavefields produced by the flexible single-element or array transducers were measured using a hydrophone (diameter 0.2 mm, Precision Acoustics, Dorchester, UK) mounted in an A3200 Npaq system (Aerotech Inc., Pittsburgh, PA, USA). The transducer was excited by linear chirp signals of various amplitudes, lengths and frequencies (−6 dB bandwidth 3-12 MHz) generated by an arbitrary waveform generator (33621A, Agilent Technologies, Loveland, Colorado, USA) and amplified by a power amplifier (75A250A, AR RF/Microwave Instrumentation, Souderton, PA, USA), or by a programmable ultrasound system (Vantage 128, Verasonics, Kirkland, USA). The signals received by the hydrophone were amplified by an amplifier (5900, Olympus NDT Inc., Waltham, MA, USA) and digitized (MI.4032, Spectrum Instrumentation, Grosshansdorf, Germany). The excitation voltage over the electrodes was read out using an electrical probe and digitized.
The receive transfer functions and angular sensitivities of the flexible single-element or array transducers were measured using a custom-calibrated source transducer (V311, Olympus NDT Inc., Waltham, MA, USA). The source transducer was excited by signals (linear chirps, −6 dB bandwidth 3-12 MHz, various amplitudes and lengths) generated by the arbitrary waveform generator. The pressure signals were received by the flexible single-element or array transducers, amplified by custom-designed trans-impedance amplifiers and further amplified and digitized by the programmable ultrasound system. The custom-calibrated source transducer was calibrated using a pulse-echo method38,39.
Although no endurance testing was performed, the prototypes operated in the lab for more than a year without performance degradation.
Measurement and characterization of material properties
The thickness of the P(VDF-TrFE) was measured using profilometry (Bruker Dektak XT, Massachusetts, USA). The compressional wave speed was measured independently using an acoustic transmission measurement. Here, a high-frequency transducer (V113-RM, Olympus NDT Inc., Waltham, MA, USA), driven by an arbitrary waveform generator (33250, Agilent Technologies, Loveland, Colorado, USA), sent a 6 MHz burst sinusoidal wave along the thickness direction of the array. The effective compressional wave speed was calculated from the thickness and the arrival time of said wave, measured using an oscilloscope (DSO6032A, Agilent Technologies, Loveland, Colorado, USA). The active radius of the transducer was determined by measuring the acoustic field in a plane perpendicular to the transducer axis and backpropagating said field to the transducer surface. The density was provided by the manufacturer.
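The wave-speed determination described above is a simple time-of-flight calculation; a one-line sketch with illustrative inputs chosen to reproduce the reported 2100 m/s:

```python
def compressional_wave_speed(thickness_m: float, arrival_time_s: float) -> float:
    """Effective compressional wave speed from film thickness and the measured
    arrival time of the transmitted burst (time-of-flight method)."""
    return thickness_m / arrival_time_s

print(compressional_wave_speed(80e-6, 38.1e-9))  # ~2100 m/s (illustrative inputs)
```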
Tissue-mimicking phantom used in high-resolution imaging experiments
The high-resolution images (Fig. 4) were obtained using a commercial tissue-mimicking phantom (040GSE, CIRS, Norfolk, Virginia, USA). This phantom has been designed to optimally mimic the acoustic properties of tissue: its sound speed of 1540 m/s and sound attenuation of 0.7 dB/(MHz·cm) correspond well with average human tissue properties. The phantom contains nylon filament wire targets to simulate small but strong reflectors in the human body. Furthermore, it has hyperechoic targets, optimized to provide an echo with a predefined relative strength compared to the speckle background, and elasticity targets, i.e., areas with predefined elasticity values. All features of the phantom are clearly visible in the recorded ultrasound image and were used to quantify the performance of the array.
Image readout and processing for 128-element linear ultrasound array
To perform B-mode imaging with the flexible array transducers, a programmable ultrasound system (Verasonics Vantage) was connected to the flexible transducers. The ultrasound system generated the excitation signals. In reception, the signals were first amplified by custom-designed trans-impedance amplifiers before being routed to the programmable ultrasound system for further amplification and digitization. The post-processing of the recorded radio-frequency signals consisted of the following steps (a sketch of the first four steps is given after this list):
1. Subtraction of the average signal level
2. Chirp compression
3. Time windowing
4. Time-frequency filtering
5. Application of an imaging algorithm, either: (a) wavenumber-frequency domain mapping (Stolt migration)28,40 or (b) plane wave compounding33
6. Wavenumber-frequency filtering40
More details can be found in SI Note 7.
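The sketch below implements steps 1-4 of this chain with generic SciPy building blocks. The matched-filter chirp compression, the window length, and the band edges are illustrative assumptions rather than the exact implementation of SI Note 7:

```python
import numpy as np
from scipy.signal import correlate, butter, sosfiltfilt

def preprocess_rf(rf, chirp, fs, band=(3e6, 12e6)):
    """Sketch of steps 1-4 of the post-processing chain described above.
    rf: (n_samples, n_channels) RF data; chirp: transmitted chirp template;
    fs: sampling rate [Hz] (must exceed twice the upper band edge)."""
    rf = rf - rf.mean(axis=0, keepdims=True)                # 1. subtract average signal level
    comp = np.stack([correlate(ch, chirp, mode="same")      # 2. chirp (matched-filter) compression
                     for ch in rf.T], axis=1)
    comp[: int(2e-6 * fs)] = 0.0                            # 3. time windowing (blank first 2 us)
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, comp, axis=0)                   # 4. time-frequency (band-pass) filtering
```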
Carotid phantom used in blood pressure measurements
To evaluate the prototype array for blood pressure measurements, a home-built in vitro carotid phantom was used. A PVA solution (all % by weight) of polyvinyl alcohol (PVA) cryogel (10%), distilled water (40%), ethylene glycol (10%), and silica gel particles (2%) was heated to 85 °C until a homogeneous liquid was formed. Next, the solution was poured into a mold with a circular inner lumen (10 mm) with a centrally placed rod (5 mm) to form a carotid vessel with an outer diameter of 10 mm and a lumen diameter of 5 mm. The mold was subjected to three cycles of freezing (−25 °C) and thawing (21 °C), of 16 and 8 h, respectively, to solidify the vessel. During the last freeze-thaw cycle, the vessel phantom was immersed in another PVA solution to mimic the surrounding tissue and minimize lumen translational motion. The echogenicity and the outer and luminal diameters of the carotid phantom are in the same range as human carotid arteries, while the elastic modulus of the PVA solution after three freeze-thaw cycles results in expansion levels of the carotid phantom similar to those present under human physiological conditions35.
In vivo carotid artery experiments
To evaluate the prototype array for blood pressure measurements, in vivo experiments were conducted with the approval of TNO's ethical review committee (IRB 2023-058). The aims of the study were to: (1) investigate the performance of the large-area flexible array on a human volunteer (in vivo); (2) extract the carotid wall displacement from the ultrasound data; (3) compare the data quality between earlier obtained in vitro data and the in vivo datasets. A sample size of n = 1 was chosen prior to the actual study. Inclusion criteria were defined upfront. The human subject was recruited randomly by posting flyers. No specific preparation was required of the human subject. The human subject was informed about the experimental process, and was not involved in data processing. Blinding of the investigators was not required since the sample size was one. The data were captured from a live subject at rest (healthy, male, age group 40-45 y, informed consent). These results are not specific to one sex or gender, and sex and gender aspects were not considered in the study design.
Data collection of the ultrasound signal was carried out with a Verasonics Vantage platform. The prototype array was temporarily placed on the neck of the subject, using a commercial ultrasound gel. The prototype array had four rows of piezo-elements and a center frequency of 8.2 MHz. Rows 1 and 3 were simultaneously excited in transmission, and row 2 was read out in parallel. The excitation consisted of 20-cycle chirps with a maximum amplitude of 10 V and a bandwidth of 3-12 MHz. The pulse repetition frequency was 50 Hz.
The ultrasound data were visualized in real time using a simple imaging procedure consisting of a color plot of the envelope (obtained by taking the absolute value of the Hilbert transform) of the recorded radio-frequency data of the 32 channels read out in reception. The carotid vessel was manually located in the ultrasound data by observing the rhythmic expansion and compression of the carotid wall echoes. This allowed us to select the channel with the optimal acoustic path through the carotid (through its center), the results of which are shown in Fig. 5d. More details can be found in SI Note 8.
Statistics and reproducibility. All attempts at data and sample replication were successful. During a measurement session of ca. 3 h, the in vivo carotid ultrasound measurements were repeated more than three times. All testing results showed high similarity.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Fig. 1 | PillarWave™ technology. a Confocal microscope image of the P(VDF-TrFE) film directly after embossing. b Schematic cross section of the flexible ultrasound transducers. On top of a polyimide foil (thickness 14 μm) and a patterned molybdenum-aluminum electrode (500 nm thick), a ca. 40 μm P(VDF-TrFE) film is embossed, resulting in 3D structures that are ca. 70 μm high with a residual layer of ca. 10 μm below. On top of the 3D structures, a 10 μm P(VDF-TrFE) film is laminated at elevated temperatures, whereafter a 500 nm molybdenum-aluminum top electrode is deposited. The stack is finished with an isolating and flexible encapsulation film. c Side view of the complete transducer after lamination of the top electrode. The white areas are the P(VDF-TrFE) film. The orangish areas are air-filled gaps between the pillars. d Photograph of the finished ultrasound transducer foil, illustrating its thinness of 0.1 mm and mechanical flexibility. e Transducer foil wrapped around the inner wire of a dilatation catheter for percutaneous transluminal coronary angioplasty (PTCA) (Blue Medical Force NC) that has a radius of 0.25 mm, for intravascular ultrasound. f Ultrasound transducer foil placed on the neck on top of the carotid artery for blood pressure measurements.
Fig. 2 | Characteristics of a single element. a Transmit efficiency and b receive sensitivity as a function of frequency in water. c Transmit efficiency at 8.93 MHz as a function of location at the transducer surface, measured in water. d Magnitude and e phase of the electrical impedance as a function of frequency for the conditions in which the active surface area of the transducer is in contact with air or oil. Experimental results are indicated by solid lines; modeled results use dotted lines.
Fig. 3 | Characteristics of the 128-element flexible polymer array transducer. a Photograph of the flexible array while slightly bent. The design consists of two 64-element arrays (their locations are indicated by the gold-colored top electrodes), one used in transmission and one used in reception. b Photograph of the array integrated on a 6-mm EUS probe. c Pulse-echo signal of the array wrapped around the EUS probe, measured in water.
Fig. 4 | High-resolution imaging of a tissue-mimicking phantom using the 128-element flexible polymer array transducer. a Measured transmit and receive transfer functions versus frequency. b Area uniformity of the peak transmit transfer at 8.2 MHz at the transducer surface. The color scale indicates the peak transmit transfer in Pa/V. c B-mode image captured with plane wave compounding. The gray scale indicates the intensity in dB.
Fig. 5 | Large-area flexible ultrasonic blood pressure sensor. a Photograph of the large-area flexible ultrasonic blood pressure sensor while still on the support glass. See Fig. 1f for a photograph of the blood pressure sensor placed on the neck. b Transmit transfer in Pa/V at the resonance frequency of 8.2 MHz of the transmit elements, obtained using hydrophone measurements. c The pressure waveforms derived from the measured carotid phantom vessel diameters as a function of time. The diastolic pressure p_d was taken from the reference blood pressure sensor, and the vessel stiffness α was fitted such that the obtained systolic peak pressure matched the reference blood pressure. The red curve shows the blood pressure measured using a reference blood pressure meter. d Recorded in vivo ultrasound data of the carotid of a healthy volunteer from the optimally positioned array element.
"Engineering",
"Medicine",
"Materials Science"
] |
Assessing the localization impact on land values: a spatial hedonic study
Aim of study: To obtain spatial land valuing models using Geographic Information Systems (GIS), which collect spatial autocorrelation and improve the conventional models estimated by OLS (Ordinary Least Squares), to determine and quantify the factors explaining these values. Material and methods: The mean land values per municipality and the land uses published by the Aragonese Statistics Institute were used, as well as the geographic, agricultural, demographic, economic and orographic characteristics of these municipalities. The Spatial Lag Model and the Spatial Error Model were compared with OLS in general terms and for uses. Main results: The statistics (R², log likelihood, Akaike's information criterion, Schwarz's criterion) demonstrated that spatial models always outperformed conventional models. The tests based on the Lagrange Multiplier and Likelihood Ratio tests were significant at 99%. The importance of both agricultural and non-agricultural factors for determining the arable land value was confirmed. The land value increased with irrigation availability (by a mean of 2.2-fold for the set of all land uses), plot size (by 5.7% for each 1 ha increase), population size, income and location in nature reserves (11.02-12.89%). Research highlights: Results indicate the need to develop spatial models when modeling land prices by implementing GIS.
Introduction
The origin of hedonic regression lies in valuing land of agricultural use (Haas, 1922) and, at the end of the 20th century and the start of the present century, with the aid of computers, it has been widely applied to value land worldwide (Xu et al., 1993; Shi et al., 1997; Maddison, 2000), and evidently in Spain (Caballer, 1973; Calatrava & Cañero, 2000; García & Grande, 2003; Gracia et al., 2004; Caballer & Guadalajara, 2005). In all these works, valuing models have been estimated by Ordinary Least Squares (OLS). However, spatial data, e.g. land values, present two properties that make meeting the OLS requirements difficult: (1) geographic entities are spatially autocorrelated, and (2) they are heterogeneous across distinct study areas (spatial heterogeneity).
As a result, the OLS multiple regression estimations for the coefficients will most probably be biased and inconsistent, and will also invalidate standard regression diagnostic tests through misstated standard errors (Kim et al., 2003).
Autocorrelation, association or spatial dependence refers to the concentration or dispersion of the values of a variable (land prices in our case) in a land or geographic space. This implies that the value of a variable is conditioned by the value that this same variable takes in one or several neighboring regions. As Tobler (1970) put it, "Everything is related to everything else, but near things are more related than distant things". Spatial dependence began to be incorporated into hedonic models at the end of the 20th century (Can, 1992; Pace & Gilley, 1997; Dubin, 1998), thanks not only to geographic information being implemented and access to big databases gained, but also to Geographic Information Systems (GIS) and software being developed to analyze spatial data. In these GIS, data are geo-referenced by latitude and longitude, or by Universal Transverse Mercator (UTM) X Y coordinates (Guadalajara, 2018). Spatial regression models applied to land valuing have been well developed in the present century. Generally speaking, the most widely used spatial models are the spatial lag model (SLM) and the spatial error model (SEM), and both are applied to correct spatial autocorrelation. SLM includes a spatially lagged dependent variable, while SEM includes the spatial dependence of the error term. Some examples of these spatial models that have been applied to land valuing are those by: Patton & McErlean (2003) in Northern Ireland; Huang et al. (2006) in the USA; Seo (2008) in South America; Maddison (2009) in the UK; Mallios et al. (2009) in Greece; Zygmunt & Gluszak (2015) in Poland; Uberti et al. (2018) in Brazil. They all indicate the need to consider GIS and spatial effects when estimating hedonic models. The models obtained in the above-cited works include two main categories of explanatory variables: internal and external in relation to the property. We cite the following internal variables: (1) irrigation availability: this variable is considered a dummy variable and takes a positive sign in relation not only to the land unit price logarithm in the work by Mallios et al. (2009), but also to the land unit price in the work by Demetriou (2016), insofar as irrigation increases the land unit value; (2) plot size: in some cases this variable is considered in its original form (Patton & McErlean, 2003; Maddison, 2009; Zygmunt & Gluszak, 2015; Demetriou, 2016) and in a logarithmic form in others (Huang et al., 2006; Mallios et al., 2009), but it always takes a negative sign in relation to the unit price logarithm (Huang et al., 2006; Maddison, 2009; Mallios et al., 2009) or the land unit price (Patton & McErlean, 2003; Zygmunt & Gluszak, 2015; Demetriou, 2016). This indicates that land unit values lower with plot size; that is, the total price of plots does not increase linearly with surface; (3) topography: Demetriou (2016) obtained a negative relation between the land unit price and plot slope: the steeper the slope, the lower the unit price; (4) altitude: Mallios et al. (2009) obtained a positive relation between unit price and land rise, both logarithmically, insofar as lands at higher altitude fetch higher unit prices; a further internal variable takes a negative sign in some works (Patton & McErlean, 2003; Maddison, 2009) and a positive one in another study (Huang et al., 2006), depending on how the arable land is characterized. We cite the following external variables, among others, and numerous variables controlling for locational factors: (a) distance from residential zones, which always has a negative sign (… et al., 2002; Patton & McErlean, 2003; Huang et al., 2006; Maddison, 2009; Mallios et al., 2009; Zygmunt & Gluszak, 2015); (b) presence of sea: both Demetriou (2016) and Mallios et al.
(2009) consider sea views and the distance from the sea logarithm, respectively, with a positive sign for the case; (c) distance to the nearest main road appears in the models of Mallios et al. (2009) with a negative sign for the land unit price logarith. However, Uberti et al. (2018) and Demetriou (2016) report a positive relation between access to plots and the unit price, which means the same in all three cases: better accessibility to plots increases their price; (d) distance to forest negatively impacts land unit prices (Zygmunt & Gluszak, 2015); (2) population density and personal income per capita (Huang et al. values logarithm increases with population density and personal income per capita, and both logarithmically; et al., In Spain, GIS have been used to model the location factor set out in the Spanish Land Act (Marqués-Pérez et al., 2018). Although the spatial correlation of land values has been demonstrated (Segura & Marqués, 2018), no spatial models have been obtained to explain arable land values, only for house values (Militino et al., 2004;Taltavull et al., 2016;Guadalajara & López, 2018).
Consequently, the objective of the present work was to obtain spatial models to value land used for agriculture, distinguishing among uses, that collect the spatial autocorrelation of land values, and to improve the results obtained with conventional models. At the same time, the intention of using these models was to determine and quantify the factors explaining land prices. The data employed to obtain these models were the mean prices per municipality and per land use type in the Spanish Autonomous Community of Aragón (SACA).
Maps were created with the mean prices for each land use per municipality, shown in quantile intervals using ArcGIS Pro 2.2.0 (©2018 Esri Inc.). The UTM projection system and the ETRS89 reference geodesic system were used, zone 30N, into which the municipalities forming part of zone 31 were converted. The layer corresponding to municipality limits (recintos_municipales_inspire_peninbal_etrs89.shp, type: 'Polygon', uncertainty range = 40 m, download date 23 July 2018) was obtained from the download center of the Spanish National Geographic Institute (Centro de Descargas del Centro Nacional de Información Geográfica; Spanish Ministry of Development, the Spanish Government, www.ign.es). This center lists the Aragonese municipalities that form part of zones 30 and 31.
Using the data about mean prices per municipality, spatial weights w_ij between municipalities were calculated. Weights represent the geographical relationship between locations i and j. Several methods are available to construct spatial weights: contiguity (Huang et al., 2006), k-nearest neighbors (Zygmunt & Gluszak, 2015; Uberti et al., 2018) and distance (Patton & McErlean, 2003; Maddison, 2009; Zygmunt & Gluszak, 2015; Uberti et al., 2018). As the spatial information comes as geographical coordinates (point data), this work built weights by considering the distance among municipalities, as most authors have done, by using the X UTM and Y UTM values. Weights were calculated in two ways: by taking the inverse of the squared Euclidean distance, as Patton & McErlean (2003) did, and the inverse of the Euclidean distance, as Maddison (2009) did, to select the one providing the most compelling evidence for spatial dependence. For all land uses, a minimum threshold distance was considered so that all the municipalities had at least one neighbor; this threshold was, at the same time, the maximum permitted distance for considering a municipality a neighbor. The spatial weights matrix W = [w_ij] contains the weights between each pair of observations (municipalities) and is a non-negative m×m matrix. Observations cannot be their own neighbors, so the matrix's diagonal is composed of zeros. The weight matrix was standardized in such a way that the sum of the weights in each row equaled 1.
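As a minimal sketch of this construction, the following Python snippet builds a row-standardized inverse-distance weights matrix from UTM coordinates with a neighbor threshold; all names are illustrative, not taken from the original study:

```python
import numpy as np

def inverse_distance_weights(xy, threshold):
    """Row-standardized inverse-distance weights.

    xy        : (m, 2) array of X/Y UTM coordinates, one row per municipality.
    threshold : maximum distance (same units as xy) at which two
                municipalities are still considered neighbors.
    """
    # Pairwise Euclidean distances between municipalities.
    diff = xy[:, None, :] - xy[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))

    with np.errstate(divide="ignore"):
        w = 1.0 / d                # inverse distance (use 1.0 / d**2 for the squared variant)
    w[d > threshold] = 0.0         # only neighbors within the threshold count
    np.fill_diagonal(w, 0.0)       # a municipality is not its own neighbor

    # Row-standardize so the weights in each row sum to 1.
    row_sums = w.sum(axis=1, keepdims=True)
    return np.where(row_sums > 0, w / row_sums, 0.0)
```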
Data
The employed information source was the website of the Aragonese Statistics Institute (IAEST, in Spanish) of the SACA. The SACA covers 47,720 km² and is divided into three provinces: Huesca to the north, Zaragoza in the center and Teruel to the south. The following information was collected for the 741 municipalities in the SACA in June 2018: (1) mean land price (€/ha) per land use in 2017; (2) internal characteristics in relation to property: geographic coordinates (longitude and latitude, zone of the UTM projection and X UTM and Y UTM); agricultural characteristics (usable agricultural area (UAA [ha]), irrigatable area in relation to the UAA in percentages and number of plots on rustic land); orographic characteristics (altitude [m]); (3) external characteristics in relation to property: demographic characteristics (population size; population's mean age; birth rate; death rate); services (number of compulsory secondary education centres (CSEC)); economic characteristics (cadastral value of rustic land in thousands of euros for the whole municipality and gross per capita income in 2014 (euros per person and year) in seven intervals: < 6000; 6000-7999; 8000-9999; 10000-11999; 12000-15999; …). The mean plot size of each municipality (UAA/no. of plots) and the population density (population/UAA) were calculated and included in the study, in line with previous works.
The land use types in the IAEST were: almond trees (non-irrigated and irrigated), arable land (non-irrigated and irrigated), olive groves (non-irrigated and irrigated), vineyards (non-irrigated and irrigated), meadows (non-irrigated and irrigated), irrigated fruit trees, orchards, wasteland, pinewoods and riverside trees. The surface of each land use type was calculated using the 2018 Surface Areas and Crop Yields Survey, with the results summarized for the SACA (Spanish Ministry of Agriculture, Fishing and Food: www.mapa.gob.es).
Similarly to other works that have considered distance to places of interest, the location of municipalities in some of the 18 nature reserves in the SACA was also contemplated; these are listed at www.aragon.es/-/red-de-espacios-naturales-protegidos. In all, 6686 observations made up the analyzed sample. To measure the spatial association captured by the weights matrix, the global Moran's Index (I) test statistic (Moran, 1950) was used, which is the most popular statistic to measure spatial association; its value varies between -1 (perfect dispersion) and 1 (perfect correlation). A value of 0 indicates a null correlation or a random spatial pattern, and the nearer it comes to 1, the higher the spatial correlation.
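A compact sketch of the global Moran's I computation, assuming the row-standardized weights matrix `w` from the previous snippet and a vector of mean prices per municipality:

```python
import numpy as np

def morans_i(values, w):
    """Global Moran's I: (n / S0) * (z' W z) / (z' z)."""
    z = values - values.mean()   # deviations of land prices from the mean
    s0 = w.sum()                 # sum of all spatial weights
    n = len(values)
    return (n / s0) * (z @ w @ z) / (z @ z)
```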
Regression models
The methodology used to obtain land valuing models was hedonic regression models. First, an estimation by OLS was done. The basic linear hedonic model, using the log-linear form (Pace & Gilley, 1997; Bastian et al., 2002; Maddison, 2009; Mallios et al., 2009; Zygmunt & Gluszak, 2015), is given by:

log Y_i = β_0 + Σ_j β_j X_ij + u_i   (1)

where the dependent variable Y_i is an m×1 vector of the mean land value for each municipality (m is the number of municipalities); β_0 is the constant term; X_ij is an m×n matrix of the independent variables (n is the number of explanatory variables); β_j is an n×1 vector of the regression coefficients; and u_i is an m×1 error term. Independent variables can be quantitative or dummy, and quantitative variables can come in their original form or be transformed into a logarithm. If characteristic X_ij comes in its original form, when X_ij varies by 1 unit, then Y varies by β_j·100% on average. If the characteristic comes in a logarithmic form, β_j represents the elasticity of demand for this specific characteristic: when X_ij varies by 1%, then Y varies by β_j% on average. If a characteristic j is a dummy, the premium provided by the presence or absence of the characteristic is obtained as exp(β_j) - 1 (Mallios et al., 2009).
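A minimal sketch of the log-linear OLS estimation with statsmodels; the data frame and its column names are invented for illustration and do not reproduce the study's variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per municipality; values are toy figures.
df = pd.DataFrame({
    "price":      [3200.0, 5400.0, 1250.0, 8700.0, 2100.0, 4600.0],  # mean price (euros/ha)
    "plot_size":  [4.1, 2.3, 9.8, 1.2, 6.5, 3.0],                    # UAA / no. of plots (ha)
    "income":     [3, 5, 2, 6, 2, 4],                                # income interval, 1-7
    "irrigation": [0, 1, 0, 1, 0, 1],                                # dummy variable
})

X = sm.add_constant(df[["plot_size", "income", "irrigation"]])
ols = sm.OLS(np.log(df["price"]), X).fit()   # log-linear hedonic model, eq. (1)
print(ols.summary())                         # coefficients, t-tests, R-squared
```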
Eleven models were obtained, one for the set of lands in the SACA and 10 other models for all ten considered land uses. Initially, all the aforementioned variables from the IAEST were included as independent variables. The model for the set of lands in the SACA also included nine dummy variables relating to land use, which took a value of 1 if they were related to the land use in question, and 0 otherwise. To distinguish between non-irrigated and irrigated land uses, another dummy variable was included, namely "Irrigation", which took a value of 1 if it was an irrigated crop, and 0 otherwise. The dummy variable "Nature reserves" was also contemplated, which took a value of 1 if the municipality was located in a nature reserve, and 0 otherwise. A municipality's altitude was considered in km and the number of plots in thousands.
Quantitative variables: cadastral value and population were considered in two ways: in their original form and in their transformed logarithmic form. The municipality's income took values from 1 to 7, with 1 corresponding to the lowest income interval and 7 to the highest.
In order to begin the spatial regression analysis, the spatial autocorrelation in the OLS residuals was evaluated by Moran's I test, done with the residuals to check whether they were spatially random. The spatial weights matrix captures the spatial autocorrelation present in the residuals of the hedonic regression by OLS. The choice of spatial specification was based on the Lagrange Multiplier (LM) tests of the dependent variable, LM-lag, and of the error, LM-error, and also on their robust versions. These tests allowed the specification problem to be solved. Thus, we considered two spatial regression models to incorporate the spatial components into the OLS (Anselin, 1988):
The Spatial Lag Model (SLM) or the Spatial Autoregressive Model (SAR):
According to this model, a land value is considered to be autocorrelated in space. This model is formally written as:

log Y_i = ρ W log Y_i + β_0 + Σ_j β_j X_ij + u_i   (2)

where W log Y_i is the spatially lagged dependent variable (additional regressor) and ρ is the spatial autoregressive coefficient. The spatially lagged dependent variable is interpreted as a weighted average of the neighboring land values.
The Spatial Error Model (SEM):
This model handles spatial dependence through the error term, and takes the following form:

log Y_i = β_0 + Σ_j β_j X_ij + u_i, with u_i = λ W u_i + ε_i   (3)

where λ is the coefficient of the spatially correlated errors and W u_i is the spatially lagged error term.
According to Anselin (1988), the estimation of the SLM and SEM models cannot be done by OLS, but by Maximum Likelihood (ML), which is based on the normality and independence hypotheses of the error term.
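A hedged sketch of ML estimation of both models, assuming the PySAL `spreg` API (ML_Lag, ML_Error) and libpysal's DistanceBand weights; the synthetic data simply stand in for the study's municipalities:

```python
import numpy as np
from libpysal.weights import DistanceBand
from spreg import ML_Lag, ML_Error

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100_000, size=(50, 2))   # synthetic UTM coordinates
x = rng.normal(size=(50, 2))                     # two toy explanatory variables
y = (x @ np.array([0.5, -0.3]) + rng.normal(scale=0.2, size=50)).reshape(-1, 1)

# Inverse-distance weights with a threshold meant to give every point a neighbor.
w = DistanceBand(coords, threshold=30_000, binary=False, alpha=-1.0)
w.transform = "r"                                # row-standardize

slm = ML_Lag(y, x, w)      # spatial lag model, eq. (2)
sem = ML_Error(y, x, w)    # spatial error model, eq. (3)
print(slm.rho, sem.lam)                  # spatial autoregressive estimates
print(slm.logll, slm.aic, slm.schwarz)   # fit measures used to compare models
```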
The R², the log likelihood, Schwarz's criterion (SC) and Akaike's information criterion (AIC) were used to test several functional forms for the hedonic price equation and the selected variables, and also to compare the SLM and SEM models estimated by ML. SC, AIC and the log likelihood are appropriate measures for comparing non-nested models. Models with smaller AIC and SC are considered superior (Chi & Zhu, 2008). Conversely, the higher the log likelihood value, the better the fit of the model.
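For reference, both criteria follow directly from the log likelihood; a small sketch, where the example figures reuse the land-set log likelihoods reported in the Results:

```python
import numpy as np

def aic(logll, k):
    """Akaike's information criterion: -2 log L + 2k (k = number of parameters)."""
    return -2.0 * logll + 2.0 * k

def sc(logll, k, n):
    """Schwarz's criterion: -2 log L + k log(n) (n = number of observations)."""
    return -2.0 * logll + k * np.log(n)

# Smaller AIC/SC means a better model: the SEM log likelihood (-204.18) beats
# OLS (-1252.11) for the set of land uses, whatever plausible k and n are used.
```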
The procedure followed to select the variables was the stepwise method, and a Student's t-test was done on the significance of each explanatory variable. For the regression diagnostics, the collinearity or linear combination of the explanatory variables was determined by the condition index (CI), and also by the variance inflation factor (VIF) of each explanatory variable. Gujarati (2003) indicates that serious multicollinearity problems likely exist with condition index scores over 30, and recommends a VIF lower than 10 (rule-of-thumb threshold).
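Both diagnostics can be computed directly; this sketch reuses the illustrative design matrix X from the OLS example above:

```python
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

Xv = X.values  # design matrix from the OLS sketch (constant in column 0)

# Condition index: condition number of the column-scaled design matrix;
# scores over 30 signal serious multicollinearity (Gujarati, 2003).
ci = np.linalg.cond(Xv / np.linalg.norm(Xv, axis=0))

# One VIF per explanatory variable (skipping the constant in column 0).
vifs = [variance_inflation_factor(Xv, j) for j in range(1, Xv.shape[1])]
print(ci, vifs)
```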
For the regression diagnostics, the Koenker-Bassett (K-B) and Jarque-Bera (J-B) statistics were used in the OLS model. If the K-B and J-B statistics are statistically significant, the distribution of the residuals is not normal and the OLS results are unreliable (something is lacking in the model). The Breusch-Pagan (B-P) statistic was used to test all the regression models for heteroscedasticity. If the B-P statistic is significant, the variance of the residuals is not consistent. That is, the relations being modeled either change in the study area (non-stationarity) or vary in relation to the magnitude of the variable that is to be foreseen (heteroscedasticity). The GeoDa software was used to obtain Moran's I test, as well as the OLS, SLM and SEM models with their statistics.
Results
The number of municipalities for which information existed about prices for land uses, the mean, minimum and maximum price values, Moran's I test corresponding to these prices, and the surface of each land use type are found in Table 1. The analyzed uses represented 65.35% of the SACA's surface area, where non-irrigated arable land (25.47%) predominated, followed by pinewoods (16.40%). The following were not included because their price information was not available: non-irrigated fruit trees, scrubland, thickets and conifers, among others. This table also includes the threshold distance considered to calculate the spatial weights, at which all the municipalities have at least one neighbor. The spatial weights were calculated with the inverse of the Euclidean distance because it provided Moran's I test values identical to those of the inverse of the squared distance.
As Table 1 shows, the mean price per municipality in the SACA ranged from a minimum of 120 €/ha for wasteland to a maximum of 33,640 €/ha for irrigated orchards, and the mean value was 4317 €/ha. High Moran's I test values indicated that a high spatial correlation exists in the land prices for all land uses, except for irrigated meadows and irrigated lands with fruit trees, for which Moran's I was only 0.0072 and 0.0783, respectively. The highest spatial correlation of land prices was obtained for non-irrigated meadows (0.9143), followed by riverside trees (0.8754), wasteland (0.8742) and non-irrigated arable land (0.8637).
The maps showing the mean values per municipality for each land use, represented in price intervals, confirmed that a high spatial correlation existed for land values, except for irrigated meadows and irrigated lands with fruit trees. For all land uses, the highest prices were obtained in the province of Huesca for non-irrigated land, and in the Ebro Valley and to the east of Huesca for irrigated lands. Conversely, the province of Teruel obtained the lowest prices. Table 2 includes the OLS, SLM and SEM models for all the land uses in the studied SACA, where wasteland use is considered the control. Table 3 shows the OLS models that corresponded to each land use by grouping non-irrigated and irrigated land in those uses where both possibilities were given.
Regression models
Tables 2 and 3 show that the LM-lag and LM-error statistics were significant, so the robust versions of the statistics were taken into account. Both the robust and non-robust versions of the tests were significant, except for the robust LM-lag for pinewoods. Therefore, both spatial models were obtained for all land uses, including fruit trees and irrigated meadows. Nevertheless, following Anselin & Rey (1992), the results for LM-lag and LM-error shown in Tables 2 and 3 could indicate that the SEM was the most appropriate model to describe the land value of pinewoods, as well as of meadows, irrigated lands with fruit trees, irrigated riverside trees and wastelands, because the LM-error values were higher than the LM-lag values. Conversely, the SLM would be more appropriate for arable land, almond trees, olive groves, vineyards and orchards, and also to describe the set of all land uses. The highest CI scores were 31.62 for lands with almond trees, followed by 31.48 for wastelands. The CI scores were always below 30 for all other land uses. As all the VIFs were below 3, all these diagnostics indicated that no multicollinearity existed in these models.
The normality of the residuals was not met, as the J-B test results revealed, so the null hypothesis of a normal error was rejected. The exceptions were wastelands and meadows, for which the J-B test was not significant, indicating normality of the residuals.
Tables 4 and 5 respectively show the SLM and SEM models that correspond to each land use. In order to select the best model, and in accordance with R², the log likelihood, the AIC and the SC, the spatial models were always superior to those obtained by traditional OLS. R² was always higher in the spatial models than in OLS, especially for riverside trees (0.90 vs. 0.36), wasteland (0.86 vs. 0.38) and pinewoods (0.76 vs. 0.31). The same was true of the log likelihood, which increased in the spatial models, especially in the SEM models for the set of land uses (from -1252.11 to -204.18), riverside trees (from -3.55 to 598.16) and wasteland (from -144.55 to 327.88).
AIC and SC lowered in all the spatial models. AIC went from 2538.21 to 440.36 for the set of land uses, from 21.10 to -1188.33 for riverside trees and from 303.09 to -647.77 for wasteland. Respectively for the same uses, SC went from 2653.85 to 549.20, from 53.16 to -1175.32 and from 335.16 to -646.94.
The fact that all the spatial autoregressive terms (ρ for SLM and λ for SEM) were significant corroborated the spatial dependence of land values. As Huang et al. (2006) indicated, the spatial autoregressive estimate ρ, which ranged between 0.2567 for the model for the value of the set of land uses and 0.8962 for the value of riverside trees, indicates that a 1% increase in the average land prices in nearby municipalities would increase the land prices in the observed municipality by 0.2567% and 0.8962%, respectively. The high positive values indicated that land values strongly depend on those of neighboring lands. The ρ values were higher in the models for individual uses than in the model for the set of lands, so the spatial dependence was more marked on the land values for uses; the same occurred with λ in relation to the correlation of the residuals, which was higher in the models for individual uses. This gave way to most of the coefficients being higher in OLS than in the spatial models, because the spatial terms collected part of the variation in land values. The exception was lands with pinewoods which, as noted, obtained similar values in both spatial models. Therefore, it was corroborated that the two spatial models were suitable for modeling land prices.
Land values rose with plot size and lowered with the number of plots in the municipality. Indeed, for the land set, a 1 ha increase in plot size increased the land value by 6.49-7.88% (β = 0.0649 in SLM and β = 0.0788 in SEM). For land used for fruit trees, a 1 ha increase in plot size in the OLS model increased the land price by 14.38% (β = 0.1438), while for riverside trees, according to the SLM, a 1 ha increase in plot size increased the land price by only 1.09% (β = 0.0109). In relation to the municipality's UAA, the irrigated land area only intervened in the model for vineyards and took a negative sign, while this characteristic did not appear in the models for the other land uses. Moreover, the UAA in its logarithmic form only appeared in the model for fruit trees and took a negative sign.
The municipality's income explained the mean price in the model for the land set and in the models for all land uses except vineyards, always with a positive sign. This was expected because it is indicative of a municipality's wealth, which tends to come with a higher land price. An increase in income by one interval gives way to a general increase in land value of 4.77-5.63% (β = 0.0477 in SLM and β = 0.0563 in SEM). For land uses, this increase varied from 0.82% (β = 0.0082) for riverside trees according to the SLM to 11.87% (β = 0.1187) for meadows according to OLS.
Another indication of a municipality's wealth is its cadastral value, which increases the land value for all land uses, except for meadows, fruit trees, orchards, pinewoods and riverside trees. A 1% increase in the cadastral value increased the land price by between 0.0251% (β = 0.0251) according to the SLM and 0.0761% (β = 0.0761) according to OLS, in both cases for arable land.
A bigger municipality population increased the land prices according to the model for the set of land uses and, per use, for arable land, irrigated fruit trees and orchards. Population density also increased the vineyard land value. The population's mean age also intervened in the model for the vineyard land value, and negatively so; i.e., the municipalities with an older mean age obtained a lower vineyard land price. The death rate also had a negative sign in the model for the set of lands in the SACA.
A higher altitude lowered the land price for almond trees, irrigated fruit trees, orchards, wasteland and riverside trees, but the opposite occurred for lands with pinewoods and vineyards. Finally, the location of a municipality in a nature reserve increased the land value in general, and for these uses in particular: almond trees, meadows, orchards, pinewoods, wasteland and riverside trees. According to the models for the set of lands, the coefficient of this dummy ranged between 0.1046 and 0.1213; hence, the difference between the price of land located in a nature reserve and that outside a nature reserve ranged from 0.1102-fold (exp(0.1046) - 1) or 11.02% to 0.1289-fold (exp(0.1213) - 1) or 12.89%. For individual uses, land values rose from 0.0344-fold (exp(0.0339) - 1) or 3.44% for lands with riverside trees in the SLM to 0.3531-fold (exp(0.3024) - 1) or 35.31% for meadows in OLS for those municipalities located in a nature reserve.
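The dummy-premium interpretation used here can be checked with a one-liner; the values are taken from the figures above:

```python
import math

def dummy_premium(beta):
    """Premium implied by a dummy coefficient in a log-linear model: exp(beta) - 1."""
    return math.exp(beta) - 1.0

print(dummy_premium(0.1046))  # ~0.1102 -> 11.02% (nature reserve, lower bound)
print(dummy_premium(0.1213))  # ~0.1289 -> 12.89% (nature reserve, upper bound)
```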
The spatial correlation of land prices in Spain was already shown by Segura & Marqués (2018). The spatial models SLM and SEM proved better than the OLS models for all the possible land uses, which also happened in the consulted studies. This indicates the need to develop spatial models to model land prices by implementing GIS. The LM-lag and LM-error statistics pointed out that the SEM was slightly better than the SLM for several uses; that is, the spatial dependence was stronger on the errors than on the land prices. This could be due to some variables not being included in the models, such as temperature, soil quality and precipitation. However, these data were not available for municipalities. This was corroborated by the significance of the spatial error terms, which suggests that other explanatory variables may have been omitted from the models.
The R² values obtained in the models developed herein were generally similar to those obtained by Huang et al. (2006), and were even higher than those reported in most of the consulted works: 0.60 in Bastian et al. (2002); 0.63 in Patton & McErlean (2003); 0.49 in Maddison (2009); 0.52 in Zygmunt & Gluszak (2015); 0.69 in Uberti et al. (2018).
Similarly to other works (Bastian et al., 2002; Patton & McErlean, 2003; Mallios et al., 2009; Demetriou, 2016; Guadalajara & López, 2018), it was not possible to eliminate the heteroscedasticity of the residuals in most of the models obtained for the land value in the SACA, as deduced from the B-P test results. Heteroscedasticity was eliminated only in olive groves and meadows, and lowered in all crops, except for vineyards and riverside trees, when spatial models were utilized. Apart from employing spatial models, a widely used resource to reduce heteroscedasticity is transforming variables into logarithms, which was done, but was not entirely successful. The inclusion of the municipality's precipitation in the models could have lowered heteroscedasticity. Nonetheless, it is noteworthy that other consulted works (Huang et al., 2006; Seo, 2008; Maddison, 2009; Zygmunt & Gluszak, 2015; Uberti et al., 2018) did not indicate the result of either this test or the J-B test, which apparently suggests a problem in these models that needs to be solved. A literature review indicates that a joint remedy is lacking for these conditions when the nature of heteroscedasticity is unknown.
The multicollinearity condition number in the obtained models was lower than that indicated in other works, e.g., 34.98 in Mallios et al. (2009) and 48.12 in another consulted study, which reinforces the importance and validity of the models developed in the present work.
The signs of the explanatory variables met a priori expectations. Irrigation was always positive, exactly as indicated by Bastian et al. (2002), Mallios et al. (2009) and Demetriou (2016). The irrigatable area in relation to the municipality's UAA, however, took a negative sign. This could be because a larger irrigatable surface area in relation to the total surface area could increase the supply of irrigated land and could lower its price.
The negative sign of the population's mean age in the vineyard model could be because land was demanded more in the municipalities with a younger population, which could have something to do with the younger population's interest in producing wines.
Unlike other works (Huang et al., 2006; Maddison, 2009; Mallios et al., 2009; Zygmunt & Gluszak, 2015; Demetriou, 2016), land unit values increased with plot size. This ratio between unit values and plot size might depend on the characteristics of the crops in each country. In Spain, large surface areas mean mechanisation and lower crop costs. These lower land prices for smaller plot sizes are related to the findings of a 2018 study on the determining factors related to farm management, e.g. agricultural abandonment patterns in Europe. That study indicated some areas in Spain, like Galicia and south Aragón, where the smaller the plot size, the more likely abandonment is.
A higher altitude increased the land price in the work by Mallios et al. (2009) but, in our case, it lowered the price for lands with almond trees, irrigated fruit trees, orchards and riverside trees, most certainly because these land uses are more sensitive to damage caused by low temperatures, which occur more frequently at higher altitudes. As maintained by Huang et al. (2006), land values increase with population density and personal per capita income. A denser population places more pressure on land use and leads to higher prices.
The positive effect of a municipality's location in a nature reserve on land prices was also shown, which coincides with other works (Bastian et al., 2002) and also with the Spanish regulations (BOE, 2011).
The results of this study might be interesting for rural land management, for the mass appraisal of market values, for territorial taxation, and for actions to avoid land being abandoned. One study limitation is the availability of municipal data instead of data about plots and their characteristics, like plot shape (Zygmunt & Gluszak, 2015), plot slope (Demetriou, 2016), soil type, distance from the population center, etc., which could improve the models. Future work could also analyze the effect of proximity to communication routes (main roads, high-speed trains, etc.) and how they improve land prices, and contemplate the protected designations of origin of some crops like wine.
"Economics"
] |
Membrane-bound estrogen receptor-α expression and epidermal growth factor receptor mutation are associated with a poor prognosis in lung adenocarcinoma patients
Background The purpose of this study is to clarify the correlations between the expression of membrane-bound estrogen receptor-α (mERα) and epidermal growth factor receptor (EGFR) mutation and clinicopathological factors, especially in relation to the prognosis, in patients with lung adenocarcinoma. Methods We conducted a retrospective review of the data of 51 lung adenocarcinoma patients with tumors measuring less than 3 cm in diameter. Immunohistochemical staining for mERα expression and detection of the EGFR mutation status were performed. Results Among the 51 patients, the tumors in 15 showed both mERα expression and EGFR mutation ("double positive"). Significant associations between "double positive" status and vascular invasion, vascular endothelial growth factor expression, and Ki-67 expression were observed. A multivariate analysis revealed that only "double positive" status was an independent risk factor influencing the recurrence-free survival. Conclusions The presence of mERα expression together with EGFR mutation was found to be an independent prognostic factor for survival in patients with lung adenocarcinoma, suggesting cross-talk between mERα and EGFR.
Background
Lung cancer is a leading cause of cancer-related death worldwide. The recent increase in interest in lung cancer appears to be attributable to the marked increase in the global prevalence of adenocarcinoma. In particular, adenocarcinoma appears to have a predilection for women, and its association with a smoking habit may be weaker than that of the other histological subtypes of lung cancer [1,2]. These features of lung adenocarcinoma suggest that some factors peculiar to sex may be involved in the clinicopathology of this cancer, and that female-associated pathways may be preferentially involved in the development of this form of lung cancer.
Estrogen exerts most of its effects in breast cancer via its receptors expressed in the tumor tissue; estrogen receptor (ER) α and ß. In breast cancer, the expression of ERα is a useful marker that provides information on the patient prognosis and the potential efficacy of hormone therapy [3]. Since ER α and ß are also well known to be expressed in both normal lung epithelial cells and lung cancers, a possible role of estrogen has been proposed in lung carcinogenesis [4]. Known for decades, ERα is a nuclear steroid receptor that is expressed in breast, ovarian, and endometrial tissue, but antibodies used to detect ERα in breast cancer show little or no reactivity in lung cancer tissues. On the other hand, non-nuclear (membrane-bound) ERα was described in 2002. Using this antibody that recognizes the ERα carboxy-terminus, staining was found in the cytoplasm and cell membrane [4]. This membrane-bound ERα comprises variant isoforms that lack the amino-terminus, because they cannot be detected by antibodies that recognize the ERα amino-terminus. In this study, we used this antibody for membrane-bound ERα (mERα).
The other well known female-related factor is mutation of the epidermal growth factor receptor (EGFR). EGFR tyrosine kinase inhibitors (EGFR-TKIs) produce a dramatic clinical response in a significant proportion of patients with lung cancer [5]. In 2004, response to EGFR-TKIs was ascribed to the presence of some type of gene mutations in the tyrosine kinase domain of EGFR [6,7]. The EGFR mutations in lung cancer associated with sensitivity to EGFR-TKIs occur more frequently in women, nonsmokers, Asians, and with adenocarcinomas [8,9].
Estrogen directly stimulates the transcription of estrogen-responsive genes of lung cells and transactivates the EGFR pathway. Stimulation of ER has been reported to increase the activity of the EGFR signal, and EGFR signal increases the activity of the ER [10]. Strong nuclear expression of ERß has been shown to be correlated with the presence of EGFR mutation, and the favorable prognostic significance of ERß expression has been shown to be influenced by the presence of EGFR mutation in lung adenocarcinoma [11]. However, to date, no report has described the correlation between mERα expression and EGFR mutation.
Based on these data from previous studies, we investigated the association between the expression of mERα and EGFR mutation in lung adenocarcinoma. In addition, we restricted the tumor size of the adenocarcinomas to tumors measuring less than 3 cm in diameter, because EGFR mutation is considered an early event in the pathogenesis of lung adenocarcinoma [12]. The purpose of this study was to clarify the correlations between the expression of mERα and EGFR mutation and clinicopathological factors, in relation to the prognosis of the patients. In addition, using immunohistochemistry to determine the expression of vascular endothelial growth factor (VEGF) and Ki-67, we studied the tumor proliferative activity and angiogenesis in adenocarcinomas showing mERα expression and EGFR mutation.
Study population
Fifty-one patients with lung adenocarcinoma measuring less than 3 cm in diameter, who underwent surgical resection (lobectomy or segmentectomy) with systematic lymph node dissection at the Kawasaki Medical School Hospital between 2007 and 2009, were enrolled in this study. None of the patients had received either radiotherapy or chemotherapy prior to surgery. The histological diagnosis of the tumors was based on the criteria of the World Health Organization, and the tumor, node, metastasis (TNM) stage was determined according to the 2009 criteria. Written informed consent was obtained from each patient for the study of the excised tissue samples from the surgical specimens. This study was conducted with the approval of the institutional Ethics Committee of Kawasaki Medical School. Follow-up information up to recurrence, or March 2012, was obtained from medical records.
All patients underwent fluorodeoxyglucose positron emission tomography (FDG-PET) before the surgery. The PET and computed tomography (CT) examinations were performed with a dedicated PET/CT scanner (Discovery ST Elite; GE Healthcare, Japan), at 115 minutes after intravenous injection of 150 to 220 MBq of 18F-FDG (FDGscan, Universal Giken, Nihon Mediphysics, Tokyo, Japan). The regions of interest (ROI) were placed three-dimensionally over the lung cancer nodules. Semiquantitative analysis of the images was performed by measuring the maximal standardized uptake value (SUVmax) of the lesions.
EGFR mutation analysis
Analysis to detect EGFR mutations was performed in the resected, paraffin-embedded lung cancer tissues by a peptide nucleic acid-locked nucleic acid (PNA-LNA) PCR clamp method [13]. For this study, the PNA-LNA PCR clamp assay was performed at Mitsubishi Kagaku Bio-clinical Laboratories, Inc, Tokyo, Japan.
Immunohistochemical staining
Immunohistochemical analyses were performed on the resected, paraffin-embedded lung cancer tissues. After microtome sectioning (4 μm), the slides were processed for staining using an automated immunostainer (Nexes; Ventana, Tucson, AZ, USA). The streptavidin-biotin-peroxidase detection technique using diaminobenzidine as the chromogen was applied. The primary antibodies were used according to the manufacturers' instructions (ERα: clone HC-20, Santa Cruz Biotechnology, Santa Cruz, CA, 1/500 dilution; VEGF: clone A-20, Santa Cruz Biotechnology, Santa Cruz, CA, 1/300 dilution; Ki-67: clone MIB-1, Dako Cytomation, Kyoto, Japan, 1/100 dilution). The slides were examined by two investigators who had no knowledge of the corresponding clinicopathological data. The expression of each marker protein was examined and evaluated according to the original protocol reported previously. ERα expression was categorized into eight grades according to previously described immunohistological scores [14]. Initially, six degrees of proportional scores for positive staining were assigned according to the proportion of positive tumor cells (0, none; 1, < 1/100; 2, 1/100 to 1/10; 3, 1/10 to 1/3; 4, 1/3 to 2/3; 5, > 2/3). Next, an intensity score was assigned, which represented the average intensity in the tumor cells showing positive staining (0, none; 1, weak; 2, intermediate; 3, strong). The proportional and intensity scores were then added to obtain a total score, ranging from 0 to 8. For the statistical analysis, ERα expression was judged as positive when the score was ≥ 4. VEGF expression was judged as positive when more than 20% of the cancer cell cytoplasm showed positive staining [15]. The labeling index of Ki-67 was measured by determining the percentage of cells with positively stained nuclei. Ki-67 expression was judged as positive when more than 10% of the cancer cell nuclei showed positive staining [16].
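The scoring rule can be written out directly; the following sketch is our own illustration of the protocol described above, not code from the study:

```python
def er_alpha_score(proportion, intensity):
    """Total immunohistochemical score for mERα (0-8), per the scheme above.

    proportion : fraction of positive tumor cells (0.0-1.0)
    intensity  : average staining intensity (0 none, 1 weak, 2 intermediate, 3 strong)
    """
    if proportion == 0:
        p = 0
    elif proportion < 1 / 100:
        p = 1
    elif proportion < 1 / 10:
        p = 2
    elif proportion < 1 / 3:
        p = 3
    elif proportion < 2 / 3:
        p = 4
    else:
        p = 5
    return p + intensity

# A tumor is judged mERα-positive when the total score is >= 4.
print(er_alpha_score(0.40, 2) >= 4)   # True: proportional score 4 + intensity 2 = 6
```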
Statistical analysis
Statistical analysis was performed to examine significant differences among the groups and possible correlations between the presence/absence of mERα expression/EGFR mutation and the clinicopathological features using Fisher's exact test or the chi-square (χ²) test, as appropriate. An unpaired t-test was used for comparison of the continuous data. Multivariate analyses were performed using logistic regression analysis. To explore the association between recurrence-free survival (RFS) and the presence of mERα expression/EGFR mutation, a Kaplan-Meier survival analysis was performed by stratifying significant predictor variables identified in the Cox proportional hazards model. All the statistical analyses were conducted using SPSS software (Version 17.0; SPSS Inc., Chicago, IL, USA). All statistical tests were two-sided, and probability values < 0.05 were regarded as statistically significant.
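To illustrate this survival workflow (Kaplan-Meier curves, a log-rank comparison and a Cox model), here is a sketch using the `lifelines` Python package rather than SPSS; the data frame and its figures are invented toy data, not the study's:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical RFS data: months of follow-up, recurrence indicator and
# double-positive status (mERα expression plus EGFR mutation).
df = pd.DataFrame({
    "months":          [34, 12, 48, 8, 40, 22, 54, 15],
    "recurrence":      [0, 1, 0, 1, 0, 1, 0, 1],
    "double_positive": [0, 1, 0, 1, 0, 0, 0, 1],
})

dp, rest = df[df.double_positive == 1], df[df.double_positive == 0]
print(logrank_test(dp.months, rest.months, dp.recurrence, rest.recurrence).p_value)

kmf = KaplanMeierFitter()
kmf.fit(dp.months, dp.recurrence, label="double positive")   # RFS curve per group

cox = CoxPHFitter().fit(df, duration_col="months", event_col="recurrence")
cox.print_summary()                                          # hazard ratios, p-values
```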
Clinical characteristics
The characteristics of the patients are summarized in Table 1. The patients ranged in age from 46 to 83 years (mean, 66.8). There were 23 men and 28 women. The median follow-up period was 34 months (range 3 to 54 months).
Relationship between mERα expression and the clinicopathological characteristics
Of the 51 patients, 24 exhibited a marked increase in tumor-cell immunoreactivity for mERα, whereas the remaining 27 showed no increase in mERα expression. Significant associations of the mERα expression level in the tumor cells were observed with the tumor differentiation grade (P = 0.019), the presence or absence of vascular invasion (P = 0.001), and the SUVmax on FDG-PET (P = 0.005), but not with age (P = 0.717), sex (P = 0.921), smoking status (P = 0.615) or tumor size (P = 0.051) (Table 2). The RFS tended to be worse in patients showing an elevated mERα expression level in the tumor cells than in patients not showing tumor-cell mERα expression; however, the association was not statistically significant (P = 0.076, log-rank test; Figure 1A).
Relationship between the mutation status of EGFR and the clinicopathological characteristics
Of the 51 patients, 26 had EGFR mutation, whereas the remaining 25 had wild-type EGFR. Significant associations of the EGFR mutation status were observed with sex (P = 0.036), tumor size (P = 0.017) and presence or absence of vascular invasion (P = 0.006), but not with age (P = 0.319), smoking status (P = 0.124), SUV max on FDG-PET (P = 0.711) or tumor differentiation grade (P = 0.691) ( Table 2).
Associations of mERα expression and EGFR mutation with VEGF and Ki-67 expression
mERα expression was significantly correlated with VEGF expression (P < 0.001) and Ki-67 expression (P = 0.001). However, the presence of EGFR mutation was not correlated with either VEGF expression or Ki-67 expression ( Table 3).
Relationships between mERα expression, EGFR mutation and clinicopathological characteristics
We categorized the patients according to the combined status of mERα expression and EGFR mutation; significant associations between double-positive status and vascular invasion, VEGF expression and Ki-67 expression were observed (Table 4). The RFS of the patients in the double-positive group was significantly worse than that of the other patients (P = 0.003, log-rank test; Figure 1B). A univariate analysis revealed that tumor differentiation grade (P = 0.006), pathological stage (P = 0.005) and double-positive status (P = 0.003) were risk factors influencing the RFS. However, a multivariate analysis identified only double-positive status as an independent risk factor influencing the RFS (P = 0.031) (Table 5).
Discussion
There have been several reports of cross-talk between ER (ERα or ERß) and EGFR status (protein expression or gene mutation). This is the first report focusing on mERα and EGFR mutation. In the present study, we found that patients with lung adenocarcinoma who had both mERα expression and EGFR mutation showed significantly poorer outcomes.
One of the factors peculiar to sex reported to be involved in lung cancer development is estrogen. For example, treatment with estrogen plus progestin in postmenopausal women did not increase the incidence of lung cancer, but increased the number of deaths from lung cancer, in particular deaths from non-small-cell lung cancer (NSCLC) [17]. ER enhances transcription in response to estrogens by binding to estrogen response elements and utilizing activator protein sites [18,19]. ERα exerts an augmenting effect on cell proliferation. On the other hand, ERß exerts a suppressive effect on cell proliferation via inhibition of ERα transcriptional activity [20,21]. The differential roles of ERα and ß in lung carcinogenesis and their biological properties are still controversial. In our study, mERα expression was significantly correlated with VEGF and Ki-67 expression. Therefore, we suggest that mERα may exert an augmenting effect on angiogenesis and cell proliferation.
Some recent studies have suggested the existence of bidirectional signaling between EGFR and ER [22,23]. In addition, two clinical studies have suggested the existence of cross-talk between ER and EGFR. First, Kawai et al. demonstrated that the combined overexpression of mERα and EGFR protein in patients with NSCLC was predictive of poorer outcomes [24]. They showed that while overexpression of either mERα or EGFR was also predictive of poor outcomes, combined overexpression of mERα and EGFR was an independent prognostic factor, suggesting the existence of cross-talk between mERα and EGFR. Overexpression of EGFR has been observed, and its prognostic significance confirmed, in various cancers. In NSCLC, Salvaggi et al. showed that overexpression of EGFR was correlated with a poor prognosis [25]. However, the factor most strongly associated with benefit from EGFR-TKI therapy has been identified as EGFR mutation, not EGFR protein expression [9]. In the present study, given its relevance to the treatment of patients with NSCLC, we studied EGFR mutation rather than EGFR protein expression. Second, Nose et al. demonstrated that the favorable prognostic significance of overexpression of ERß was influenced by the presence of EGFR mutation in lung adenocarcinoma [11]. They showed that the status of EGFR mutation did not affect the RFS, but that ERß expression was associated with a favorable prognosis. To date, several studies have identified ER as a prognostic factor in lung cancer. In general, ERα expression seems to be associated with a poor prognosis, and ERß expression with a favorable prognosis [14,24,[26][27][28].
An important finding of the present study was that mERα expression and the categorized status of ERα expression/EGFR mutation was significantly correlated with the expression of Ki-67 and VEGF. Immunostaining with the Ki-67 antibody is a widely accepted method for evaluating the proliferative activity in a variety of human tumors. Tumors showing a high expression index of Ki-67 are frequently more aggressive than tumors showing a low Ki-67 expression index [16]. On the other hand, the VEGF family of proteins modulates angiogenesis, which is essential for tumor growth and metastasis. Expression of VEGF has been shown to be associated with tumor angiogenesis, metastasis, and prognosis in several cancers, including NSCLC [15]. To the best of our knowledge, no reports to date have shown a correlation between the expression of ER and VEGF or Ki-67. Our results using tissues from patients with lung adenocarcinoma tumors measuring less than 3 cm in diameter indicate that double marker positivity was significantly correlated with the expression of Ki-67 and VEGF.
Conclusions
This study demonstrated that the presence of mERα expression together with EGFR mutation is an independent prognostic factor in patients with lung adenocarcinoma, suggesting the existence of cross-talk between mERα expression and EGFR mutation.
"Medicine",
"Biology"
] |
Metaphors in the Headings of the Kaltim Post News
This study analyzes the metaphors used in the political news published in the Kaltim Post daily newspaper. The objectives of the analysis are to identify the kinds of metaphors and their intended meanings in the context of the news. Content analysis is used as the design of this study. Data were collected from the news published in October and November 2008. The criteria for metaphors proposed by Wahab (1995) were used for the analysis. This study identified two of the three kinds of metaphors proposed by Wahab (1995). The first kind is the nominative metaphor, used as the subject of a sentence. The second kind is the predicative metaphor, used as the predicate of a sentence. In the context of presenting the leads of published news, the use of nominative and predicative metaphors suits the need for precise diction in the limited space of publication.
INTRODUCTION
The Kaltim Post is an Indonesian newspaper that reports news locally and internationally. January 5, 1988 marked an important date in the history of media publishing in East Kalimantan, because the Kaltim Post was the first daily newspaper there. It is one of the media under the Java Post group and reaches 64,800 copies a day. Its target is the middle-to-upper market. The researcher decided to choose the Kaltim Post as the data source of her research because the Kaltim Post is one of the most famous newspapers in East Kalimantan, presenting complete and up-to-date news about events in East Kalimantan.
The researcher is encouraged by the uniqueness of the process of interpreting metaphor in written text; in this study, the researcher takes the leads of local political news. The lead is usually the toughest part of writing a story. It is the first sentence, and its function is to affect the readers' desire to read the whole article. Moreover, the reason why the researcher chose local political news is that conceptual metaphor is mostly found in political news. Political news is the report of recent events in the activities of government and political affairs. This study is focused on two questions: which metaphors are presented in the leads of local political news articles in the Kaltim Post newspaper, and how these metaphors are to be interpreted.
In a newspaper, the most important structural element of a story is the lead. Charnley stated that an effective lead is a brief, sharp statement of the story's essential message; it should avoid becoming tired, contain a conundrum, inspire, and have artistic value. Artistry generally uses metaphors in expressing its mission, such as "love is a journey".
In fact, the understanding of word meaning cannot be separated from the intention, association, or conceptualization of its user. In addition, metaphor can be found not only in literary works, as in the example above, but also in everyday communication, advertisements, and politics. Given the presence of metaphor in both literary works and everyday communication, Sayre and Muller, in Aminuddin, state that all language is metaphor.
Lakoff mentions two kinds of similarities in metaphor: similarity based on concepts and similarity based on images. In addition, according to I. A. Richards in The Philosophy of Rhetoric, metaphors consist of two parts: the tenor and the vehicle. The tenor is the subject to which the attributes are ascribed. The vehicle is the subject from which the attributes are borrowed. For example, in "all the world's a stage", the "world" is compared to a stage, the aim being to describe the world by taking well-known attributes from the stage. In this case, the world is the tenor and the stage is the vehicle. Consequently, metaphor is strongly influenced by the cultural background of a community, such as its social values, habits, ritual values and symbolism.
Basically, metaphors convey a concept of thinking and give emphasis to ideas through connotative words. Sugiarto states that linguistics knows the term "dead metaphor": a metaphor of this type is used so often that it becomes a literal sign and can no longer express fresh meaning. However, this type can still be read as a way to confirm thought, to show what and how situations and objects are transferred, and to show how to look into phenomena through the submitted vehicle. Furthermore, Lakoff and Johnson argue that metaphor is a basic substance in the categorization of the world (a worldview) and in human thinking processes.
Metaphor is the role of imagination in conceptualization and reasoning. According to the linguistic view, metaphor informs a word, phrase, or sentence by moving it into another context, which causes a broader meaning to arise. Moreover, in the cognitive-linguistic view, all language is metaphor. In addition, Lakoff and Johnson explain that the core of metaphor is to understand a statement in terms of another.
To make readers interested in reading a newspaper, it should have a good news style. News style is the particular prose style used for news reporting, as well as for news items that air on radio and television. News style encompasses not only vocabulary and sentence structure, but also the way in which stories present information in terms of relative importance, tone, and intended audience.
In other words, news writing attempts to answer all the basic questions about any particular event in the first two or three paragraphs: Who? What? When? Where? Why? and occasionally How? (i.e., the "5 Ws"). This form of structure is sometimes called the "inverted pyramid", referring to the decreasing importance of information as the story progresses. News stories also contain at least one of the following important characteristics: proximity, prominence, timeliness, human interest, oddity, or consequence. Furthermore, there are some elements of good writing: (1) Precision. Use the right word. Be specific. Avoid sexism in the writing. Use generic terms: firefighters instead of firemen, letter carriers instead of mailmen; (2) Clarity. Use simple sentences; (3) Pacing.
The movement of sentences creates a tone and mood for the story. Long sentences convey a relaxed, slow mood. Short declarative sentences convey action, tension and movement. Use a variety of sentence lengths, with shorter sentences for the more active, tense parts of the story; (4) Transition. Progress logically from point to point. Put everything in order; (5) Sensory appeal. Appeal to one or more of our five senses: sight, hearing, smell, taste and touch; and (6) Using analogies. Describe something that is familiar to readers.
METHODS
Since the researcher analyzed the metaphors in the leads of political news articles using her own predetermined criteria, a suitable design for this study is qualitative content analysis. Content analysis involves a series of activities in analyzing documents or other files and then describing them based on the theories related to the research (Bogdan and Taylor, 1990:3).
In this study, the data take the form of explanations describing metaphors used as the subject, object or predicate of a sentence to serve the conceptualization of language. The explanations also express the intended meanings of the metaphors found. Therefore, this study focuses on how the metaphors are presented and how they are to be interpreted. The variable of this study is the metaphors presented in the leads of local political news articles in the Kaltim Post newspaper.
The subject of this study was 10 editions of the Kaltim Post newspaper containing local political news articles, from October 13, 2008 until October 22, 2008. Meanwhile, the instrument of this study was the researcher herself; to help in identifying the data, the researcher used a checklist based on the concepts of Wahab and Newmark.
The main data of this study were collected from the leads of local political news articles that used metaphor in the Kaltim Post newspaper. The researcher picked the Kaltim Post newspaper because local political news could be found there easily and completely, and analyzed the leads because journalists often use metaphorical words in them. In this study, the total number of Kaltim Post editions was ten, because the researcher assumed that this amount was enough to see the presence of metaphors.
In qualitative research, the source of data is not required to be representative in terms of the number of informants. The data were taken from the political news articles in the Kaltim Post newspaper. The political news in the Kaltim Post consists of international, national and local news. In this study, the researcher selected the local political news articles that used metaphor.
FINDINGS
Nominative Metaphors
The nominative metaphors refer to metaphors used as a noun of a sentence. In English, a noun functions as a subject or an object of a sentence. Therefore, nominative metaphors can be classified into two kinds: (1) subjective metaphors, which serve as the subject of a sentence, and (2) complimentary metaphors, which serve as the object or complement of a sentence, whether in word or phrase form. The following examples of subjective metaphors are taken from news articles and shown in excerpts (1) and (2).
(1) Bengalon berbenah diri (S-1) 'Bengalon keeps straightening itself up'
Excerpt (1) above is the title of an article quoted from the Kaltim Post, October 20, 2008. Briefly, the news tells about the visit of Awang Faroek, the regent of Sangatta, to lay the cornerstone of the Al Bai'yah Mosque in South Sepaso village, Bengalon. This development is evidence that Bengalon keeps straightening itself up to move forward.
Generally, excerpt (1) above describes how Bengalon (the name of a town in Sangatta) develops its area in the development sector. "Bengalon" is a noun and functions as the subject of the sentence, so the expression is called a subjective metaphor. The word BENGALON is the topic, and STRAIGHTEN UP is the image; according to Hornby (1995), to "straighten up" means to make something tidy or upright.
In this example, the heading basically says that BENGALON makes itself upright as if it were a man. From this view, the researcher assumed that BENGALON is conceptualized as a man doing some repairs, and that STRAIGHTEN UP in this context expresses that the subject makes progress step by step.
(2) Tanggapan Masyarakat tak gugurkan caleg (S-2) 'People's judgment does not bring the legislative candidates down'
Excerpt (2) above is the title of an article quoted from the Kaltim Post, October 13, 2008. Briefly, the news tells about the judgment of the people of Balikpapan toward two legislative candidates, which cannot disqualify the candidates from participating in the 2009 election. This was revealed by Abdul Rais, the chairman of POKJA KPUD Balikpapan. Abdul Rais said that the conditions that can disqualify a candidate are proof of using a falsified certificate and having a criminal background.
In (2), the phrase PEOPLE'S JUDGMENT is a noun phrase, specifically an abstract noun, because it cannot be perceived by the five senses (Hotben, 2003:46). Moreover, PEOPLE'S JUDGMENT is the subject of the sentence, so it is classified as a subjective metaphor. Here, the legislative candidate is the topic and people's judgment is the image, in which the judgment is symbolized as a tool that can hurt someone or make someone fall down.
In order to interpret the intended meaning of the heading properly, the researcher analyzed the context. By analyzing the context, the researcher drew the conclusion that the journalist of the news article wanted to confirm to the readers that JUDGMENT is not one of the conditions that can bring someone down as a legislative candidate in the election. Moreover, the journalist actually wanted to lead people to do something else.
(3) Dinas Kehutanan jangan plin-plan (S-3) 'Forest Service, don't sway with the wind'
Excerpt (3) above shows that the phrase FOREST SERVICE is a metaphorical expression. Because its position precedes the verb, it is called a subjective metaphor. The phrase "forest service" is categorized as a metaphorical expression because the predicate "swaying with the wind" describes someone whose opinion can be changed easily by other people. In addition, Em Jul Fajri states that "swaying with the wind" is the characteristic of somebody who can change his mind easily. From this definition, the researcher assumed that FOREST SERVICE is the topic and SWAYING WITH THE WIND is the image, in which the topic is symbolized as a man.
To clarify the intended meaning, the context is given. Briefly, the context explains the position of the Forest Service concerning the local government's development of a bus terminal on land in south Sangatta. The old leader had agreed to this development but, because of the change of leadership, the local government faces an obstacle: the new leadership states that the land is included in the TNK (Kutai National Park) area.
Referring to the definition and the context, it can be concluded that the old and the new leaders of the Forest Service have different views about the land's status.
(4) Putusan Hakim perlu dikawal terus (S-4) 'The judge's decree needs to be guarded from now on'
The phrase JUDGE'S DECREE is a noun phrase that functions as the subject of the sentence; hence, it is named a subjective metaphor. Based on Newmark's concept, the "judge's decree" is categorized as the topic of the sentence and "guarded" as the image, in which the phrase "judge's decree" symbolizes something important that should be kept safe. This is in line with Hornby's definition of "guard", namely the action or duty of watching out for attack, danger or surprise, or of protecting something or somebody.
In addition, the researcher concludes that this text carries the intended meaning that the judge's decree should be kept safe not only by the judge or the institution but also by the people. Moreover, the reader can obtain this intended meaning directly, without reading the whole text.
(5) Samarinda siap pilgub II (S-5) 'Samarinda is ready for the second governor election'
Excerpt (5) does not clearly indicate readiness in what respect. Thus, context is needed to know the actual meaning of the text. The context tells about the condition of Samarinda in facing the second governor election with regard to safety.
Referring to excerpt (5) above, it can be seen that the word Samarinda (the name of the capital city of East Kalimantan) is the subject of the sentence, which makes this a subjective metaphor. In this case, the word "Samarinda" is the topic and "ready" is the image. According to Hornby, the word "ready" means fully prepared for something or to do something. Considering the context and the definition, it can be concluded that the sentence describes Samarinda as fully prepared to make the second governor election a success, especially in the safety aspect.
In addition, complimentary metaphors are metaphors that function as the complement of a sentence. Examples of complimentary metaphors are given in excerpts (6) to (9) below.
(6) Pemkot dicap tebang pilih (C-1) 'The local government is labeled as doing a partial fell'
Excerpt (6) does not clearly indicate what makes the local government receive the label "partial fell". Therefore, the reader should identify the message using the context surrounding the words. In short, the context tells that members of the legislative elite consider that the Samarinda government is still "felling partially" in controlling the Idul Fitri greeting billboards put up by one of the governor candidates. This case was revealed by Sudarno, the chairman of the PAN fraction in the Samarinda parliament.
Concerning excerpt (6) above, the researcher takes the noun phrase SELECTIVE FELLING as the complement of the sentence, because it is a noun phrase and it appears after the verb. According to Em Jul Fajri in the Kamus Lengkap Bahasa Indonesia, the phrase refers to a way of cutting down trees by selecting them according to age. In this case "selective felling" is used as the image, "local government" is the topic, and the point of similarity between the two is quality.
From that view, it can be concluded that the intended meaning of SELECTIVE FELLING is that the local government discriminates in its treatment of one party compared with another. Furthermore, in the light of the context, the phrase "selective felling" means that the government discriminates in controlling the display of the governor candidates' billboards. (7) LKADAYAU tetap minta wajah lama (C-2) LKADAYAU still requests the old man Excerpt (7) above is quoted from the title of an article in the Kaltim Post, October 13, 2008. In short, the news reports that recruiting newcomers as members of the KPU of West Kutai for the next five years is not easy, because LKADAYAU still asks the old members to serve again in the next term. They state that this is to keep West Kutai safe.
Literally, excerpt (7) above shows that the phrase OLD MAN is used as an object attribute in the sentence. Therefore, it belongs to the complement of the sentence and is classified as a complementary metaphor. In that sentence, the word MAN is the topic and OLD is the image, and the point of similarity between them is duration. According to Hornby (1995), the phrase OLD MAN can mean one's father, husband or employer, etc. From these definitions it can be concluded that the intended meaning is LKADAYAU's willingness to have the old KPU members serve again in the next term.
Excerpt (8) above is accompanied by its context. The context, briefly, tells about Nusyirwan, a former governor candidate, who asks all his supporters to choose Amin-Hadi in the second round of the governor election. Besides Nusyirwan, some big parties, namely PKB, PDIP, PKS and Golkar, also support the Amin-Hadi pairing.
In the example above, the word IRRESISTIBLE functions as the complement of the sentence, so the expression is a complementary metaphor: IRRESISTIBLE is an adjective phrase that functions as an object attribute. A word is classified as complementary if its position in the sentence is that of an object (Wahab, 1995). Hornby (1995) states that irresistible means too strong to be resisted or denied. Another definition comes from Em Jul Fajri, according to whom the corresponding Indonesian word refers to a dike that holds water back.
From those definitions, the researcher assumed that the text wants to lead the readers to conceptualize the power of Amin-Hadi as water flowing hard, that is, too strong to resist on its way to winning the governor election. With regard to the intended meaning, the researcher analyzed the context: the word "irresistible" refers to support from several big parties that will make an all-out contribution to getting the Amin-Hadi pairing elected as governor and deputy governor. Literally, in excerpt (9) above the phrase "fixed price" functions as the object of the sentence. The phrase is a noun phrase; hence, the expression is a complementary metaphor. In this sentence, the word "development" is the topic and the phrase "fixed price" is the image, and the point of similarity between them is limitation. Literally, "fixed price" according to Em Jul Fajri means a price which can no longer be changed. With regard to the context, it can be concluded that the intended meaning of the sentence is that the development of all sectors in Nunukan must be carried out.
(10) Dishub jangan tutup mata, ambil langkah darurat (C-5) Dishub, don't close your eyes; take emergency steps The phrase "emergency steps" in excerpt (10) above is a noun phrase which functions as the object of the sentence. Thus, it is classified as a complementary metaphor. Literally, according to Hornby (1995), the word "emergency" means a sudden serious event or situation requiring immediate action. In addition, Em Jul Fajri states that "emergency" is an unpredictable critical situation, referring to conditions such as war, disaster or grave illness.
From these definitions it can be seen that "Dishub" is the topic and "emergency steps" is the image, which points to an action that should be taken by Dishub concerning a certain problem. Excerpt (10) does not clearly state what the problem is; thus, in interpreting this text, context is needed. In short, the context describes Dishub's lack of seriousness in handling the problem of the fitness of the Klotok Quay in Balikpapan, especially as regards passenger safety.
With regard to the definitions and the context, the researcher assumed that the intended meaning of this headline is that Dishub is being asked to take temporary emergency steps to handle the problem of the Klotok Quay in Balikpapan as it concerns passenger safety.
From the analysis of the sentences above, it can be seen that the position of the noun in the sentence determines its function: a noun that precedes the verb yields a subjective metaphor, while a noun that follows the verb yields a complementary metaphor. Moreover, complementary metaphors can be divided into two forms, single words and phrases. This positional rule is illustrated in the toy sketch below.
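The following is a toy illustration of that positional rule, with invented token indices rather than output from a real parser; it simply encodes the classification described above.

```python
# Toy sketch (invented data, not from the corpus) of the positional rule:
# a metaphorical noun (phrase) preceding the verb is a subjective metaphor,
# one following the verb is complementary, and a metaphor carried by the
# verb itself is predicative (see the next subsection).

def classify_metaphor(tokens, metaphor_index, verb_index):
    """tokens: a tokenised headline; the two indices, supplied by hand
    here rather than by a parser, locate the metaphorical expression and
    the main verb."""
    if metaphor_index == verb_index:
        return "predicative"
    return "subjective" if metaphor_index < verb_index else "complementary"

# Excerpt (4): "Putusan hakim perlu dikawal terus" (topic before the verb)
print(classify_metaphor(["putusan", "hakim", "perlu", "dikawal", "terus"], 0, 3))
# -> subjective
```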
Predicative Metaphors
A predicative metaphor is identified from the predicate of a sentence; that is, it is a symbolic meaning carried by the predicate. In English, a verb1 (base form), a verb2 (past form), a gerund or a participle can function as the predicate of a sentence, in either active or passive form. The following are examples of predicative metaphors.
(11) Hamsad Rangkuti berhasil cetak sastrawan baru di Kukar (P-1) Hamsad Rangkuti succeeds in bearing new men of letters in Kukar Excerpt (11) above is easy to understand. The sentence is simple, so readers can readily grasp the meaning of the text. For that reason, no context is given in this study.
Moreover, the verb BEARING in excerpt (11) comes from BEAR, which functions as the predicate of the sentence, making this a predicative metaphor; here the verb is used in the gerund form. According to Hornby (1995), "bear" means to carry sth so that it can be seen. Furthermore, Em Jul Fajri states that the word BEAR can refer to a woman giving birth to a baby. From these views, the researcher concluded that Hamsad Rangkuti is the topic, BEAR is the image, and the point of similarity between them is the work of producing something. In addition, the verb BEARING carries the intended meaning that the subject produces a young generation of literary writers.
(12) Dermaga Klotok belum juga tersentuh (P-2) The Klotok quay has still not been touched In the text above, the symbolic meaning is identified from the verb phrase NOT TOUCHED, which functions as the predicate in the present perfect tense. The use of the present perfect here literally shows that a piece of work has not been done at all. Furthermore, the Klotok Quay is the topic, symbolized as a human being who receives too little attention. Meanwhile, according to Hornby (1995), TOUCH means to put your hand or finger onto sb/sth.
To get the actual meaning of excerpt (12), the context is given. In short, the context tells that the Klotok quay in Balikpapan is evidence of Dishub Balikpapan's carelessness about passenger safety, because up to now the condition of the quay remains dangerous. Considering the context of the headline, the researcher assumed that no reconstruction or development of the Klotok Quay has yet been undertaken by Dishub Balikpapan.
(13) Penduduk alami kenaikan tajam (P-3) The population increased sharply The verb INCREASED in excerpt (13) is the predicate of the sentence, so the expression is a predicative metaphor in verb2 form. The verb INCREASED is the topic and SHARPLY is the image. The adverb "sharply" here is symbolized by the steep line of a graph in a statistical diagram; the point of similarity is the sharpness of that line. It can thus easily be concluded that the intended meaning of the sentence is a large rise in the population within a short time.
(14) Dinas pertanahan dirampingkan (P-4) The Land Matters Service is slenderized Excerpt (14) above is quoted from the title of an article in the Kaltim Post, October 16, 2008. The news tells about government policy on employees: the number of staff is too large, so the government adopts a policy to reduce it, in this case the staff of the Land Matters Service.
Literally, the word SLENDERIZED in excerpt (14) functions as the predicate of the sentence; it is a verb in passive form, so the expression is a predicative metaphor. In this sentence, "The Land Matters Service" is the topic and "slenderized" is the image; according to Em Jul Fajri, the underlying word refers to a human activity aimed at making the body slim. Thus, the researcher assumed that the point of similarity between the Land Matters Service and a human being is size.
In addition, it seems that readers have no difficulty interpreting the intended meaning of the sentence, because the word slenderized is usually used for the activity of bringing an overweight person back to proportion. The sentence shows that the number of Land Matters Service staff will be reduced because it is excessive.
(15) Panwas cium keterlibatan orang lain (P-5) Panwas smells the involvement of others The verb SMELLS above functions as the predicate of the sentence; thus, the expression is a predicative metaphor, here in verb1 form in an active sentence. The word PANWAS is the topic and the verb SMELLS is the image. According to Hornby (1995), "smell" means to notice sth/sb by using the nose. Generally, the nose is the part of the human body used to breathe or to notice something. Moreover, interpreting the intended meaning of this text is not difficult, so readers do not need to read the whole context, because the subject and setting are clear. The researcher assumes that Panwas is symbolized as a man, and the intended meaning is that Panwas suspects that somebody else is involved in the campaign activities.
From the analysis of the sentences above, it is shown that predicative metaphors can appear in several forms, namely gerund, verb1, verb2, and passive voice. Even though the predicate word takes several forms, this does not crucially change the interpretation of the meaning.
DISCUSSION
Based on the data analysis above, three points can be presented for discussion. First, the metaphors used in political news articles mostly conceptualize their topics as man. Second, with respect to their linguistic function, the metaphors in political news articles are used mostly as predicative metaphors. Third, the use of metaphor reflects the political situation of a certain area, in this study East Kalimantan, and it also proves that metaphor is part of our daily language.
The metaphors found in local political news articles in the Kaltim Post confirm Wahab's (1995) theory of three kinds of metaphors, except that the researcher did not find sentential metaphors; only two kinds were found in this study. First, nominative metaphors, which appear with two functions, subjective and complementary; in this study the complementary metaphors were found in two forms, single words and phrases. Second, predicative metaphors, which appear in active and passive forms. Furthermore, most of the metaphors identified are dead metaphors; according to Sugiarto, this is because this type is used so often that it has become a literal sign which can no longer express fresh meaning, although it can still convey the underlying thought.
In line with Lakoff and Johnson, who state that metaphor in language is the result of the analogical nature of human conceptualization, a metaphorical word does not merely assemble a sentence but transfers properties from X to Y so that X appears as if it were Y. In this study this is shown, for example, in excerpt (14), "The Land Matters Service is slenderized": the verb "slenderized" belongs to an overweight human who undertakes an activity to lose weight, so it can be interpreted that the staff of the Land Matters Service is excessive and should be reduced.
The above findings confirm earlier studies, among them Siregar's (2003) study of metaphors of power and metaphors through power (metafora kekuasaan dan metafora melalui kekuasaan). Siregar claims that metaphor correlates language and thought, mainly in conceptualizing change in a community.
That study shares with the current one the analysis of how metaphor correlates language and thought, shown through words that make the reader conceptualize something. At the end of this study, two principles can be stated (Wahab, 1995). First, a metaphor always consists of a symbolic meaning and an intended meaning; the symbolic meaning is called the signifier, whereas the intended meaning is the signified. Here the intended meaning also refers to the conceptualized meaning. The conceptualization of meaning in this study is in line with Maxwell E. McCombs and Donald L. Shaw (1997), who hold that the mass media are able to influence people's perception of an event.
CONCLUSION
1) The metaphors presented in the leads of local political news articles in the Kaltim Post are of two kinds. First, nominative metaphors, with two functions: as subject of the sentence, called subjective metaphors (for example: the people's judgment, the judge's decree, etc.), and as object of the sentence, called complementary metaphors (for example: black champagne, selective felling, etc.). Second, predicative metaphors, which are metaphorical expressions used as the predicate of the sentence; in this study they appear in several forms, namely gerund, passive, verb1 and verb2 (for example: bearing, sharply increased, etc.). These findings confirm two of the three kinds of metaphor proposed by Wahab (1995).
2) The interpretation of a metaphor involves a literal and an intended meaning. The literal meaning is obtained from the dictionary; it is needed to establish what the metaphor symbolizes. The intended meaning, in turn, is the actual meaning of the metaphorical expression that the journalist wishes to convey to the reader. Thus, in interpreting metaphors, attention should be paid to the literal meaning, to the criteria for metaphors based on Newmark's concept, and to the information available in the context. In this study, most of the metaphors found symbolize their topics as man, as shown by words such as "straighten up", "pursue", etc. In general, the intended meanings of the metaphors express criticism of the work of the government and represent government behavior and the political situation in East Kalimantan. | 6,809.6 | 2019-02-01T00:00:00.000 | [
"Linguistics",
"Political Science"
] |
Controlling the order of wedge filling transitions: the role of line tension
We study filling phenomena in 3D wedge geometries, paying particular attention to the role played by a line tension associated with the wedge bottom. Our study is based on a transfer matrix analysis of an effective one-dimensional model of 3D filling which accounts for the breather-mode excitations of the interfacial height. The transition may be first-order or continuous (critical) depending on the strength of the line tension associated with the wedge bottom. Exact results are reported for the interfacial properties near filling with both short-ranged (contact) forces and van der Waals interactions. For sufficiently short-ranged forces we show that the lines of critical and first-order filling meet at a tricritical point. This contrasts with the case of dispersion forces, for which the lines meet at a critical end-point. Our transfer matrix analysis is compared with generalized random-walk arguments based on a necklace model and is shown to be a thermodynamically consistent description of fluctuation effects at filling. Connections with the predictions of conformal invariance for droplet shapes in wedges are also made.
Introduction
Fluid adsorption on micropatterned and sculpted solid substrates exhibits novel phase transitions compared to wetting behaviour at planar, homogeneous walls [1,2,3]. The simple 3D wedge geometry has been extensively studied in the past decade theoretically [4,5,6,7,8,9,10,11,12,13,14], experimentally [15,3,16,17] and by computer simulation [18,19,20]. Thermodynamic arguments [21,22,23] show that the wedge is completely filled with liquid provided the contact angle θ is less than the tilt angle α. These studies show that the conditions for a continuous wedge filling transition are less restrictive than for critical wetting at planar walls [5,6]. Close to critical filling, the substrate geometry enhances interfacial fluctuations, which become highly anisotropic. We refer to these as breather-mode excitations [5,6]. However, most of these studies neglect the presence of a line tension associated with the wedge bottom. Previous studies by the authors [13,14] for short-ranged binding potentials show that the line tension may play an important role in filling phenomena and may drive the transition first-order if it exceeds a threshold value. We extend our analysis to arbitrary binding potentials, in particular to van der Waals dispersive interactions. Again, this shows that we can induce first-order filling by tailoring (micro-patterning) the substrate close to the wedge bottom. This may provide a practical means of reducing the fluctuation effects which would otherwise dominate any continuous filling transition, and may have technological implications for microfluidic devices. However, the borderline between first-order and critical filling depends on the specific range of the interactions. If the binding potential between the interface and the flat wall decays faster than 1/z^4, where z is the local interfacial height above the substrate, the two regimes are separated by a tricritical point, as in the case of contact binding potentials [13,14]. On the other hand, for longer-ranged binding potentials, a critical end point separates the first-order and critical filling transitions. We note that these two situations correspond exactly to the fluctuation-dominated and mean-field regimes for critical filling [5,6].
Our paper is arranged as follows. In Section II we briefly review the phenomenology of wedge filling and introduce the breather-mode interfacial model used in our study. The definition of the path integral used in our transfer matrix analysis is discussed in some detail; while other formalisms have been put forward, they suffer from a number of problems. As we shall show, our definition is consistent with thermodynamic requirements (exact sum rules), generalized random-walk arguments and also the predictions of conformal invariance. Section III is devoted to the analysis of wedge filling for contact binding potentials. Some of these results have been previously reported, without derivation, in a brief communication [13,14]. Section IV extends the transfer matrix analysis to the important practical case of filling with long-ranged van der Waals forces. We conclude with a brief discussion and summary.
The model
Our starting point is the interfacial Hamiltonian pertinent to filling in shallow wedges (small tilt angle α) [4],

H[z] = ∫ dx dy [ (Σ/2) (∇z)^2 + W(z, x) ],   (1)

where z(x, y) is the local height of the liquid-vapour interface relative to the horizontal, Σ is the liquid-vapour surface tension and W(z, x) is the binding potential between the liquid-vapour interface and the substrate (see figure 1). Hereafter we assume that the temperature defines the energy scale and set k_B T = 1. To a first approximation we may suppose that W(z, x) is independent of the position across the wedge section, i.e. W(z, x) ≈ W_π(z), where W_π(z) is the binding potential due to a single flat substrate.
Corrections may arise when the liquid adsorption is small enough, however; this is a point we shall return to later.
A mean-field analysis shows that locally the interface across the wedge is flat and that fluctuation effects are dominated by pseudo-one-dimensional local translations in the height of the filled region along the wedge (the breather modes) [5,6]. Fluctuation effects at filling can therefore be studied using an effective pseudo-one-dimensional wedge Hamiltonian which accounts only for the breather-mode excitations [5,6],

H_W[l] = ∫ dy [ (Λ(l)/2) (dl/dy)^2 + V_W(l) ],   (2)

where l(y) = z(0, y) is the local height of the interface above the wedge bottom. The effective bending term Λ(l) resisting fluctuations along the wedge contains a contribution from the line tension τ associated with the contact lines between the filled region and the substrate far from the wedge bottom [5,6]. Note that for large l we may neglect the τ contribution, since Λ(l) is proportional to the local interfacial height. Similarly, for large l the effective binding potential V_W(l) comprises [5,6]: bulk and surface thermodynamic contributions required to form the filled liquid region [7]; line tension contributions; and a binding potential contribution. Here h denotes the bulk ordering field measuring deviations from bulk two-phase coexistence, θ is the contact angle of the liquid drop at the planar wall-vapour interface and l_π is the equilibrium liquid layer thickness for a single planar wall. The line tension τ is defined as above, and τ′ is the line tension associated with the wedge bottom. Note that the line tension contributions are essentially independent of l for l ≫ l_π, so they become irrelevant in that limit. Upon minimisation of V_W(l) we recover the mean-field expression for the mid-point height at bulk coexistence [5,6]. However, we stress that the form (4) of V_W(l) is only valid for l ≫ l_π. For l ≲ l_π, both Λ(l) and V_W(l) behave in a different manner. Furthermore, we may control the adsorption properties at small l by micropatterning a stripe along the wedge bottom, so as to weaken the local wall-fluid intermolecular potential. The interfacial binding potential is consequently strengthened, and under some conditions it may bind the liquid-vapour interface to the wedge bottom even at the filling transition boundary θ = α. Thus, by introducing a line tension associated with the wedge bottom, one may induce first-order filling in the modified wedge provided the modification is strong enough. As we shall see, the lines of first-order and continuous filling transitions are separated by either a tricritical point or a critical end point, depending on the range of the intermolecular forces. The quasi-one-dimensional character of the Hamiltonian means it is amenable to a transfer-matrix analysis. The partition function corresponding to this Hamiltonian can be expressed as the path integral [24]

Z(l_b, l_a, Y) = ∫ Dl exp(−H_W[l]),   (6)

taken over paths with l(0) = l_a and l(Y) = l_b. However, the presence of a position-dependent stiffness coefficient makes this definition of the partition function ambiguous. This problem was pointed out, but not satisfactorily resolved, in [8], and is intimately related to issues associated with the canonical quantization of classical systems with a position-dependent mass [25,26]. In this paper we define the partition function by time-slicing the wedge into N intervals, (7), with l_0 ≡ l_a and l_N ≡ l_b and with a short-distance kernel K(l, l′, y), (8), which encodes our choice of measure. The partition function Z(l_b, l_a, Y) satisfies the differential equation

∂Z(l_b, l_a, Y)/∂Y = −H_W Z(l_b, l_a, Y),   (9)

where H_W is a Schrödinger-like operator [26] with position-dependent stiffness Λ(l) and effective potential V_W(l) + Ṽ_W(l); here Ṽ_W(l), given in (11), is a fluctuation-induced contribution built from Λ(l) and its derivatives (primes denote differentiation with respect to the argument).
The solution of (9) can be expressed via the spectral expansion [24]

Z(l_b, l_a, Y) = Σ_α ψ_α(l_b) ψ_α(l_a) exp(−E_α Y),   (12)

where {ψ_α(l)} is a complete orthonormal set of eigenfunctions of the Hamiltonian operator H_W, with associated eigenvalues E_α. Analogously to the discussion of 2D wetting [24], we can now obtain the interfacial properties from knowledge of the propagator Z(l_b, l_a, Y). In particular, the probability distribution function (PDF) for the interfacial height observed at a distance Y_1 from one end of a wedge of length Y_1 + Y_2 is

P_W(l) = Z(l_b, l, Y_2) Z(l, l_a, Y_1) / Z(l_b, l_a, Y_1 + Y_2),   (13)

while the joint probability P_W(l_1, l_2, Y_1, Y_2) of finding the interface at midpoint heights l_1 and l_2 at positions Y_1 and Y_2 (> Y_1), respectively, is constructed from three propagators in the same way (14). Further simplifications arise if we assume the existence of a bounded ground eigenstate ψ_0. Then, substitution of the spectral expansion (12) into (13) gives (for an infinitely long wedge)

P_W(l) = ψ_0(l)^2.   (15)

Similarly, the excess wedge free energy per unit length of an infinitely long wedge is identified with the ground eigenvalue E_0. The two-point correlation function h(l_1, l_2, Y) can be obtained in a similar manner (16). For large separations it vanishes exponentially, allowing us to determine the correlation length ξ_y = (E_1 − E_0)^{-1}, where E_1 is the eigenvalue corresponding to the first excited eigenstate. This result still holds even if E_1 corresponds to the lower limit of the continuous part of the spectrum of H_W, although then the leading order of h(l_1, l_2, Y) is not purely exponential but is modulated by a power of Y. An interesting connection with 2D wetting problems can now be seen. Introducing the change of variables [27]

η = (8Σ/9α)^{1/2} l^{3/2},   (17)

the eigenvalue problem H_W ψ_α(l) = E_α ψ_α(l) transforms to the Schrödinger-like equation

−(1/2) d^2ψ_α/dη^2 + Ṽ*_W(η) ψ_α = E_α ψ_α,   (18)

where Ṽ*_W(η) is the full effective binding potential expressed in the η variable (19). Equation (18) shows that the filling problem can be mapped onto a 2D wetting problem in the η variable, under an effective binding potential. In our case Λ(l) = 2Σl/α, so we obtain Ṽ_W(l) = −3α/(16Σl^3). The change of variables (17) then gives Ṽ*_W(η) = −5/(72η^2). Note that the use of the variable η ∝ l^{3/2} as the appropriate collective coordinate has been previously recognized in the literature [8,7]. However, our definition of the path integral, (7) and (8), leads to a novel term in the wedge binding potential, which will be essential in our study and ensures thermodynamic consistency. The substitution is sketched below.
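For concreteness, here is a minimal sketch of the substitution (17), assuming the standard recipe that renders the stiffness constant (unit target stiffness in our k_B T = 1 units):

\[
\eta(l)=\int_0^{l}\sqrt{\Lambda(s)}\,ds
=\sqrt{\frac{8\Sigma}{9\alpha}}\;l^{3/2},
\qquad
\Lambda(l)=\frac{2\Sigma l}{\alpha},
\]

under which the bending energy takes the canonical form \((1/2)(d\eta/dy)^2\). Naive substitution of \(\tilde V_W(l)=-3\alpha/(16\Sigma l^3)\) alone would give \(-1/(6\eta^2)\); the difference from the quoted net result \(\tilde V^*_W(\eta)=-5/(72\eta^2)\) is supplied by the Jacobian and ordering terms generated when transforming the kinetic operator, which is precisely why the definition of the measure in (7) and (8) matters.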
The filling potential V_W(l) must fulfill certain requirements. In order that the interface does not penetrate the substrate, V_W(l) has a hard-wall repulsion for l < 0. Consequently, we need to impose an appropriate boundary condition on the eigenfunctions of H_W at l = 0. The analytical expression of the boundary condition is obtained by a regularization procedure: we assume that the filling potential V_W and the position-dependent stiffness Λ are constant for l < ξ_0, where ξ_0 is some microscopic scale. Furthermore, we impose that Λ(l) be continuous at l = ξ_0, so Λ(l < ξ_0) = 2Σξ_0/α. The filling potential, on the other hand, can take an arbitrary value −U. This square-well potential models the modification of the filling potential V_W(l) at small l due to the line tension associated with the wedge bottom. The eigenstates must fulfill the usual matching conditions, namely that ψ_α and ∂ψ_α/∂l are continuous at l = ξ_0. Finally, we consider the appropriate scaling limit as ξ_0 → 0.
The qualitative form of the filling potential depends on the order of the mean-field phase transition [5,6]. For critical filling with long-ranged intermolecular forces, the binding potential at bulk coexistence behaves as

W_π(l) ≈ −A/l^p,   (20)

where A is a Hamaker constant and the exponent p depends on the range of the forces. Specifically, for non-retarded van der Waals forces p = 2. For systems with short-ranged forces this is replaced by an exponential decay ∼ exp(−κl), where κ is an inverse bulk correlation length. The presence of the fluctuation-induced filling potential Ṽ_W(l) ∝ l^{-3} in (9) gives rise to two distinct scenarios. For p > 4 and large l, the direct contribution in (9), ∝ l^{1−p}, is negligible compared to the fluctuation-induced potential Ṽ_W arising from the position-dependent stiffness. In this case we anticipate universal, fluctuation-dominated behaviour. On the other hand, for p < 4 and large l we can neglect Ṽ_W. Since the l^{1−p} contribution to the binding potential is now repulsive, we expect a qualitatively different phase diagram. It is worth noting that these two situations correspond exactly to the fluctuation-dominated and mean-field regimes for the filling transition predicted from heuristic arguments [5,6]. Fig. 2 shows the schematic filling phase diagrams we expect for short-ranged (Fig. 2a) and long-ranged forces (Fig. 2b). Previous work for the contact potential case [13,14] showed a similarity between 3D wedge filling (for h = 0) and 2D wetting, with θ − α playing the role of the ordering field and the effective line tension the role of the wetting binding potential. The borderline between the first-order and second-order transition lines corresponds to a tricritical point (analogous to the 2D critical wetting case). For long-ranged forces the analogy between 3D filling and 2D wetting still holds (see above). However, as the effective binding potential is repulsive for large l at θ = α, the similarity must be established with 2D first-order wetting. As for contact binding potentials, we expect the interface to be bound to the wedge bottom at θ = α for a large well depth U (corresponding to the line-tension contribution).

[Figure 2. Schematic phase diagrams for 3D wedge filling pertinent to: (a) short-ranged forces, (b) long-ranged forces. U is the effective line tension strength, θ is the contact angle for the liquid on a planar substrate, and α the wedge angle. The filled circle locates the borderline between the first-order transition line (dashed line) and the second-order transition line (thick continuous line) between the bound and unbound states. These correspond to a tricritical point and a critical end-point for short-ranged and long-ranged forces, respectively. In the latter case, the first-order transition line continues as a first-order pseudo-transition line in the partial filling region (dotted line), which terminates at a pseudo-critical point (cross). See text for explanation.]

As U decreases, the interface will unbind along the θ = α path. However, the interface must tunnel through a free-energy barrier to become unbound. Consequently, the borderline between first-order and second-order wedge filling is a critical end point, where the spectator phase is the bound state at θ = α. Indeed, the connection with wetting phenomena also leads naturally to this picture, since first-order wetting was previously recognized as an interfacial critical end-point scenario [28]. In principle, we might expect the first-order transition line to continue into the partially filled region as the coexistence between two bound states with different adsorptions. This thin-thick transition is analogous to the prewetting line, and it should terminate at a critical point. However, the quasi-one-dimensional character of the wedge geometry rules out this transition, as it is destroyed by breather-mode fluctuations. Nevertheless, traces of this smeared transition may be found in the bimodal form of the interfacial height PDF (see later). We will consider two cases which correspond to different regimes of the filling transition. The contact interaction will be studied as the paradigm of the fluctuation-dominated regime, while the van der Waals (p = 2) case is analysed as a prototypical case of the mean-field regime.
Results for contact interactions
We consider first the case of short-ranged potentials. Some of our results have been reported in a brief communication but without detailed explanation [13,14]. Here we provide full details of our transfer matrix solution and present new results for the form of the propagator. At lengthscales much larger than the bulk correlation length ξ_b we may write V_W(l) = Σ(θ^2 − α^2)l/α for l > 0 and allow for the line tension arising from the wedge bottom via a suitable boundary condition at the origin. Analysis of (9) shows that the short-distance behaviour of the eigenfunctions is dominated by the l^{-3} contribution to the effective filling potential. In fact, ψ_α(l) ∼ l^{1/2} or l^{3/2} as l → 0, so the PDF behaves as P_W(l, θ) ∼ l or l^3 as l → 0. We anticipate that the latter behaviour corresponds to the critical filling transition as predicted by scaling arguments [7], while the former signals a completely new situation which, as we shall see, is related to the possibility of tricriticality. Turning to the critical filling transition first, we note that the short-distance behaviour of the PDF, P_W(l, θ) ∼ l^3, which emerges from our analysis is indeed the required, thermodynamically consistent, result. This short-distance expansion ensures that the local density of matter near the wedge bottom contains a scaling contribution that vanishes ∝ T − T_f, where T_f is the filling temperature. This is the required singularity which emerges from an analysis of sum rules connecting the local density near the wedge bottom to (derivatives of) the excess wedge free energy [29]. This leads us to conclude that our definition of the path integral in (8) is the correct one for the 3D wedge filling problem. As we shall see, it also ensures that our model is conformally invariant.
Wedge filling along the θ = α path
At the filling phase boundary (θ = α), the problem can be mapped onto the intermediate fluctuation regime of 2D wetting [30] via (18).
We apply the regularization method described above and consider the scaling limit ξ_0 → 0, U → ∞ with 4ΣUξ_0^3/α → u. Under these conditions the ground eigenvalue satisfies a transcendental matching condition, (21), expressed in terms of ǫ = −4ΣE_0ξ_0^3/α and the Airy function Ai(x). Graphical solution of (21) (see inset of figure 3) shows that there is a bound state with E_0 < 0 for u > u_c, where u_c ≈ 1.358; otherwise E_0 = 0 and there is no bound state. The existence of a bounded ground state at θ = α implies that the filling transition is first-order for u > u_c, and critical for u < u_c. The explicit form of the PDF for u > u_c in the thermodynamic limit, (22), involves the lengthscale ξ_u = (−4ΣE_0/α)^{-1/3}, which can be arbitrarily large as u → u_c. Figure 3 plots the PDF in terms of the scaling variable l/ξ_u. For small l, P_W(l, α) ∼ l^{γ_u}, with a short-distance exponent (SDE) γ_u = 1,
and both the mean height and the roughness are proportional to ξ_u, showing that, in the scaling limit, there is only one lengthscale controlling the fluctuations of the interfacial height. Finally, close to the filling transition the correlation length along the wedge axis is ξ_y = 4Σξ_u^3/α ∝ (u − u_c)^{-3}. These observations indicate the emergence of a new relevant field (in the renormalization-group sense) t_u ∝ (u − u_c), in addition to t_θ ∝ θ − α and the bulk ordering field h. Thus the conditions θ = α, u = u_c and h = 0 correspond to a tricritical point which separates the lines of first-order and critical filling transitions. The excess wedge free energy density E_0 vanishes as (u − u_c)^3 as u tends to u_c from above. Critical exponents for the divergence of the characteristic lengthscales can be defined as the tricritical point is approached along the θ = α path, via ⟨l⟩ ∼ (u − u_c)^{-β_W^u}, ξ_⊥ ∼ (u − u_c)^{-ν_⊥^u} and ξ_y ∼ (u − u_c)^{-ν_y^u}. Our results show that β_W^u = ν_⊥^u = 1 and ν_y^u = 3. Finally, the effective wedge wandering exponent is ζ_W = ν_⊥^u/ν_y^u = 1/3, which coincides with its value for the critical filling transition [5,6]. A simple numerical illustration of the threshold at u_c follows.
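Although the closed-form condition (21) is not reproduced here, the threshold can be illustrated numerically. The sketch below is ours, not the authors' calculation: it discretises one self-adjoint ordering of H_W consistent with the short-distance behaviour ψ ∼ l^{1/2}, l^{3/2} and with Ṽ_W = −3α/(16Σl^3); the value of ξ_0, the grid and the box size are assumptions.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Finite-difference sketch of H_W = -(1/2) d/dl [Lambda(l)^{-1} d/dl]
# + V_W(l) + Vtilde_W(l) at theta = alpha, with Sigma = alpha = k_B T = 1:
# Lambda = 2l (constant = 2*xi0 inside the well), V_W = -U for l < xi0,
# Vtilde_W = -3/(16 l^3) outside.  We bisect on U for the unbinding
# threshold and report u = 4*U*xi0^3 (exact scaling-limit value: ~1.358).

xi0, L, N = 0.3, 400.0, 8000
l = np.linspace(L / N, L, N)
h = l[1] - l[0]
w = 1.0 / (2.0 * np.maximum(l + 0.5 * h, xi0))   # 1/Lambda at right midpoints

def ground_energy(U):
    V = np.where(l < xi0, -U, -3.0 / (16.0 * l**3))
    wl = np.r_[w[0], w[:-1]]                     # 1/Lambda at left midpoints
    d = 0.5 * (w + wl) / h**2 + V                # symmetric FD diagonal
    e = -0.5 * w[:-1] / h**2                     # FD off-diagonal
    return eigh_tridiagonal(d, e, eigvals_only=True,
                            select='i', select_range=(0, 0))[0]

lo_u, hi_u = 0.0, 200.0                          # bisect on the well depth U
for _ in range(40):
    mid = 0.5 * (lo_u + hi_u)
    lo_u, hi_u = (mid, hi_u) if ground_energy(mid) > -2e-7 else (lo_u, mid)
print("estimated u_c = 4*U*xi0^3 ~", 4.0 * hi_u * xi0**3)
```

As a consistency check on this discretised model, matching the interior sine solution to the marginal l^{1/2} behaviour at zero energy gives x cot x = 1/2 with x = √u, i.e. u_c ≈ 1.358, so the numerical bisection brackets the quoted threshold up to grid and finite-box corrections.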
For u ≤ u_c and θ = α, the interface is unbound in the thermodynamic limit. However, we can study the finite-size behaviour of the droplet shape when the interface is pinned very close to the wedge bottom at positions y = ±L/2. Making use of results presented in the Appendix, in particular (A.11) and (A.12), we find the PDF at the tricritical (u = u_c) and critical (u ≪ u_c) wedge filling transitions, (28) and (29), expressed in terms of an inverse lengthscale λ_L. The typical droplet shape may be characterized by the most probable position l_mp(y), which follows from the relation ∂P_W^L(l_mp)/∂l = 0, or by the average shape l_av(y) via the definition l_av = ∫_0^∞ dl l P_W^L(l) [24]. In all cases we find that the typical droplet shape follows λ_L l = c, where c is a number which depends on the definition of the typical shape and on whether the pinning is at critical or tricritical filling. Consequently, the droplet shape must obey the same scaling form in each case. Satisfyingly, this shape is precisely that predicted by the requirement of conformal invariance, which can be used to map a droplet pinned at just one end onto one pinned at two points (y = ±L/2) [31]. This gives a further indication that the definition of the measure used in our transfer matrix formulation is appropriate for the wedge geometry.
The necklace model
A simple model can be introduced to understand the filling properties of the wedge at θ = α. This model is a generalization of the necklace model introduced for 2D wetting [32,33]. The interface is pinned to the wedge bottom along segments of varying length but unbinds between them, forming liquid droplets. We associate with each vertex a weight v which is related to the point tension between a bound and unbound state.
Following the analysis described in [32], we introduce the generating function

G(z) = Σ_L Z_L z^L,

where Z_L is the interfacial canonical partition function of a segment of length La (which we assume to be discretized in intervals of length a) and z is an activity-like variable. The pure pinned (A) and unbound (B) states have generating functions G_A(z) and G_B(z), in which the pinned weight is s = exp(−u), with u an effective line tension in units of k_B T/a. On the other hand, w = exp(−τ_0), where τ_0 is the reduced excess free energy per unit length of a long liquid droplet in the wedge, which we may set to zero by shifting the energy origin. Finally, ψ is the exponent characterizing the first return of the interface to the wedge bottom. This can be calculated from the u ≪ u_c limit corresponding to the completely filled regime; as shown in (A.13) and (A.15) in the Appendix, ψ = 4/3. The complete generating function is built from alternating pinned segments and droplets in the standard necklace fashion. Thus there will be a continuous phase transition at u_c, defined by a condition [32] involving the vertex weight v and G_c = Σ_L Z_L^B/w^L. Below u_c the interface is completely unbound, while it remains pinned for u > u_c. The singularities of the various interfacial properties close to u_c are characterized by critical exponents. In particular, the specific heat critical exponent α_W^u and the longitudinal correlation length critical exponent ν_y^u can be expressed in terms of the exponent ψ as [32,33]

2 − α_W^u = ν_y^u = 1/(ψ − 1).   (22)

[Figure 4 caption fragment: the curve with ξ_u ≈ 1.968ξ_θ (which corresponds to ǫ_0 = −1.5, see inset) is also plotted (thin dashed line); the scaled PDF obtained in [8] is also shown (thin dot-dashed line). Inset: plot of the ground (black lines) and first-excited (red lines) reduced eigenvalues ǫ ≡ ΣEξ_θ^3/α of H_W as a function of ξ_θ/ξ_u for u < u_c (continuous lines) and u > u_c (dashed lines).]
Substituting ψ = 4/3 we find the correct values 2 − α_W^u = ν_y^u = 3 derived earlier (see the arithmetic sketched below). In order to estimate the interfacial average height and roughness, we need the interfacial height PDF in a liquid bubble, which is given by (25). A similar argument to the one presented in [32] then relates the transverse scales to the length Y_B of a liquid bubble. It is straightforward to see that l_w ∼ ξ_⊥ ∼ ξ_y^{ζ_W}, with ζ_W = 1/3. Consequently, ν_⊥^u = β_W^u = 1, in agreement with our exact results. Note that, in contrast to the 2D wetting case [32,33], the exponent associated with the probability of first return satisfies ψ ≠ 2 − ζ_W. This can be traced to the role played by the position-dependent stiffness in the filling model (2), which biases the random-walk-like motion of the interface. As pointed out previously [14], it is remarkable that the critical exponents for 3D critical and tricritical wedge filling are identical to those anticipated for 2D complete and critical wetting with random bond disorder. The necklace model provides an elegant means of understanding this unusual dimensional reduction.
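For completeness, the exponent arithmetic implied by (22) and the wandering exponent quoted above reads

\[
2-\alpha_W^u=\nu_y^u=\frac{1}{\psi-1}\bigg|_{\psi=4/3}=3,
\qquad
\nu_\perp^u=\beta_W^u=\zeta_W\,\nu_y^u=\tfrac{1}{3}\times 3=1,
\]

and ψ = 4/3 lies below the unbiased random-walk value 2 − ζ_W = 5/3, reflecting the bias introduced by the l-dependent stiffness.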
Wedge filling for θ > α
Now we extend some of our previous results to partial filling conditions, i.e. θ > α. Rescaling lengths brings the Schrödinger equation (18) into a parameter-free form. This leads directly to the critical behaviour of the mean mid-point height, l_W ≡ ⟨l⟩ ∼ (θ − α)^{-1/4}, and of the correlation length along the wedge, ξ_y ∼ (θ − α)^{-3/4}, in agreement with scaling arguments [5,6] (see also the heuristic balance sketched after this paragraph), provided that the PDF is not singular, i.e. takes non-negligible values at finite values of the rescaled height. The PDF is obtained, analogously to the θ = α case described above, from determination of the ground eigenstate ψ_0. We find that the PDF, (36), is expressed in terms of the lengthscale ξ_θ ≡ Σ^{-1/2}[(θ/α)^2 − 1]^{-1/4}, the reduced ground eigenvalue ǫ_0 ≡ ΣE_0ξ_θ^3/α, a normalization factor C and the Hermite function H_s(z) of order s [34]. As l → 0 the PDF vanishes (in general) like P_W(l, θ) ∼ l, while at large distances P_W(l, θ) ∼ l^{ǫ_0^2/2−1} exp[2l(ǫ_0 − l/ξ_θ)/ξ_θ] as l → ∞. The value of the reduced ground eigenvalue ǫ_0 depends on the boundary condition at l = 0. To fix this consistently we once again turn to the regularization procedure: in the scaling limit (ξ_0 → 0 and 4ΣUξ_0^3/α → u), the reduced eigenvalues are the solutions of a matching equation, (37), of which ǫ_0 is the minimal solution. For u close to u_c we can expand the left-hand side of (37) around u_c, with a positive (negative) correction corresponding to u > u_c (u < u_c), respectively. This expansion allows us to identify ξ_u ∝ ξ_0/|u − u_c|, in a manner completely consistent with the expression obtained for u > u_c at θ = α. We obtain scaling behaviour for the wedge excess free energy per unit length E_0 (see also inset of figure 4), ΣE_0ξ_θ^3/α = ǫ_0^±(cξ_θ/ξ_u), where c is an unimportant metric factor and the sign corresponds to the situations u > u_c and u < u_c as above. The two scaling functions ǫ_0^+ and ǫ_0^− have limiting properties, (40)-(42), which control the asymptotics below. The asymptotic behaviour of the PDF as the filling transition is approached, i.e. θ → α, differs in three situations: (i) u > u_c, (ii) u < u_c and (iii) u = u_c (see figure 4). For case (i), saddle-point asymptotic techniques [35] applied to the PDF (36) recover the expression (22) for the PDF. Consequently, the lengthscale governing the interfacial height and the range of the breather-mode fluctuations is ξ_u, which remains finite as ξ_θ diverges. Thus, on lengthscales comparable to ξ_θ, the PDF becomes a highly localized delta function located at l = 0. Moreover, the first excited eigenvalue scales as E_1 ∝ ξ_θ^{-3} (see inset of figure 4), so the lateral correlation length also remains finite, ξ_y ∝ ξ_u^3. For case (ii) we must take the limit (42) of the scaling function ǫ_0^−. Substitution into (36) leads to the asymptotic behaviour of the PDF as θ → α. Now the lengthscale which controls both the mean interfacial height and the roughness is ξ_θ. It is also interesting to note that P_W(l) ∼ l^3 for l → 0, so thermodynamic consistency is assured. Our solution differs from the PDF reported in [8,7] (see figure 4), although the global behaviour is qualitatively similar. On the other hand, the lateral correlation length has the asymptotic behaviour ξ_y ∝ ξ_θ^3. Finally, the PDF for case (iii) is obtained by substituting the limiting value (40) for ǫ_0 into (36). Although the relevant lengthscales behave asymptotically as in case (ii), the scaled PDF is different, as shown in figure 4. In particular, P_W(l) ∼ l as l → 0.
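The exponents quoted above also follow from a heuristic free-energy balance (a back-of-envelope sketch using only ingredients quoted in the text, not the exact solution):

\[
\Lambda(l_W)\Big(\frac{l_W}{\xi_y}\Big)^{2}\sim V_W(l_W)
\;\Longrightarrow\;
\xi_y\sim\frac{l_W}{\sqrt{\theta^2-\alpha^2}},
\qquad
l_W\sim\xi_y^{\zeta_W}=\xi_y^{1/3},
\]

where the first relation balances the bending and potential energies per unit length, and the second imposes the breather-mode wandering exponent. Combining the two gives \(l_W\sim(\theta-\alpha)^{-1/4}\) and \(\xi_y\sim(\theta-\alpha)^{-3/4}\), as in the transfer matrix solution.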
Results for dispersion forces
Wedge filling in systems with short-ranged forces is representative of the universality class of fluctuation-dominated behaviour occurring when the exponent in the binding potential satisfies p > 4. In almost all practical realizations of wedge filling, however, long-ranged van der Waals forces arising from the fluid-fluid and solid-fluid intermolecular potentials will be present. While exceptions may be found, for example in polymer systems, the case of long-ranged forces is much more the rule than the exception. We wish to understand how such long-ranged forces alter the nature of the phase diagram and, in particular, the change from first-order to continuous filling behaviour. We can allow for the presence of long-ranged forces through the binding potential (20). For the three-dimensional case with non-retarded van der Waals interactions, the value of the exponent is p = 2. While higher-order terms are present, these will not play a significant role in determining the physics and can safely be ignored.
Here we show that this potential is amenable to exact analysis. The form of the total effective binding potential V_W(l) + Ṽ_W(l) is shown in figure 5. As can be seen, there is a local maximum at small l which arises from the competition between the fluctuation-induced component and the van der Waals contribution. A rough estimate for its location l* can be obtained by setting θ = α; a simple calculation then fixes l* (see the sketch after this paragraph). For θ > α, a local minimum is obtained at larger l. This minimum arises from the balance between the thermodynamic contribution Σ(θ^2 − α^2)l/α and the van der Waals component of the effective binding potential, and its position is governed, for θ − α ≪ α, by the mean-field interfacial height l_W^MF, (45). Since for long-ranged forces we anticipate that mean-field theory correctly describes the critical filling transition, a third lengthscale is the mean-field roughness ξ_⊥^MF [5,6]. Since in our case W_π(l) = −A/l^2, we find from (46) that ξ_⊥^MF ∝ ξ_θ, where our definition of ξ_θ is unchanged from the previous Section. It is remarkable that the roughness is independent of the strength of the van der Waals interactions. The fluctuation-induced contribution to the wedge binding potential suggests that it may be possible to find a bound state at a lengthscale determined by l*, for θ ≥ α. However, no bound state is possible for θ < α: any interface would tunnel through the barrier and unbind completely from the wedge bottom.
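As a minimal sketch of that estimate (our notation: Ã denotes the amplitude of the repulsive l^{1−p} = l^{-1} tail of V_W generated by W_π; its precise relation to the Hamaker constant A is not needed here), balancing the two competing terms at θ = α gives

\[
\frac{d}{dl}\left[\frac{\tilde A}{l}-\frac{3\alpha}{16\Sigma l^{3}}\right]_{l=l^{*}}=0
\;\Longrightarrow\;
l^{*}=\left(\frac{9\alpha}{16\Sigma\tilde A}\right)^{1/2},
\]

so the maximum moves to smaller l as the dispersion forces strengthen, consistent with the picture of figure 5.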
Our analysis proceeds along the same lines as the previous Section. First we investigate allowed states that exist at the filling phase boundary θ = α, and then extend our analysis to the partial filling regime θ > α.
Wedge filling along the θ = α path
First we suppose θ = α and search for a bound state ψ_0(l) of H_W. As anticipated, this is only possible if E_0 < 0, in which case the eigenfunction is Airy-like, (49), with, as previously, ξ_u = (−4ΣE_0/α)^{-1/3}. In order to obtain E_0 we make use of the regularization procedure. In the scaling limit, ǫ = (ξ_0/ξ_u)^3 satisfies a matching condition, (50). For values of ǫ not too close to zero, ξ_u ∼ ξ_0 ≪ l* in the scaling limit, so the second term in the Airy function argument can be neglected; the solution for ǫ then coincides with that corresponding to a contact binding potential. However, as ǫ → 0, a crossover to a different situation is observed. In particular, for ξ_u ≫ l* we can make use of the asymptotic expansion of the Airy function for large arguments, (51). Substituting into (50) we obtain a modified condition, (52). Next we define u_c as the value of u at which ǫ = 0. It is straightforward to see that u_c = 1.358 + O(ξ_0/l*), so in the scaling limit the threshold for the existence of a bound state is the same as for the contact binding potential. Now we expand (52) for u around u_c and ǫ around zero; in the scaling limit this yields (53). Finally, the PDF in this regime can be obtained by substituting the asymptotic relationship (51) into (49); after some algebra the PDF follows, (54) (see also figure 6). Since l* remains finite at u_c, the interface remains bound as u approaches u_c from above, and suddenly unbinds for u < u_c. This identifies the point h = 0, θ = α, u = u_c as a critical end point in the surface phase diagram. However, although u is not a relevant field in the renormalization-group sense, the longitudinal correlation length ξ_y = |E_0|^{-1} diverges as (u − u_c)^{-1}. A similar behaviour occurs within subregime C of the intermediate fluctuation regime for 2D wetting transitions [30].
Wedge filling for θ > α
We now extend our results to θ > α, proceeding in the same way as for contact interactions. Again we can analytically obtain the ground state ψ_0(l) of H_W, (55), in which ǫ_0, ξ_θ and H_s(x) are defined as in (36) for the contact binding potential. Note that the dependence on the dispersion forces appears only through the order s of the Hermite function H_s. The regularization procedure leads to an equation, (56), for the reduced eigenvalues ǫ (for u close to u_c), of which ǫ_0 is the minimal solution. We can see that ǫ_0 now depends not only on the ratio ξ_θ/ξ_u and the sign of u − u_c, but also on ξ_θ/l*. Nevertheless, as for the case of a contact binding potential, we are mainly interested in the limit θ → α, where the lengthscale ξ_θ is very large. For u > u_c, saddle-point asymptotic techniques analogous to those applied in the previous Section [35] show that the ground state of H_W converges to the expression (49) as θ → α. Consequently, we recover the results obtained earlier for the special case θ = α. This indicates that, for these values of u, the wedge filling transition must be first-order.
For u < u_c, the non-existence of a ground state at θ = α indicates that the filling transition is critical. We anticipate that mean-field theory will faithfully describe the singularities at critical wedge filling [5,6]. Thus we expect the interfacial height PDF to be centered around l_W^MF, with Gaussian fluctuations on the scale of ξ_⊥^MF representing the breather-mode excitations. The mean-field value of the excess free energy per unit length is given by V_W(l_W^MF); consequently, the mean-field value of the reduced ground eigenvalue is ǫ_0^MF = Σξ_θ^3 V_W(l_W^MF)/α. The shift ∆ǫ_0 of ǫ_0 with respect to ǫ_0^MF due to breather-mode fluctuations can be estimated in the following way. We expand V_W around its minimum up to quadratic order, so the shift may be estimated from the resulting harmonic problem, (58), implying a correction, (59), which vanishes as θ → α.
We can now proceed with a more formal derivation. Equation (55) can be rewritten, (60), in terms of a shifted variable x defined through ∆l ≡ l − l_W^MF and ∆ǫ_0 = ǫ_0 − ǫ_0^MF. Note that (60) is the wavefunction of the harmonic oscillator in the x coordinate (in units of √(ℏ/mω)), modulated by the factor √l. As in that problem, ψ_0 will increase exponentially as x → −∞, i.e. for l → 0 and large l_W^MF, unless s is a non-negative integer. This result is independent of the explicit value of ξ_θ/ξ_u. Consequently, the shift ∆ǫ for the lowest eigenvalues is quantized as in a harmonic oscillator, (61), with n a non-negative integer; the ground eigenstate corresponds to n = 0. The corresponding PDF becomes a Gaussian with mean ⟨l⟩ = l_W^MF + 2l*/3 ≈ l_W^MF and roughness ξ_⊥ = ξ_θ/2 (see the sketch below). Finally, the longitudinal correlation length behaves as ξ_y ∝ (θ − α)^{-1}. Consequently, the explicit transfer matrix solution is in complete agreement with the predicted mean-field values of the critical exponents for p = 2 [5,6]: 2 − α_W^θ = 1/2, β_W^θ = 1/2, ν_y^θ = 1 and ν_⊥^θ = 1/4. Finally, we search for the existence of the thin-thick transition line. Our calculations show that there is no sharp phase transition for θ > α. However, we observe that the interfacial PDF becomes bimodal for u < u_c and some range of values of θ > α. We can identify a first-order pseudo-transition line by the condition that the areas under the two maxima of the PDF are equal (see Fig. 7). This line, which exists for u < u_c, is the continuation into the partial filling region of the first-order filling transition line, and touches the filling transition borderline tangentially at u = u_c. As u decreases, the two maxima approach each other and eventually merge (at the pseudo-critical point).
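Explicitly, writing only what the two quoted moments imply (a sketch, not a new result), the normalized form of that distribution is the standard Gaussian

\[
P_W(l)\simeq\frac{1}{\sqrt{2\pi}\,\xi_\perp}
\exp\!\left[-\frac{(l-\langle l\rangle)^2}{2\xi_\perp^2}\right],
\qquad
\langle l\rangle=l_W^{\rm MF}+\tfrac{2}{3}l^{*},
\quad
\xi_\perp=\frac{\xi_\theta}{2},
\]

so that ν_⊥^θ = 1/4 follows immediately from ξ_θ ∝ (θ − α)^{-1/4}.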
Discussion and conclusions
In this paper we have reported analytical results for 3D wedge filling transitions in the presence of short-ranged and long-ranged (van der Waals) interactions, based on the exact solution of the continuum transfer equations for a pseudo-one-dimensional interfacial Hamiltonian. First-order and continuous (critical) filling are possible for both types of force, depending on the strength of the line tension associated with the decoration of the wedge bottom. Our analytical solution for the interfacial height PDFs at critical filling recovers the known values of the critical exponents for critical filling with short- and long-ranged forces. In addition, we have elucidated the nature of the crossover from first-order to critical filling which occurs at a specific value of the wedge-bottom line tension. This is qualitatively different for systems with short- and long-ranged forces, whose surface phase diagrams are shown to have tricritical and critical end-points, respectively. To finish, we argue that these results, which are prototypical of the fluctuation-dominated and mean-field regimes respectively, are qualitatively valid for any type of binding potential. As already mentioned, the 3D wedge filling phenomenon can be mapped onto a 2D wetting problem with a new collective coordinate η ∝ l^{3/2}. If we suppose that, at the filling phase boundary, l^3 V_W(l) → 0 for large l, we can make use of renormalization-group arguments to determine the allowed behaviour. As h and t_θ ∝ θ − α are always relevant operators, we restrict ourselves to the filling transition boundary θ = α, h = 0. The effective 2D wetting binding potential decays faster than 1/η^2, so it becomes irrelevant in the renormalization-group sense [36,37]. The renormalization-group flows are dominated by the unstable and stable fixed points of the 2D wetting binding potential −5/(72η^2), which correspond to the tricritical and the critical filling transition, respectively. Thus, at large scales, the only effect of binding potentials that decay faster than 1/l^3 is to renormalize the line tension associated with the wedge bottom. For long-ranged binding potentials, i.e. those for which, at θ = α, l^3 V_W(l) → ∞ for large l, we may resort to a mean-field analysis [5,6]. We expect that close to the filling transition the PDF is asymptotically a Gaussian characterized by the mean-field interfacial height l_W^MF and roughness ξ_⊥^MF. However, if the next-to-leading-order contribution to the wedge binding potential is determined by the short-distance repulsive part of W_π(z), we find a scenario similar to the one depicted in figure 5. In particular, it is possible to bind the interface to the wedge bottom. The interfacial roughness will then be controlled by the (microscopic) lengthscale l* ∼ l_π corresponding to the maximum of the total effective binding potential. The existence of such a bound state, as well as the threshold for the critical filling transition, will depend on the specific details of the interfacial binding potential.
The filling phenomenology predicted in this paper can hopefully be checked experimentally or by computer simulations of more microscopic models. Of course, our predictions for contact (strictly short-ranged) forces require the elimination of van der Waals forces, which are ubiquitous for simple fluids. Here our results are most easily tested using large-scale Ising model simulations, in which it should be straightforward to induce first-order filling by weakening the local spin-substrate interaction near the wedge bottom. In contrast, our predictions for first-order filling with van der Waals forces may well be amenable to experimental verification in the near future. Taking an even broader perspective, the chemical decoration of a wedge bottom may provide a practical means of eliminating large-scale interfacial fluctuations. This may be of relevance to the construction of microfluidic devices, whose efficiency will depend crucially on the control of fluctuation effects.
on the (0, ∞) interval, so the solution is not unambiguously defined but instead depends on a parameter c which fixes the short-distance behaviour of the eigenfunctions, i.e.

φ_α(η) ∼ c η^{5/6} + [2^{2/3} Γ(1/3)/Γ(−1/3)] η^{1/6},   η → 0.   (A.1)

We define the partition function Z̃(η_b, η_a, Y) via the spectral expansion, where the summation must be understood as an integral over the continuous part of the spectrum. From (17), this partition function is related to Z(l_b, l_a, Y) via a Jacobian factor, (A.3). The continuous part of the spectrum of any self-adjoint extension of our interfacial Hamiltonian corresponds to E > 0; the corresponding scattering states can be expressed in terms of the s-order Bessel function of the first kind, J_s(x) [38]. In addition, for c > 0 there is a bound eigenstate with associated eigenvalue E_0 = −c^3/2, expressed in terms of the s-order modified Bessel function of the second kind, K_s(x). Transforming back to the original variable l = (9α/8Σ)^{1/3} η^{2/3}, the associated eigenfunction ψ_0(l) involves the Airy function Ai(x) and a lengthscale ξ_u. On the other hand, the longitudinal correlation length along the wedge axis reads ξ_y = 2/c^3. Comparison with the results obtained by the regularization procedure in the main text allows us to identify c ∝ (u − u_c). For general c we cannot perform the spectral integral. However, for c = 0 (u = u_c) and c = −∞ (u ≪ u_c) we have closed-form expressions for Z̃(η_b, η_a, Y) [39,40] with a simple scaling property. Substituting into (A.3) we obtain scaling forms in the variables l* ≡ l/ξ_u and Y* ≡ Y/ξ_y; here the lengthscale ξ_u is arbitrary but ξ_y = 4Σξ_u^3/α. In both cases we have scaling for c = 0 and c = −∞. For Y → ∞, U_c has the asymptotic behaviour quoted in the main text, (A.11) and (A.12). To obtain results for arbitrary c we make use of the Krein formula [41], which relates the Green functions of different self-adjoint extensions of a closed symmetric operator. In our case the Green function Z(l_b, l_a; E) is essentially the Laplace transform of Z(l_b, l_a; Y) with respect to Y, where l*_> and l*_< are the largest and smallest of l*_a and l*_b, respectively, and E* ≡ Eξ_y.
To continue, we define the lengthscales ξ_u and ξ_y for each c as: | 10,728.2 | 2007-02-27T00:00:00.000 | [
"Physics"
] |
Application of Substructure Techniques to Syntactic Metal Foams in a Finite Element Environment
The presented work focuses on the development of a novel method that can numerically describe the properties of metal matrix syntactic foams (MMSFs) with low memory requirements and short computational times, without losing the properties of the interior structure. In this paper, we propose a novel method that avoids the homogenization technique and instead rearranges stiffness matrices and constructs specific substructures from which the overall structure is assembled. Two-dimensional cases are discussed in order to focus on the methodology itself. First, the reductions and structural design with solid mesh structures were performed
Introduction
One of the main problems with the modelling of metallic foams lies in their stochastic structure. The solutions for this problem are:
• to investigate representative elements of the cell structure, or
• to model the whole random foam structure.
The first case is a strong simplification that may lead to poor agreement with the measured properties, especially in the case of more complex structures like metal matrix syntactic foams (MMSFs), in which the porosity is ensured by embedded porous second-phase particles. In the second case, the calculation time can reach days even on high-performance computers. In this paper a new, so-called substructure technique is proposed to solve the problems mentioned before. The main goal of this work is to create an accurate, reduced-order model. In this way the running time, a common bottleneck in the modelling of metal foams, can be drastically decreased. This paper is based on mechanical models, but it is also important to briefly describe models based on other principles (e.g., numerical calculations) to get a more complete picture of the modelling methods in the field of MMSFs.
Besides the low density of metal foams, a very favorable property is their high energy absorption capacity. It is difficult to formulate generalized properties to describe the behavior of metal foams [1]. Their elastic modulus is a property that allows the foams to be modelled relatively well, within certain limits. Several parametric modelling techniques have been developed that can give good estimates of compressive curves and elastic modulus. From previous studies, it can be seen that some papers process and compare several different modelling techniques (e.g., numerical, finite element, analytical) [2][3][4][5]. The most commonly used modelling process is the homogenization technique, which is often combined with representative volume elements (RVE) [5][6][7][8][9]. This method has advantages but also disadvantages. In many cases, the models do not even consider the interior foam structure but assign foam properties to compact models. This method is commonly found in commercial finite element software. Neglecting or simplifying the interior structure often causes differences, to a lesser or greater extent, between calculated values and experimental results [10]. Some researchers have found that preserving the interior defects and accurately representing the disordered interior structure lead to significantly improved results [11][12][13]. In other cases, CT-based models are prepared for finite-element calculations, which often show excellent agreement with experimental results, but their extremely time-consuming preparation makes calculations with larger-scale models nearly impossible [8,13]. However, CT scanning is often used on its own to examine the response to external loads [9,[14][15][16]. Occasionally special, more manageable interior structures are created to try to replicate the behavior of the interior assembly. However, these usually only give good results for certain load cases. The capabilities of the different methods are often presented on two-dimensional structures and, in the case of satisfying results, the methods are extended to spatial models [17,18]. A further step forward from the use of conventional metal foams is the use of MMSFs, where the foam structure is provided by filling with hollow spheres. With MMSFs better properties can be achieved than with conventional closed-cell foams, but their accurate modelling is a more difficult task (due to the second-phase filler) [19]. In this case, modelling the chemical interaction at the interface between the hollow spheres and the matrix becomes an important factor. The calculations are greatly improved if the interfacial separation can be simulated [11]. Another important factor is the distribution of hollow spheres in the matrix. It has been shown that the Gaussian distribution provides the closest result to the experiments [20]. In the last couple of years, the porosity of the matrix has been getting attention, regarding whether it is a negligible factor or a significant influence under external loads [21].
To improve the modelling of MMSFs, the application of the substructure technique is proposed. The basic idea of the substructure technique was conceived a long time ago so that large models (e.g., airplanes) could be treated as separate units and then reunited along the joining perimeter [22]. This paper builds on this basic idea but applies it to a very different set of problems. This work is based on the creation of so-called super finite elements. In general, model reduction procedures based on the substructure technique are commonly used for studies involving model updating (e.g., modal analysis) [23,24]. In addition, if the program is parallelized, the runtime can be reduced even more drastically; fortunately, this method is easy to parallelize [25]. In this way, the substructure technique can reduce computation time by up to three orders of magnitude [25,26]. In the modal analysis of large structures (such as bridges), it has been shown that if many identical substructures can be formed, then the calculations can be run faster; this finding will be an important reference later in this paper [27]. Regardless of whether the substructure technique is used for dynamic or quasi-static cases, the stiffness matrix is always decomposed in a similar way. The nodes are clustered into the connection points of the substructures and the other points. From this separation, a reduced stiffness matrix is derived that condenses the stiffness of all points onto the connection nodes. This yields a system with significantly fewer degrees of freedom (DoF) [28,29]. Often, large structures are divided into completely periodic, repeating parts to form substructures. In some cases, 2D models are used to test the efficiency of the methods, as their effectiveness already becomes apparent there [30]. Several studies compare different numerical and analytical techniques, but it is clear from all of them that no universal formula exists and that each method must be optimized for each problem [31,32].
The main goal of this work is to use the substructure technique to provide a reduced-DoF model that considers the disorder of the interior structure of MMSFs while keeping the model tractable at runtime. The presented program is based on the displacement method within finite element calculations. To develop the methodology, a two-dimensional model was used to make the presentation of the subsequent steps easier to follow. The method was mainly developed for small deformations and static loads.
Introduction of the substructure technique
Describing the real geometry of metal foams and solving for its mechanical behavior is a challenging task even for a high-performance computer. To improve the effectiveness of the finite element method (FEM), various techniques can be applied, such as reduced-order modelling (ROM) or sub-structuring. The inhomogeneous interior structure of metal foams is usually treated as homogeneous with generalized material properties, but much interior, strength-relevant information is thereby lost. The foundation of these techniques, assuming that dynamic effects are disregarded, is the discretized equation of equilibrium (Eq. (1)):

K u = f,   (1)

where K is the stiffness matrix of the structure, u is the nodal displacement vector, and f is the load vector.
Here, only small displacements and strains are taken into account. An example of a plane strain problem is shown in Fig. 1, which in this case shows a rectangular geometry for simplicity, but it could be any other two-dimensional geometry. Fig. 1 shows a quadrilateral mesh, but the elements could be replaced by triangular elements too. Displacements are fixed in the x-y directions at the left side of the meshed region, and the upper right side is loaded. The primary problem with this approach is that for models with a rising number of DoF, the memory usage and computation time also increase significantly. The basic idea of the substructure technique is to decompose the mechanical model into parts and incorporate their interior stiffness values into the boundary points. While preserving the original geometry, this method results in a mechanical model that behaves under external loads in the same way as the originally meshed model, but with a significantly reduced DoF.
To demonstrate the sub-structuring, consider a somewhat more complex FE-meshed model (Fig. 2) than the previous one. The substructures, marked with large yellow numbers, can be called super finite elements. The black dots represent the inner nodes and the green dots the edge nodes. The boundary nodes lie on both common and non-common edges of the substructures. Interior nodes are eliminated, and their effect is taken into account through a modified stiffness matrix.
The method is illustrated on a 2 × 2 substructure mesh and then extended to more complex structures. The stiffness matrices of the adjacent substructures, grouped into interior and boundary nodes, take the form of Eq. (2):

[K_bb  K_bi; K_ib  K_ii] [u_b; u_i] = [f_b; f_i],   (2)

where u_b is the boundary and u_i the interior column vector of the displacements in the x and y directions, K_bb is the stiffness matrix concerning the boundary points, K_ib and K_bi describe the connection between the boundary and the interior points, and K_ii is the stiffness matrix concerning the interior points. The interior load components f_i are equal to zero, and f_b contains the loads given at all the perimeter points. Because the stiffness matrix K_bb needs to be split into four separate parts, each substructure has its own boundary points, several of which coincide. Then the load vector f_b and the displacement vector u_b must also be decomposed into as many pieces as there are substructures, and each of them must be considered separately with its traction and displacement on the surface; this gives the partitioned matrix equation (3). Performing the matrix multiplication and expressing the first and last two rows in terms of the displacements of the interior nodes results in the algebraic system of Eqs. (4) to (7). Substituting these into the remaining third row yields the condensed system, Eq. (8). It can be seen that Eqs. (4) to (7) result in the back-transforming relations

u_i^(n) = K_ii^{-1} (f_i^(n) − K_ib u_b^(n)),   n = 1, 2, 3, 4,   (9)

which will be indispensable later.
The new, generalized formula of Eq. (11) holds for n substructures and m substructure loads, where n denotes the number of substructures and m the number of load vectors. Based on Eq. (11), it is easy to see that if the given part is divided into many substructures, the stiffness values of identical substructures do not need to be calculated from scratch but can be stored in memory and simply added to the stiffness values in the next step. The effectiveness of the substructure technique depends largely on the ratio of the edge points to the interior points: the smaller this ratio, the more the running time can be reduced.
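As a minimal, self-contained sketch of the condensation step behind Eqs. (2) to (9), assuming hypothetical small dense matrices and index sets (a production FE code would use sparse storage and factorizations rather than an explicit inverse):

```python
import numpy as np

def condense(K, boundary, interior, f_i=None):
    """Statically condense the interior DoF of one substructure.

    K        -- full stiffness matrix of the substructure
    boundary -- indices of boundary (edge) DoF
    interior -- indices of interior DoF
    f_i      -- interior loads (zero here, as in the text)
    Returns the reduced stiffness acting on boundary DoF only and a
    back-transform recovering the interior displacements (Eq. (9)).
    """
    K_bb = K[np.ix_(boundary, boundary)]
    K_bi = K[np.ix_(boundary, interior)]
    K_ib = K[np.ix_(interior, boundary)]
    K_ii = K[np.ix_(interior, interior)]
    if f_i is None:
        f_i = np.zeros(len(interior))
    K_ii_inv = np.linalg.inv(K_ii)
    K_red = K_bb - K_bi @ K_ii_inv @ K_ib      # condensed stiffness

    def back_transform(u_b):
        # u_i = K_ii^{-1} (f_i - K_ib u_b), per Eq. (9)
        return K_ii_inv @ (f_i - K_ib @ u_b)

    return K_red, back_transform
```

Because identical substructures share the same condensed stiffness, K_red is computed once per substructure type, cached, and only summed into the global reduced system, which is the saving exploited throughout this paper.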
Application of substructure technique to metal foams
Modelling conventional metal foams is a challenging task due to their internal random porosity. For MMSFs, this is even more complicated. The problem with CT-based mechanical models and other techniques describing the internal structure is that the created models have too many DoF, which makes them extremely time-consuming to solve. For this reason, the substructure technique was used, but in a way it had never been applied before. The aim was not to develop a new homogenization technique but to maintain the internal inhomogeneous structure. The first step was to create a few distinct substructures. An important criterion is that the boundary points of all substructures (super finite elements) must be the same, as in the case of conventional finite elements. In the following, it will be important to take into account that the more identical substructures there are in the whole structure, the more efficient the memory usage will be. In addition, if the points on the edges are equivalent, then one super finite element can be replaced by another. Compared to the solid structure, the only modification required for the foams is to remove the corresponding finite elements from the model together with their stiffness values. Exactly how many finite elements should be left out of a given substructure is determined by the percentage of the foam's volume that is to be modelled as cavities. The hollow parts are filled with gas in the real case, but the presence of gas is mechanically negligible.
In the case of MMSFs, this is complemented by the need to substitute a different material constant matrix around the cavity in a given layer when calculating the stiffness matrix. Fig. 3 shows a representative MMSF unit for the substructure technique, where the filler hollow sphere is shown in green. 63% of the internal part of the substructure is removed, because this is the theoretical maximum gas percentage of MMSFs [7]. Prefabrication of substructures with different internal spherical geometries is the key to this method. The efficiency of the method lies in the fact that the whole structure is built by producing and assembling a predefined finite number of different substructures (e.g., Fig. 3), whose stiffness matrices can be easily generated. Thus, the external geometry and stiffness of the whole model are the same as those of the conventional model, but with a much smaller number of DoF. Finally, only the stiffnesses of the substructures need to be summed to produce the stiffness matrix of the reduced structure. The connectivity matrix of the super finite elements provides the information for this summation. The geometry of the resulting model and the displacement values calculated on the external surface are completely consistent with the conventional method. The advantage is an algebraic equation system that is faster to solve due to the smaller number of unknowns.
An example of this method can be seen in Fig. 4, where nine substructures with different interiors have been predefined. In such a case, only the stiffnesses of the nine distinct super finite elements are calculated and separated into boundary and interior points. Then the matrix operations behind the first sum of Eq. (11) must be performed for the nine cases. An important criterion is that the substructures must be identical at the edges, as the nodes must overlap for continuity when joining; as an analogy, this works like dominoes, where only identical element edges can be joined. A normal distribution is the best way to approximate the real inhomogeneity. Thus, it is only necessary to insert the calculated stiffness values of the nine cases into the corresponding positions. In this way, it is possible to construct inhomogeneous random internal structures in a memory-efficient way using a few different substructures, as shown in the sketch below.
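The sketch below illustrates this assembly idea under the same assumptions as the previous listing; the names K_red_library, type_grid, and connectivity are hypothetical. A small library of condensed super elements is reused to build a random, normally distributed interior:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def assemble_reduced(K_red_library, type_grid, connectivity, n_dof):
    """Sum cached super-element stiffnesses into the global reduced matrix.

    K_red_library -- dict: substructure type id -> condensed stiffness
    type_grid     -- 2D array of type ids (the random interior layout)
    connectivity  -- maps a grid cell (r, c) to its global boundary DoF
    """
    K_global = np.zeros((n_dof, n_dof))
    for (r, c), t in np.ndenumerate(type_grid):
        dofs = connectivity(r, c)          # coinciding edge nodes overlap here
        K_global[np.ix_(dofs, dofs)] += K_red_library[int(t)]
    return K_global

# A normal distribution over nine predefined interiors (ids 0..8)
# approximates the real inhomogeneity, as described above.
type_grid = np.clip(rng.normal(4, 2, size=(8, 8)).round(), 0, 8).astype(int)
```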
The irregularity of the internal structure of the model can be further increased by varying the diameters of the hollow spheres. This method is applied in the same way to the three-dimensional case, but with three-dimensional substructures.
Demonstrating the effectiveness of the substructure technique on different MMSF structures
Consider a practical implementation of the two-dimensional foam structure, in which the model built from substructures is subjected to loading. The substructure partitioning of the 10 × 10 mm model and the specified boundary conditions are shown in Fig. 5. Each substructure is meshed with elements of 0.0125 mm edge length. The internal structure of the super finite elements includes a 63% space-filling of spherical cavities, but this is no longer visible on the reduced structure. Equation (11) was used to calculate the displacements of the boundary points in the full substructure model.
The displacements of the boundary points were interpolated to produce the reduced displacement field shown in Fig. 6. For better visibility of the displacements and line widths, a scale factor of ten is used.
The resulting plots usually show the largest displacements, since the accurate displacements of the full contours of the models are always known. However, at this stage the method does not yet provide information on the internal points. Equation (9) gives the possibility to determine the complete displacement field of a selected substructure. The entire displacement field of such a super finite element is shown in Fig. 7.
The key to the effectiveness of the method lies in the minimized use of memory. Previously, it was mentioned that the geometry of the model is described by predefined and reduced-size substructures, but the memory efficiency of this was not discussed. Only these predefined super finite elements need to be stored in memory to build up the whole model. The relative efficiency increases with larger model size when the number of substructure types is fixed. Consider a bar diagram (Fig. 8) where a predefined substructure is used to describe models of increasing size. Two predefined substructures were used to determine the values of the diagram. The vertical axis shows the ratio of the memory allocated for the stiffness matrices of the predefined substructures to that of the total reduced structure.
Equation (12) is used to calculate the memory utilization rate (MUR):

MUR = m / M,   (12)

where m denotes the memory allocated by the predefined substructures and M denotes the total reduced structure's memory allocation. It can be seen that the preallocated memory requirement decreases rapidly as the number of required substructures increases (see Table 1 and Fig. 8).
The proportion of memory allocated by the predefined substructures becomes smaller compared to the total structure. Since a real model will consist of many substructures, it is easy to see that this initial memory allocation ratio converges to zero.
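A tiny numerical illustration of Eq. (12), with hypothetical memory sizes: two cached super elements of 1 MB each serve models assembled from an increasing number N of copies, so the ratio shrinks toward zero as the model grows.

```python
def memory_utilization_rate(m_predefined, m_total_reduced):
    """Eq. (12): MUR = m / M."""
    return m_predefined / m_total_reduced

# hypothetical values: m = 2 MB of cached substructures, M grows with N
for n in (4, 16, 64, 256):
    print(n, memory_utilization_rate(2.0, n * 1.0))
# -> 0.5, 0.125, 0.03125, 0.0078125
```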
Increasing the internal mesh density of the substructures will not affect this rate of decrease, as it also increases the size of the overall reduced structure. The only parameter affecting the reduction of the ratio is the number of predefined substructures: increasing this number reduces the convergence efficiency. Since the vertical axis is a ratio of memory allocations, increasing the number of predefined substructures increases the values of the ratios in direct proportion, as shown in Fig. 9, where each color represents an increasing number of predefined substructures.
One method to achieve more accurate results in FE calculations is to increase the density of the mesh. But this increases the size of the system of equations to solve. Regardless of the way the system of equations is solved, this significantly increases the computation time.
If the number of substructures in a model is fixed but the basic mesh is continuously densified, the effectiveness of the method becomes apparent, as can be clearly seen in Table 2; the difference is illustrated in Fig. 10.
In the case where the number of substructures is not fixed, the interpretation of running times becomes more complex. In such cases, the shortest running time can be found by varying the reduction ratio of the boundary points (Eq. (12)) and the number of substructures.
If the calculation is performed on models with a fixed number of DoF at multiple points, the colored curves seen in Fig. 11 are obtained. These two quantities are placed on the horizontal axes because their mutual interaction clearly influences the running time. An important aspect of Fig. 11 is that, since the data on the horizontal axes interact with each other, the value of one axis must always be fixed when searching for the optimum. The interaction of the horizontal axes can easily be seen from the fact that if the number of substructures is increased, the reduction ratio will obviously change, and vice versa. Side views 1 and 2 in Fig. 11 show how the running time increases dramatically with the reduction ratio and slightly with the number of substructures. From the top view, it can be seen that the curves with a constant number of DoF take on a hyperbolic shape.
Each curve has a minimum point associated with the minimum running time. It can be found that the minimum points of curves with increasing DoF always belong to higher reduction ratios (Eq. (12)). The reduction ratios for the minimum running time are summarized in Table 3. This optimal ratio shows a slowly increasing trend, as shown in Fig. 12. The implication for models with larger memory allocation sizes or denser meshes is that more substructures can be used to achieve minimum runtimes. However, this trend is not directly proportional to the number of DoF, but only slightly increasing.
Conclusions
The substructure technique has been presented for calculating small deformations of MMSFs. The conventional finite element computation method for the plane strain problem has been compared with the substructure technique in 2D. The substructure technique can significantly reduce the running time, and this advantage becomes more pronounced as the size of the model increases. This work has shown that homogenization techniques and CT-based meshing are not the only methods able to describe metal foam structures: the substructure technique can combine the beneficial features of both methods while avoiding their disadvantages. An important finding is that with predefined substructures it is possible to build the internal foam structure in a memory-efficient way without increasing the runtime as in CT-based methods. The other major result is that as the number of DoF is increased, the reduction ratio associated with the minimum running time shows a slight increase. This slightly increasing reduction trend indicates the number of substructures that, depending on the size of the model, can achieve the minimum running time.
Fig. 12 The trend of the minimum running time points with increasing DoF
"Materials Science"
] |
Class incremental learning of remote sensing images based on class similarity distillation
When a well-trained model learns a new class, the data distribution differences between the new and old classes inevitably cause catastrophic forgetting of the old classes as the model adapts to perform better on the new class. This behavior differs from human learning. In this article, we propose a class incremental object detection method for remote sensing images to address the problem of catastrophic forgetting caused by distribution differences among different classes. First, we introduce a class similarity distillation (CSD) loss based on the similarity between new and old class prototypes, ensuring the model's plasticity to learn new classes and stability to detect old classes. Second, to better extract class similarity features, we propose a global similarity distillation (GSD) loss that maximizes the mutual information between the new class feature and old class features. Additionally, we present a region proposal network (RPN)-based method that assigns positive and negative labels to prevent mislearning issues. Experiments demonstrate that our method is more accurate for class incremental learning on the public DOTA and DIOR datasets and significantly improves training efficiency compared to state-of-the-art class incremental object detection methods.
INTRODUCTION
In various industries such as urban planning, security monitoring, outer space exploration, and many others, remote sensing image processing is widely utilized. It has consistently been a focal point in computer vision due to its high resolution, significant differences in object size distribution within images, and varying orientations. In recent years, the development of deep learning technology has enabled some methods to effectively handle small and multi-directional objects (Xiaolin et al., 2022; Ming et al., 2021). However, existing methods do not allow for continuous learning of new classes in a human-like manner. In other words, when the model learns a new class, it must retrain with samples from both the previously learned classes and the new class to achieve satisfactory results. Otherwise, the model will experience catastrophic forgetting. This learning process differs from that of humans. Furthermore, storing samples from old classes consumes a considerable amount of storage space.
For this reason, developing a model that can learn new classes without using old samples and avoid catastrophic forgetting is essential. Some methods attempt to address this issue by updating the parameters of new tasks in the orthogonal space of old tasks from an optimization perspective, thus mitigating forgetting to some extent (Kirkpatrick et al., 2017; Li & Hoiem, 2017). Other methods (Rebuffi et al., 2017; Rolnick et al., 2019) adopt a rehearsal mechanism, similar to human review: when learning new tasks, they include a small number of training samples from old tasks. Distillation (Lee et al., 2019; Yang & Cai, 2022) is widely employed in these methods to ensure the model performs well across all tasks. Yet other methods (Kirkpatrick et al., 2017; Mallya & Lazebnik, 2018; Fernando et al., 2017) are based on the over-parameterized characteristics of deep neural networks, activating or expanding neurons for different tasks. However, these methods do not exploit learned knowledge the way humans review old knowledge to better learn new knowledge. Furthermore, recent work (Simon et al., 2022) employs Mahalanobis similarity as a learning parameter to learn meaningful features, but it still encounters the issue of a linearly increasing number of parameters as the number of tasks increases. Most existing lifelong learning methods assume that tasks originate from the same distribution, ignoring the more general situation where tasks come from different domains.
There are also incremental object detection methods designed to address catastrophic forgetting. Liu et al. (2020a) restrict the updating of weights on new tasks based on the importance of the impact of a new class on the model, introducing a regularization term to constrain the update of model weights on a new class. With a certain number of neurons added to the model to learn the new class, Dong et al. (2021) and Shieh et al. (2020) ensure that the model learns the new class while simultaneously maintaining the model's parameters for the old classes. In Hao, Fu & Jiang (2019a), distillation techniques are employed to ensure that the network model remembers the old classes while learning a new one. Shieh et al. (2020) use a replay-based approach, i.e., storing some representative samples of the old classes and acquiring new knowledge by using new task samples together with the stored old samples. However, there are two main problems with existing methods: (1) they cannot fully exploit the similarity information among classes as humans can; for instance, humans can learn to detect helicopters faster with a model that has learned to detect aircraft; (2) with the increase in classes, a larger model and higher storage and computational costs become inevitable, and the model's accuracy decreases rapidly.
To deal with the above issues, the main contributions of this article are as follows: (1) Based on class similarity distillation, we propose a method for class incremental object detection that can dynamically adjust the distillation weights according to the similarity between new and learned classes, i.e., if the new task is more similar to the old classes, the distillation weights can be increased to enhance the forward transfer ability of the model, and vice versa, ensuring the unity of model plasticity and stability. (2) By maximizing the mutual information between the new class and the old tasks, we propose a global similarity distillation (GSD) loss that maximally extracts the similarity information between the new and old classes. (3) The experiments demonstrate that our model can guarantee high accuracy without adding additional storage or computing resources. The related work is briefly reviewed in the ''Related work'' section, and the proposed approach is clarified in the ''Methods'' section. Experiments and implementation details are provided in the ''Results'' section to validate our method's effectiveness using two standard remote sensing datasets. The article's shortcomings are further discussed in the ''Discussion'' section, and a conclusion is given in ''Conclusions''.
RELATED WORK
In recent years, deep learning-based object detection methods have seen rapid development. Generally, these methods can be classified into two categories: anchor-based, such as the R-CNN series (Girshick, 2015; Ren et al., 2015) and the YOLO series (Redmon et al., 2016; Redmon & Farhadi, 2017; Redmon & Farhadi, 2018), and anchor-free methods that do not rely on preset anchors, such as FCOS (Tian et al., 2019) and DETR (Zhu et al., 2020). Both types of algorithm are highly accurate in detecting objects, but they cannot handle class incremental learning tasks. In recent years, some class incremental object detection algorithms (Yang et al., 2022; Zhang et al., 2021; Ul Haq et al., 2021) have emerged that can incrementally learn new tasks. These methods are divided into three main categories: parameter isolation-based, replay-based, and regularization-based.
The first category is the rehearsal-based method, similar to human review. When the model learns new tasks, the impact of old tasks is considered simultaneously, allowing the model to better remember old tasks and avoid catastrophic forgetting. This method widely uses distillation technology, as it can quickly learn new tasks with few samples. The most representative is the iCaRL algorithm (Rebuffi et al., 2017), which uses a teacher network and a student network to enable all learned tasks to converge quickly with a small number of training samples. Therefore, only a small number of previous task samples need to be stored when learning a new task. To save memory overhead, Rolnick et al. (2019) propose reservoir sampling to limit the number of stored samples to a fixed budget for the data stream. Continual prototype evolution (CPE) (De Lange & Tuytelaars, 2021) combines the nearest-mean classifier approach with an efficient reservoir-based sampling scheme. More detailed experiments on rehearsal for lifelong learning are provided in Masana et al. (2020).
Compared to directly storing samples, another representative method is GEM (Lopez-Paz & Ranzato, 2017). It stores the gradients of previous tasks instead of training samples, ensuring that the direction of the gradient update for new tasks is orthogonal to the previous tasks, reducing interference with prior knowledge. Many methods adopt similar principles.
To further save memory space, numerous GAN-based methods have been proposed to generate high-quality images and model the data-generating distribution of previous tasks, retraining on generated examples (Robins, 1995; Goodfellow et al., 2014; Shin et al., 2017; Ye & Bors, 2021). Although GAN-based methods reduce storage space, they introduce many additional calculations.
The second category is the regularization-based method. The main idea of these methods is to add a regularization term based on parameter importance, which reduces the updating of parameters essential for old tasks and increases the updating of unimportant parameters. To evaluate the importance of parameters, LwF (Li & Hoiem, 2017) limits the update of parameters according to the difference between the new task and the old task. EWC (Kirkpatrick et al., 2017) determines the importance of weight parameters according to the trained Fisher information matrix. However, as tasks increase, Fisher regularization will excessively limit the network parameters, resulting in the inability to learn more new tasks. To address this problem, some methods, such as the SI algorithm (Zenke, Poole & Ganguli, 2017), determine the importance of network parameters according to the variation range of network parameters from old tasks to new tasks. However, the stochastic-gradient-descent parameter update often makes the results unstable. In contrast, MAS (Aljundi et al., 2018) allows importance weights to be estimated on unlabeled datasets, enabling user-specific data processing. Variational continual learning (VCL) (Nguyen, Ngo & Nguyen-Xuan, 2017) uses a variational framework for continual learning. Some Bayesian-based works (Ahn et al., 2019; Zenke, Poole & Ganguli, 2017) estimate the importance of weights online during task training. Aljundi et al. (2018) propose an unsupervised parameter importance evaluation method to increase flexibility and online user adaptability. Further work by Lange et al. (2020) and Aljundi, Kelchtermans & Tuytelaars (2019) extends this method to the task-free setting. However, these methods are generally difficult to converge.
The third category is neuron activation or expansion methods, which activate different parameters of the network for different tasks or add additional parameters for new tasks in advance, exploiting the over-parameterization of deep neural networks. However, an increased number of tasks can easily lead to the saturation of model parameters.
PackNet (Mallya & Lazebnik, 2018) prunes weights in the network according to their importance; only the top 50% of the weights are selected each time to train the current task. When learning new tasks, such methods either freeze previous task parameters or dedicate a model copy to each task. Alternatively, the architecture remains static, with fixed parts allocated to each task: the previous task parameters are masked during new task training, and each task feature is converted into an embedding, which the network then converts into a mask. HAT (Serra et al., 2018) takes sparsity as part of the loss function, which is a more principled choice. These works typically require a task oracle, activating the corresponding masks or task branches during prediction. Therefore, they are restricted to a multi-head setup, incapable of coping with a shared head between tasks. Expert gate (Aljundi, Chakravarty & Tuytelaars, 2017) avoids this problem by learning an auto-encoder gate.
In contrast to methods with a fixed number of network weights, there are also some methods, such as progressive networks (Rusu et al., 2016), dynamic memory networks (Perkonigg et al., 2021), and DER (Yan, Xie & He, 2021), that grow the network structure. Whenever a new task arrives, appropriate neurons are added to train the new task. However, these methods cannot be used for large-scale task learning due to the limitation on the number of parameters.
In recent years, several works in remote sensing have applied incremental learning methods to the detection of optical remote sensing, SAR, and hyperspectral images, building on the incremental object detection methods above, and have achieved some results. Although remote sensing image object detection is a complex task, few studies have been conducted on class incremental object detection owing to its high complexity. In these settings, acquiring new samples from old classes improves the detector rather than enabling adaptation to unseen new classes. One line of work proposes a class incremental learning method based on multiscale features to detect objects in more than one direction. Dong et al. (2021) proposed a class incremental learning method that combines a teacher-student structure with selective distillation.
In Li et al. (2022), a rank-aware instance incremental learning (RAIL) method is proposed. RAIL considers the differences in the learning value of instances via the data learning order and the training loss weights; rank scores are then used to weight the training losses to balance the learning contributions. However, existing research on continual object detection is still in its early stages, and current approaches primarily fall into two main categories: experience replay (Joseph et al., 2021a) and knowledge distillation (Liu et al., 2020b; Shmelkov, Schmid & Alahari, 2017). Joseph et al. (2021a) store representative examples in memory, allowing them to be trained alongside new category samples while fine-tuning the model. Shmelkov, Schmid & Alahari (2017) employ knowledge distillation for both object localization and classification. Liu et al. (2020b) further utilize attentive feature distillation to extract essential knowledge through both top-down and bottom-up attention mechanisms.
However, when the distribution of the new class is very different from that of the old classes, existing methods based on knowledge distillation cannot effectively learn the information of the new class. Furthermore, even though complex models can be used to increase the detection accuracy of individual tasks, this is detrimental to knowledge distillation. In human learning, by contrast, the efficiency of learning increases as more knowledge is accumulated, since humans can use learned similarity information to speed up learning.
Inspired by human learning behavior, we propose a new method to continuously detect objects in remote sensing images that considers the similarities and differences between new and old classes by utilizing knowledge distillation to its fullest extent. As a result, the efficiency of the model can improve as more knowledge is learned.
METHODS
Our proposed class incremental object detection framework is shown in Fig. 1. We use the Faster R-CNN detection framework with a feature pyramid network (FPN) backbone (Lin et al., 2017). To exploit the similarity between learning tasks, we use the class similarity distillation (CSD) loss at the block-wise level and the global similarity distillation (GSD) loss at the instance level. In addition, we use an RPN-based method to assign positive and negative labels, preventing the mislearning problem caused by unlabeled objects of other classes being treated as background.
Problem setting
Our class incremental learning setup is as follows: given an object detector that has been trained on C classes, when a new class C_n arrives we are given a dataset D_n comprising a set of pairs (X_n, Y_n), where X_n is an image of size H × W and Y_n is the ground truth. Here, Y_n only contains labels of the current classes C_n. The model should be able to predict all classes C_1 : C_n seen in history.
Class similarity distillation
The detail of the proposed CSD is shown in Fig. 2. When learning a new class, we train the new model using the new class samples and labels, consider the output of the new samples in the old model, and ensure that the new model avoids catastrophic forgetting. In order to avoid the instability caused by large models, we use the CSD at the block level. The proposed CSD can make better use of similarity information: after each block, we use a weighted distillation loss to decide the degree of distillation according to the similarity. We first obtain the prototype of a new class by computing an in-batch average (shown in Fig. 2) on the feature map Z ∈ R^{H×W×C}. Given a batch of feature maps B ∈ R^{B×H×W×C}, we flatten the batch, height, and width dimensions and index the flattened features as z_i, where i = 1, ..., BHW. The centroid of class c is computed as Eq. (1):

p_c = ( Σ_i 1[y_i = c] z_i ) / ( Σ_i 1[y_i = c] ),   (1)

where 1[·] is the indicator function of the pixel label. The cumulative prototypes P_1 : P_k of all classes from class 1 to class k are computed at the end of task k.
We construct a prototype map m ∈ R^{H×W×C} where each pixel x contains a prototype vector m_x. Then we compute a similarity map S ∈ R^{H×W×K} between the prototype m_x of the new class at each pixel x and the prototype p_k of each old class. Each entry S(x,k) is the cosine similarity between m_x and p_k:

S(x,k) = (m_x · p_k) / (‖m_x‖ ‖p_k‖),   (2)

and the similarity map is normalized over the old classes. Finally, the class similarity distillation loss distills the outputs of the old model and the new model, weighted by this normalized similarity, where k indexes old-class features and x indexes new-class features.
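A minimal PyTorch sketch of this weighting scheme, assuming a softmax-normalized cosine similarity reduced to a per-pixel weight and an L2 feature-distillation term; the exact normalization and distance are not fully specified above, so treat this as one plausible instantiation rather than the authors' implementation:

```python
import torch
import torch.nn.functional as F

def csd_loss(feat_new, feat_old, prototype_map, old_prototypes):
    """Similarity-weighted feature distillation (a sketch of CSD).

    feat_new, feat_old -- (B, C, H, W) block features of new/old model
    prototype_map      -- (B, C, H, W) per-pixel new-class prototypes m_x
    old_prototypes     -- (K, C) cumulative old-class prototypes p_k
    """
    B, C, H, W = feat_new.shape
    # flatten per-pixel prototypes to (BHW, C) and normalize for cosines
    m = F.normalize(prototype_map.permute(0, 2, 3, 1).reshape(-1, C), dim=1)
    p = F.normalize(old_prototypes, dim=1)            # (K, C)
    sim = m @ p.t()                                   # S(x, k), shape (BHW, K)
    weights = sim.softmax(dim=1).max(dim=1).values    # assumed normalization
    # per-pixel squared feature difference between the two models
    diff = (feat_new - feat_old).pow(2).mean(dim=1).reshape(-1)
    return (weights * diff).mean()
```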
Learning this similarity provides two benefits. First, the model can relate the new class to what it had previously learned, which facilitates the transfer of old knowledge to the new class for better learning. Second, it encourages the model to learn the underlying class hierarchy implicitly. We do not need to save the class IDs and only save the prototypes once the new classes are well trained, so that we can learn the similarity of later new classes more quickly.
Global similarity distillation
In order to maximize the extraction of correlation features of different class objects in remote sensing images, we propose the global similarity distillation (GSD) loss to maximize the shared similarity information of the old and new classes by maximizing the mutual information at the instance level, before the classification and regression heads. The GSD loss is shown in Eq. (4):

L_GSD = −log [ exp(S(x_t, y_t)) / Σ_j exp(S(x_j, y_t)) ],   (4)

where x_t is the old instance-level class feature, y_t is the current instance-level class feature, x_j are noisy old class features, and S(·,·) is cosine similarity. Minimizing this loss is equivalent to maximizing the model's ability to discriminate learned from unlearned classes, and to maximizing the mutual information between the new class and the old classes.
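A sketch of Eq. (4) as a standard InfoNCE-style objective, assuming the other instances in the batch play the role of the noisy features x_j (the batch-negatives choice and the temperature are assumptions, not stated above):

```python
import torch
import torch.nn.functional as F

def gsd_loss(old_feats, new_feats, temperature=1.0):
    """Global similarity distillation as an InfoNCE-style bound (sketch).

    old_feats -- (N, D) instance-level features from the old model
    new_feats -- (N, D) matching instance-level features from the new model
    """
    x = F.normalize(old_feats, dim=1)          # so dot products are cosines
    y = F.normalize(new_feats, dim=1)
    logits = (y @ x.t()) / temperature         # S(x_j, y_t) for all pairs
    targets = torch.arange(len(x))             # the positive pair (x_t, y_t)
    return F.cross_entropy(logits, targets)    # -log softmax over negatives
```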
Positive and negative samples assignment based on RPN
In general, the way Ren et al. (2015) assign positive and negative labels to training samples is based on the overlap of anchors with the ground truth. In remote sensing datasets, some images contain samples of multiple classes simultaneously. Thus, some positive samples of unknown classes are labeled as negative samples for the new class, leading to decreased efficiency and accuracy in learning these samples.
To solve this problem, we propose an RPN-based technique for assigning positive and negative samples that labels potential new classes. Specifically, these potential new classes are designated as unknown samples, which means they are not included in the training of positive and negative samples, thereby avoiding the problem of objects of future tasks appearing in old tasks, which would result in inadequate training.
First, we exploit the characteristics of the region proposal network (RPN), which outputs objectness scores and bounding boxes for almost all objects: proposals with high objectness scores that nevertheless do not have high IoU with any ground-truth box are treated as potential unknown objects and are not included in the training of the positive and negative samples. Specifically, a proposal is marked as unknown when its objectness score ranks among the top k proposals while, at the same time, its IoU with the ground truth is less than a certain threshold.
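A sketch of this assignment rule with hypothetical thresholds (top_k and iou_thresh are illustrative; the positive rule follows the standard 0.7 IoU criterion of Faster R-CNN):

```python
import torch

def label_anchors(objectness, max_iou, top_k=100, iou_thresh=0.3):
    """Sketch of the RPN-based assignment described above.

    objectness -- (N,) RPN objectness scores of the proposals
    max_iou    -- (N,) each proposal's highest IoU with any ground truth
    Returns labels: 1 = positive, 0 = negative, -1 = unknown (ignored).
    """
    labels = torch.zeros_like(objectness, dtype=torch.long)
    labels[max_iou >= 0.7] = 1                    # standard positives
    # proposals that look like objects but match no annotated box are
    # treated as potential unknown classes and excluded from training
    topk = torch.topk(objectness, min(top_k, len(objectness))).indices
    unknown = torch.zeros_like(objectness, dtype=torch.bool)
    unknown[topk] = True
    unknown &= max_iou < iou_thresh
    labels[unknown] = -1
    return labels
```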
Loss function
The loss function of the entire framework is shown as Eq. (5):

L = L_det + L_CSD + L_GSD,   (5)

where the first term is the Faster R-CNN detection loss function, the second term is the proposed class similarity distillation loss, and the third is the global similarity distillation loss. We use gradient descent with momentum to optimize the model. During training, we first fix the other parameters and train the RPN parameters of the new class branches to convergence, and then we train all the parameters. The results prove the effectiveness of this training scheme.
RESULTS
We used two public remote sensing datasets, DOTA (Xia et al., 2018) and DIOR (Li et al., 2020), to verify the effectiveness of the proposed method. First, we compared with some state-of-the-art (SOTA) methods, and then we conducted an ablation study to verify the effectiveness of the two proposed distillation loss functions. The specific training parameters were set as follows: images were cropped to 800×800; the batch size was set to 2; the momentum was set to 0.9; the number of iterations was set to 50,000; the initial learning rate was set to 0.0025 and reduced to one-tenth every 10,000 iterations; a detection was marked as correct when its IoU was at least 0.7; the RPN output 128 positive and 128 negative samples; and all experiments used horizontal bounding boxes.
There are 2,826 images in the DOTA dataset and 188,282 instances, with image sizes ranging from 800×800 to 4000×4000, covering 15 classes. We use the first eight classes as old classes and incrementally learn the other seven classes.
There are 11,738 images in the DIOR dataset, whose 20 classes contain 190,288 instances. We set the first ten classes as old classes and the last ten classes as new.
Evaluation criteria
To obtain a generic estimate of model performance, after training task t we compute the average accuracy (AA) on the testing datasets of all T tasks. The average accuracy is defined as Eq. (6); the higher the average accuracy, the better the performance of the model:

AA = (1/T) Σ_{t=1}^{T} (TP_t + TN_t) / (P_t + N_t),   (6)

where TP_t and TN_t are the numbers of correctly classified positive and negative samples, P_t and N_t are the numbers of positive and negative samples for task t, and T is the total number of tasks.
Performance evaluation
We used ResNet as the uniform backbone. As can be seen from the AA on both datasets in Table 1, the proposed method improves by 5% compared with the SOTA method FPN-IL (Chen et al., 2020). This is because our method can consider the old class features when learning new classes, thus obtaining a higher AA. Other methods use traditional techniques to generate class-agnostic RoIs or use the dispersion of features before the RPN to learn new knowledge, and do not fully use the similarity information of the new class, so their detection results are unsatisfactory.
Table 1 shows the detection results on each of the seven new classes of the DOTA dataset. The detection result of Fast-IL (Shmelkov, Schmid & Alahari, 2017) is poor on every class, as its detection framework is not effective. Faster-IL (Hao et al., 2019b) and FPN-IL (Chen et al., 2020) are much better than Fast-IL, but their average accuracy (AA) drops as the number of classes increases. Meta-ILOD (Joseph et al., 2021b) uses meta-learning to learn a globally optimal solution without learning the similarity between classes. SID (Peng et al., 2021) employs distillation on some intermediate features, while our method performs global information distillation at various scales, resulting in better performance than SID. The training process of ORE (Joseph et al., 2021a) is more complicated, requiring a long pre-training period to achieve good results. Compared with CWSD (Feng et al., 2021), the proposed method not only supplements similar features but also weights them by similarity. The proposed method improves AA by approximately 1% compared to the four most recent methods, and as the classes increase, the detection of the new classes does not show a noticeable drop.
To demonstrate in more detail that the proposed method can learn the similarity information among classes well, we list the average accuracy of each class, as shown in Table 2. In the DOTA dataset, because the baseball field (BF) class was learned before, when learning new categories such as tennis court (TC) and basketball court (BC), which have relatively similar characteristics to a baseball field, the accuracy of our method in detecting these is significantly higher than that of other methods. Since our approach uses the same backbone architecture as FPN-IL, it shows similar performance during joint training before it has learned from similar samples. However, owing to its ability to fully learn similarity information, it performs better when learning from similar samples later on, such as SBF, SP, and HC. As noted above, Meta-ILOD (Joseph et al., 2021b) does not learn inter-class similarities, the training process of ORE (Joseph et al., 2021a) is complex, and CWSD (Feng et al., 2021) is not in line with the continual learning setting; therefore, the proposed method achieves roughly a 1% average improvement in AA compared to these four most recent techniques. Although the accuracy of each class varies slightly with the learning order, the overall AA is comparable to joint training due to the learning of old-class similarity by the proposed method, and there is a significant improvement in AA when a similar task is learned later. This shows that the proposed method is stable and effective.
Figure 3 shows the visualization detection results of the proposed method on the DOTA dataset with the truck as the old task while learning the new task sedan, and the visualization detection results with the soccer ball field (SBF) as the old task while learning the basketball court (BC) and tennis court (TC). From the detection results, we can see that our method obtains high average accuracy on both new and old classes. In contrast, other methods have many missed detections on the old class, as shown in the red box. This is because our method can learn information about the similarity between classes, preventing catastrophic forgetting while accelerating the learning of new classes.
Figure 4 shows the comparison of the visualization results in the DIOR dataset with low similarity of learning tasks, and since the proposed method can adjust the distillation weights adaptively according to the task similarity, it can also obtain better detection results.
Furthermore, heatmaps are used to verify the effectiveness of the proposed similarity distillation method in Fig. 5. In the heatmaps, the darker the color, the more critical the area. In this experiment we first learn the class SBF and then learn the class BC. From the change in the heatmaps of the network, the SBF in the bottom right corner of heatmap (a) is activated. When the network continues to learn the class BC, both areas can be activated, which shows that the proposed incremental learning method can remember the previous knowledge well. Moreover, after learning BC, the activation area of SBF changes from an annular to a central square area, which shows that the network can learn the similarity features between classes.
Based on the public natural scene image dataset VOC, we tested the class similarity distillation method to verify its effectiveness for class incremental object detection, as shown in Table 3. For CSD in the last row, we used the settings described in the implementation details. For comparison, we also replaced the CSD loss with the L2 loss to minimize the distance between the selected features. CSD is consistently superior on average accuracy, proving that CSD is the more appropriate way to obtain a trade-off between stability and plasticity for continual object detection. For the 19+1 and 15+5 tasks, CSD is more effective than the L2 loss on average accuracy. Since the L2 loss enforces the instance-level features of the incremental model to entirely mimic those of the old model, the performance of the old classes can be adequately maintained.
In contrast, the performance of the new classes is suppressed at the same time. This explains why CSD, which weights the distillation by class similarity rather than enforcing full imitation, is more effective than the L2 loss on average accuracy for the 19+1 and 15+5 tasks.
Ablation study
An ablation study was performed to validate the contribution of the distillation losses on the DOTA dataset. As in the experiment of Table 2, we incrementally learn the last seven classes. The results of the ablation experiments in Table 4 show the effectiveness of the proposed CSD and GSD. In Table 4, the first results column is the result obtained without any distillation, the next two columns give the AA obtained by using each distillation loss alone, and the last column is the result of using the two distillation losses at the same time. Each proposed distillation loss boosts AA, and the best results are obtained when they are used together.
DISCUSSION
Despite the promising gains that can be achieved with our proposed class similarity distillation (CSD) and global similarity distillation (GSD) for class incremental object detection in remote sensing, several concerns need further research in the future. First, there is a significant discrepancy between the outcomes of sequential incremental training and the outcomes of joint training on all classes, which may be caused by the gradual accumulation of mistakes during the incremental learning process. Additionally, the features chosen for correlation distillation become less accurate after numerous learning stages. Due to the lack of data and the trade-off between stability and plasticity, the performance of both old and new classes cannot be improved simultaneously.
CONCLUSION
In this article, we propose a novel class similarity distillation-based class incremental object detection method for remote sensing images that considers the similarity of new and old classes. First, class similarity distillation (CSD) was proposed to balance the plasticity and stability of the model during local distillation in the backbone of the object detector.
To further mitigate catastrophic forgetting of the incremental model, we also introduced a global similarity distillation (GSD) loss to maximize the mutual information between old and new classes. Results on the DOTA, DIOR, and VOC datasets demonstrate that the proposed method is effective for class incremental learning to detect objects in remote sensing images without forgetting what has previously been learned.
In the future, it will be possible to combine incremental object detection with other techniques, such as those found in Morioka & Hyvarinen (2023), to maintain better feature discrimination within the incremental class procedure.We will also consider designing novel methods for classifiers and regressors to further boost class incremental object detection performance.
Figure 1 The framework of the proposed method. We use the Faster R-CNN detection framework with an FPN backbone. To exploit the similarity between learning tasks, we use the class similarity distillation (CSD) loss at the block-wise level and the global similarity distillation loss at the instance level. Full-size DOI: 10.7717/peerjcs.1583/fig-1
Figure 3 The visualization detection results of the proposed method on the DOTA dataset.Full-size DOI: 10.7717/peerjcs.1583/fig-3 | 7,133 | 2023-09-27T00:00:00.000 | [
"Computer Science",
"Environmental Science"
] |
Project Amihan: Online Air Monitoring System for Selected Areas along McArthur Highway, Valenzuela City
Purpose – Air pollution has, through the years, been a major problem in the Philippines, and different illnesses have come about as its consequence. To mitigate air pollution, particularly in Valenzuela, the Department of Environment and Natural Resources, with the help of the Valenzuela City Government's City Environment and Natural Resources Office, installed an air monitoring system. Inspired by such technological innovation, the researchers aimed to design and develop "Project Amihan: Online Air Monitoring System for Selected Areas along McArthur Highway, Valenzuela City." The study is significant to Valenzuela residents along McArthur Highway, to the community, educational institutions, the government, and future researchers. Method – For data gathering, a survey and interviews were used. The researchers followed the Agile prototyping model and used an Arduino Uno microcontroller, a Wi-Fi module, gas sensors, and jumper wires for the hardware; HTML, PHP, JavaScript, jQuery, and Ajax to develop the website and embed the system; and a web host to disseminate information. The system was tested in four (4) selected areas along McArthur Highway of Valenzuela, namely: Balintawak Beer Brewery (BBB), Fatima, Karuhatan, and Malinta. Twenty (20) Valenzuela residents and five (5) IT experts served as respondents who evaluated the system. A five-point Likert scale using ISO 9126 as the standard reference was used to evaluate it. Results
INTRODUCTION
According to Mendoza (2017), air pollution might be one of the most dangerous threats to general well-being that individuals think little of and disregard. The general population should take air pollution seriously, particularly since it can be a factor in the deterioration of anyone's health, which is far worse than what was once known.
According to reports from Penolio (2011) and Mendoza (2017), the transport sector contributes about 80% of air pollutants in the City of Valenzuela. This is because the growing population of the city results in a higher volume of commuters. Pollutants can travel deeply into a person's respiratory tract and can cause short-term health impacts and compound medical conditions in people with asthma or coronary illness (Geronimo, 2017).
This project aims to disseminate information about the air quality in Valenzuela City that can raise awareness about the neglected dangers of air pollution and in turn may reduce the likelihood of acquiring certain fatal diseases.
BACKGROUND OF THE STUDY
One of the major problems society is facing today is pollution. Air pollution particularly brings harm not only to the environment but mostly to one's health. Air pollution is a mixture of gas and solid particles that automobiles and industrial factories mostly emit. These particles, if inhaled, can result in serious diseases such as allergies, asthma, and even cancer. Calderon-Garciduenas et al. (2002) mentioned that exposure to certain air pollutant mixtures produces inflammation in the upper and lower respiratory tract.
In the Philippines, the government has mandated a law known as the Philippine Clean Air Act of 1999 (Republic of the Philippines, 1999). This law recognizes the importance of preventing the spread of air pollution in the country. It directs different government agencies to adopt resolutions on how to maintain the country's air quality. To support the government's objective, the proponents decided to make a capstone project that will help monitor the air pollution in selected areas of Valenzuela City.
The proponents aim to develop an embedded system that will monitor the quality of air in selected areas along McArthur Highway, namely: Balintawak Beer Brewery (or BBB), Fatima, Karuhatan, and Malinta. If the system detects that the air pollution is above the normal range, it will automatically send a notification to the users of the system. Through this system, users can be aware of whether the air that they are inhaling is good or bad for their health.
The general objective of the study is to design and develop the system entitled "Project Amihan: Online Air Monitoring System for Selected Areas along McArthur Highway, Valenzuela City" that will monitor the air quality index of certain areas in Valenzuela. Specifically, it aims to: (1) monitor the air quality in selected areas along McArthur Highway - BBB, Fatima, Karuhatan, and Malinta; (2) create a prototype using an Arduino Uno microcontroller and develop a website using HTML, PHP, JavaScript, jQuery, Ajax, and a webhost that will disseminate information derived from the system; (3) detect the pollutants in the air by integrating gas sensors such as the MQ7 and MQ135; (4) allow users to view the air quality by logging onto the system's website (bit.ly/ProjectAmihan); and (5) notify the user by sending an email notification if the air quality in one of the selected areas exceeds the normal level, to inform the user how air pollution can bring harm to human health.
SIGNIFICANCE OF THE STUDY
This system will help users detect whether the air that they are inhaling is good or bad for their health. It may also help them avoid acquiring diseases such as asthma and allergies by giving them information about the air quality. It can raise the community's awareness of the importance of monitoring the air quality that they breathe. Schools and universities can help their students gain knowledge about air pollution by exposing them to this research. By doing this, their students can gain insights into how air pollution affects them and how to prevent it.
This study can further contribute to the realization of the government's mission to prevent the spread of air pollution in the country. It may help future researchers to gain a better understanding of how to address the air pollution problem in the country. This research can serve as a basis for how they will conduct their projects or experiments in the future.
RELATED LITERATURE
In this era, people live in constant danger not only because of widespread poverty and terrorism, but also because of the worsening problem of pollution. According to Vallero (2014), planet Earth is composed of abundant resources that transform into compounds to support its environment. As the temperature, oxygen content, and essential mixtures of mass and energy change because of pollution, the quantity of these compounds falls within a smaller range. If such compounds did not exist in the environment, living things could not last even a day.
Pollution is the culprit behind unnatural events in the environment. The Merriam-Webster Dictionary (2018) defines it as the act of contaminating the atmosphere with man-made waste; contaminants that bring harm to the environment are called pollutants. Society ironically relies heavily on nature, yet people are gradually destroying it. Mahatma Gandhi once said that "Earth has enough to satisfy every man's need, but not every man's greed." As the human population increases, people demand too much, thus exceeding the environment's capacity to supply such demands.
Alcott (2016) stated that humans negatively affect the environment in many ways. Industrial waste from factories, illegal logging in preserved forests, and overconsumption of fossil fuels are typical examples of how humanity destroys the ecosystem. These activities cause different types of pollution, such as water pollution, air pollution, and soil pollution, which bring disasters like flash floods and landslides.
Different types of pollution exist. They are categorized based on the element of the environment that they affect, according to Read and Digest (2018). Similarly, Asthana (2013) emphasized that the ruin of the environment can be classified based on either the pollutant's type or the nature of the surroundings it affects. He further grouped pollution into three (3) main classifications: water pollution, soil pollution, and air pollution.
The last of the three main types of pollution is air pollution. Rapid industrial development and urbanization in unindustrialized countries have led to a surge in air pollution (Huang et al., 2014). The World Health Organization (2018) defined air pollution as the contamination of the indoor or outdoor environment by any chemical, physical, or biological agent that modifies the natural characteristics of the atmosphere. The main agents that pollute the air are gases such as carbon monoxide, methane, and chlorofluorocarbons. Automobiles, industrial factories, and household appliances, such as refrigerators and air conditioners, commonly emit these gases. Consequently, suburbanization results in rising traffic density, which in turn becomes a major cause of air quality decline (Kanabkaew, Nookongbut, & Soodjai, 2013).
Moreover, researchers have linked air pollution to a wide array of health effects for a significant period. Among them are respiratory problems, such as asthma and lung disease. Air pollution can also cause cancer in humans: the World Health Organization (2018) and the National Institutes of Health (2018) concluded that the contaminants in the atmosphere are carcinogenic to humans. Lelieveld et al. (2015) emphasized that the long-term health effects of ozone and fine particulate matter with a diameter smaller than 2.5 micrometers include premature death from a wide range of causes. Gowrie et al. (2016) reported valid physician-diagnosed cases of paediatric asthma that led to emergency room visits in Trinidad.
Moreover, based on the report by Ellis (2017), the UNICEF Executive Secretary mentioned that around 6,000 children under age 5 die because of air pollution, especially in poor nations. He also said that pollutants not only harm the developing lungs of children but also damage their developing brains. He further emphasized that no society can disregard air pollution, and he therefore encourages world leaders to take steps to stop the spread of pollution worldwide (Ellis, 2017). Even Colombia is concerned about its urban air quality (Ramírez, Mura, & Franco, 2017).
Air pollution concerns all countries across the globe, and thus each nation has its own ways to combat its effects. In the Philippines, to keep the nation's atmosphere safe to breathe, the government has mandated Republic Act 8749, commonly known as the "Clean Air Act of 1999." It provides the policy framework for the country's air quality management. To this end, the act requires different government agencies to draft and enforce regulations to monitor and prevent the spread of air pollution. Smoke-belching vehicles are one of the factors that pollute the atmosphere. To address this, the Department of Environment and Natural Resources, in partnership with the Land Transportation Office, formulated an ordinance that prohibits smoke-belching vehicles from traveling on roads. They also created an ordinance that implements a travel ban on phased-out and long-overdue cars. These ordinances will not only lessen heavy traffic on roads but will also lessen the contaminants in the air.
Similarly, smoke emitted from cigarettes also contains harmful elements that contaminate the air, so the government has also formulated policies to control its negative effects. Inspired by the success of the ordinance in Davao, the present administration launched a nationwide smoking ban that forbids the act of smoking in public areas (Aurelio, 2017). This includes establishments like schools, hospitals, restaurants, and hotels. The government will designate smoking areas with adequate ventilation, separated from other rooms. The President further said that those who cannot comply might end up as offenders, and he encourages the public to heed the executive order or face the consequences.
As for the nitrogen dioxide content in the air, road transport is the main cause of NO2. Likewise, Henschel et al. (2015) noted that recent studies suggest that roadside (traffic) NO2 concentrations have not declined as expected, and in some cases have increased, probably due to the use of oxidation catalysts and particle filters in diesel vehicles. Consequently, Esquivel-Hernández et al. (2015) noted that HYSPLIT back air mass trajectory analysis and weather data available for the Central Valley suggest that such differences arise as a result of a decline in the mixing layer depth (~425 m) and the wind speed (~1.5 m/s), favoring the buildup of polluted air masses in the urban area.
Today, pollution levels in many areas of the United States exceed national air quality standards for at least one of the six common pollutants. Although levels of particle pollution and ground-level ozone pollution are substantially lower than in the past, levels are unhealthy in numerous areas of the country. Both pollutants result from emissions from diverse sources and travel long distances and across state lines. An extensive body of scientific evidence shows that long- and short-term exposure to fine particle pollution, also known as fine particulate matter, can cause premature death and harmful effects on the cardiovascular system, including increased hospital admissions and emergency department visits for heart attacks and strokes. Scientific evidence also links it to harmful respiratory effects, including asthma attacks.
Based on Lanzafame et al. (2014), their study revealed, as expected, how pollutant concentrations peak especially during commute times (early morning and afternoon). Moreover, the long-lasting health effects associated with sustained exposure to high concentrations of air pollutants are an important issue for millions of big-city residents and millions more residing in smaller urban and rural areas (Calderón-Garcidueñas et al., 2015).
Project Scope
The respondents of the study include twenty (20) Valenzuela City residents and five (5) IT experts who evaluated the system. The proponents developed an air monitoring system installed in the selected areas (BBB, Fatima, Karuhatan, Malinta). Alpha testing was held in the first week of January 2018 in the selected areas along McArthur Highway of Valenzuela, namely Balintawak Beer Brewery (or BBB), Fatima, Karuhatan, and Malinta. Twenty-five (25) respondents underwent a sampling procedure with the aid of survey questionnaires. The respondents include 20 end-users and 5 IT experts (Table 1).
Conceptual Framework
Figure 1 below shows the construction of the system "Project Amihan: Online Air Monitoring System for Selected Areas along McArthur Highway, Valenzuela City" using the Hierarchical Input, Process, and Output framework.
Agile Prototyping Model
The researchers used the agile prototyping model to develop the system. It is a combination of iterative and incremental process models. This model enables the researchers to focus on process adaptability, thereby helping them achieve rapid delivery of a working software product by creating builds through process iterations.
Every iteration involves cross-functional teams working simultaneously on various areas like planning, requirements analysis, design, coding, unit testing, and acceptance testing.
Project Development
The sequential phases in the Agile Prototyping model (Ambler, 2014) are as follows. Determine needs: The researchers sent communication letters and secured scheduled appointments with the major respondents of the study. They also interviewed experts about the safety of the hardware design. The researchers chose experts with the requisite qualifications, observed the way they used the system, and quantified their evaluations. Surveys were conducted to determine the average quantity of devices to be used. Build prototype: On the hardware side, the researchers carefully selected the necessary equipment. The Arduino Uno, Wi-Fi module (ESP8266), gas sensors (MQ7 and MQ135), jumper wires, breadboard, LEDs, power source, and other necessary hardware were tested, especially on the compatibility criterion, so that the hardware would run smoothly. On the software side, the researchers used the Arduino Integrated Development Environment (IDE) for programming the hardware and HTML5 for building the system's website; and
Evaluate prototype:
The researchers evaluated the prototype by testing it in the four selected locations and comparing the results with those from an international air quality monitoring system (Figure 2).
Figure 2. The Agile Prototyping Model
The Arduino Uno microcontroller, gas sensors, Wi-Fi module, and LEDs were connected to the breadboard with the help of jumper wires. The researchers then used HTML and other web development tools to embed the system. The system will automatically function as long as it is connected to a working power source. A web host made the air monitoring system available through the internet. The system must have internet connectivity so that it can continuously send data to the website. Upon loading the system's website, Valenzuela City residents can view the air quality data monitored by the system. If the air quality goes beyond an Air Quality Index (AQI) of 300, the system will send an email notification to registered users regarding the polluted area and suggest what to do in such a situation.
System Architecture
As regards the system architecture, Figure 3 shows how the project's devices are connected to one another: the Arduino Uno, Wi-Fi module, and gas sensors are connected to the breadboard through jumper wires. Table 2 shows the classifications of air quality according to the United States Environmental Protection Agency (2017). It describes the values of the International Air Quality Index and the levels of health concern with their corresponding color codes (AirNow, 2016).
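To make the classification in Table 2 concrete, the snippet below sketches an AQI category lookup together with the notification threshold described earlier. The breakpoints follow the published US EPA/AirNow scale and the 300-AQI alert threshold comes from the system description; the function names and structure are illustrative assumptions.

```python
# Illustrative lookup of the US EPA Air Quality Index categories
# underlying Table 2, plus the email-alert threshold used by the system.
AQI_CATEGORIES = [
    (0, 50, "Good"),
    (51, 100, "Moderate"),
    (101, 150, "Unhealthy for Sensitive Groups"),
    (151, 200, "Unhealthy"),
    (201, 300, "Very Unhealthy"),
    (301, 500, "Hazardous"),
]

def classify_aqi(aqi: int) -> str:
    for low, high, label in AQI_CATEGORIES:
        if low <= aqi <= high:
            return label
    return "Out of range"

def needs_alert(aqi: int, threshold: int = 300) -> bool:
    # Project Amihan emails registered users once the AQI exceeds 300.
    return aqi > threshold
```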
Evaluation Procedure and Calculation of Data
The researchers used a Five-Point Likert Scale to evaluate the operational feasibility of the proposed system (Table 3). Functionality, Reliability, Usability, Efficiency, Maintainability, and Portability were the criteria for evaluating the proposed system. The researchers used ratings of 1 to 5: 1 - Unacceptable, 2 - Moderately Acceptable, 3 - Acceptable, 4 - Very Acceptable, 5 - Highly Acceptable. The standard of reference was ISO 9126.
The weighted mean (Equation 1) of a set of data x1, x2, x3, ..., xn with corresponding frequencies (weights) w1, w2, w3, ..., wn can be expressed as the sum of the data values multiplied by their corresponding weights, divided by the sum of the weights (Satya, 2018): weighted mean = (w1x1 + w2x2 + ... + wnxn) / (w1 + w2 + ... + wn). The researchers implemented this Five-Point Likert Scale (Gade, 2013) during the evaluation of the project.
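A minimal sketch of Equation 1 in Python; the rating frequencies below are hypothetical, not the study's raw survey data:

```python
# Weighted mean of Likert ratings (Equation 1): the sum of each rating
# multiplied by its frequency, divided by the total number of responses.
def weighted_mean(ratings, frequencies):
    total = sum(frequencies)
    return sum(x * f for x, f in zip(ratings, frequencies)) / total

# Hypothetical example: 25 respondents rating one criterion on the 1-5 scale.
scores = [1, 2, 3, 4, 5]
counts = [0, 0, 3, 10, 12]   # assumed frequencies
print(round(weighted_mean(scores, counts), 2))  # 4.36
```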
Project Evaluation
Table 4 below shows the summary of the weighted means from the IT experts' evaluation. The system received its highest mark of 4.80 (Highly Acceptable) on Reliability, while it received its lowest mark of 4.10 (Very Acceptable) on Portability. Overall, the IT experts' evaluation obtained a 4.49 overall mean, which proves the system's strengths in the different criteria. Table 5 shows the summary of the weighted means from the end users' evaluation. Table 6 shows that the system is functional at a mean of 4.53, which is highly acceptable. The project is also usable at a mean of 4.53, rated highly acceptable by the respondents. The system received its lowest mean of 3.90 (Very Acceptable) on Portability due to the fact that the system will be installed in selected areas only. Overall, the system gained a 4.33 overall mean, which proves its strengths in the different criteria.
Summary
Pollution is very much disregarded when it is not given attention. But with the help of the system, users can have a glimpse of the current state of the air quality that surrounds them. The system most notably detected carbon dioxide (CO2), which the city government's existing system did not detect.
With proper dissemination of data derived from the monitoring system and with the help of the internet, the project can raise awareness and can help people reduce air pollution by giving them insights of what bad air may cause them so they can come up with an action plan.
Conclusions
Pollutants in the four selected locations varied from location to location. The amount of pollutants also varied at different times of day. For example, the amount of air pollutants reached its highest point, at about 400+ ppm of carbon, during rush hours (4:00 PM to 8:00 PM). A graphical representation of the data gathered from the system further showed that the selected areas are not always polluted, but only at certain times. Implementing the system in these selected locations can further give the city government insights about a possible plan of action to lessen the pollutants during the said hours.
Recommendations
Through observations, meticulous testing of the system, and the software evaluation (a rating of 4.33, Highly Acceptable), the researchers are now able to give their final recommendations to the users, the beneficiary, and future developers to further improve the project: Users must sign up for the system to receive a notification if the air quality ever reaches a dangerous level; The city government, as the beneficiary of the system, should provide a stable internet connection to the system to enable real-time monitoring of the air quality along McArthur Highway; The city government may allow the developers to connect the system to the city's power connection, i.e., lampposts, to keep the air monitoring system running; The city government may implement this system by expanding it to more than four locations to further analyze the air pollution in the city; Future developers must integrate more sensors to further improve the accuracy of the current system; and Future developers may opt to integrate a renewable power source, such as a solar panel, so the system can run even without the city's electric power supply.
RA 8749 assigns the Department of Environment and Natural Resources (DENR) as the governing body for the overall implementation of the provisions of the law. It also delegates several agencies, such as the Department of Transportation and Communications (DOTC), the Department of Science and Technology (DOST), the Philippine Atmospheric, Geophysical and Astronomical Services Administration (PAGASA), the Department of Education (DepEd), and the local government units, to support the mission of the law.
Figure 4. Selected locations of project testing
Table 1. Respondents of the Study
Table 2. International Air Quality Index
Table 4. IT Experts' evaluation
Table 6. Summary of the combined weighted mean of respondents | 4,785.2 | 2018-06-14T00:00:00.000 | [
"Computer Science"
] |
Spectrally Efficient Multi-Carrier Modulation Using Gabor Transform
Non Orthogonal Frequency Division Multiplexing (NOFDM) systems make use of a transmission signal set that is not restricted to orthonormal bases, unlike previous OFDM systems. The usage of non-orthogonal bases generally results in a trade-off between Bit Error Rate (BER) and receiver complexity. This paper studies the use of Gabor bases in designing a spectrally efficient multi-carrier modulation scheme. Using the Gabor transform with a specific Gaussian envelope, we derive the expected BER-SNR performance. The spectral usage of such an NOFDM system when affected by a channel that imparts Additive White Gaussian Noise (AWGN) is estimated. We compare the obtained results with an OFDM system and observe that, with comparable BER performance, this system gives better spectral usage. The effect of window length on spectral usage is also analyzed.
Introduction
With a growth in the number of users, there is an increase in the demand for spectrum. Research pertaining to the efficient use of the available spectrum has produced numerous results over the past few years. This work came to fruition in the form of Orthogonal Frequency Division Multiplexing (OFDM), a special class of multi-carrier modulation that has emerged as a leading candidate for high-data-rate wireless communication. Its main advantage is its ease of implementation, which eliminates inter-carrier interference (ICI) at low receiver complexity.
Ahmed et al. in [1] introduced a Spectrally Efficient Frequency Division Multiplexing (SEFDM) system which is spectrally more efficient than OFDM. In this SEFDM system, non-orthogonality was introduced by a parameter α, where Δf = α/T and 0 < α < 1. The above-discussed system requires high frequency precision in order to reduce frequency offset effects. In addition, it requires different equipment for different values of frequency spacing. To overcome the limitations of this system, a technique is discussed in [2,3] where the transmitted symbol set was modified by zero padding, which leads to higher frequency resolution on performing the IDFT. Partial transmission of the IDFT results in spectrum compression.
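A minimal numpy sketch of the zero-padding idea from [2,3] described above; the block size, padding factor, and symbol values are assumptions for illustration, not parameters from the cited papers:

```python
# Zero padding the symbol block before the IDFT raises the frequency
# resolution; transmitting only part of the IDFT output compresses the
# occupied spectrum relative to plain OFDM.
import numpy as np

N = 64                          # symbols per block (illustrative)
pad = 3 * N                     # zero padding (illustrative)
X = np.random.choice([-1, 1], N) + 1j * np.random.choice([-1, 1], N)
x_full = np.fft.ifft(np.concatenate([X, np.zeros(pad)]))  # length 4N
x_tx = x_full[:N]               # partial transmission of the IDFT output
```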
By exploiting the circular conjugate symmetry property of the Discrete Fourier Transform (DFT), better spectral efficiency and BER-SNR performance were achieved, as discussed in [4]. All these systems make use of non-orthogonal bases for which an inverse does not exist, which results in greater receiver complexity. In this paper we develop an NOFDM system that deviates from the conventional system as discussed in [5], where ΔfT < 1, by introducing a non-orthogonal basis set that possesses a dual, which can be used in the receiver to retrieve the transmitted symbols, hence simplifying the receiver design. In order to understand the NOFDM system, a clear understanding of the working of an OFDM system is a prerequisite.
This paper is organized as follows: Section 2 gives an insight into the concepts behind the working of an OFDM system that are critical for a clear understanding of an NOFDM system. Section 3 begins with a brief introduction to NOFDM, and proceeds to discuss the significance of Gabor bases and their use in the implementation of an NOFDM system. A typical OFDM system has been considered as the baseline system. Section 4 gives the simulation results of the proposed NOFDM system in comparison with the baseline OFDM system. Conclusions are drawn from the simulation results that help in developing an NOFDM system with optimal parameters resulting in better spectral efficiency and BER-SNR performance.
OFDM
In an OFDM system, the symbols obtained after constellation mapping are modulated onto orthogonal signals of different frequencies corresponding to each of the subcarriers used [6].
The transmitted signal can be written as x(n) = Σ_{k=0}^{N-1} X_k S_k(n), where X_k represents the symbols to be transmitted and S_k(n) represents each subcarrier, given by S_k(n) = e^(j2πkn/N). On imposing Hermitian symmetry on the symbols, i.e., X_{N-k} = X_k* for k = 1, ..., N/2 - 1, a symbol stream of length N/2 yields a stream of length N in which the symbols are symmetric about X_{N/2}. Because of the use of orthogonal carriers, this modulation of the symbols can be simplified and represented as taking an IFFT of the stream of symbols. Thus the IFFT and FFT pair performs the operations of modulation and demodulation of the symbols onto orthogonal sine and cosine subcarriers (depicted in Figure 1).
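The sketch below illustrates the Hermitian-symmetry construction with numpy; the block length and symbol values are arbitrary examples:

```python
# Building a length-N block with X[N-k] = conj(X[k]) makes the IFFT
# output real-valued, so the symbols ride on real sine/cosine subcarriers.
import numpy as np

N = 16
data = np.random.choice([-1, 1], N // 2 - 1) + \
       1j * np.random.choice([-1, 1], N // 2 - 1)
X = np.zeros(N, dtype=complex)
X[1:N // 2] = data
X[N // 2 + 1:] = np.conj(data[::-1])       # mirror: X[N-k] = conj(X[k])
x = np.fft.ifft(X)
assert np.allclose(x.imag, 0, atol=1e-12)  # time-domain signal is real
```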
Another concept of interest is the use of a cyclic prefix for the diagonalization of the channel matrix. It is this diagonalization that enables channel partitioning. The cyclic prefix also helps in the removal of ISI. Thus the transmitter portion of the OFDM system can be visualized as depicted in Figure 2.
NOFDM
Non Orthogonal Frequency Division Multiplexing provides a major advantage in spectral efficiency, as the underlying pulse can be chosen with sharper frequency-domain decay. Also, a modulation scheme based on Riesz bases tends to be more robust against frequency-selective fading [5]. The use of non-orthogonal basis functions (a Riesz basis) is the key idea in this paper.
Gabor Transform
The Gabor transform is a type of Short Time Fourier Transform (STFT). In the STFT, the input symbols are first windowed in the time domain, and then the Fourier transform is applied to the windowed symbol set. The windowing operation removes the orthogonality of the basis functions [7]. In the Gabor transform, a Gaussian window is used for windowing, given by w(t) = e^(-(t-τ)²/(2σ²)). The Gabor transform is a two-dimensional transform in which the response depends on the frequency of the signal and the time at which windowing is performed.
The basis functions of the Gabor transform are non-orthogonal, which means that the basis functions are not directly invertible. However, a unique property of Gabor bases is that there exists a dual basis set [8] such that the original signal can be recovered by a transformation operation which uses the dual basis set.
Implementation of NOFDM System
In the NOFDM system, the loss of orthogonality calls for the use of non-orthogonal signals onto which the symbols are mapped. Hence, to induce non-orthogonality, we make use of a form of the Gabor transform for modulation. Here, we use a combination of Gaussian and Hanning windows as the final window that performs the windowing in the time domain; this window is the product of a Gaussian and a Hanning window. We use the Hanning window in order to provide finite support for the Gaussian window, and we select its length as approximately eight times the 3 dB width of the Gaussian window. This window and FFT combination can be approximated as performing the modified Gabor transform (since the window is not purely Gaussian) on the signal. The Gabor basis set obtained (each basis corresponding to a specific carrier), used for modulating and demodulating the symbols from different users, is shown in Figure 3.
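A sketch of such a combined window in numpy; the window length, the ±4σ support, and the use of σ = 2/π (adopted in the following paragraph) are illustrative choices, not the paper's exact construction:

```python
# Combined window: a Gaussian envelope truncated by a Hanning window,
# which forces the taper to zero at the edges and gives finite support.
import numpy as np

L = 256                                   # assumed window length
sigma = 2 / np.pi
t = np.linspace(-4 * sigma, 4 * sigma, L)  # support several sigma wide
gaussian = np.exp(-t**2 / (2 * sigma**2))
hanning = np.hanning(L)
window = gaussian * hanning                # decays smoothly to zero
```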
It has been observed that a σ value of 2/π gives better results when Gabor bases are used in image processing [9]; using this value, a minimum 3 dB bandwidth is observed for the Gabor bases. We therefore use this value in the implementation of the modified Gabor transform in the NOFDM system. It is also observed that the BER-SNR performance improves as the value of σ decreases, and optimal performance is obtained at a σ value of 2/π, as shown in Figure 4.
In the NOFDM system we modulate the symbols using the modified Gabor basis signals. At the receiver, to demodulate the transmitted signal, we use the inverse Gabor transform, whose bases are the conjugates of the Gabor bases [8], with a corresponding Gaussian window. In the NOFDM system, we define a factor "p" that is a power of 2. This "p" is the ratio of the number of symbols to be transmitted to the window length. "N/p" subcarriers are used for the transmission of N symbols: due to the windowing operation (the use of Gabor bases), the N/p subcarriers can be used to facilitate the transmission of the p sets of symbols. The Hanning window used ensures that the Gaussian window goes to zero outside the specified window length. This minimizes abrupt changes in the transmitted signal and hence leads to a much sharper frequency response.
Simulation Results and Conclusions
The following results are for OFDM and NOFDM systems designed for the Digital Video Broadcasting (DVB) standard of the European terrestrial digital television (DTV) service, based on [10]. Our simulations focus on the 2K mode of the DVB-T standard, which uses 1705 carriers carrying symbols with a useful duration of 224 μs.
From plots (a) and (b) of Figure 5, we find that the bandwidth decreases from 8 MHz to 4 MHz as the factor p changes from 1 to 4. This implies that the bandwidth (B) used decreases with the window length, i.e., B is proportional to N/p (9). From plots (b) and (c) of Figure 5, we observe that the bandwidth used is almost the same, but there is a decrease in the power of the out-of-band emissions.
On transmitting the symbols on a set of subcarriers over a channel that imparts AWGN, one can observe that recovery of the symbols is possible in both the OFDM and NOFDM systems, and the data transmitted through the NOFDM system has a BER only slightly higher than that transmitted through an OFDM system for a given SNR (depicted in Figure 6).
It is also observed that the BER-SNR performance of the NOFDM system improves with a decrease in the window length used to develop the non-orthogonal basis set, as shown in Figure 7. Since the main role of the cyclic prefix is to maintain the orthogonality of the subchannels, no cyclic prefix is needed in the NOFDM system; thus the NOFDM system designed does not have a cyclic prefix. However, a cyclic prefix can be incorporated if necessary in order to combat ISI.
Thus, from the results obtained, we can conclude that the transmitted NOFDM signal occupies less bandwidth than that of the OFDM system when p > 1. This is a major advantage over the OFDM system. Since the BER-SNR performance of this system remains comparable with that of the baseline OFDM system, the developed system is an improvement over OFDM in DVB-T.
Summary
The NOFDM system has been implemented. The BER vs. SNR performance of a system implemented using a Gabor basis with a σ value of 2/π was found to be optimal and comparable to that of an OFDM system. The spectrum is found to be more efficient than that of OFDM systems, and the bandwidth is found to be inversely proportional to the window length of the Gabor transform.
Figure 1. Fourier basis for a specific channel.
Figure 2. Transmitter of an OFDM system.
Figure 3. Gabor basis for a specific channel.
Figure 5. Power Spectral Density of the transmitted signal given N = 4096 for (a) p = 4 in an NOFDM system; (b) p = 1 in an NOFDM system; (c) OFDM system.
Figure 6. BER vs. SNR for an OFDM and an NOFDM system. | 2,521.4 | 2013-04-16T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
An evaluation of the effect of matric suction on the dynamic strain-dependent parameters of an unsaturated silt
This research investigates the influence of matric suction on the variation of the shear modulus and damping ratio of a silty soil at very small to medium shear strain levels. A set of laboratory experiments, including three bender element tests in addition to three resonant column-torsional shear tests, has been carried out on unsaturated Firuzkuh silt specimens. In this regard, an unsaturated triaxial cell equipped with a set of bender elements and a resonant column-torsional shear device that can apply and control matric suction have been used. All specimens had an initial void ratio of 0.7 and were tested at various matric suctions under a net mean stress of 50 kPa. For this purpose, the axis translation technique has been implemented for applying matric suction, and High Air Entry Value (HAEV) ceramic discs have been used for air-water control of the unsaturated silt specimens. According to the results, a significant variation of shear modulus and damping ratio has been observed with changes in matric suction and shear strain level. The output data indicate that the shear modulus increases with increasing matric suction, while the damping ratio decreases with an increase in matric suction. In addition, the shear modulus and damping ratio respectively decrease and increase with increasing shear strain.
Introduction
Climate change and the consequent changes in the degree of saturation imply that soils are mostly in an unsaturated condition, especially in semi-arid areas. In addition, in earthquake-prone areas where the soils are mostly unsaturated, the dynamic behavior of unsaturated soils becomes essential for the effective design of foundations and geotechnical structures. Very few researchers have conducted element tests on unsaturated soils at small strains; however, their research indicates that the hydro-mechanical behavior of unsaturated soils has a significant impact on the dynamic behavior and properties of the soils [1][2][3].
Shear modulus (G) and damping ratio (D) are two key dynamic parameters of soils. The shear modulus represents the shear stiffness of the soil, and the damping ratio is representative of the dissipation of wave energy as waves propagate in the soil. According to studies conducted in the last decades, several factors, including mean effective stress, void ratio, degree of saturation, stress history, plasticity, soil grain characteristics, and shear strain level, have remarkable effects on these two parameters, which affect the dynamic behavior of soils [4][5][6][7]. Shear strain is one of the most significant parameters that influence the dynamic behavior of soils and plays a remarkable role in the assessment and interpretation of the shear modulus and damping ratio. Small-strain, medium-strain, and large-strain shearing occur at strain levels of γ < 10^-4 %, 10^-3 % < γ < 10^-1 %, and γ > 10^-1 %, respectively [8].
Recent research on the dynamic characteristics of unsaturated soils can be divided into two categories: 1) the small-strain level and 2) medium to large strain levels. In both categories, the impacts of matric suction, net stress, degree of saturation, and hydraulic paths on the small-strain shear modulus (G0 or Gmax) and damping ratio (D0 or Dmin), and on the strain-dependent shear modulus and damping ratio of unsaturated soils, are investigated. Unsaturated resonant column and bender element tests are usually implemented for small-strain tests, and unsaturated resonant column-torsional shear, triaxial, simple shear, and hollow cylinder tests are used for medium to large shear strain levels. On the basis of the test results, a number of empirical and semi-empirical correlations have also been proposed by researchers for the prediction of the small-strain shear modulus and damping ratio [9][10][11][12][13][14][15][16]. Medium to large strain tests on unsaturated soils have also revealed the degradation of the unsaturated shear modulus and the aggradation of the damping ratio with strain level in all unsaturated tested specimens. In addition, previous studies have shown that an increase in matric suction results in an increase in the shear stiffness and a decrease in the damping ratio [17][18][19][20][21][22][23].
This paper aims to present the results of some unsaturated tests to assess the shear modulus and damping ratio of a specific unsaturated silt by implementing both unsaturated bender element and resonant column-torsional shear tests.
Test Material
Firuzkuh silica silt obtained from a mine in the northeast of Tehran, Iran is used in this study. The soil is a non-plastic silt with a grain size distribution curve obtained from hydrometer testing according to the ASTM D7928 standard, as shown in Fig. 1a. The soil water retention curve (SWRC) was obtained by implementing the axis translation method under a mean net stress of 50 kPa, and the van Genuchten model [24] was used to fit the obtained experimental data for the soil specimen along the drying path, as presented in Fig. 1b. This soil is classified as ML in the USCS soil classification. The physical properties of the material are listed in Table 1. The maximum and minimum void ratios are assessed using ASTM D4254 and D4253, respectively. These methods are only applicable to soils with a maximum fines content of 15%, hence there is no standard method to acquire these parameters for soils with a higher fines content. In spite of this restriction, these methods have been used to obtain emax and emin in several studies [e.g., 25-27].
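For reference, the van Genuchten retention model used to fit the SWRC can be sketched as below; the α and n values are placeholders, not the parameters fitted in this study:

```python
# van Genuchten (1980) retention model:
#   Se = [1 / (1 + (alpha * psi)**n)]**m,  with m = 1 - 1/n,
# where Se is the effective saturation and psi is the matric suction.
def van_genuchten(psi_kpa: float, alpha: float = 0.05, n: float = 2.0) -> float:
    m = 1.0 - 1.0 / n                      # common Mualem constraint
    return (1.0 / (1.0 + (alpha * psi_kpa) ** n)) ** m
```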
Test Equipment
To investigate the dynamic properties of the tested soil at small-to-medium strain levels in the unsaturated state, a triaxial cell equipped with bender elements (BE) and a resonant column-torsional shear (RC-TS) system, which were developed and modified at the advanced soil mechanics laboratory of the Sharif University of Technology, were implemented. These devices are suction-controlled and can control and apply the matric suction during any hydromechanical path. The axis translation technique proposed by Hilf [28] was utilized for this purpose as well. The small-strain shear wave velocity of the silt under various hydromechanical conditions was measured utilizing the modified triaxial cell equipped with bender elements. The associated G0 or Gmax was calculated using Eq. (1): G0 = ρVs², where Vs is the measured shear wave velocity and ρ is the density of the specimen. The shear modulus and damping ratio in the small to medium strain range were also measured using the resonant column-torsional shear apparatus. This device can implement both resonant column and torsional shear loadings. The resonant column (RC) test essentially involves a soil column in a fixed-free end condition that is excited over a wide range of frequencies to capture the natural frequencies of the soil specimen. Once the first resonant frequency (fr) is obtained, the shear strain amplitude (γ), shear wave velocity (Vs), and shear modulus (G) of the soil can be readily determined. In this test, the shear wave velocity was determined by measuring the resonant frequency, and the shear modulus was then calculated using Eq. (1). The shear strain in this test was determined using accelerometer sensor data and was calculated using Eq. (2), where Ac is the output voltage of the accelerometer, d is the distance between the location of the accelerometer and the axis of the specimen, L is the length of the specimen, CF is the accelerometer calibration factor, and Req is the equivalent radius of the specimen (equal to 0.707r).
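A one-line sketch of Eq. (1) in Python; the shear wave velocity and density values in the example are illustrative only:

```python
# Small-strain shear modulus from the bender element test (Eq. 1):
# G0 = rho * Vs**2.
def shear_modulus(vs_m_per_s: float, rho_kg_per_m3: float) -> float:
    return rho_kg_per_m3 * vs_m_per_s**2   # result in Pa

print(shear_modulus(200.0, 1900.0) / 1e6)  # ~76 MPa for Vs = 200 m/s
```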
The damping ratio (D) in the resonant column test can be determined from the frequency response curves via the half-power bandwidth method, calculated using Eq. (3): D = (f2 - f1) / (2 fr), in which f1 and f2 are the frequencies at the half-power points and fr is the resonant frequency. In the torsional shear (TS) test, the shear modulus and damping ratio were measured from the shear stress-shear strain hysteresis curve. The secant shear modulus was defined as the slope of the hysteresis loop, and the area of the shear stress-shear strain hysteresis curve was used to estimate the damping ratio.
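A numpy sketch of the half-power bandwidth calculation of Eq. (3); it assumes a single-peaked response curve sampled finely enough that the half-power points can be read off directly:

```python
# Damping ratio via the half-power bandwidth method (Eq. 3):
# D = (f2 - f1) / (2 * fr), with f1, f2 at amplitude = peak / sqrt(2).
import numpy as np

def damping_half_power(freqs: np.ndarray, amps: np.ndarray) -> float:
    fr = freqs[np.argmax(amps)]            # resonant frequency
    half_power = amps.max() / np.sqrt(2.0)
    above = freqs[amps >= half_power]      # band where response > half power
    f1, f2 = above.min(), above.max()
    return (f2 - f1) / (2.0 * fr)
```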
Specimen Preparation
The diameter and height of the cylindrical specimens used in the bender element tests were 60 mm and 120 mm, respectively. In the resonant column-torsional shear tests, the corresponding dimensions of the specimens were 36 mm and 72 mm. For specimen preparation, de-aired distilled water was added to the oven-dried silt, blended well, and the mixture was then placed in a sealed plastic bag for 24 hours. All specimens for both tests were prepared with the same initial void ratio of 0.7 (e0 = 0.7). The specimens were prepared with an initial water content of 15%; according to Fig. 1b, the initial suction for the specimens was estimated at about 26 kPa, and the relative density of the soil specimens was measured at approximately 93%. This high relative density indicates that the soil specimens were very dense and stiff; therefore, the volume changes in all tests were insignificant. Static compaction using the under-compaction method proposed by Ladd [29] was used to prepare the specimens, by placing the soil in 10 layers for bender element specimens and 4 layers for resonant column-torsional shear specimens. The interfaces between successive layers were scarified for better interpenetration of the subsequent layers.
Test Procedure
After preparing the specimen and placing it on the pedestal of the cell, a vacuum of about 40 kPa was temporarily applied at the top cap as a seating load to separate the mold from the non-cohesive specimen and consequently to avoid deformation, disturbance, and overturning of the specimen. The focus of this research was on the measurement of the dynamic parameters of the unsaturated tested silt during the drying path at relatively high matric suctions. Therefore, all the specimens needed to be saturated and then dried. For this purpose, back pressure was used to saturate the specimens: a pressure difference of 40 kPa between the back water pressure and the confining pressure was maintained, and the pressures were simultaneously increased step by step to approach the saturated condition. Enough waiting time was allowed for air bubbles to dissolve into the water, in order to obtain a Skempton's B-value larger than 0.95. Once saturation of the specimen was secured, a net mean stress (pn) of 50 kPa was applied and maintained on the specimen until equilibrium was reached. Then, the target matric suction, which is the difference between the air and water pressures, was applied to the specimen and maintained for about a week or two, depending on the specimen condition, until there was no change in the water content and volume of the specimen and the target hydromechanical equilibrium was achieved. At this stage, the shear wave velocity was measured by conducting a bender element test. Since the propagation of a small-strain wave imposes no disturbance on the specimen, the next target suction could be applied to the same specimen, and after a new hydromechanical equilibrium, the shear wave velocity could be measured for the new target suction. This process was repeated until shear wave velocities were measured for all pre-planned matric suctions.
For the resonant column-torsional shear tests, similar steps were taken for specimen preparation. After preparation and saturation of the specimen, similar to that explained for the bender element test, the desired matric suction was applied to the soil column. At the target hydromechanical equilibrium, the resonant column (RC) test was performed at different shear strain levels by changing the amperage amplitude of the current in five steps of 0.1, 0.3, 0.5, 0.7, and 1 A. There is a direct relationship between the value of the shear strain and the amperage amplitude in the RC device. After the RC test was completed, the torsional shear (TS) test was carried out on the same specimen, as the shear strain experienced at the end of the RC tests was still small enough to preserve the shear behavior of the tested reconstituted specimens. The torsional shear test was carried out at a medium-strain level to measure the shear modulus and damping ratio corresponding to that level of strain for the target hydromechanical conditions. In this part of the tests, the values of the shear modulus and the damping ratio at different strain levels and various hydromechanical conditions were measured using the stress-strain hysteresis loop related to each test step. The RC-TS device has a restriction on measuring the specimen volume change; hence, the specimens were taken out of the cell, photographed, weighed, and their dimensions measured to calculate the volume change of the specimen at the end of the test. Because the tested specimen was disturbed at the end of the TS test, a new specimen had to be prepared and tested for each new set of RC-TS tests at the new target matric suctions, similar to the previously mentioned procedure. Target matric suctions of 64 kPa, 128 kPa, and 256 kPa were considered for this set of tests, under a net mean stress of 50 kPa for all tests. The measured degrees of saturation for these matric suctions were 37%, 25%, and 18%, respectively. The tested soils under these conditions were located in the transition and residual zones of the Soil Water Retention Curve (SWRC) (Fig. 1b). It should be mentioned that the reported experiments encompass only three suctions and one net mean stress value, and this research is planned to be completed in the near future.
Effect of matric suction on the resonant frequency and frequency curves of the tested soil
To demonstrate the effect of matric suction on the resonant frequency of the tested soil, the variation of shear strain with resonant frequency is shown in Fig. 2. The figure demonstrates that an increase in matric suction induces an increase in the soil resonant frequency and a decrease in the maximum shear strain level. It is noticeable that the matric suction has a remarkable impact on the soil behavior and the frequency response curve, although this impact diminishes as the matric suction increases. This means that the difference between the curves associated with the suctions of 128 kPa and 256 kPa is smaller than that for the curves associated with 64 kPa and 128 kPa. Additionally, it can be observed that the shear strain level has a noticeable effect on the shape of the response curve, so that the response curve for I = 1 A is wider than that for I = 0.1 A (Fig. 2). These shape variations affect the half-power points that are used for the calculation of the soil damping ratio.
According to Fig. 2, the frequency response curves of the soil are affected by the level of soil suction, which results in an asymmetrical shape of the curves and consequently reduces the precision of the damping ratios calculated using the half-power bandwidth method.
Effect of matric suction on the stress-strain hysteresis loop
The results of the torsional shear tests are presented in Fig. 3 in the form of stress-strain hysteresis loops. From this figure, it can be concluded that increasing the matric suction from 64 to 256 kPa resulted in a hardening behavior of the soil and an increase in the slope of the secant line, i.e., the shear modulus. Also, the area of the stress-strain hysteresis loop, i.e., the soil damping, decreased with an increase in matric suction.
The hardening behavior is more pronounced for the change in matric suction from 64 kPa to 128 kPa; beyond that, only very small changes can be observed. This observation is due to the fact that the tested soil under both suctions of 128 kPa and 256 kPa was located in the residual zone of the SWRC, so the change in suction did not considerably affect the soil saturation and consequently the stiffness or damping ratio of the tested soil. In fact, the soil saturation and inter-particle forces were almost unaffected by the increase of the matric suction from 128 kPa to 256 kPa, and therefore the soil stiffness and stress-strain behavior of the tested soil were not affected either. As demonstrated in Fig. 3, the hysteresis loops for the suctions of 128 kPa and 256 kPa were approximately identical.
Effect of matric suction on the dynamic properties of the tested soil
Variations of the shear modulus and damping ratio versus shear strain for the tested soil under various matric suctions are presented in Fig. 4. It is clearly notable that with an increase in the strain level, the shear modulus decreases and the damping ratio increases. The effect of suction on the shear modulus is also clearly observed. According to Fig. 4a, the results of the bender element tests demonstrate that the maximum shear modulus increased significantly as the matric suction increased. Furthermore, the results of the resonant column tests over the range of tested shear strain levels indicate that the effect of matric suction on the shear modulus decreased as the shear strain increased.
The data resulting from the torsional shear tests, shown in Fig. 4a for the secant shear modulus, also illustrate a decrease in shear modulus with an increase in strain level. However, the effect of matric suction on the stiffness degradation curve was different. The result for a matric suction of 64 kPa indicates a normal decrease in shear modulus with an increase in shear strain and a decrease in matric suction. However, as illustrated in Fig. 3 and discussed earlier, the results of the TS tests for the suctions of 128 kPa and 256 kPa almost coincided; namely, for the tested soil at high matric suctions and medium-range strain, and probably higher shear strain levels, the soil stiffness is independent of the matric suction. The conclusion that can be extracted from Fig. 4b is that the damping ratios of the tested soil, obtained from the bandwidth method and from the stress-strain hysteresis loop, followed the same configuration, and both increased nonlinearly as the shear strain increased. The results shown in Fig. 4b also indicate that the matric suction had negligible effects on the damping ratio over the tested range of matric suction and net stress for the tested soil. However, there are other studies reporting that the measured damping ratio decreased with an increase in matric suction [17]. Finally, the results of the tests revealed that the damping ratios determined by the bandwidth method from the resonant column tests were between 2% and 6%, while those determined from the hysteresis loops of the torsional shear tests were between 17% and 20%.
For better insight into the effect of matric suction on the dynamic parameters of the tested soil, curves of the normalized shear modulus and damping ratio versus shear strain were prepared and are illustrated in Fig. 5. The value of the shear modulus obtained from the bender element test was regarded as the maximum shear modulus, and the maximum damping ratio was assumed to be equal to 20% for all tests. Fig. 5a shows that the curves for different matric suctions nearly merge; however, at higher levels of suction, more degradation can be observed. The shape of the normalized damping ratio curves shown in Fig. 5b is very similar to the shape of the curves shown in Fig. 4b, as all values of the latter figure (i.e., Fig. 4b) are divided by a constant value to obtain the former; therefore, employing Fig. 4b for damping is probably more appropriate.
Conclusion
This study is mainly focused on the influence of matric suction on the variation of the shear modulus and damping ratio of the tested silt at very small to medium shear strain levels. In this regard, a triaxial cell equipped with a pair of bender elements and a modified resonant column-torsional shear system were utilized to evaluate the shear modulus and damping ratio of a silt in unsaturated conditions. All the specimens had an initial void ratio of 0.7. The tests were carried out along the drying hydraulic path under a net mean stress of 50 kPa and matric suctions of 64, 128, and 256 kPa.
The experimental measurements revealed that the shear modulus increased with an increase in the matric suction. This increase was more significant in lower shear strain levels. With the increase in shear strain level, the influence of the matric suction on the shear modulus was decreased especially in the medium strain levels. In high matric suctions and medium strain levels, an increase in suction had no remarkable impact on the stiffness of the tested soil. The degradation of the shear modulus with the increase in the shear strain level was observed for all matric suctions, however, a normalized curve indicated that the intensity of the degradation is higher for higher matric suctions.
The test results indicated that the damping ratio was not affected by the matric suction, irrespective of the methods of the measurements and calculations. However, the damping ratio of the tested soil was highly dependent on the strain level.
The tests have been performed at Advanced Soil Mechanics Laboratory of Civil Engineering Department of Sharif University of Technology which is acknowledged. Also, the authors acknowledge the financial support awarded by the research deputy of the Sharif University of Technology. | 4,921 | 2023-01-01T00:00:00.000 | [
"Geology"
] |
Poly-β-(1→6)-N-acetyl-D-glucosamine mediates surface attachment, biofilm formation, and biocide resistance in Cutibacterium acnes
Background The commensal skin bacterium Cutibacterium acnes plays a role in the pathogenesis of acne vulgaris and also causes opportunistic infections of implanted medical devices due to its ability to form biofilms on biomaterial surfaces. Poly-β-(1→6)-N-acetyl-D-glucosamine (PNAG) is an extracellular polysaccharide that mediates biofilm formation and biocide resistance in a wide range of bacterial pathogens. The objective of this study was to determine whether C. acnes produces PNAG, and whether PNAG contributes to C. acnes biofilm formation and biocide resistance in vitro. Methods PNAG was detected on the surface of C. acnes cells by fluorescence confocal microscopy using the antigen-specific human IgG1 monoclonal antibody F598. PNAG was detected in C. acnes biofilms by measuring the ability of the PNAG-specific glycosidase dispersin B to inhibit biofilm formation and sensitize biofilms to biocide killing. Results Monoclonal antibody F598 bound to the surface of C. acnes cells. Dispersin B inhibited attachment of C. acnes cells to polystyrene rods, inhibited biofilm formation by C. acnes in glass and polypropylene tubes, and sensitized C. acnes biofilms to killing by benzoyl peroxide and tetracycline. Conclusion C. acnes produces PNAG, and PNAG contributes to C. acnes biofilm formation and biocide resistance in vitro. PNAG may play a role in C. acnes skin colonization, biocide resistance, and virulence in vivo.
Introduction
The anaerobic Gram-positive bacterium Cutibacterium acnes is an abundant colonizer of human skin (Achermann et al., 2014). Although considered a beneficial commensal, C. acnes can cause opportunistic invasive infections of the skin, soft tissue, cardiovascular system, and implanted medical devices (Coenye et al., 2022). Cutibacterium acnes also contributes to the pathogenesis of the common inflammatory dermatosis acne vulgaris (McLaughlin et al., 2019).
Previous investigations of PNAG production in C. acnes were inconclusive. Okuda et al. (2018) detected N-acetylglucosamine (GlcNAc) in the extracellular biofilm matrix of five C. acnes strains isolated from infected cardiac pacemakers using a wheat germ agglutinin dot blot assay, but dispersin B did not inhibit biofilm formation by any of the five strains in 96-well polystyrene microplates, an indicator of PNAG production. Gannesen et al. (2019) observed no nuclear magnetic resonance (NMR) spectroscopic signal for PNAG in the biofilm matrix of C. acnes strain RT5, an acne isolate. In the present study, we reexamined PNAG production in C. acnes using an anti-PNAG monoclonal antibody and the PNAG-degrading enzyme dispersin B. Here we present evidence that C. acnes produces PNAG in vitro, and that PNAG mediates C. acnes surface attachment, biofilm formation, and resistance to killing by the anti-acne agents benzoyl peroxide and tetracycline.
Bacterial strains and growth conditions
The bacterial strains used in this study are listed in Table 1. Strains were maintained on Tryptic Soy agar (BD). Bacterial inocula for broth cultures were prepared by transferring a loopful of cells from a 24-h-old agar plate to a microcentrifuge tube containing 200 μL of saline, mixing the cells by vortex agitation, and diluting the cells to 10^6-10^7 CFU/mL in filter-sterilized Tryptic Soy broth (BD). Culture vessels were 13 × 100 mm glass tubes or 15-mL conical-bottom polypropylene centrifuge tubes. Tubes were filled with 1 mL of inoculum and incubated anaerobically at 37°C for 72 h. Anaerobic conditions were created using a BD GasPak EZ Anaerobe sachet system.
Antimicrobial agents and enzymes
Tetracycline hydrochloride (Sigma-Aldrich; Catalog No. T7660) was dissolved at 10 mg/mL in distilled water, filter sterilized, and diluted in broth to the indicated concentrations. Benzoyl peroxide (TCI Chemicals; Catalog No. B3152) was dissolved at 10 mg/mL in dimethyl sulfoxide and diluted directly in broth. Sodium dodecyl sulfate (SDS; Catalog No. 428018) was purchased from Merck. Deoxyribonuclease I (Catalog No. DN25) was from Sigma-Aldrich. Dispersin B was obtained from Kane Biotech (Winnipeg MB, Canada).
Fluorescence confocal microscopy
PNAG was detected on the surface of Cutibacterium spp. cells by fluorescence confocal microscopy using the PNAG-specific human IgG1 monoclonal antibody (mAb) F598 conjugated to Alexa Fluor 488 as previously described (Cywes-Bentley et al., 2013). Human IgG1 mAb F429, which binds to Pseudomonas aeruginosa alginate (Pier et al., 2004), was used as a negative control. Briefly, cells were swabbed from an agar plate onto glass microscope slides, air-dried, and covered for 1 min with ice-cold methanol. After rinsing, slides were reacted with 5.2 μg/mL mAb F598 or control mAb F429 directly conjugated to Alexa Fluor 488 along with 4 μM Syto 63 in BSA/PBS. After 2 h at
Highlights
Cutibacterium acnes is a bacterium that is found on the skin of most people. C. acnes helps maintain a healthy skin microbiota but also causes acne and infections of implanted medical devices. In this study we found that C. acnes produces an adhesive extracellular polysaccharide named PNAG (poly-N-acetylglucosamine), which may help C. acnes colonize skin and medical implants. We found that PNAG protects C. acnes from killing by benzoyl peroxide and tetracycline, two drugs that are commonly used to treat acne. PNAG may represent a novel target for skin antiseptics and anti-acne drugs.
Crystal violet binding assay
Biofilms cultured in glass tubes were rinsed vigorously with tap water and stained for 1 min with 1 mL of Gram's crystal violet. Tubes were then rinsed with tap water to remove the unbound dye, air-dried, and photographed. Tubes containing sterile broth were incubated and processed along with the inoculated tubes to serve as controls. To quantitate crystal violet binding, stained tubes were filled with 1 mL of 33% acetic acid, incubated at room temperature for 30 min, and mixed by vortex agitation. A volume of 200 μL of the dissolved dye was transferred to the well of a 96-well microtiter plate and its absorbance at 595 nm (A595) was measured in a microplate reader. Percent biofilm inhibition by dispersin B was calculated using the formula [1 − (A595 dispersin B / A595 no enzyme)] × 100.
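The quantitation step reduces to a ratio of mean absorbances. The following is a minimal sketch of that calculation in Python; the replicate A595 values are hypothetical placeholders rather than data from this study.

```python
# Percent biofilm inhibition from crystal violet A595 readings,
# following the formula above: [1 - (A595_treated / A595_control)] * 100

def percent_inhibition(a595_treated, a595_control):
    return (1 - a595_treated / a595_control) * 100

# Mean A595 of triplicate tubes (hypothetical values)
a595_dspb = sum([0.021, 0.025, 0.019]) / 3   # dispersin B-treated
a595_none = sum([2.10, 1.95, 2.03]) / 3      # no-enzyme control

print(f"Biofilm inhibition: {percent_inhibition(a595_dspb, a595_none):.1f}%")
```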
Surface attachment assay
Cutibacterium acnes cells were scraped from an agar plate and resuspended in PBS at ca. 10⁶ CFU/mL. Cell suspensions were filtered through a 5-μm pore-size syringe filter to remove large clumps of cells and then aliquoted into three 15-mL centrifuge tubes (2.5 mL/tube). The first tube was left untreated to serve as a control. The second tube was supplemented with 20 μg/mL of dispersin B. The third tube was supplemented with 20 μg/mL of heat-inactivated dispersin B (95°C, 10 min). After 30 min at 37°C, the tubes were mixed by vortex agitation, and four 0.5-mL aliquots of each cell suspension were transferred to four separate 1.5-mL microcentrifuge tubes (0.5 mL/tube). A 25-mm-long ethanol-sterilized polystyrene rod (1.5 mm diameter; Plastruct Inc., Des Plaines IL, United States) was placed in each microcentrifuge tube. After 30 min, the rods were removed, rinsed with PBS to remove loosely adherent cells, and transferred to 15-mL conical centrifuge tubes containing 1 mL of PBS. Cells were detached from the rods by sonication, diluted, and plated on agar for CFU enumeration.
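Back-calculating attached cells per rod from the plate counts is simple dilution arithmetic; below is a minimal sketch under stated assumptions (the colony count, dilution factor, and plated volume are hypothetical, not values from this study).

```python
# CFU recovered per rod = colonies * dilution factor / volume plated,
# scaled by the 1 mL of PBS the cells were sonicated into.

def cfu_per_rod(colonies, dilution_factor, volume_plated_ml, recovery_volume_ml=1.0):
    cfu_per_ml = colonies * dilution_factor / volume_plated_ml
    return cfu_per_ml * recovery_volume_ml

# e.g., 42 colonies from plating 0.1 mL of a 1:100 dilution
print(cfu_per_rod(colonies=42, dilution_factor=100, volume_plated_ml=0.1))  # 42000.0
```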
Autoaggregation assay
Cutibacterium acnes cells were scraped from an agar plate into 2 mL of PBS using a cell scraper. Aliquots of the cell suspension were treated with 20 μg/mL dispersin B or 10 μg/mL DNase I for 15 min. One aliquot of cells was left untreated to serve as a control. A total of 300 μL of each cell suspension was transferred to a 0.5-mL polypropylene centrifuge tube (model 6530; Corning). The tube was then mixed by high-speed vortex agitation for 10 s, incubated statically for 20 min, and photographed.
Treatment of biofilms with enzymes and detergent
72-h-old biofilms grown in glass tubes were rinsed vigorously with water and then treated with 1 mL of 20 μg/mL dispersin B, 10 μg/mL DNase I, or 1% SDS. After 15 min, tubes were rinsed vigorously with water and stained with crystal violet as described above.
Benzoyl peroxide killing assay
72-h-old biofilms grown in glass tubes were treated directly with 20 μg/mL dispersin B in PBS for 15 min followed by 70 or 140 μg/mL benzoyl peroxide for 10 min. Biofilms were then rinsed, detached from the tubes by sonication, diluted, and plated on agar for CFU enumeration.
Tetracycline tolerance assay
Biofilms were cultured in glass tubes in broth supplemented with 100 μg/mL dispersin B and/or 0.2 μg/mL tetracycline (MIC = 0.5 μg/mL). After 72 h, biofilms were rinsed, detached from the tubes by sonication, diluted, and plated on agar for CFU enumeration.
Statistics and reproducibility of results
All experiments were performed in triplicate or quadruplicate tubes. All experiments were performed two to three times with similar results. The significance of differences between means was calculated using Student's t-test. A p-value < 0.01 was considered significant.
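A minimal sketch of the corresponding test in Python, using SciPy's two-sample t-test; the CFU values and the log10 transformation are hypothetical illustrations, not data from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical CFU counts from quadruplicate tubes, log10-transformed
untreated = np.log10([2.1e7, 1.8e7, 2.4e7, 2.0e7])
treated   = np.log10([6.3e5, 8.1e5, 5.5e5, 7.0e5])

t, p = stats.ttest_ind(untreated, treated)
print(f"t = {t:.2f}, p = {p:.2e}, significant at p < 0.01: {p < 0.01}")
```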
Detection of PNAG on Cutibacterium spp. cells
Cells of Cutibacterium sp. strain KPL2009 and C. acnes strain KPL1849 were reacted with PNAG-specific mAb F598 or control mAb F429, both directly conjugated to Alexa Fluor 488, and then visualized for immunofluorescence by confocal microscopy (Figure 1). Bacteria embedded in an immunoreactive matrix of PNAG were observed with mAb F598 but not with control mAb F429, suggesting that both strains produce PNAG.
Dispersin B inhibits Cutibacterium acnes biofilm formation in glass tubes
The ability of four C. acnes strains to form biofilms in glass culture tubes was investigated using a crystal violet binding assay (Figure 2). Two of the four strains tested (HL043PA1 and HL036PA1) formed strong biofilms, as evidenced by the large amount of bound crystal violet dye at the bottom of the tube. To determine whether dispersin B inhibits C. acnes biofilm formation, biofilm-forming strains HL043PA1 and HL036PA1 were incubated in unsupplemented broth or broth supplemented with 100 μg/mL dispersin B (Figure 3). Dispersin B significantly inhibited biofilm formation by both strains, as evidenced by the lower amount of bound crystal violet dye in tubes supplemented with the enzyme. Quantitation of bound dye for strain HL043PA1 yielded a biofilm inhibition value of 99% compared to the no-enzyme control (p < 0.001).
Dispersin B inhibits attachment of Cutibacterium acnes cells to polystyrene rods
Photographs of HL043PA1 and HL036PA1 glass culture tubes taken directly after incubation (prior to rinsing and crystal violet staining) revealed the presence of thin biofilms along the sides of the tubes that were absent in tubes supplemented with dispersin B (Figure 4). This phenomenon was also evident for HL043PA1 and HL036PA1 cultured in conical-bottom polypropylene centrifuge tubes (Figure 5 and data not shown). In both types of tubes, however, a significant amount of cell clumping was observed at the bottom of the tube, even in the presence of the enzyme. These observations suggest that PNAG may promote the attachment of C. acnes cells and biofilms to surfaces. To test this hypothesis, untreated and dispersin B-treated C. acnes planktonic cells were incubated in the presence of polystyrene rods, and the number of cells that attached to the rods after 30 min was enumerated (Figure 6). Significantly fewer dispersin B-treated cells attached to the rods than untreated cells, whereas cells treated with heat-inactivated dispersin B attached to the rods at the same level as untreated cells. These results suggest that PNAG contributes to C. acnes surface attachment.
DNase I inhibits Cutibacterium acnes autoaggregation
Autoaggregation (also termed intercellular adhesion) often plays a role in biofilm formation (Trunk et al., 2018). To test whether PNAG contributes to C. acnes autoaggregation, C. acnes HL036PA1 cells were treated with dispersin B or DNase I for 30 min, mixed by vortex agitation, transferred to a microcentrifuge tube, allowed to settle for 15 min, and then photographed (Figure 7). Untreated control cells and dispersin B-treated cells settled to the bottom of the tube, whereas DNase I-treated cells remained in suspension. These findings suggest that extracellular DNA, but not PNAG, contributes to C. acnes autoaggregation.
DNase I and SDS detach Cutibacterium acnes biofilms from glass tubes
To further investigate the composition of C. acnes biofilms, 72-h-old HL043PA1 biofilms were treated with dispersin B, DNase I, or SDS for 15 min, and then stained with crystal violet to visualize the biofilm remaining after treatment (Figure 8). DNase I and SDS, but not dispersin B, efficiently detached the mature biofilms, suggesting that extracellular DNA and proteinaceous adhesins, but not PNAG, contribute to biofilm stability in mature C. acnes biofilms.
Dispersin B sensitizes Cutibacterium acnes biofilms to benzoyl peroxide killing
HL043PA1 biofilms (72-h-old) were treated directly with 20 μg/mL dispersin B for 15 min followed by 70 or 140 μg/mL benzoyl peroxide for 10 min. The biofilms were then detached from the tubes by sonication, diluted, and plated on agar for CFU enumeration. Control experiments showed that dispersin B alone did not kill C. acnes cells or inhibit their growth (data not shown). Treatment of C. acnes biofilms with 70 or 140 μg/mL benzoyl peroxide alone resulted in a 1-log reduction in C. acnes CFUs, while pre-treatment of biofilms with dispersin B increased benzoyl peroxide killing by approximately 0.5 log (p < 0.001; Figures 9A,B). These findings suggest that PNAG protects C. acnes biofilm cells from killing by benzoyl peroxide.
Dispersin B decreases tetracycline tolerance in Cutibacterium acnes biofilms
The effect of dispersin B on the tolerance of HL043PA1 biofilms to tetracycline was measured by culturing biofilms in broth supplemented with 100 μg/mL dispersin B and/or 0.2 μg/mL tetracycline (MIC = 0.5 μg/mL). After 72 h, fewer C. acnes cells were recovered from tubes supplemented with dispersin B plus tetracycline compared to tubes supplemented with tetracycline alone (p < 0.02; Figure 9C). These findings suggest that PNAG contributes to tetracycline tolerance in C. acnes biofilms.
Discussion
Cutibacterium acnes is one of the most abundant bacteria on the skin of most people (Oh et al., 2014). Cutibacterium acnes is both a beneficial commensal that helps maintain homeostasis of the skin microbiome, and an opportunistic pathogen associated with acne vulgaris and invasive infections of implanted medical devices (Achermann et al., 2014). Biofilm formation likely plays an important role in the ability of C. acnes to colonize skin and device surfaces (Coenye et al., 2022). Biofilms may also play a role in the pathogenesis of acne vulgaris (McLaughlin et al., 2019). Cutibacterium acnes biofilms have been observed in acne lesions and on implanted medical devices in vivo (Bayston et al., 2006; Jahns et al., 2012). Biofilm formation is also a common phenotype among C. acnes clinical isolates in vitro (Coenye et al., 2022). Understanding the mechanism of C. acnes biofilm formation may lead to the development of novel antibiofilm agents that can be used to treat acne or prevent invasive infections.
Bacteria in a biofilm are encased in a self-synthesized polymeric matrix that holds the cells together in a mass, attaches them to a tissue or surface, and protects them from killing by biocides and host immunity (Flemming and Wingender, 2010). Several previous studies have investigated the composition of the C. acnes biofilm matrix in vitro. Gannesen et al. (2019) found that the biofilm matrix of C. acnes acneic strain RT5 consisted of approximately 63% polysaccharides, 10% proteins, 4% DNA, and 23% other compounds. The major polysaccharide was a linear polymer of glucose, galactose, mannose, galactosamine, and diaminomannuronic acid in a molar ratio of 1:1:0.3:1:2. Jahns et al. (2016) detected similar components in the biofilm matrix of C. acnes skin isolate KPA171202 by performing fluorescence microscopy with carbohydrate-, protein-, and DNA-specific stains. Kuehnast et al. (2018) found that proteinase K significantly detached pre-formed biofilms produced by 7 of 8 C. acnes strains in flow cells, and that DNase I detached biofilms of 4 of the 8 strains tested. Similarly, Fang et al. (2021) found that cationic liposomes loaded with DNase I or proteinase K significantly detached pre-formed biofilms produced by C. acnes acneic strain ATCC6919 in 24-well microtiter plate wells, and Okuda et al. (2018) found that proteinase K and DNase I significantly inhibited biofilm formation by 2 of 5 C. acnes implant isolates in 96-well microplates when the enzymes were added to the culture medium prior to biofilm formation. Taken together, these results are consistent with the presence of polysaccharides, proteinaceous adhesins and eDNA in the biofilm matrix of some C. acnes strains. Our results demonstrating inhibition of HL043PA1 autoaggregation by DNase I (Figure 7) and detachment of C. acnes HL043PA1 biofilms by DNase I and SDS (Figure 8) are consistent with the presence of proteinaceous adhesins and eDNA in the biofilm matrix of this strain. Several studies have revealed an important role for eDNA in biofilm formation, adhesion, and structural integrity in diverse bacterial species (Panlilio and Rice, 2021).
Previous investigations of PNAG production in C. acnes were inconclusive. Gannesen et al. (2019) observed no NMR spectroscopic signal for N-acetylglucosamine or 2-acetamido-2-deoxy-galactose in the biofilm matrix of C. acnes strain RT5, suggesting that this strain does not produce PNAG. Okuda et al. (2018) found that dispersin B did not inhibit biofilm formation by five strains of C. acnes isolated from cardiac pacemakers when cultured in polystyrene microtiter plate wells. Since dispersin B was previously shown to inhibit biofilm formation by other PNAG-producing bacteria in vitro (Itoh et al., 2005; Parise et al., 2007; Pérez-Mendoza et al., 2011; Turk et al., 2013), these results suggested that PNAG was not a major adhesive component of biofilms produced by these five C. acnes strains.
In the present study we reinvestigated PNAG production in C. acnes using the PNAG-specific mAb F598 and the PNAG-specific glycosidase dispersin B. We found that mAb F598 reacted with cells of C. acnes strain KPL1849 and Cutibacterium sp. strain KPL2009 cultured on agar (Figure 1), suggesting that these two strains produce PNAG under some conditions. We also found that dispersin B exhibited antibiofilm activities against C. acnes strains HL036PA1 and HL043PA1, including inhibition of surface attachment (Figures 3-6) and sensitization to biocide killing (Figure 9). Since dispersin B is an accurate indicator of PNAG production (Cywes-Bentley et al., 2013; Eddenden and Nitz, 2022), these findings suggest that these two C. acnes strains also produce PNAG under some conditions. The antibiofilm activities exhibited by dispersin B against C. acnes HL036PA1 and HL043PA1 are consistent with those exhibited by dispersin B against other species of bacteria (Itoh et al., 2005; Izano et al., 2007; Parise et al., 2007; Ganeshnarayan et al., 2009). The lack of an NMR signal for PNAG in the biofilm matrix of C. acnes strain RT5 (Gannesen et al., 2019) may be due to the fact that not all C. acnes strains produce PNAG, or that PNAG is a minor component of the RT5 biofilm matrix. The fact that dispersin B did not exhibit biofilm-inhibiting activity against five implant-associated C. acnes strains in 96-well microplates (Okuda et al., 2018) may be due to strain differences or to the fact that dispersin B exhibits different antibiofilm activities depending on the shape and size of the culture vessel (Izano et al., 2007). It is also possible that the enzyme used by Okuda et al. (2018) was inactive, because no positive control for enzyme activity was reported.
Our results suggest that PNAG mediates C. acnes surface attachment and biocide resistance, but that eDNA is the major intercellular adhesin in mature C. acnes biofilms. C. acnes may be similar to Staphylococcus aureus, where double-stranded DNA is the most common extracellular component of biofilms produced by most strains (Sugimoto et al., 2018), while PNAG functions to confer resistance to killing by biocides (Serrera et al., 2007; Darouiche et al., 2009; Gawande et al., 2014) and innate host immune mediators (Kropec et al., 2005). Like those of C. acnes, S. aureus biofilms are readily detached by DNase I but not by dispersin B (Izano et al., 2008; Kaplan et al., 2012). More experiments are needed to determine the functions of PNAG in C. acnes cells and biofilms, and to determine whether PNAG production correlates with skin colonization, biocide resistance, immune tolerance, acne pathogenesis, and medical device infections in vivo.
FIGURE 2
FIGURE 2 Biofilm formation by four Cutibacterium acnes strains in 13 × 100 mm glass tubes. Cultures were incubated for 3 days, rinsed with water, and stained with crystal violet. Strain names are indicated below. The tube at the left was incubated with sterile broth. This experiment was performed in duplicate tubes on three separate occasions with identical results. Representative tubes are shown.
FIGURE 3
FIGURE 3 Biofilm formation by Cutibacterium acnes strains HL036PA1 and HL043PA1 in the absence or presence of 100 μg/mL dispersin B. Biofilms were cultured in glass tubes for 3 days, then rinsed with water and stained with crystal violet. Triplicate tubes for each condition are shown. The control tubes at the left (No bacteria) were incubated with sterile broth. These experiments were performed on two separate occasions with similar results. Tubes from representative experiments are shown.
FIGURE 4
FIGURE 4 Growth of Cutibacterium acnes strains HL036PA1 and HL043PA1 in glass tubes in the presence of 0 or 100 μg/mL dispersin B. Tubes were photographed after 3 days of incubation. Triplicate tubes for each condition are shown. Strain names are indicated at the left. Enzyme treatments are indicated below. These experiments were performed on three separate occasions with identical results. Tubes from representative experiments are shown.
FIGURE 6
FIGURE 6 Attachment of Cutibacterium acnes HL043PA1 planktonic cells to polystyrene rods. Cells were treated with phosphate buffered saline (PBS), 20 μg/mL dispersin B (DspB), or 20 μg/mL heat-inactivated dispersin B (ΔDspB) for 15 min prior to contacting the rods. Each point represents one individual rod. Horizontal lines indicate mean values. This experiment was performed on two separate occasions with similar results. Results from one representative experiment are shown.
FIGURE 7
FIGURE 7 Autoaggregation of Cutibacterium acnes HL043PA1 cells in the presence of 20 μg/mL dispersin B or 10 μg/mL DNase I. Cell suspensions supplemented with the indicated enzyme were transferred to polypropylene tubes, incubated statically for 20 min, and then photographed. This experiment was performed in duplicate tubes on two separate occasions with identical results. Representative tubes are shown.
FIGURE 8
FIGURE 8 Detachment of 3-day-old Cutibacterium acnes HL043PA1 biofilms by enzymes and detergents. Biofilms were rinsed with water, treated with the indicated agent for 30 min at 37°C, re-rinsed, and stained with crystal violet. Dispersin B was at 50 μg/mL, DNase I at 10 μg/mL, and sodium dodecyl sulfate (SDS) at 1%. Duplicate tubes for each treatment are shown in the top panel and triplicate tubes for each treatment are shown in the bottom panel. These experiments were performed on three separate occasions with identical results. Representative experiments are shown.
FIGURE 5
FIGURE 5 Growth of Cutibacterium acnes strain HL036PA1 in polypropylene tubes in the presence of 0 or 100 μg/mL dispersin B. Bacteria were photographed after 3 days of growth. Triplicate tubes for each condition are shown. This experiment was performed on two separate occasions with identical results. Tubes from one representative experiment are shown.
FIGURE 9
FIGURE 9 Dispersin B (DspB) sensitizes Cutibacterium acnes HL043PA1 biofilms to killing by benzoyl peroxide (BP) (A,B) and to growth inhibition by tetracycline (Tet) (C). Panels (A,B) show the effect of a 10-min BP treatment on 72-h-old biofilms cultured in glass tubes. Panel (A), 70 μg/mL BP; panel (B), 140 μg/mL BP. Some biofilms were treated with 20 μg/mL DspB in PBS for 15 min prior to contact with BP, as indicated in the legends below. In panel (C), biofilms were cultured for 72 h in 0 or 0.2 μg/mL Tet. Some tubes were supplemented with 100 μg/mL DspB, as indicated in the legend below the graph. Each dot represents one individual tube. The experiment in panel (C) was performed on two separate occasions with similarly significant differences between tetracycline alone and tetracycline + dispersin B.
TABLE 1
Cutibacterium spp. strains used in this work.
"Medicine",
"Biology"
] |
Chemical composition, antimicrobial and anti-acetylcholinesterase activities of essential oil from Lantana camara (Verbenaceae) flowers
Post-Graduate in Biodiversity and Biotechnology Program, Bionorte, State Coordination of Roraima, Federal University of Roraima UFRR, Campus Cauamé, BR 174, Km 12, District Monte Cristo, CEP 69310-250, Boa Vista-RR-Brazil. Post-Graduate in Chemistry Program, Center for Research and Post-Graduate Studies in Science and Technology, NPPGCT, UFRR, Av Capitão Ene Garcez, n. 2413, Campus Paricarana, CEP 69310-000, Boa Vista-RR-Brazil. Embrapa Brazilian Agricultural Research Corporation. Rodovia 174, Km 8, Industrial District, CEP 69301-970, Boa Vista-RR-Brazil. Institute of Exact Sciences, Department of Chemistry, Federal University of Minas Gerais, UFMG, Av Antonio Carlos, n. 6627, Pampulha, CEP 31270-901, Belo Horizonte-MG-Brazil. Chromatography Laboratory, Institute of Exact Sciences, Department of Chemistry, UFMG, Belo Horizonte-MG-Brazil.
INTRODUCTION
Lantana camara L. (Verbenaceae) is a shrub distributed in tropical, subtropical and temperate regions, and is considered a weed that is difficult to control. However, L. camara has ornamental uses, and is reported to improve soil quality for agriculture as well as possessing insecticide, antifungal, and herbicide activities. In folk medicine, this species is known for its sudorific and antipyretic activities, its action on broncho-pulmonary problems and rheumatism, and its use against scabies (Patel, 2011; Passos et al., 2009; Lorenzi, 2008; Kohli et al., 2006; Lorenzi, 2002). The essential oil of its flowers and leaves possesses a variety of chemical compounds with leishmanicidal, antimicrobial, anticancer, antiulcer, and anti-inflammatory activities, among others. However, in high doses, this species can be toxic to some animals (Oyourou et al., 2013; Machado et al., 2012; Montanari et al., 2011; Sousa et al., 2011; Costa et al., 2009; Sharma and Kumar, 2009).
The objectives of this work were to analyze the chemical constitution of the essential oil obtained from dried flowers of L. camara collected in Boa Vista, Roraima, and to evaluate its bioactivities on acetylcholinesterase inhibition and on the pathogenic microorganisms Escherichia coli, Salmonella typhimurium (Gram-negative bacteria), Staphylococcus aureus and Streptococcus sanguinis (Gram-positive bacteria), a yeast (Candida albicans), and the filamentous fungi Aspergillus flavus and Fusarium proliferatum.
Plant material and essential oil extraction
The flowers of L. camara were collected in the Cauamé Campus of the Federal University of Roraima (UFRR) in Boa Vista, Roraima, Brazil. The plant material was identified by José Ferreira Ramos (Instituto Nacional de Pesquisas da Amazônia, INPA), and a voucher specimen (268126) was deposited at the INPA Herbarium.
The flowers were dried at room temperature, and 100 g of the sample was used to obtain the essential oil by hydrodistillation using a Clevenger-type apparatus. The essential oil was dried over anhydrous sodium sulphate and stored at -20°C before analysis (Rubiolo et al., 2010; Sefidkon, 2002).
Gas chromatography/mass spectrometry analysis
A GCMS-QP2010 ULTRA (Shimadzu) was used. Column: Rxi-1MS, 30 m × 0.25 mm × 0.25 µm (Restek). Column temperature: 70°C (2 min), then 5°C min⁻¹ to 250°C. Injector: 250°C, split (1:20); GC-MS interface at 250°C. MS detector (electron impact at 70 eV) temperature was 250°C. Carrier gas: helium at 1.5 mL min⁻¹. Injection volume: 1 µL. The essential oil was diluted to 0.1% in chloroform. The data acquisition software used was GC-MS Solution (Shimadzu) together with the NIST11 library. Identification of peaks was made by comparing the mass spectra obtained by GC-MS with the NIST11 library and also by comparing the retention (Kovats) indices calculated by GC-FID with literature data.
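Retention indices of this kind are obtained by bracketing each analyte between the n-alkanes eluting immediately before and after it. The sketch below uses the linear (van den Dool and Kratz) form commonly applied to temperature-programmed runs such as this one; the retention times and carbon number are hypothetical placeholders.

```python
# Linear retention index for a temperature-programmed GC run:
# RI = 100*n + 100 * (t_x - t_n) / (t_{n+1} - t_n)

def retention_index(t_x, t_n, t_n1, n):
    """n = carbon number of the alkane eluting just before the analyte."""
    return 100 * n + 100 * (t_x - t_n) / (t_n1 - t_n)

# Analyte at 17.85 min, bracketed by C14 (16.90 min) and C15 (18.75 min)
print(round(retention_index(17.85, 16.90, 18.75, 14)))  # 1451
```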
Antibacterial and yeast assay
E. coli (ATCC 25922), S. typhimurium (ATCC 14028), S. aureus (ATCC 25923) and S. sanguinis (ATCC 49456) bacteria and C. albicans (ATCC 18804) yeast were used in the assay. Concentrations assayed were 500, 250, 125, 62.5, 31.25, 15.6, and 3.9 µg mL⁻¹ (Zacchino and Gupta, 2007). Samples were weighed and dissolved in DMSO to 50 mg mL⁻¹. 40 µL of this solution was added to a flask containing 960 µL of BHI (Brain Heart Infusion) broth (working solution). A pre-inoculum was prepared in which the bacteria and the yeast, stored under refrigeration, were transferred with a platinum loop to test tubes containing 3 mL of freshly made BHI broth. The tubes were incubated at 37°C for 18 h. Then, the pre-inoculum (500 µL) was transferred to tubes containing 4.5 mL of sterile distilled water. The tubes were homogenized and the concentration adjusted to 0.5 of the McFarland turbidity standard (10⁸ CFU mL⁻¹), thereby obtaining the inocula used in the bioassays.
Assays were performed in 96-microwell plates in duplicate. 100 µL of BHI broth was added to each well. In the first well, 100 µL of working solution was also added. The solution was homogenized and 100 µL transferred to the next well, and so on until the last well, from which 100 µL was discarded. Then, 100 µL of microorganism inoculum was added to the wells. Eight different concentrations of each sample were tested. A positive control devoid of the working solution allowed us to examine microorganism growth. A negative control, which lacked the inoculum, permitted us to discount the colour coming from the working solution. A control plate containing 100 µL of BHI culture medium and 100 µL of sterile distilled water was added to the experiment as a control of BHI broth sterility.
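The serial transfers above produce a two-fold dilution series from the 500 µg mL⁻¹ working concentration, giving the eight test concentrations (including the ~7.8 µg mL⁻¹ step implied between 15.6 and 3.9). A minimal sketch:

```python
# Two-fold dilution series over eight wells, starting at 500 ug/mL
concentrations = [500 / 2**i for i in range(8)]
print([round(c, 2) for c in concentrations])
# [500.0, 250.0, 125.0, 62.5, 31.25, 15.62, 7.81, 3.91]
```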
Another control was also prepared, containing the standard antibiotics ampicillin (antibacterial) and miconazole and nystatin (antifungals), to observe the activity of these antibiotics on the microorganisms. Microorganism growth was measured in an ELISA plate reader (492 nm) immediately after the start of the experiment (0 h). Plates were incubated at 37°C and read again after 24 h, ending the test. Results were calculated as percent inhibition using the formula:

% inhibition = 100 − [(AS − AC) × 100 / (AH − AM)]

where AS = absorbance of the sample; AC = absorbance of the sample colour control; AH = absorbance of the microorganism growth control; and AM = absorbance of the culture medium control.
Filamentous fungi assay
Filamentous fungi used in this test were A. flavus (CCT 4952) and F. proliferatum (CML 3287). DMSO was used for sample preparation, and the concentration of the sample in the assay was 250 mg mL⁻¹. Sabouraud broth was used for fungal growth. A spore suspension at a concentration of 5 × 10⁵ spores mL⁻¹ was used after counting the spores in a Neubauer chamber. The sample incubation time was 48 h, after which absorbance was read at 490 nm on a microtitre plate reader. Outliers in the data were screened using the Grubbs test at the 95% significance level. The percentage of inhibition was calculated using the same formula as above:

% inhibition = 100 − [(AS − AC) × 100 / (AH − AM)]

where AS = absorbance of the sample; AC = absorbance of the sample colour control; AH = absorbance of the microorganism growth control; and AM = absorbance of the culture medium control.
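The outlier screening step can be reproduced with a standard two-sided Grubbs test; a minimal sketch, with hypothetical absorbance replicates, is shown below.

```python
import numpy as np
from scipy import stats

def grubbs_is_outlier(x, alpha=0.05):
    """Two-sided Grubbs test: returns (suspect value, rejected?)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    g = np.abs(x - mean).max() / sd
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    suspect = x[np.abs(x - mean).argmax()]
    return suspect, g > g_crit

print(grubbs_is_outlier([0.41, 0.39, 0.40, 0.42, 0.71]))  # -> 0.71 flagged
```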
The concentrations of compounds detected in the essential oil of L. camara flowers appear to be influenced by the collection site. For instance, in the oil obtained from an L. camara voucher collected in India (Khan et al., 2002), the major compounds identified were β-elemene (14.5%), germacrene D (10.6%), α-copaene (10.7%), α-cadinene (7.2%), β-caryophyllene (7.0%) and γ-elemene (6.8%); on the other hand, the essential oil obtained from L. camara flowers grown in Iran showed sabinene (16.5%), β-caryophyllene (14.0%), 1,8-cineole (10.0%), bicyclogermacrene (8.1%) and α-humulene (6.0%) as major compounds (Sefidkon, 2002). El Baroty et al. (2014) report the chemical composition of the essential oil of L. camara flowers obtained in Cairo, Egypt, where the major chemicals differ from those presented in Table 1, among other differences. The variations in chemical composition and the respective yields can result from a number of biotic and abiotic factors (Figueiredo et al., 2008). These variations in chemical composition can lead to different bioactive effects against the diseases and microorganisms responsible for various pathologies. The L. camara essential oil was assayed for acetylcholinesterase inhibition and, as a result, a good inhibition of 77.15% was detected.
Neurodegenerative syndromes like Alzheimer's disease have been causing great concern worldwide. This disease causes difficulties in language, memory, emotional behaviour, personality and cognitive abilities (Singh et al., 2013). The World Health Organization (WHO) presents alarming data for Alzheimer's disease: in 2010 it was estimated that about 35.6 million people were already suffering from this illness, with projections that this value would triple by 2050, with approximately 115.4 million people directly affected by Alzheimer's disease (WHO, 2012). Plant components have been extensively screened for their potential for acetylcholinesterase inhibition, since many plants can be used for the treatment of neurodegenerative diseases (Mukherjee et al., 2007). According to the classification of the acetylcholinesterase-reducing potential of crude extracts, weak inhibitors present inhibitory values below 30%; moderate inhibitors present 30 to 50% inhibition; and potent inhibitors show over 50% enzyme inhibition (Vinutha et al., 2007).
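That three-band classification is easy to encode; a minimal sketch applying it to the 77.15% inhibition reported here:

```python
# Classification of AChE inhibition following Vinutha et al. (2007):
# < 30% weak, 30-50% moderate, > 50% potent
def classify_ache_inhibition(pct):
    if pct < 30:
        return "weak"
    if pct <= 50:
        return "moderate"
    return "potent"

print(classify_ache_inhibition(77.15))  # potent
```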
Considering this, the essential oil of L. camara dried flowers stands out as a potent inhibitor of the AChE enzyme. This inhibitory potential may be heightened by the synergism of the several chemical constituents present; it has been found that the inhibitory potential can be associated with synergistic compounds. As an example, interactions between 1,8-cineole/α-pinene and 1,8-cineole/caryophyllene oxide are reported to improve the reduction of acetylcholinesterase activity, and the same effect can also be caused by miscellaneous compounds (Singh et al., 2013; Savalev et al., 2003). Therapies involving essential oils can be accomplished in several ways, the most common being aromatherapy. It was possible to observe improvement in the cognitive function of patients with Alzheimer's through aromatherapy (Savalev et al., 2003). Essential oils act on the central nervous system with excellent results in improving living conditions and the treatment of several diseases, especially neurodegenerative diseases like Alzheimer's and Parkinson's (Dobetsberger and Buchbauer, 2011; Jimbo et al., 2009). The filamentous fungi A. flavus and F. proliferatum were inhibited by L. camara essential oil (44.52 and 28.97%, respectively). These microorganisms affect humans and crops, causing much damage, especially economic losses, in various parts of the world (Mulè et al., 2004; Howard, 2002). Gram (+) and Gram (-) bacteria (Table 2) and the yeast were inhibited in the assay, showing activity for the essential oil from L. camara flowers at concentrations ranging from 3.91 µg mL⁻¹ to 500 µg mL⁻¹, with some satisfactory results in this bioassay. Ampicillin, a standard antibiotic in clinical use, was used as a control and was very efficient at the concentrations shown. Growth of the pathogenic yeast, C. albicans, was inhibited by more than 90% at all concentrations. It is noteworthy that there was 95% C. albicans inhibition at 15.6 µg/mL, while the positive controls, miconazole and nystatin, were less active at the same concentration (92 and 91%, respectively).
Pathogenicity of this yeast is usually associated with low immunity as a consequence of aging, infection or therapies, which can lead to the development of candidiasis. This infection affects the skin, oral cavity, esophagus, gastrointestinal tract, vagina and the human vascular system (Calderone and Fonzi, 2001). The above results reveal the potential of this essential oil against pathogenic microorganisms, broadening the biological importance of the species, already known for its actions as an antipyretic, antimutagenic, and insecticide, among others (Naz and Bano, 2013; Seth et al., 2012; Zandi-Sohani et al., 2012; Kurade et al., 2010; Sharma and Kumar, 2009; Sonibare and Effiong, 2008; Verma and Verma, 2006; Barre et al., 2005; Deena and Thoppil, 2000; Siddiqui et al., 1995).
CONCLUSION
Although L. camara is considered a weed, it has several biological benefits, being a promising source of a natural and powerful oil that acts as an AChE inhibitor. This essential oil may be a candidate for the development of new medicines to treat neurodegenerative diseases. Besides, it shows efficacy against pathogenic microorganisms that affect humans; therefore, its use as an alternative antimicrobial is also suggested.
Figure 1.
Figure 1. Chromatogram of the essential oil from L. camara flowers.
Table 1.
Percentage composition of the L. camara flowers essential oil.
Table 2.
Inhibition of gram (+) and gram (-) bacteria by the essential oil from L. camara flowers.
"Biology"
] |
All Fiber Mach–Zehnder Interferometer Based on Intracavity Micro-Waveguide for a Magnetic Field Sensor
A magnetic fluid (MF)-based magnetic field sensor with a filling-splicing fiber structure is proposed. The sensor realizes Mach–Zehnder interference by an optical fiber cascade structure consisting of single mode fiber (SMF), multimode fiber (MMF), and single-hole-dual-core fiber (SHDCF). The core in the cladding and the core in the air hole of the SHDCF are used as the reference and sensing light paths, respectively, and the air hole of the SHDCF is filled with magnetic fluid to realize magnetic field measurement based on the magnetically controlled refractive index (RI) characteristics. The theoretical feasibility of the proposed sensing structure is verified by Rsoft simulation, the optimized length of the SHDCF is determined by an optical fiber light transmission experiment, and the SHDCFs are well fused without collapse through special splicing parameter settings. The results show that the sensitivity of the sensor is −116.1 pm/Gs under a magnetic field of 0~200 Gs, with good long-term operation stability. The proposed sensor has the advantages of high stability, fast response, simple structure, and low cost, and has development potential in the field of miniaturized magnetic field sensing.
Introduction
Magnetic field measurement is of great significance in various fields, such as biomedicine, aerospace, military applications, the construction of electrical equipment, and so on [1][2][3]. Traditional electromagnetic sensors are easily damaged or disturbed by the magnetic field [4]. In contrast, optical fiber magnetic field sensors, with the advantages of anti-electromagnetic interference, small size, high stability, corrosion resistance, and remote measurement, are widely studied in the field of magnetic field sensing [5,6]. Magnetic fluid (MF) is a kind of promising magneto-optical nanomaterial consisting of magnetic particles and a base fluid. In the static state, the MF has no magnetic attraction and the magnetic particles are evenly distributed in the base fluid. However, a magnetic chain structure will gradually be formed by the magnetic particles along the direction of the applied magnetic field when the magnetic field is applied, which will cause the effective dielectric constant of the MF to change, and in turn the refractive index (RI) of the magnetic fluid [7][8][9][10][11]. Hence, the MF has attracted extensive attention as a crucial sensing element of optical fiber magnetic field sensors, owing to the advantages of high sensitivity and good fiber compatibility [12][13][14].
The MF-based optical fiber magnetic field sensors mainly include Fabry-Perot (FP) microcavity magnetic field sensor, fiber Bragg grating (FBG) magnetic field sensor, mode interference magnetic field sensor, and surface plasmon resonance (SPR) magnetic field sensor. In 2016, Xia et al. [15] proposed an FP magnetic field sensor with temperature compensation by FBG; the sensitivity is 0.53 nm/mT. In 2018, Bao et al. [16] designed a phase-shifting fiber Bragg grating magnetic field sensor with micro-slit, with sensitivity of 2.42 pm/Oe and the measurement range of 0~120 Oe. In 2019, Zhang et al. [17] proposed a tapered optical fiber Mach-Zehnder interferometer (MZI) magnetic field sensor with a sensitivity of 71.98 pm/Oe and 0.11 dB/Oe at 40~120 Oe. In 2019, Zhou et al. [18] demonstrated an optical fiber magnetic field sensing system based on SPR with a sensitivity of 303 pm/Gs. In the above literature, the method of gluing-immersing MF was used in preparation of the sensor, which is simple to operate, but generally requires the use of capillary tubes for packing MF, and it also has disadvantages including a high dosage of MF, relatively large volume, and long response time. In addition, because of the use of glue to seal the sensing structure, the MF is prone to be contaminated, which will affect the sensing properties of MF. Compared with the gluing-immersion method, the all-fiber structures prepared by filling-splicing have the advantages of small volume, fast response rate, strong stability, and low cost. According to the research, the smaller the volume of magnetic fluid, the faster the response speed of the MF sensor [19]. In 2017, Yin et al. [20] prepared photonic crystal fiber misplaced fused magnetic field sensors by the method of filling-splicing, and the X-axis sensitivity reached 114.5 pm/mT. In 2018, Li et al. [21] used multimode fiber and photonic crystal fiber to make a magnetic field sensor by the method of filling-splicing with a sensitivity of 72 pm/Gs. However, the MF inside the porous fibers is carbonized and gasified when fused, which affects the reflectivity of the fiber end face, and eventually the performance of the entire sensor is affected. In order to avoid the influence of the carbonization and gasification of the MF on the performance of the sensor, the air hole filled with the MF should not be used as the light transmission channel. If an optical waveguide can be added to the air hole, it can not only ensure the transmission of light, but also sense the change of liquid refractive index in the air hole, which will be very helpful to improve the sensing performance.
In this paper, an intracavity micro-waveguide structure is proposed based on the SHDCF. The air cavity plays the role of holding the MF and protecting the micro-waveguide, and the micro-waveguide, as the sensing arm, senses the change in the RI of the MF. The effective RI of the offset core in the cladding, which serves as the reference arm, is hardly affected by the magnetic fluid. In addition, the theoretical feasibility of the proposed sensing scheme is verified by Rsoft simulation, the optimized length of the SHDCF is determined by an optical fiber light transmission experiment, and the SHDCFs are well fused without collapse through special splicing parameter settings. The transmission spectrum of the proposed sensor is blue shifted as the magnetic field intensity increases in the range of 0~200 Gs, and the sensitivity is −116.1 pm/Gs.
Principle and Preparation
The proposed MZI sensor is composed of a cascade structure of SMF1-MMF1-SHDCF-MMF2-SMF2. A beam of light is coupled into MMF1 through SMF1 and then expands into the two cores of the SHDCF, and the two beams in the SHDCF are coupled into SMF2 through MMF2. The schematic diagram is shown in Figure 1a. The MF is filled into the air hole of the SHDCF by capillary action, as shown in Figure 1b, and the cross-section diagram of the SHDCF is shown in Figure 1c. The SMF used in the experiment is commercial SMF with a cladding diameter of 125 µm and a core diameter of 8.2 µm, and the cladding diameter and core diameter of the MMF are 125 µm and 105 µm, respectively. The structural parameters of the SHDCF are given in Table 1.

The fabrication process of the proposed sensor is shown in Figure 2. First, a segment of SMF and a segment of MMF are cleaved and welded together with an optical fiber cleaver (Fitel S325) and a fusion splicer (Fitel S179), as shown in Figure 2I. Second, the SMF-MMF section is obtained by cutting with the optical fiber cleaver, and the SHDCF with a flat end face is vertically inserted into the MF for capillary filling, as shown in Figure 2II. Third, the SMF-MMF section and the SHDCF are fused together without collapse. Finally, another MMF-SMF section is fused to the right side of the SHDCF.

In this structure, the output of the interferometer follows the standard two-beam interference form [22]:

I = I1 + I2 + 2√(I1I2) cos(Δφ)  (1)

where I1 and I2 are the light intensities of core 1 and core 2, respectively, and Δφ is the phase difference between the guided lights in core 1 and core 2, which can be calculated by the following formula:

Δφ = 2πΔn_eff L/λ  (2)

where λ is the operating wavelength, L is the length of the SHDCF, and Δn_eff is the effective RI difference between core 2 and core 1.
Hence, for the destructive-interference condition Δφ = (2k + 1)π, the interference valley wavelength λ_k is as follows:

λ_k = 2Δn_eff L/(2k + 1)  (3)

where k is an integer. The RI of the MF in the air hole (n_MF) will change with the external magnetic field, which changes the guiding condition of the core in the air hole and finally affects the effective RI of that core. With reference to the research results of Chen et al. [23], at a constant temperature T, the relationship between the RI of the MF and the magnetic field follows a Langevin-type function:

n_MF(H, T) = (n_s − n_0)[coth(α(H − H_c,n)/T) − T/(α(H − H_c,n))] + n_0  (4)

where n_s is the saturation value of the RI of the MF, which depends on the type of carrier liquid and the concentration of the MF; n_0 is the RI of the MF at the critical magnetic field; H_c,n is the critical value of the applied magnetic field (H > H_c,n); H is the field strength in Oe; T is the temperature in Kelvin; and α is a fitting parameter, obtained by data fitting of a magnetic saturation experiment on the magnetic fluid at a fixed temperature of 8°C. According to classic perturbation theory, Δn_eff is related to the change in the RI of the MF (Δn_MF) through the overlap factor f, i.e., Δn_eff ≈ fΔn_MF, with f defined as the fraction of the modal power located in the MF region [24]:

f = ∬_MF |E|² dA / ∬_∞ |E|² dA  (5)

When the intensity of the external magnetic field changes, the refractive index of the magnetic fluid (n_MF) will change, resulting in a change in the effective refractive index of core 2; according to Formula (3), the characteristic wavelength of the interference spectrum will shift. Hence, measurement of the external magnetic field can be realized by detecting the shift of the interference spectrum.
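To make the chain from Formula (4) to the spectral response concrete, the sketch below evaluates the Langevin-type index change and propagates it to a valley wavelength through Formulas (3) and (5). All parameter values (n_s, n_0, H_c,n, α, the overlap factor f, and the reference index difference) are hypothetical placeholders rather than fitted values for the MF used here, and the minus sign on the index change is chosen simply to reproduce the blue shift reported below.

```python
import numpy as np

def n_mf(h, t_kelvin, n_s=1.452, n_0=1.437, h_c=30.0, alpha=5.0):
    """Langevin-type RI of the magnetic fluid for H > H_c (Formula 4)."""
    x = alpha * (h - h_c) / t_kelvin
    return (n_s - n_0) * (1.0 / np.tanh(x) - 1.0 / x) + n_0

def valley_wavelength(dn_eff, length, k):
    """Interference valley wavelength (Formula 3)."""
    return 2.0 * dn_eff * length / (2 * k + 1)

f_overlap = 0.02   # overlap factor (Formula 5), hypothetical
L = 0.05           # 5 cm SHDCF
dn0 = 0.0156       # index difference at the 50 Gs reference point, hypothetical
T = 293.15         # 20 degrees C, as in the experiment below

for h in (50.0, 100.0, 200.0):
    # Minus sign so the valley blue-shifts as H increases, as observed
    dn = dn0 - f_overlap * (n_mf(h, T) - n_mf(50.0, T))
    print(f"{h:5.0f} Gs -> valley near {1e9 * valley_wavelength(dn, L, 499):.1f} nm")
```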
Simulation of Sensing Structure
The change in the effective RI of the suspended core is determined by the MF and eventually leads to movement of the interference spectrum. Therefore, the sensitivity of this structure to RI changes of the suspended-core waveguide was simulated based on Rsoft software. The optical path diagram of the simulation model is shown in Figure 3a, and the simulated optical path is consistent with the optical path structure shown in Figure 1. The RI of the core in the air hole of the simulation model is varied over the input light wavelength range of 1520~1600 nm, and the wavelength shift of the simulated transmission spectrum as the RI of the suspended core rises is shown in Figure 3b. There are obvious interference valleys (peaks) at 1525, 1540, 1565, and 1570 nm in the transmission spectrum, and the sensitivity is found to be 1620 nm/RIU at about 1565 nm after wavelength demodulation. The linear fitting of the wavelength demodulation is shown in Figure 3c, and the value of R² is 0.9881.
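The wavelength demodulation itself is a linear fit of valley wavelength against RI; a minimal sketch with hypothetical (RI, wavelength) pairs chosen to give roughly the reported slope:

```python
import numpy as np

ri     = np.array([1.430, 1.435, 1.440, 1.445, 1.450])        # hypothetical RIs
valley = np.array([1565.1, 1573.0, 1581.4, 1589.3, 1597.5])   # valley positions, nm

slope, intercept = np.polyfit(ri, valley, 1)
pred = slope * ri + intercept
r2 = 1 - np.sum((valley - pred) ** 2) / np.sum((valley - valley.mean()) ** 2)
print(f"sensitivity = {slope:.0f} nm/RIU, R^2 = {r2:.4f}")
```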
The Light Path Receiving Verification
To verify the actual sensing optical path, an experimental system is built as shown in Figure 4. The SMF-MMF-SHDCF structure connected with a light source and the SMF connected with a spectrometer are placed on the left and right motors of the fiber fusion splicer, respectively, and the distance between the SHDCF and the SMF is kept as small as possible. The SMF is moved in the X-axis and Y-axis directions to receive the transmitted light from core 1, core 2, the air hole, and the cladding of the SHDCF, respectively. The position of core 1 can be determined by observing the transmission spectrum on the spectrometer, and the positions of core 2 and the cladding can be determined by geometric calculation from the cross section and the transmission spectrum.

The SMF receives transmitted light from SHDCF sections with lengths of 3 cm, 5 cm, and 7 cm, as shown in Figure 5. The core in the air hole is exposed and its loss is greater than that of the core in the cladding, so core 2 in Figure 5 is the core in the air hole. It can be seen from the results that the loss of core 2 is much less than that of the air hole and the cladding, and the longer the SHDCF, the greater the loss of transmitted light. According to Formulas (3) and (5), the longer the SHDCF, the higher the sensitivity. As shown in Figure 5a, the light intensity in the cladding and air hole of the 3 cm SHDCF is too high, which would introduce high-order mode interference. As shown in Figure 5c, the loss of core 2 in the 7 cm SHDCF is too large, so the intensity of the interference spectrum would be very low. Considering the above factors, the length of the SHDCF was determined to be 5 cm; the light transmission intensity of each part of the SHDCF is shown in Figure 5b. The length of the MMF is 1 mm, because the beam can be expanded into the two cores of the SHDCF over this length; the relevant theoretical analysis can be found in the published article [25].
The splicing parameters were tuned so that the fibers fuse well. The automatic splicing procedure of the splicing machine is used to fuse the SMF and MMF, and manual splicing mode is used to fuse the SHDCF and MMF. The main splicing parameters are discharge intensity and discharge time; collapse of the air hole (Figure 6a) and over-carbonization of the MF (Figure 6b) will be caused when the discharge intensity is too high or the discharge time is too long. The SHDCF and MMF cannot be fused when the discharge time is short and the discharge intensity is small, and multiple discharges are then needed, which leads to tapering (Figure 6c). After a series of comparative experiments, a discharge time of 300 ms and a discharge intensity of 30 units were finally selected; the resulting collapse-free splice is shown in Figure 6d.
The Experimental Analysis
The magnetic field measurement platform is shown in Figure 7. An optical spectrum analyzer (AQ6370D), an amplified spontaneous emission source, a Gauss meter (CH-1500), and two 700-turn coils are used in the measurement platform. A programmable DC power supply (IT6164B) is used for the power supply, and the magnetic field calibration is carried out by the Gauss meter with a precision of 0.01 Gs.
The coil heats up over a long working time, which leads to an increase in coil resistance and instability of the magnetic field environment. The temperature also affects the RI of the MF, which would make the magnetic field measurement data inaccurate. Therefore, a circulating coolant is used to reduce the temperature. The coolant, driven by a motor pump, circulates inside the coil frame; the pumped coolant is cooled by a fan and finally flows back to the power coil. During the experiment, the temperature was set at 20 °C.
The prepared magnetic field sensor is placed on the magnetic field measurement platform, as shown in Figure 7. The Gauss meter is zeroed and the system is pre-heated for 30 min. The magnetic field strength is increased by 25 Gs every 20 min by increasing the output current of the DC power supply, and the measured data for each magnetic field strength are saved on the optical spectrum analyzer. A Fourier transform of the magnetic field measurement data is carried out, as shown in Figure 8a; according to the results of the Fourier transform, a small amount of light passes through the air hole and cladding and causes high-order mode interference. A band-pass filter is used to eliminate the high-order mode interference from the air hole and cladding in the magnetic field measurement data; the comparison of the transmission spectrum before and after filtering is shown in Figure 8b.
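The filtering step can be reproduced with a spatial-frequency FFT: transform the spectrum, keep only the band around the dominant two-core fringe frequency, and transform back. A minimal sketch with a synthetic spectrum standing in for the measured data (the fringe periods and the pass band are hypothetical):

```python
import numpy as np

wl = np.linspace(1520e-9, 1600e-9, 2048)          # wavelength grid (m)
main = np.cos(2 * np.pi * wl / 8e-9)              # dominant two-core fringe
ripple = 0.3 * np.cos(2 * np.pi * wl / 1.5e-9)    # high-order-mode interference
spectrum = main + ripple

spec_f = np.fft.rfft(spectrum)
freq = np.fft.rfftfreq(len(wl), d=wl[1] - wl[0])  # spatial frequency (1/m)

f0 = 1 / 8e-9                                     # main fringe frequency
mask = (freq > 0.5 * f0) & (freq < 1.5 * f0)      # band-pass window
filtered = np.fft.irfft(spec_f * mask, n=len(wl)) # ripple-free spectrum
```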
As shown in Figure 9a, the transmission spectrum of the sensor after band-pass filtering is blue-shifted with increasing magnetic field intensity over the range of 0~200 Gs, and the sensitivity is −116.1 pm/Gs with a linear fitting index of 0.9940, as shown in Figure 9b. To verify the sensor's stability, the magnetic field measurement data are saved every 30 min at a magnetic field of 0 Gs, as shown in Figure 10; the maximum wavelength shift obtained through wavelength demodulation is 0.12 nm within 120 min.
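The sensitivity quoted above is the slope of a linear fit of the interference-dip wavelength against the applied field. A minimal sketch of that extraction follows; the dip wavelengths are synthetic stand-ins generated around the reported −116.1 pm/Gs slope, not measured values.

```python
import numpy as np

field_gs = np.arange(0.0, 201.0, 25.0)            # applied field: 0-200 Gs in 25 Gs steps
rng = np.random.default_rng(0)
# Illustrative dip wavelengths (nm): blue shift of ~0.1161 nm/Gs plus small read-out noise.
dip_nm = 1555.0 - 0.1161 * field_gs + rng.normal(0.0, 0.05, field_gs.size)

slope_nm_per_gs, intercept_nm = np.polyfit(field_gs, dip_nm, 1)
sensitivity_pm_per_gs = slope_nm_per_gs * 1e3     # nm/Gs -> pm/Gs

# Coefficient of determination of the linear fit (the "fitting index").
pred = slope_nm_per_gs * field_gs + intercept_nm
r_squared = 1.0 - np.sum((dip_nm - pred) ** 2) / np.sum((dip_nm - np.mean(dip_nm)) ** 2)

print(f"sensitivity = {sensitivity_pm_per_gs:.1f} pm/Gs, R^2 = {r_squared:.4f}")
```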
The performance comparison of the proposed sensor with the MF-based optical fiber MZI sensors reported in recent years is given in Table 2. From Table 2, the sensitivities and the measurement ranges of [21,26-28] are lower than those of the structure proposed in this paper, and although the sensitivity of [29] is higher than that of the proposed sensor, its measurement range is only 0.024 to 0.2 Gs and the glue immersion method is adopted. In addition, only the basic fusing structure is used in this paper, and the sensitivity of the sensor can be further improved through the methods of tapering, bending, and dislocation of the SHDCF. The simple preparation process, low cost, and fast response make the sensor proposed in this paper a potential candidate for practical magnetic field measurement applications.
Conclusions
A magnetic field sensor based on an MZI fabricated by filling-splicing is proposed in this paper. The SHDCF is fused after filling MF into the air hole to ensure that the MF is in a completely enclosed space, which makes the sensor stable. The sensitivity of the suspended-core waveguide of this structure to RI changes is simulated based on RSoft. The SMF connected to the optical spectrum analyzer is used to receive the transmitted light from core 1, core 2, the air hole, and the cladding of the SHDCF, respectively, to verify the sensing theory and determine the optimized length of the SHDCF. Finally, the sensor was fabricated with the appropriate splicing parameters, and the experimental results show that the sensitivity of the sensor is −116.1 pm/Gs in the range of 0~200 Gs with good linearity. The sensor has the advantages of a simple structure, low cost, fast response, and high structural strength, and the structure is expected to be used in magnetic field measurement in industrial production.
"Physics"
] |
SOURCE: Sheridan Institutional Repository
The purpose of this theoretical article is to highlight the role that dialogic pedagogy can play in critical multicultural education for pre-service teachers. The article starts by discussing the problematic that critical multicultural education poses in a democratic society that claims freedom of speech and freedom of expression as a basic tenet of democracy. Through investigating research findings in the field of critical multicultural education in higher education, the author argues that many of the educational approaches-including the ones that claim dialogue to be their main instructional tool-could be described as undemocratic, and thus have done more harm than good for the multicultural objectives. On the other hand, the author argues that dialogic pedagogy could be a better approach for critical multicultural education as it promises many opportunities for learning that do not violate the students’ rights of freedom of expression and freedom of association. Throughout this article, the author tries to clarify the difference between dialogic pedagogy and other conceptualizations of dialogue in critical multicultural education arguing for the better suitability of dialogic pedagogy for providing a safer learning environment that encompasses differing and at times conflicting voices.
As a policy that is imposed on all students, the multicultural objectives, especially those concerned with antiracist education (Lee, 1995; Sleeter & Bernal, 2003), provoke the resistance of the majority of the students, especially those who are White and middle class (Ladson-Billings, 1999; Solomon, Portelli, Daniel, & Campbell, 2005). This is in addition to the fact that any success that the multicultural course might achieve in reducing prejudices and changing negative stereotypes about minorities during the time of the course is short-lived and difficult to sustain in the long term (Holins & Guzman, 2005). This suggests that such superficial success only reflects students' desire to give instructors the answers they are looking for without actual changes in the students' convictions or perceptions. Such findings raise red flags about the worth of a multicultural education that could not sustain its learning outcomes for long. Moreover, besides the futility of multicultural education if students do not see its relevance or credibility, could multicultural education, with its insistence on certain curricular endpoints that are already contested in the public discourse, be anti-democratic, obliterating the voices of students who do not agree with its learning objectives? Could the multicultural curriculum be imposing on the students the pre-pondered and pre-packaged answers of policy makers and teachers? Finally, could a monolithic discourse in multicultural education, even if it were for a good cause such as social justice, be as oppressive to pre-service teachers as conventional education is to minority students who do not conform to the school's culture or to cultural codes as defined and approved by the school (Delpit, 1995; Fordham, 1993)? In this paper, I argue that dialogic pedagogy, as conceptualized by Bakhtinian scholars, could potentially provide an answer to this dilemma. In a democratic and free multicultural class, the role of the educator is to encourage the students to explore, investigate, and examine different views and perspectives, including both the authoritative word of politics, ideology, and religion, and the internally persuasive discourse arising from other class members' subjectivities, opinions, experiences, and struggles with the topics. I propose that dialogic pedagogy is different from approaches that have been used in many multicultural classes that claimed dialogue, or rather critical dialogue, to be the main instrument of instruction (Amos, 2010; Delpit, 1988; Martin, 2010; Solomon et al., 2005). The alleged collapse of dialogue and the proclaimed failure or little success of the multicultural praxis in these classes could be due to a conceptualization of dialogue that radically differs from Bakhtinian dialogic pedagogy. In what follows, I will examine historical and contemporary dialogic approaches in critical multicultural education, highlighting the challenges that they posited for a free and democratic dialogue.
Dialogue in Critical Pedagogy
Much of the discourse on multicultural education has conceptualized it as an antiracist education (Ladson-Billings & Tate, 1995; Lee, 1995; Nieto, 2004). By antiracist education, multicultural educators emphasized the importance of advancing a social justice agenda in teacher education programs and invited educators to lead their pre-service teachers in an examination of their racial identity, the privileges of their White middle-class status, and the subordination, socioeconomic disadvantage, and inequitable access to opportunities that such privilege caused to other groups (Lee, 1995; Sleeter, 1995). Antiracist education not only challenges conservative views that call for assimilation to Eurocentric norms, language, and culture (Bennett, 1992; Hirsch, 1987), but also liberal views that claim that all people groups in the society have access to equal opportunities in education and that hard work and meritocracy are the only basis for success in American society (Delgado & Stefancic, 2011). In an attempt to target prejudices against minorities, different paradigms of multicultural education with an antiracist orientation sought to discuss negative stereotypes about minorities that have been circulated in the public discourse by the aid of the media, the law, and the political arena (Solorzano, 1997; Stovall, 2004; Taylor, Gillborn, & Ladson-Billings, 2009). Such paradigms were also meant to show pre-service teachers that stereotypes affected how teachers perceived their minority students in a way that impacted these students' school success and achievement. Despite the plethora of research studies that promote antiracist multicultural education in teacher education programs and that proclaim the value of the experiential knowledge of teachers and students of color in introducing White pre-service teachers to an alternative epistemology and alternative curriculum that their formal mainstream education never addressed (Kohli, 2008; Yosso, 2002), antiracist multicultural education has been faced with many challenges and setbacks in teacher education, to the extent that Martin (2010) proclaims that faculty who teach such courses often feel disappointed, drained, and in need of self-replenishing after the courses are over.
One of the biggest challenges of the anti-racist paradigm in multicultural education is the students' resistance to it. Such challenges were reported for anti-racist multicultural education, anti-sexist education (DePalma, 2007), and anti-homophobic multicultural education (Whitlock, 2010); however, with the last two, students' resistance was presented as being more vocal and more explicit than their resistance to racial conversations. For example, Milner (2008) and Ladson-Billings and Tate (2006) maintained that when racial issues were raised, White students often resorted to silence and disengagement. Ladson-Billings et al. (2006) raised the concern that such silence could be deafening to the extent that it could in turn silence students and teachers of color on racial issues. However, by listening to some students talk about the racial discourse outside the classroom, Ladson-Billings et al. (2006) discovered that such silence hid behind it strong feelings of anger, resentment, and insecurity. These strong emotions were not just specific to White students. Students of color, too - particularly Black students - expressed the same feelings, though more explicitly, for being put in a position of having to teach Whites about things they should have already known. Likewise, Milner (2008) explored the issue of White students' silence on the discourse of racism and maintained that it often felt intimidating to the instructors, especially those of color. Ladson-Billings et al. (2006) believed that educators should encourage their students to be more vocal and to voice their feelings and opinions, and not to just assume that their silence emerged from a position of consent or ignorance of the topic. On the other hand, Milner (2008) took a different stance, calling for political action and solidarity among like-minded educators to effect change in the multicultural curriculum of the entire academy. Milner's recommendations, alongside Ladson-Billings et al.'s (2006) findings, bring up the question of which direction multicultural education policies should take in a democratic society, and whether policy could be so radicalized that it could hinder democracy and pull in the other direction of traditionalist conservatives, with both parties envisioning a reality in which only their version of a good citizen and a good society should exist.
Another question that presents itself is whether students' resistance to the critical multicultural discourse arises from a place of ignorance, strong emotions of guilt, and resentment and anger toward accusations of racism and privilege (Milner, 2008; Solomon et al., 2005), or whether it arises from a place of personal convictions deeply rooted in political or even religious ideology. If the latter is the case, at least in some instances, could a multicultural curriculum that insists on producing the results of prejudice reduction, identity examination and re-construction, and social action be a totalitarian project calling for conformity rather than diversity and democracy? Kukathas (2003) maintains that the problem of diversity in a free and liberal society emerges when the State is allowed any role in determining what a good life for its citizens should or should not look like. Within the context of diversity, assuming that the State should have any authority in imposing equality among all groups is inherently anti-democratic. Kukathas (2003) sets the social justice agenda against diversity and claims that striving for equality creates an egalitarian system that suppresses and intentionally oppresses diversity. According to Kukathas (2003), diversity necessarily entails inequality, and the assumption that multiculturalism entails fighting for equality is unrealistic and erroneous. For example, equality might not be a value cherished by certain communities within a society that seeks it. To mention a few, Kukathas illustrates that the Amish in the US, the Indians in Brazil, and the aboriginal people of Malaysia are more likely to be indifferent to political equality and sharing power; in fact, many of them have no desire to embrace it and are considered victims of those who try to bring them forcibly into it.
Similarly, Kukathas (2003) opposes a liberal theory that mandates the State to uphold justice because, in such a case, justice would be defined according to liberal values, thus eliminating any other definitions or conceptions of justice that do not agree with the liberal views, which in turn affects diversity and suppresses it. Case in point: while Kukathas (2003) maintains that diversity should not and could not be sacrificed in any liberal and free society, he does not think that it is compatible with equality or with a single definition of what constitutes social justice. He would, thus, rather sacrifice these two for the sake of diversity. A theory of multiculturalism in this case would therefore look at diversity as a human condition that is reliant on individual and group characteristics and risk aversion. The latter two factors will create differences among individuals, groups, and societies, making some wealthy, some bigger in size, and causing some to die out completely. Moreover, the role of the State in such a case is not to ensure equality but to ensure tolerance and the freedom to associate with any group membership that an individual should desire. The concept of group association presents one plausible solution to the problem of minorities being ostracized by their own group members when they demonstrate cultural traits and cultural preferences that are more in congruence with the mainstream culture than with the culture of their own communities (Fordham, 1993; Fordham & Ogbu, 1986; Ogbu, 2008).
In teacher education programs, however, multicultural education has often taken the trend that Kukathas fears would be detrimental to diversity. Many of the research studies that investigate undergraduate or even graduate students' attitudes (Amos, 2010) toward multicultural courses have not taken into consideration these students' voices, personal growth, past experiences, subjectivities, or even the effect of the authoritative word of politics, religion, and past education on how they reflect on and process the controversial topics that the multicultural course exposes. For example, Solomon et al. (2005) recommend that teacher educators in multicultural education should regard their role in effectuating an education for democracy as "equitable, socially just, and prepare society's citizens to become active participants in the human community…As such, teachers' conceptions of democracy as it relates to notions of citizenship (which are intricately linked to discourses of race, racialization and belongingness), need to be examined" (p. 148). The political undertone of the last statement cannot be overlooked and does not take into account the value of democracy as respecting differences and human agency. Similar language can be found in other paradigms of critical multicultural education that call for identity deconstruction/reconstruction, interrogating Whiteness, and decentering Whiteness (Bergerson, 2003; Holins & Guzman, 2005). They all seem to impose a political and partisan agenda in multicultural courses that contradicts the tenets of democracy that multicultural education seems to be calling for, especially when these courses are offered as a core requirement for a degree program.
Politicizing the multicultural curriculum could be a reason for the proclaimed ineffectiveness of the movement (Mattai, 1992). However, considering any educational project as neutral and without political implication is both unrealistic and harmful. Cuenca (2010) argues that apoliticizing education, especially in the field of social subjects with their emphasis on civic and citizenship participation, might lead to the apoliticization of democracy in schools, reducing citizenship to mere "good" civic deeds and producing a form of citizenship unable to develop voice or to question the government on big issues such as federal spending and health care. In fact, any monolithic discourse, on either end of the spectrum, might lead to students' resistance and might harm democratic education rather than reform it; thus emerges the importance of dialogic pedagogy in providing an answer for the problem of diversity and democracy. Yet Bakhtinian dialogue should not be confused with Freire's dialogic philosophy, which could easily happen since both approaches emphasize the students' role in dialogue as subjects who define their own goals and drive their own learning project. However, the author contends that while Freire's dialogic approach could proclaim democracy and emancipation, a closer examination of this paradigm reveals it to be problematic and potentially oppressive to both teachers and students in a diverse setting.
Freirean vs. Bakhtinian Dialogue: For or Against Democratic Education?
A problem with any dialogic project that has an end goal is how power relations could limit and hinder its authenticity and the ability, or even safety, of the students and educators who engage in it. Much of the discourse on multicultural education is grounded in the work of critical pedagogues such as Freire and Shor (Nieto, 2004). Both scholars claim dialogue as the basis for democratic and emancipatory education and argue against a banking education that treats students as vessels of the system's attempts to use them according to the whims and needs of the dominant power (Freire, 1993; Freire & Macedo, 1987; Shor & Freire, 1987). In the Freirean dialogue, it is proclaimed that students should assume the role of subjects (Freire & Macedo, 1987) who partner with the educator to set the goals, directions, and even assessment criteria and procedures of the curriculum (Shor, 1992). However, in reading Freire, one cannot overlook the wide gap between theory and practice. Freire's (1993) cultural circle was meant to raise the Brazilian peasants' critical awareness of their oppressed situation and the positions of dominance and privilege that the nobles had over them. These cultural conversations had the goal of creating literacy among the peasants that would lead to revolution for liberation. In other words, Freire's literacy paradigm was meant to alert the peasants that their position under the oppressor was not pre-determined and that it was changeable, relying on how they viewed themselves as historical humans and not as existential animals with no heritage (Freire, 1993). However, despite the apparently empowering Freirean literacy theory, a concern is that it does not have an answer for the issue of diversity, especially that of students' voices, within such a paradigm. Freire's failure in Guinea Bissau highlights the relevance of this issue: what happens when students and teachers do not agree with each other on the end goals of their dialogue? Facundo (1984) ascertains that tens of ethnic groups existed in Guinea Bissau after the revolution, many of whom had a policy of complete segregation and thus did not have direct contact with the colonizer and, in turn, with oppression as presumed by Freire. Besides, in his persistence on reaching out to all people groups in Guinea Bissau, Freire might have overlooked the economic reality of some of these groups, which relied heavily on agriculture and thus might not have seen any value in literacy (Facundo, 1984). Freire proclaimed (Freire & Macedo, 1987) that his literacy program had failed in Guinea Bissau because of the government's insistence on using the Portuguese language, which was the language of the colonizer and thus might have sacrificed the interests of the larger student population in the society who spoke their ethnic languages and could not learn Portuguese. However, such an evaluation did not include the voices of the stakeholders in that project. We hear only a subtle voice from the country's revolutionary leader and thinker, Amilcar Cabral, expressing his dilemma over the many ethnic languages that the country had, and emphasizing his belief that the only answer he could deem for the problem was to use Portuguese as a unifying language for the nation. However, Freire acknowledged Cabral's voice only after the program had already failed. Freire himself (Freire & Macedo, 1987) confessed that at the time of his involvement in Guinea Bissau, he did not know Cabral's reasons for his insistence on Portuguese as a language of instruction. Besides, Facundo (1984) worries that Freire blamed the failure of the volunteer workers on their lack of training and motivation to help the oppressed without considering other minor reasons that had nothing to do with ideologies; for example, simple logistics such as hard and unsafe transportation and lack of access to printing and copying material could have had negative effects on workers' motivation and productivity and thus on the attrition rate. For all intents and purposes, although Freire wanted to spend months in Guinea Bissau conducting interviews about Amilcar Cabral (Freire & Macedo, 1987), we hear no similar desire on his part regarding the students of Guinea Bissau or their educators. All of the above could suggest that one reason for the gap between theory and practice in Freire's approach is his totalitarian approach to education, which proclaims the subjectivity of the other in theory but ignores it in practice. In fact, when asked about how he would deal with the problem of diversity in the educational context, Freire could not give a clear-cut answer. In Guinea Bissau, he kept emphasizing the shortcoming of the country's leadership in insisting on using Portuguese. In the United States, Freire refused to give what he proclaimed to be a prescription for educators, which suggests that he could have evaded a problem that he had no answer for. Thus, while Freire (Freire & Macedo, 1987) promoted his literacy model as universal, in practice it did not work as proposed when the context changed and became more diverse.
From a dialogic pedagogy standpoint, one could anticipate that within the Freirean paradigm, the student-teacher relationship broke down when the literacy project finalized the student population too early and treated them as predictable, finalized beings who needed to attain a curricular end point predefined for them. Since much of the work of critical multicultural educators in the United States has been impacted by Freire (Facundo, 1984; Nieto, 2004), we can hypothesize that the monologism of the Freirean paradigm has something to do with the challenges and acknowledged failures they met in their programs (Holins & Guzman, 2005).
Dialogic Pedagogy in Multicultural Education: Theoretical Framework
Many of the conceptualizations and pedagogical approaches of critical multicultural education could be viewed as monologic according to a Bakhtinian conceptualization of dialogue (Bakhtin, 1991). Dialogic pedagogy scholars (Sidorkin, 1999; Sullivan, 2011) argue that a true human dialogue engages the other in a relationship where answers remain between the dialogic partners rather than outside them, and where the author, while interpreting the word of the other, reflects on how he or she has been a part of this interpretation. Sidorkin (1999) expresses this relationship as the self being at the boundaries of a social and dialogic relationship with another. Dialogic pedagogy scholars have thus identified a deficit in multicultural education pedagogy that did not take into account the dialogic relationship that needed to exist between teachers and students for any learning to be humane, realistic, and transformative (Madison, 2011; Matusov & Smith, 2007); hence, from a dialogic pedagogy perspective, there emerges a need for an approach in critical multicultural education that engages both teachers and students, as well as their future stakeholders, in a dialogue that respects their final stance on any issue even if, or especially when, this means disagreement among those involved. I contend that a critical pedagogy that seeks to prepare critical citizens for a democratic society should also accept learners who end up disagreeing with and even acting against its own project. In doing so, educators could have the right to believe education to be a political project and to express their subjectivity within that project. However, they do not ignore that, from an ethical and a democratic standpoint, no teacher should impose on the students any side or view; in fact, according to Sidorkin (1999), a teacher should discourage a student from taking any side too early in life and should leave it up to the student to be the final judge of the truth. In Sidorkin's words, "Learning in itself is an exposure to complexity. The school may teach the evolutionism and creationism; the variety of different religions and atheism; the 'rainbow curriculum' and 'family values'. The double message is, in fact, the only truly educational message" (Sidorkin, 1999, p. 125). According to this perspective, therefore, monologism could be a significant threat to any free and democratic educational project, whether in multicultural education or otherwise.
Monologism is twofold: it is the contention that there is one ultimate truth which can be attained through consensus in a free dialogue, and it is the conviction that truth pre-exists, is pre-defined, and has been previously achieved (Sidorkin, 1999). Therefore, Freirean and critical pedagogy dialogues, despite their alleged promotion of students' engagement in a free and open discussion, could represent an educational project that is anti-dialogic and that has major ontological harms (Matusov, 2009). Alternatively, Matusov (2011) maintains that teachers need to bring to the discussion their personal convictions, passions, and ideologies as long as the discussion is a two-way or, taking the nature of a diversity class into consideration, a multiple-way conversation where all voices are heard and legitimized as valid to exist outside the power or hierarchical relationships of the institution that might suppress differences and hinder the authenticity of the dialogue.
Based on the above, students' resistance in the multicultural courses could be interpreted as a breakdown in the communication between students and teachers and between students and one another. Sidorkin (1999) maintains that although resistance in a classroom context could be interpreted as behavioral dysfunction, most of the time resistance is some sort of students' agency responding to those who, through monologism, deny students' voices the right to exist. For example, Skidmore (2000) found a breakdown in the dialogue between the teacher and one of her elementary school students in a reading class when the teacher tried to impose on that student an answer which she [the teacher] perceived as the right answer. Interestingly, when the breakdown took place, it was not just between the teacher and that specific student but between that teacher and the whole class. Although the teacher herself blamed the class for behavioral problems that caused the breakdown, Skidmore (2000) proclaimed that it was the teacher's insistence on a monologic answer provided by the textbook as the only right answer that caused the breakdown. Similar results were found in a study of a science class in which students' power in dialogue was shown to combat the teacher's authority and even the authoritative knowledge presented by the textbook (Candela, 1999). In that lab experiment, the teacher asked the students to confirm certain results for an experiment she prepared for them to conduct. When a student challenged the results expected by the teacher, and the teacher insisted on her pre-set goals without listening to the student, the whole class ended up in a state of silence in which the students stopped talking to one another and to the teacher, and stopped responding to the teacher's direct and indirect questions. Matusov (2007) fears what he deems the teacher's objectification of the students, in which students become the objects of the teacher's fantasies, aspirations, and expected or imagined outcomes. This issue is relevant when we consider that the goal of a critical multicultural project is mostly social justice, transformation, and equity pedagogy (Banks, 1994). When teachers take the approach of excessively objectifying their students without investigating their students' subjectivity, perception of the world, and ways of knowing, they resemble a computer designer who complains about a problem with a machine rather than with actual human beings (Matusov, 2007). Such dehumanizing of students contradicts what Freire's emancipatory literacy project is proclaimed to set out to achieve, i.e., the humanization of both the oppressed and their oppressors (Freire, 1993). On a similar note, Sidorkin (1999) maintains that while many studies might claim that the dialogue between teachers and students fails for cultural reasons, often the reason is more relational than cultural. Matusov (2003) alerts us to the dualistic psychology of the discourse on culture: when people's differences (and cultures) are ignored in dialogue, misunderstandings happen and dialogue breaks down; but also, when dialogue breaks down because of misunderstandings, people are characterized as cultural. Therefore, Sidorkin (1999) maintains that reducing dialogue to the level of mere communication among people is counterproductive to its role in effectuating learning. Sidorkin (1999) maintains that learning takes place when a tension happens between the authoritative word that people bring into the dialogue (this usually comes as a whole unit packaged by the authority of religion, ideology, political power, and cultural values) and the internally persuasive discourse, which is the word re-told in one's own words and thus appropriated and modified to reflect one's own subjectivity in interacting with both the authoritative word and the word of another. Thus, relationships between teachers and students and among students could not just be explained in the simplistic terms that many of the studies of multicultural education have used, i.e., in terms of racial tension and White privilege; they also need to be investigated in terms of students' agency, subjectivities, and past relationships with members of their own racial and ethnic groups, members of other groups, and the authority of the institutions and the educators. This kind of investigation reveals the complexity of students' learning through participation in dialogue rather than homogenizing all White students as privileged due to their dominant status and as resistant to changing the status quo.
Students' agency and subjectivity thus emerge as an important opportunity that dialogic pedagogy can present for free, democratic, and authentic learning. I will explore this issue further in what follows, highlighting why human agency is relevant to a democratic discourse on multicultural education. Ladson-Billings (1999) proclaims that unless pre-service teachers see some significance for the multicultural project in their future careers, it is unlikely that multicultural education could effect any change in pre-service teachers' learning and instructional practices. According to Ladson-Billings (1999), one of the most significant motivators for pre-service teachers to learn about multiculturalism is their desire to succeed with their future students and to avoid burnout and public embarrassment caused by problems of classroom management. Thus, many pre-service teachers enter the multicultural course with a desire, at least a proclaimed one, to know and understand their diverse students (Holins & Guzman, 2005). Teacher educators should, therefore, structure their multicultural courses to take advantage of this initial interest. Matusov (2011) proclaims that this could be done by allowing students' agency to author their own learning.
The Relationship between Human Agency, Dialogic Pedagogy, and a Democratic Multicultural Education
According to Matusov (2001), agency "involves processes of developing and prioritizing goals, problems and choices, problem solving, and making and realizing solutions (including moral ones). By this definition, the notion of agency has inherently a sociocultural nature, since the final cause of an individual's actions always has a distributed character in time, space, meaning, and among direct and indirect participants of the activity" (p. 369). For students to be able to collaborate toward a successful learning experience, they need to form among one another a community of learners. The community of learners recognizes the need of its members for one another not in order to reach common goals, but rather because they acknowledge one another's dialogic agency in developing one's views and values even when in conflict. This is not to say that members of the community of learners will always disagree about their common goals; instead, this suggests that the teacher in this role will recognize his or her responsibility to provide guidance and to accept and even appreciate differences and disagreements. Moreover, when the challenge arises with students who have no desire to learn and no interest in the multicultural course, an authentic dialogic project would recognize the students' agency even if it were against participation (Brown & Renshaw, 2006), and so does democratic education according to Kukathas' (2003) liberal theory.
Besides, an authentic dialogue cannot be limited to the time and space of the classroom. Research studies (Fecho, Collier, Friese, & Wilson, 2010; Matusov, Hayes, & Pluta, 2005) have shown that dialogic pedagogy, as a human activity, oftentimes requires a dialogic space and time that goes beyond the chronotope (the intersection between time, space, and the approved values and traditions) of the classroom. Ellsworth (1989) realized that need when she feared that dialogic pedagogy across differences might turn into mere rationalization but not ontological or ideological change (in the context of her study) or, even worse, into another form of repression where the participants become radicalized into an "us-ness" against "them-ness". She also feared that voices in the class were similar to voices in the society and thus might not have carried equal legitimacy, sense of safety, and power in dialogue. In her study, affinity groups formed among students who did not feel empowered enough to "speak back" or tell of their own experiences with racism and oppression. These affinity groups met outside the class time and shared potlucks, field trips, and cultural discussions of their experiences. Then these groups decided to state their voice to the class not as individuals but as members of a social group, and they decided that this would not be in a dialogue form but in a way of sharing while silencing the other as they had traditionally been silenced. Bakhtinian scholars might disagree with the concept of silencing under any circumstances; besides, they might interpret affinity groups as excessive monologism, which I will discuss later. Nevertheless, Ellsworth's (1989) study confirms the realization that any authentic dialogue would need to continue beyond the time and space of the classroom and to engage members of a wider community than that of the class. Elsewhere, Matusov et al. (2005) and Fecho et al. (2010) provided this space through class websites where students could "chat" about the topics of the curriculum. Matusov et al. (2005) maintained through discourse analysis that students' contributions were only 3% social in nature, and that most of the students' postings were an extension of the topics discussed in class. However, the fact that students also used the class web for social communication indicated how the class dialogue penetrated their everyday activities, as opposed to the traditional class discussion where the class space and time are separated from the wider activities of students' lives. Similarly, Fecho et al. (2010), sharing students' contributions during a class on critical literacy, suggested transformation in the students' subjectivity through dialogic pedagogy that extended beyond the classroom setting. In this case, students' own reflections on their students' writings and on one another's writing led to major life-changing decisions for some. For example, one of the research subjects, through dialogic pedagogy, had her attention directed to the mutual lessons that could be learned by teachers and students as they engaged in the dialogic journey. While she criticized one of her Middle Eastern students for allowing her family to dictate her life, she discovered that within her own religious community, her lived experience might not have differed much from her student's.
That teacher thus had to embark on her own journey of self-discovery and self-identification, which ended with her denouncing her religious organization and accepting a position in life, whether transient or permanent, of putting her religious belief under investigation and thus separating herself from her own community.
Thus, providing a venue beyond the classroom boundaries ensured the continuity of dialogue through students' authorial learning - an aspect of students' agency (Matusov, 2011). Matusov (2011) defines students' authorial learning as the opportunity that students have to "realize themselves, define their own voices; address and respond to others; engage and transform the culture; define new goals; develop new desires and interests; take responsibility for their actions, opinions, views, and values; reply and address voices of relevant and important others (living in past and now)" (p. 36). Students' authorial learning can be both responsive authorship and self-generated authorship, and teachers should be able to promote and support both types of authorship for learning and teaching to be successful. Both Matusov et al. (2007) and DePalma et al. (2006), therefore, believe with antiracist educators that minority students should be authors of the multicultural curriculum because they could teach pre-service teachers valuable knowledge about their lives and about their education; however, they might differ in their conceptualization of the role of opposing students in such a paradigm. I would expect dialogic pedagogy authors to desire mainstream students as well to provide material for the curriculum, even when this material is in direct opposition to the objectives of the multicultural course.
The significance of educators' legitimizing the voices of all students is that, as an authentic human activity (Matusov, 2009; Sidorkin, 1999), dialogue cannot be turned on and off as the situation requires. Dialogue could be suppressed and students' voices could be muffled by the dominance and sometimes tyranny of the authority, but this does not mean that it is not taking place. One example of that is provided by Sidorkin's description of the three discourses that take place in a school setting. While the first two discourses reflect formal ways of communication that could start with a lecture or presentation followed by a discussion that is usually instructional and highly structured by the teacher, the third discourse, in which the class breaks up into clutter and chatter and students engage in unstructured and unguided conversations, could be the time when true learning happens; it is at this time that students informally author their own learning. This third discourse could be found whenever educators allow students to interact about the topics of the curriculum in a safe environment, but it also takes place even when educators do not allow it. Bakhtin analyzed the Renaissance carnival with implications for education - though not directly linked. Carnival was a time for birth, creation, and rejuvenation (Gardiner, 2002); it was also a time when language emerged into new forms away from its tight traditional structure to be rebirthed into new meanings, even in its most conceivably obscene form. According to Gardiner (2000), the success of this new birth was owing to its happening away from the eyes of the officialdom. In the context of conventional education, therefore, and despite the institution's dire attempt to create structure, conformity, and standardization (Caldéron, 2006; Giroux, 2010), students can always find a time and place away from the eyes of "officialdom" (Gardiner, 2002, p. 51) where they can engage in a constant dialogue that leads to social critique and even perhaps rebellion against the conventional codes of etiquette, propriety, and the monolithic seriousness of officialdom. In the context of multicultural education, carnival could represent a significant challenge to any pre-defined multicultural objective, and could be why educators have found that resistance was not individualistic but rather more of a group resistance. Besides, even though researchers report that minority students felt either hurt or intimidated by their White peers' attitude and comments (Amos, 2010; Solorzano, 1997), they did not feel that their White peers meant them any harm personally or intentionally (Amos, 2010), which suggests that this kind of resistance among White students, built in solidarity and unity (probably in private conversations outside the tight surveillance of the course instructor), was actually directed against attempts to muffle their voices or impose upon them an agenda they did not choose. Matusov (2009) describes this group resistance as excessive monologism. Excessive monologism could take place even when authority or, in this case, educators are not involved. Members of the same community, or those who are like-minded, could form alliances to affirm one another's views and values and to have a strong voice. In the absence of other voices to counteract these alliances, one might wonder what learning, if any, could take place. DePalma (2010) tried to deal with this issue by inviting members from the communities that the class discussed to be guest speakers. However, DePalma's struggle to bring polyphony into her class was constant. Despite the fact that a one-time opportunity to invite the LGBT group on campus offered fresh ideas and new perspectives on the topic of anti-homophobic education that her authoritative word and that of the texts she chose did not reveal, DePalma was aware that the fact that she was the one who chose the guest speakers and facilitated the logistics of their coming to class still reflected her authority rather than the students'. The second aspect of students' authorial learning - namely, students' generative authorship - could provide an answer to this problem. Students' generative authorship allows them to bring up issues and questions and to problematize conventional knowledge to allow for more provocations for the dialogue. However, the issue of voice and representation in dialogic multicultural education needs more investigation, especially within the context of students' generative authorship.
In the context of multicultural education, a number of studies (DePalma, Santos Rego, & del Mar Lorenzo Moledo, 2006; Matusov & Smith, 2007) suggest that the most successful dialogic experiences took place in after-school programs where hierarchy and authority among pre-service teachers faded away from the school context. This out-of-school context, allowing pre-service teachers and school children to dialogue in a free setting away from the surveillance of the authority and the pressures of teaching for the test, benefited the pre-service teachers by helping them learn about their future student population and by reducing their prejudices against these students and their communities when they came to discover, through collaboration in different projects, that minority students might have certain strengths that they (the undergraduate students) and their peers did not have. For example, Matusov and Smith (2007) found that their pre-service teachers spoke about their Latino population before meeting them as objects of their own imagination; they either romanticized them or demonized them, but once they came into contact with them and had the opportunity to engage them in an authentic dialogue, they started to discover true problems that they might face with their future students, apart from any imagined discourse. One of these problems was surprising to the undergraduate students because it was not talked about in the grand narrative about Latino students; this was the problem of trust. Undergraduate university students came to realize that the Latino students in the center did not have much trust in the teachers and the school administration and thus would not go to their teachers if they needed help. They also came to realize that while peer pressure had an impact on that group of students, they usually would take their parents' advice over their peers'. DePalma et al. (2006) assert that such out-of-school experiences could have a long-term effect on pre-service teachers, creating inside them a nostalgia for success: one day, when they graduate and get jobs in public schools, they could remember a time when they worked with students from a minority background, were successful with them, and were productive with outcomes that reflected true learning. Moreover, I contend that such dialogic encounters offer a first-step answer to the issue of representation, away from the stress related to institutional power and students' discussing absent communities in the artificial setting of the classroom. DePalma et al. (2006) maintain that learning projects should be planned and structured by educators to allow for dialogue to occur naturally while pre-service students and school children collaborate, disagree, negotiate, and resolve their disagreements. In both of these studies, students' voice and agency (whether these students were the pre-service teachers or the school children) were given priority over the curriculum. Although educators designed these activities to engage both groups of students, the end point of learning was not pre-determined, but rather depended on students' authorial learning.
Toward a Theory of Dialogic Pedagogy in Critical Multicultural Education
Dialogic pedagogy, from a Bakhtinian perspective, and as outlined by Bakhtinian scholars (Gardiner, 2002; Matusov, 2009; Morson, 2004; Sidorkin, 1999), offers a vehicle for different views and perspectives to be tested and contested without any party imposing their agenda, political or social, on the other. At the end of the day, making curricular and instructional decisions in a democratic society should be in the hands of both teachers and students (Shor & Freire, 1987); however, as Giroux (2010) maintains, whereas faculty should have the choice of promoting their political or social justice agenda while teaching, students should also have the choice to reject or accept this agenda.
One of the most significant opportunities of such an approach is polyphony. In dialogic pedagogy, students participate in the dialogue in a way that encompasses the authoritative word of the text, the teacher, and their own ideologies, while also having the opportunity to engage the word of another, granted that they have enough access to that word. DePalma (2010) is concerned that the instructor's voice is hegemonic in the educational institution since textbooks and learning materials are chosen by him/her and are subject to his/her own subjectivity and curricular goals. The studies and approaches previously discussed in this article suggest that critical multicultural education often represents the views and perspectives of the instructors conducting any specific course, often revealing one side of multicultural education and ignoring other sides that extend it or disagree with it. However, within dialogic pedagogy, this hegemony is counteracted by polyphony and students' authorial learning.
For example, DePalma (2010) recommends using texts about similar topics that expose different viewpoints, as well as inviting guest speakers to the class who are members of the communities that the class talks about. In this context, DePalma's (2010) guest speakers from the LGBT community provided a level of polyphony in her study, exposing the broad diversity among members of that group. Polyphony goes beyond the presence of multiple voices in the class. Gardiner (2002) expresses Bakhtin's conceptualization of polyphony as follows: "just as no single voice can constitute polyphony, no one viewpoint can be adequate to the apprehension and understanding of the object. In order fully to conceptualize the object in its totality, that is to say, a multiplicity of perspectives or vantage-points is required" (p. 94).
Dialogic pedagogy, with its emphasis on polyphony and students' agency, deals with another challenge that critical multicultural educators reportedly faced in teaching these classes, i.e., their concern with the power dynamic in such courses. Some White instructors expressed their concerns about sharing their experiences for fear of recentering Whiteness and preventing the voices of minorities, already weakened by the society, from receiving adequate and rightful focus (Bergerson, 2003; Ellsworth, 1989). However, DePalma (2010), an instructor who shared the ethnic and socioeconomic background of her students, maintained that she did not shy away from sharing her White experience because dialogic pedagogy allows for these experiences to interact and collide with those of the texts and the students. Other educators, from a minority background, feared that their students had more power over them, especially when they responded with an attitude of silence and resentment (Chávez-Reyes, 2010; Milner, 2008). The issue of polyphony and students' authorial learning should move the burden of representation, either of self or other, from the instructor to multiple sources and thus minimize threat and students' resistance.
Moreover, students' authorial learning promises sustained effects, especially as students' ontological engagement in the dialogue makes it relevant to different aspects of their lives and their future practices. This directly challenges conventional methods of assessment that focus on quantification, standardization, and measurement. In dialogic pedagogy, transformation toward the objectives of the course might not happen, or might happen as a byproduct of the learning that takes place within the internally persuasive discourse of the class. Besides, transformation might not take place immediately but through continued dialogue in students' subsequent field placements or even with other stakeholders and members of the wider society; however, Matusov (2009) contends that the real achievement for learning is that the individual cannot claim innocence or ignorance for their practices. Therefore, within the dialogic project, educators should regard their main role as facilitators of learning rather than as executives of the policies of the institutions, as trainers for the employer, or as leaders of their own social movement.
Finally, dialogic pedagogy safeguards critical pedagogy from appropriation into any specific political agenda. Since dialogic pedagogy exists on the boundaries of the subjectivities of those involved in it, teachers cannot claim ownership of the educational outcomes; rather, students are more authoritative in claiming that ownership and in guiding their own learning. Hence, we can expect more sustainable effects for multicultural education, since students' engagement in such a project becomes ontological and develops within a process of becoming (Morson, 2004).
However, dialogic pedagogy is not void of challenges. These challenges, together with the opportunities that such an approach presents, need to be investigated more closely in order to theorize it fully. Matusov (2009) poses the problem that dialogic pedagogy, with its emphasis on language and speech, could be potentially culturally insensitive, favoring one culture that might be more vocal than another. DePalma (2008) maintains that the one Black student she had in one of her multicultural classes was uncomfortable voicing her opinions on race and racism in a class where she felt powerless and a numerical minority. Moreover, Casey (2005) warns that the ideology of dialogue, in its current form of implementation in higher education through seminars, is a middle-class value that might stymie working-class students who might come to the institution under-prepared to participate in a dialogue, especially in the field of humanities. Sharing about oneself and one's communities among a majority of middle-class students could also be embarrassing to members of that group. According to Casey, not only students but also faculty from a working-class background might, in their struggle to advance their careers by conforming to middle-class norms and values, shy away in the dialogue from revealing their roots. While Casey (2005) worries about the freedom and desire to share from a class perspective, Ladson-Billings (1999) worries about this issue from a racial perspective, since the mode of communication differs between White students and Black students, and this could cause dissonance and misunderstanding in a way that is not conducive to learning. For example, Ladson-Billings (1999) maintains that when Black students are angry, they become loud and vocal, while White students resort to silence in a way that can deceive the educator into thinking that they are in compliance with what is being taught while in reality they are hiding deep emotions within. Matusov et al.'s study (2007) found Latino students to be reticent because of their distrust of their White educators. Furthermore, Matusov (2009) worries about the issue of excessive dialogicity among certain marginalized ethnic groups whose identity has been obliterated by several forms of oppression, historically and contemporarily, such that they have not learned to develop a clear and distinct voice backed by a community that affirms it and gives it legitimacy. In the context of dialogic multicultural education, the absence of the voices of minority students could be a problem, depriving the dialogue of the experiential knowledge of these students; however, could the presence of such minorities offer these experiences in light of their socialization and education within a society that could have masked their ethnic identity in favor of allowing them to pass as White and thus to succeed academically and socially (Fordham, 1993; Fordham & Ogbu, 1986; Gayles, 2005)? This issue requires more research in the context of dialogic pedagogy and multicultural education.
The last challenge for dialogic pedagogy is the nature of the institution of conventional education. There seems to be skepticism among some Bakhtinian scholars and others (DePalma, 2010; Giroux, 2010; Matusov, 2009) that such an approach could be possible in the setting of conventional education, with its emphasis on hierarchical relationships among students and teachers, teachers and administrators, and teachers, administrators, and the sociopolitical context of the wider society.
Despite claims in the field of education that dialogic class discussions could produce better school achievement and improve students' performance and engagement, from a Bakhtinian dialogic perspective, what takes place in the majority of these studies is far from an authentic dialogue and is another attempt toward a banking education that places teachers and students in the erroneous roles of experts and knowledge receptors. In the field of multicultural education, and despite claims that multiculturalism is a reform movement that seeks equal educational opportunities, a monologic discourse has been prevalent in the policies, practices, and research in this area. However, in dialogic pedagogy, with its emphasis on freedom of expression and freedom of association, how could assessment take place, or could it take place in any way that could satisfy institutional requirements? This is an issue for further investigation.
Conclusion
Bakhtinian dialogic pedagogy in critical multicultural education in the conventional higher institution is an educational approach that promises learning within a community of learners in which students contribute to the class discourse through their own subjectivity, histories, past educational experiences, and the authoritative word they bring to the discourse. The monologism of the standardization movement in education, the policies of the institution, and the hierarchical structure of conventional education represent major challenges for such dialogue. Since dialogue is essentially relational (Sidorkin, 1999), educators can expect that relationships among class members could hinder or enhance the class dialogue and, in turn, the quality of learning that takes place. Thus, as opposed to conventional educational research that focuses on the relationship between teachers and students, more research needs to be conducted to investigate relationships among students and how they could be better developed to enhance learning. These relationships need to be investigated both within and beyond the classroom setting, because in an authentic dialogic project, educators and researchers should expect the dialogue to continue and to penetrate students' lives beyond the institution.
Furthermore, since dialogic pedagogy offers the opportunity for an internally persuasive discourse (Matusov, 2009) in which knowledge becomes contextualized, historicized, and integrated within an interconnected network of relationships and propositions, dialogic pedagogy promises much learning to take place; however, it is the kind of learning that is mainly authored and controlled by the students rather than by the instructors' lesson plans or curricular endpoints. Thus, new methods of assessment that move away from quantifiable learning objectives need to be investigated to judge the success or failure of such an approach in multicultural education.
"Education",
"Philosophy",
"Political Science"
] |
Analysis of physiological signals for recognition of boredom, pain, and surprise emotions
The aim of the study was to examine the differences among boredom, pain, and surprise, and to propose approaches for emotion recognition based on physiological signals. The three emotions were induced through the presentation of emotional stimuli, and electrocardiography (ECG), electrodermal activity (EDA), skin temperature (SKT), and photoplethysmography (PPG) were measured as physiological signals to collect a dataset from 217 participants experiencing the emotions. Twenty-seven physiological features were extracted from the signals to classify the three emotions. Discriminant function analysis (DFA) as a statistical method, and five machine learning algorithms (linear discriminant analysis (LDA), classification and regression trees (CART), self-organizing map (SOM), the Naïve Bayes algorithm, and support vector machine (SVM)) were used for classifying the emotions. The results show that the differences in physiological responses among the emotions are significant in heart rate (HR), skin conductance level (SCL), skin conductance response (SCR), mean skin temperature (meanSKT), blood volume pulse (BVP), and pulse transit time (PTT), and that the highest recognition accuracy of 84.7 % is obtained by using DFA. This study demonstrates the differences among boredom, pain, and surprise and identifies the best emotion recognizer for the classification of the three emotions by using physiological signals.
Background
Emotions are known as multi-componential responses that are composed of coordinated changes in subjective feeling, motor expression, and physiological activation [1]. Additionally, they are processes directed towards a specific internal or external event or object, which result in changes in both behavior and bodily state (i.e., physiological change) [2,3]. Emotions increase our chances of survival by providing us with the ability to deal with sudden events in our surroundings [4]. In a positive state, optimistic feelings dominate and cognitive functions (e.g., problem-solving abilities) are improved. On the other hand, in negative states, pessimistic feelings dominate, our capacities are underestimated, and analytical thinking is increased [5,6]. In particular, because emotion plays an important role in the contextual understanding of messages from others in speech or visual forms (i.e., facial expressions, body gestures), it has been recognized as one of the most important ways for people to communicate with each other.
Recently, many attempts have been made in human-human or human-computer interaction (HCI) for robots or machines to improve their abilities to understand humans' intentions or affective states. Accurate emotion recognition would allow computers to understand humans' emotions and to interact with humans in accordance with their affective states, enabling better communication and more natural interaction between humans and computers [7]. For effective understanding of emotions, emotion recognition systems based on physiological signals have been demonstrated by Picard and colleagues at the MIT Media Laboratory [8] and by other previous researchers [9], resulting in recognition accuracies of more than 80 % on average. Physiological signals have the following advantages, although they may be vulnerable to artifacts caused by motion or other external factors. First, the acquisition of physiological signals by noninvasive sensors is relatively simple and makes it possible to monitor users' autonomic activity associated with emotional or cognitive states in real time. Second, physiological responses are robust to social masking or factitious emotion expressions, since they arise from spontaneous emotional responses and are less sensitive to social and cultural differences [10]. For these reasons, in the field of psychophysiology, the investigation of human emotional status has been based on the analysis of physiological signals from both the central and autonomic nervous systems [11,12].
In affective computing using psychophysiology, basic emotions such as happiness, anger, sadness, fear, disgust, and surprise have commonly been studied. For example, Kreibig [13] examined the relationship between basic emotions and physiological responses, suggesting a typical response pattern for each emotion based on a review of 134 articles. Although she investigated emotion-specific physiological responses among basic emotions, she failed to identify a surprise-specific response due to the limited number of studies on that emotion. Among the basic emotions, surprise has rarely been investigated. Additionally, the relationship between non-basic emotions and physiological signals has not been revealed yet. Since the emotions humans experience in daily life are delicate and complex, non-basic and social emotions need to be studied in order to understand human emotions better. For this purpose, it is necessary to understand the physiological underpinnings of basic emotions, particularly surprise, and of non-basic emotions. Therefore, we focused on the relationship between physiological responses and one basic emotion, surprise, and two non-basic emotions, boredom and pain. First, we aimed to identify the differences in physiological responses for the three emotions. Second, we aimed to classify the three emotions by using discriminant function analysis (DFA) as a statistical method and machine learning algorithms, to test whether the three emotions could be classified and applied in the field of affective computing. The machine learning algorithms used in the study were five preferred emotion recognizers, i.e., linear discriminant analysis (LDA), classification and regression trees (CART), self-organizing map (SOM), the Naïve Bayes algorithm, and support vector machine (SVM). We used six emotion recognizers in order to find the best classifier for boredom, pain, and surprise through a comparative analysis of the classification results based on the physiological signals. Before explaining the experimental methods, we include the operational definitions of the three emotions.
The definition of emotions: boredom, pain, and surprise
Boredom is an emotional experience when an individual is left without anything in particular to do and not interested in his/her surroundings. It has been defined as an unpleasant, transient affective state in which the individual feels a pervasive lack of interest and difficulty concentrating on the current activity. Leary and colleagues [14] describe boredom as an affective experience associated with cognitive attention processes. In positive psychology, boredom is defined as a response to a moderate challenge for which the human has more than enough skill [15].
Regarding the definition of pain, it is an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage, according to the International Association for the Study of Pain. It is divided into physical pain and psychological pain. The former is a feeling of the nerves telling the brain that there is a physical sensation causing discomfort in the present. In this study, we used a stimulus to induce physical pain, although physical pain may involve direct physiological responses to the physical stimulus originating from the autonomic nervous system, without much relevance to an "emotional response" from the brain. Psychological pain, by contrast, can be attributed to various causes, such as when psychological needs for love, autonomy, affiliation, and achievement are frustrated, or when the need to avoid harm, shame, and embarrassment arises [16]. It is likely to be evoked mixed with other emotions. For example, Vangelisti [17] describes psychological pain as a blend of fear and sadness. It can be accompanied by other emotions, including fear, sadness, anger, anxiety, and shame [18][19][20]. Since psychological pain is so complex and difficult to define and provoke, we chose physical pain in this study.
Surprise is defined as a transient emotional state experienced as a result of an unexpected event, and it can have any intensity and valence, i.e., neutral/moderate and pleasant/unpleasant, respectively [21]. It can be divided into "wonder," which people feel when perceiving something rare or unexpected [22], and the "startle" response, which is generated by a sudden stimulus such as a flash of light, a loud noise, or a quick movement [23,24]. Considering that experienced surprise commonly induces the startle response, and that the main function of the startle response is to interrupt an ongoing action and reorient attention to a new and possibly significant event, we treated surprise as a startle response with negative valence in this study.
Emotional stimuli
The selected emotional stimuli are shown in Table 1. The stimulus for boredom induction was the combination of a presentation of a "+" symbol on screen and a repetitive sound of the numbers 1 to 10 for 3 min. To provoke pain, pressure from a standard blood pressure cuff, up to a maximum of 300 mmHg, was applied to the participant's non-dominant arm for 1 min. The surprise-provoking stimulus was the sudden presentation of the sounds of a hog-caller, breaking glass, and thunder while the participants concentrated on a game-like computer task for 1 min.
These stimuli had been verified for their appropriateness and effectiveness through a preliminary psychometric experiment, in which one hundred and twenty-two college students rated the emotions they experienced during exposure to each emotional stimulus. In that experiment, the appropriateness of an emotional stimulus means the consistency between the emotion intended by the experimenter and the feeling experienced by the participants; it can be expressed mathematically as (the number of participants who reported the intended emotion / the total number of participants) × 100. The effectiveness is the intensity of the emotion that participants rated on a 1-to-7-point Likert-type scale (e.g., 1 being "least weak" and 7 being "most intense"). The averages (SD) of appropriateness and effectiveness for these stimuli are as follows: the boredom-inducing stimulus had 86.0 % and 5.23(1.36), the pain-inducing stimulus 97.3 % and 4.96(1.34), and the surprise-inducing stimulus 94.1 % and 6.12(1.14).
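As an illustration of these two measures, here is a minimal Python sketch; the ratings list is hypothetical and the function names are ours, not the study's:

```python
import numpy as np

# Hypothetical ratings from a psychometric pre-test: each entry is
# (reported_emotion, intensity on the 1-7 Likert scale) for one participant.
ratings = [("boredom", 5), ("boredom", 6), ("other", 3), ("boredom", 4)]

def appropriateness(ratings, intended):
    """Share of participants whose reported emotion matched the intended one (%)."""
    hits = sum(1 for emotion, _ in ratings if emotion == intended)
    return 100.0 * hits / len(ratings)

def effectiveness(ratings):
    """Mean and SD of the rated intensity."""
    scores = np.array([score for _, score in ratings], dtype=float)
    return scores.mean(), scores.std(ddof=1)

print(appropriateness(ratings, "boredom"))  # -> 75.0
print(effectiveness(ratings))               # -> (4.5, ~1.29)
```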
Experimental procedure
Two hundred seventeen healthy college students (97 males and 120 females), aged 20.0 (SD 1.80) years, participated in this experiment. They had no history of medical illness related to heart disease, respiration, or central nervous system disorders. They were introduced to the experiment protocols and provided written consent before the experiment began. They were also paid $30 USD per session to compensate for their participation. Prior to the experiment, they were introduced to the experimental procedures in detail and given time to adapt and feel comfortable in the laboratory setting. Then, electrodes were attached to their wrists, fingers, and an ankle for the measurement of physiological signals. Physiological signals were measured for 1 min during relaxation while viewing a "fixation" on the computer screen (baseline state), for 1~3 min during the presentation of emotional stimuli (emotional state), and then for an additional 1 min after the presentation of the emotional stimuli (recovery state). After the physiological signal acquisition, a psychological assessment was conducted, during which participants were asked to label the experienced emotion (i.e., happiness, boredom, sadness, fear, anger, disgust, surprise, or others) and to rate the intensity of the emotion in response to the emotional stimulus. The experimental procedure and the study protocol were approved by the Institutional Review Board of Chungnam National University (No. 201309-SB-004-01).
Physiological signal acquisition and feature extraction
The data of physiological signals were acquired by using the MP150 and AcqKnowledge v 4.1 (Biopac, USA). The signals were recorded at a 250-Hz sampling rate and digitized by an analog-to-digital converter; appropriate amplification and band-pass filtering were also performed. The acquired signals were as follows. Electrocardiography (ECG) makes it possible to gain insight into the relative effects of the parasympathetic and sympathetic components at the nodes using a noninvasive recording technique. It is used to measure the rate and regularity of heartbeats, as well as the size and position of the chambers, the presence of any damage to the heart, and the effects of drugs or devices used to regulate the heart, such as a pacemaker. For the acquisition of the ECG signal, ECG electrodes (Meditrace 100, Kendall_LTP, USA) were placed on both wrists and the left ankle, with two kinds of electrodes, sputtered and AgCl ones. The electrode on the left ankle was used as a reference.
Electrodermal activity (EDA) is one of the physiological signals that can easily be measured from the body surface and represents the activity of the autonomic nervous system. It characterizes changes in the electrical properties of the skin due to the activity of sweat glands and is physically interpreted as conductance. Sweat glands distributed on the skin receive input from the sympathetic nervous system only, so EDA is a good indicator of arousal level due to external sensory and cognitive stimuli.
Table 1 (excerpt). Pain: induction of pain by using a blood pressure cuff (1 min). Surprise: sudden sounds of a hog-caller, breaking glass, and thunder during concentration on a game-like computer task (1 min).
The EDA signal was measured with the use of 8-mm AgCl electrodes (TSD203, Biopac, USA) placed on the volar surfaces of the distal phalanges of the index and middle fingers of the non-dominant hand. The electrodes were filled with a 0.05 molar isotonic NaCl paste to provide a continuous connection between the electrodes and the skin.
Skin temperature (SKT) measures the thermal response of human skin. Variations in SKT mainly come from localized changes in blood flow caused by vascular resistance or arterial blood pressure. Local vascular resistance is modulated by smooth muscle tone, which is mediated by the sympathetic nervous system. The mechanism of arterial blood pressure variation can be described by a complicated model of cardiovascular regulation by the autonomic nervous system. Thus, it is evident that SKT variation reflects autonomic nervous system activity, and SKT is another effective indicator of emotional status. SKT signals were measured at the fingertip, on the first joint of the non-dominant ring finger, using an SKT100B amplifier and a fast-response thermistor (TSD202A), and SKT values were calculated for each time unit from the raw signals.
Photoplethysmography (PPG) is the process of applying a light source and measuring the light reflected by the skin. At each contraction of the heart, blood is forced through the peripheral vessels, producing engorgement of the vessels under the light source and thereby modifying the amount of light reaching the photo-sensor. PPG is a waveform signal that manifests the pulsation of the chest wall and great arteries following each heartbeat: the blood pressure and vascular diameter change with the cardiac cycle, and these arterial pulsatile alterations propagate to the peripheral vascular system. It thus allows observation of the mechanical movement of the heart and the kinetics of blood flow. For the recording of PPG, the sensor (TSD200, Biopac, USA) was attached to the first joint of the non-dominant thumb. The signals were amplified by the respective amplifiers, ECG100C, GSR100C, SKT100C, and PPG100C (Biopac, USA).
To extract features, the acquired signals were analyzed for 30 s each from the baseline state and the emotional state by using AcqKnowledge v 4.1 (Biopac, USA). Twenty-seven features were extracted from the obtained physiological signals (Table 2). From the ECG, heart rate (HR), low-frequency heart rate variability spectral power [0.04~0.15 Hz] (LF), high-frequency heart rate variability spectral power [0.15~0.4 Hz] (HF), and the ratio of low- to high-frequency power (LF/HF HRV) were extracted. The skin conductance level (SCL), the average of the skin conductance responses (SCR), and the number of skin conductance responses were obtained from the EDA. The mean SKT was calculated by averaging the SKT amplitude values during the 30-s baseline and emotional states. The blood volume pulse (BVP) and pulse transit time (PTT) were extracted from the PPG. Five hundred and thirty-seven datasets of physiological signals were selected for data analysis after excluding datasets with severe artifacts caused by movements, noises, etc.
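As a hedged illustration of how such features can be derived, the sketch below (not the study's AcqKnowledge pipeline) computes HR and the LF/HF ratio from a single 30-s ECG segment sampled at 250 Hz; the peak-detection threshold and the 4-Hz tachogram resampling are illustrative assumptions:

```python
import numpy as np
from scipy.signal import find_peaks, welch

FS = 250  # sampling rate used in the study (Hz)

def ecg_features(ecg, fs=FS):
    """Sketch: derive HR (bpm) and the LF/HF ratio from one 30-s ECG segment.

    Assumes a reasonably clean signal; a real pipeline needs artifact rejection.
    """
    # Crude R-peak detection: peaks above the 95th amplitude percentile,
    # at least 0.4 s apart (i.e., at most ~150 bpm).
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 95),
                          distance=int(0.4 * fs))
    rr = np.diff(peaks) / fs              # RR intervals (s)
    hr = 60.0 / rr.mean()                 # mean heart rate (bpm)

    # Evenly resample the RR series to a 4-Hz tachogram, then estimate
    # its power spectrum with Welch's method.
    t = np.cumsum(rr)
    t_even = np.arange(t[0], t[-1], 0.25)
    rr_even = np.interp(t_even, t, rr)
    f, pxx = welch(rr_even - rr_even.mean(), fs=4.0,
                   nperseg=min(len(rr_even), 64))
    df = f[1] - f[0]
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df   # LF band power
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df   # HF band power
    return hr, lf, hf, lf / hf
```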
Emotion recognition methods
For the comparative analysis of emotion recognition, we used one statistical method and five machine learning algorithms. The six methods are briefly described in this section; refer to [25][26][27][28][29][30][31] for details of the methods.
Discriminant function analysis
Discriminant function analysis (DFA) is a statistical analysis used to predict a categorical dependent variable (called a grouping variable) by one or more continuous or binary independent variables (called predictor variables) [25]. The model is composed of a discriminant function based on linear combinations of independent variables, and those independent variables provide the best discrimination between groups. DFA is used to maximally separate the groups, to determine the most parsimonious way to separate groups, or to discard variables which are little related to group distinctions. DFA is similar to regression analysis. A discriminant score can be calculated based on the weighted combination of the independent variables.
$D_i = b_0 + b_1 x_{i1} + b_2 x_{i2} + \cdots + b_p x_{ip}$, where $D_i$ is the predicted score (discriminant score), the $x_{ij}$ are the predictors, and the $b_j$ are the discriminant coefficients.
When interpreting multiple discriminant functions, which arise from analyses with more than two groups and more than one continuous variable, the different functions are first tested for statistical significance. If the functions are statistically significant, then the groups can be distinguished based on the predictor variables. The basic idea underlying discriminant function analysis is to determine whether groups differ with regard to the mean of a variable, and then to use that variable to predict group membership. (Abbreviations used in Table 2: b_ = baseline, e_ = emotional state, d_ = "e_" − "b_"; ECG = electrocardiography; EDA = electrodermal activity; SKT = skin temperature; PPG = photoplethysmography; HR = heart rate; LF = low frequency; HRV = heart rate variability; SCL = skin conductance level; SCR = skin conductance response; BVP = blood volume pulse; PTT = pulse transit time.)
Linear discriminant analysis
Linear discriminant analysis (LDA), one of the linear models, is a method used in statistics, pattern recognition, and machine learning to find a linear combination of features which characterizes or separates two or more classes of objects or events. LDA finds the direction on which to project the data so that the between-class variance ($S_B$) is maximized and the within-class variance ($S_W$) is minimized, and then offers a linear transformation of the predictor variables which provides a more accurate discrimination [26]. In LDA, the measurement space is transformed so that the separability between the emotional states, which can be expressed by several criteria, is maximized. $S_W$ is proportional to the sample covariance matrix for the pooled d-dimensional data; it is symmetric and positive semi-definite, and it is usually nonsingular if n > d. Likewise, $S_B$ is also symmetric and positive semi-definite, but because it is the outer product of two vectors, its rank is at most one [26].
In terms of $S_B$ and $S_W$, the criterion function $J$ is written as

$$J(w) = \frac{w^{T} S_B w}{w^{T} S_W w}.$$

This expression is well known in mathematical physics as the generalized Rayleigh quotient. It is easy to show that a vector $w$ that maximizes $J$ must satisfy

$$S_B w = \lambda S_W w$$

for some constant $\lambda$, which is a generalized eigenvalue problem.
LDA works when the measurements made on independent variables for each observation are continuous quantities. When dealing with categorical independent variables, the equivalent technique is discriminant correspondence analysis.
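As a minimal sketch of the LDA classifier described above, here is a scikit-learn example; the random matrix is a stand-in for the study's 537 × 27 feature set, which is not available here:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical stand-ins for the study's data: 537 samples of 27
# physiological features with labels for the three emotions.
rng = np.random.default_rng(0)
X = rng.normal(size=(537, 27))
y = rng.choice(["boredom", "pain", "surprise"], size=537)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# With 3 classes, LDA projects onto at most 2 discriminant directions
# (the generalized eigenvectors of S_B w = lambda * S_W w).
projected = lda.transform(X)        # shape (537, 2)
predictions = lda.predict(X)
print(projected.shape, predictions[:5])
```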
Classification and regression trees
Classification and regression trees (CART) [26,27] is a decision tree and nonparametric technique that can select, from among a large number of variables, the most important ones in determining the outcome variable. Given the data represented at a node, it either declares that node to be a leaf (and states what category to assign to it) or finds another property to use to split the data into subsets; this is the generic tree-growing methodology known as CART. The fundamental principle underlying tree creation is that of simplicity: we prefer decisions that lead to a simple, compact tree with few nodes. In formalizing this notion, the most popular measure is the entropy impurity (or occasionally information impurity):

$$i(N) = -\sum_{j} P(\omega_j) \log_2 P(\omega_j),$$

where $P(\omega_j)$ is the fraction of patterns at node $N$ that are in class $\omega_j$. By the well-known properties of entropy, if all the patterns are of the same category, the impurity is 0; otherwise, it is positive, with the greatest value occurring when the different classes are equally likely.
In most general terms, the purpose of the analyses via tree-building of CART is to determine a set of if-then logical (split) conditions that permit accurate prediction or classification of cases. It is relatively simple for nonstatisticians to interpret. Decision rules based on trees are more likely to be feasible and practical, since the structure of the rule and its inherent logic are apparent.
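The entropy impurity defined above is simple to compute directly; here is a small sketch with illustrative labels:

```python
import numpy as np

def entropy_impurity(labels):
    """Entropy impurity i(N) = -sum_j P(w_j) log2 P(w_j) at a node.

    `labels` are the class labels of the patterns that reach the node.
    """
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

print(entropy_impurity(["pain"] * 10))                    # -> 0.0 (pure node)
print(entropy_impurity(["pain", "boredom", "surprise"]))  # -> log2(3) ~ 1.585
```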
Self-organizing map
Self-organizing map (SOM) [26,28], also called a Kohonen map, is a type of artificial neural network in the unsupervised learning category that generally presents a simplified, relational view of a highly complex dataset. It is called a topology-preserving map because a topological structure is imposed on the nodes in the network; a topological map is simply a mapping that preserves neighborhood relations. The goal of training is that the weight vector of the "winning" unit is adjusted so that it is more like the particular input pattern, and units in its output-array neighborhood are also adjusted so that their weights more nearly match those of the input pattern. In this way, neighboring points in the input space lead to neighboring output points being active. Given the winning unit $c$, the weight update for each unit $i$ is

$$w_i \leftarrow w_i + h_{ci}(t)\,(x - w_i), \qquad h_{ci}(t) = h_0(t)\,\exp\!\left(-\frac{\lVert r_i - r_c \rVert^2}{\sigma^2(t)}\right),$$

where $h_{ci}$ is the neighborhood function, which has value 1 for $i = c$ and becomes smaller as the distance between units $i$ and $c$ in the output array grows, and $h_0$ and $\sigma$ are suitable decreasing functions of time. Units close to the winner, as well as the winner itself, have their weights updated appreciably, while weights associated with far-away output nodes do not change significantly. It is here that the topological information is supplied.
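A minimal sketch of this update rule, assuming units laid out on a 2-D output grid and a Gaussian neighborhood of the form given above (the shapes and parameter handling are our assumptions, not a specific toolbox's API):

```python
import numpy as np

def som_step(weights, grid, x, h0, sigma):
    """One training step of a self-organizing map.

    weights: (n_units, n_features) weight vectors
    grid:    (n_units, 2) coordinates of the units in the output array
    x:       one input pattern, shape (n_features,)
    h0, sigma: learning rate and neighborhood width (decreased over time)
    """
    # Winning unit c: the unit whose weight vector is closest to the input.
    c = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Neighborhood function: 1 at the winner, decaying with grid distance.
    d2 = np.sum((grid - grid[c]) ** 2, axis=1)
    h = h0 * np.exp(-d2 / sigma**2)
    # Move the winner and its neighbors toward the input pattern.
    weights += h[:, None] * (x - weights)
    return weights
```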
The Naïve Bayes algorithm
The Naïve Bayes algorithm [26] is a simple probabilistic classification algorithm based on applying Bayes' rule with strong (naive) independence assumptions, and it is particularly suited to cases where the dimensionality of the inputs is high. The Naïve Bayes classifier assumes that the presence (or absence) of a particular feature of a class is unrelated to the presence (or absence) of any other feature, given the class variable. This helps alleviate problems stemming from the curse of dimensionality, such as the need for datasets that scale exponentially with the number of features. While Naïve Bayes often fails to produce a good estimate of the correct class probabilities, this may not be a requirement for many applications. When the dependency relationships among the features used by a classifier are unknown, we generally proceed by taking the simplest assumption, namely, that the features are conditionally independent given the category, that is,

$$P(x \mid \omega_j) = \prod_{d=1}^{D} P(x_d \mid \omega_j).$$

This so-called Naïve Bayes rule often works quite well in practice, and it can be expressed by a very simple belief net.
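As a sketch of this rule under an assumed Gaussian model for each feature (the text does not fix the per-feature densities), the class-conditional product becomes a sum of log-densities:

```python
import numpy as np
from scipy.stats import norm

def naive_bayes_log_posterior(x, priors, means, stds):
    """Log-posterior (up to a constant) under the Naive Bayes rule.

    Assumes each feature d is Gaussian given the class j, so
    log P(w_j|x) ~ log P(w_j) + sum_d log P(x_d|w_j).
    priors: (n_classes,); means, stds: (n_classes, n_features)
    """
    log_likelihood = norm.logpdf(x, loc=means, scale=stds).sum(axis=1)
    return np.log(priors) + log_likelihood

# Example with two classes and two features (all numbers illustrative):
priors = np.array([0.5, 0.5])
means = np.array([[0.0, 0.0], [1.0, 1.0]])
stds = np.ones((2, 2))
print(naive_bayes_log_posterior(np.array([0.9, 1.1]), priors, means, stds))
```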
Support vector machine
Support vector machine (SVM) is a non-linear model that has been used in well-known emotion recognition algorithms; the support vector classifier separates the emotional states with a maximal margin. An advantage of the support vector classifier is that it can be extended to non-linear boundaries by the kernel trick. SVMs are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. SVM is designed for two-class classification by finding the optimal hyperplane where the expected classification error on test samples is minimized, and it has been utilized as a pattern classifier to overcome the difficulty in pattern classification caused by large within-class variation of features and overlap between classes, even when the features are carefully extracted [29]. The goal in training an SVM is to find the separating hyperplane with the largest margin; we expect that the larger the margin, the better the generalization of the recognizer [30].
SVM [26,31] finds a hyperplane based on support vectors to analyze data and recognize patterns. The complexity of the resulting classifier is characterized by the number of support vectors rather than by the dimensionality of the transformed space. The distance from the hyperplane to a pattern $y$ is $|g(y)| / \lVert a \rVert$, and assuming that a positive margin $b$ exists,

$$\frac{z_k\, g(y_k)}{\lVert a \rVert} \geq b, \qquad k = 1, \ldots, n.$$

The goal is to find the weight vector $a$ that maximizes $b$. Here, $z_k$ is the class of the $k$th pattern, $b$ is the margin, and $g(y) = a^{T} y$ is a linear discriminant in an augmented $y$ space. The parameters of the maximum-margin hyperplane are derived by solving this optimization. There exist several specialized algorithms for quickly solving the quadratic programming (QP) problem that arises from SVMs, mostly relying on heuristics for breaking the problem down into smaller pieces. The SVM algorithm has been widely applied in the biological and other sciences.
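To make the maximum-margin idea concrete, here is a small scikit-learn sketch with synthetic two-class data (all data and parameters are illustrative); for a linear SVC, the margin width is $2 / \lVert w \rVert$:

```python
import numpy as np
from sklearn.svm import SVC

# Two synthetic, roughly separable classes in 2-D.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# A linear SVC finds the maximum-margin hyperplane; the classifier's
# complexity is characterized by its support vectors.
svm = SVC(kernel="linear", C=1.0).fit(X, y)
print(svm.support_vectors_.shape)       # patterns that define the margin
print(2.0 / np.linalg.norm(svm.coef_))  # width of the margin, 2/||w||
```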
Experimental result
For the results of the psychological assessment, we analyzed the intensity of each emotion that the participants experienced. The average (SD) intensity of boredom was 5.23(1.35), of pain 5.74(0.71), and of surprise 6.35(0.69). We used bar graphs to plot the comparison of the significant differences among emotions (i.e., the results of the LSD post-hoc tests). The bars display the changes from the baseline state to the emotional state (i.e., emotional state minus baseline state) in nine physiological features, such as d_HR, d_SCL, and d_SCR, shown in Table 2. Upward bars show the mean physiological responses that were higher during the emotional state, and downward bars show those that were lower, compared with baseline. In the LSD post-hoc tests, the increase in HR during surprise was significantly higher than in boredom and pain (p < .001) (Fig. 1). Regarding the EDA responses, the SCL and SCR changes showed significant differences among emotions: SCL and SCR during surprise were significantly higher than during boredom and pain (p < .001), and SCL and SCR during pain were significantly higher than during boredom (SCL p < .05; SCR p < .001) (Figs. 2 and 3). The change in meanSKT during boredom was higher than during pain (p < .05), and there was no significant difference between boredom and surprise or between pain and surprise (Fig. 4). On the other hand, the BVP signals during pain and surprise decreased significantly compared with boredom (p < .001), and PTT during surprise showed a significant decrease compared with boredom and pain (p < .001) (Figs. 5 and 6).
Results of emotion recognition
To assess the performance of the five well-known recognizers, we used the recognition accuracy for the emotions. We used the Classification Toolbox of MATLAB for CART and Naïve Bayes, and Duda's Toolbox (http://www.yom-tov.info/computer_manual.html) for LDA and SVM; the SOM Toolbox (www.cis.hut.fi/projects/somtoolbox/) is available in MATLAB. We used feature normalization, and the default values implemented in the toolboxes were used for the related parameters of the algorithms. The classification conducted in the study underwent a 10-fold cross-validation, which is a statistical method for evaluating models. In 10-fold cross-validation, the original sample is randomly partitioned into 10 equal-size subsamples. Nine of the 10 subsamples are used as training data, and the remaining subsample is used as the testing data. The process is then repeated 10 times (the 10 folds), with each of the 10 subsamples used exactly once as the testing data. Table 3 shows the recognition accuracy as a percentage (%) for the algorithms. The results show that 84.7 % of test cases could be correctly classified by DFA as a statistical method. For the machine learning algorithms, the classification rates using LDA, CART, SOM, Naïve Bayes, and SVM were 74.9, 67.8, 61.5, 71.9, and 62.0 %, respectively. For emotion recognition, the statistical method thus showed higher recognition accuracy than the machine learning algorithms; as a result, DFA was the optimal method to classify the three emotions, i.e., boredom, pain, and surprise. Table 4 shows the classification results for DFA, i.e., 76.5 % for pain, 89.5 % for boredom, and 88.9 % for surprise. LDA provided 74.9 % accuracy in total; regarding the accuracy for each emotion, pain was recognized with 76.3 %, boredom 75.6 %, and surprise 72.9 % (Table 5). In the analysis of CART, the accuracy for each emotion ranged from 58.9 to 76.1 %: 69.2 % for pain, 76.1 % for boredom, and 58.9 % for surprise (Table 6). In Table 7, SOM showed recognition accuracies of 69.3, 52.7, and 62.0 % for pain, boredom, and surprise, respectively. The overall result for Naïve Bayes was 71.9 %, and this algorithm successfully recognized 77.8 % of pain, 71.6 % of boredom, and 66.7 % of surprise (Table 8). Finally, SVM showed classification accuracies of 67.0, 62.1, and 57.3 % for pain, boredom, and surprise, respectively (Table 9).
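The evaluation protocol can be sketched as follows; this is not the study's MATLAB pipeline, the random data stand in for the 537 × 27 feature set, and SOM is omitted because scikit-learn provides no SOM classifier:

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Hypothetical stand-ins for the 537 datasets of 27 features.
rng = np.random.default_rng(3)
X = rng.normal(size=(537, 27))
y = rng.choice(["boredom", "pain", "surprise"], size=537)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(criterion="entropy"),
    "NaiveBayes": GaussianNB(),
    "SVM": SVC(),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    # Feature normalization inside the pipeline keeps test-fold statistics
    # out of training: each fold is scaled on its training data only.
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=cv)
    print(f"{name}: {100 * scores.mean():.1f} % (+/- {100 * scores.std():.1f})")
```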
Discussion
We identified the specific responses of different emotions, i.e., boredom, pain, and surprise, based on the physiological signals. These emotions were then classified using a statistical method and five emotion recognizers.
Physiological responses induced by emotions
Our purpose was to identify the differences in physiological responses among boredom, pain, and surprise. We measured physiological signals while inducing the different emotions with emotional stimuli and statistically verified that HR, SCL, SCR, meanSKT, BVP, and PTT are meaningful features for identifying the differences among these emotions. Based on these features, the emotion-specific responses induced by each emotional stimulus are as follows.
In boredom, there was a significant increase in meanSKT. Skin temperature serves as a surrogate marker of blood flow changes that result from vascular reactivity. SKT is influenced mainly by sympathetic adrenergic vasoconstrictor nerves, and the increased SKT in our result indicates vasodilation through withdrawal of neural activity; for example, the more tense the muscles are under strain, the more the blood vessels contract and the more the temperature decreases. Increased SKT in boredom is supported by previous studies [32][33][34]: skin temperature decreases sharply under stress and fear and increases during relaxation, boredom, and sleep, and in particular it changes greatly under emotional stress (i.e., rapid decrease and rapid return) [32][33][34]. We observed that pain and surprise were associated with mildly increased SKT instead of extreme changes. This might be caused by our period for the data analysis, i.e., a 30-s emotional state: SKT is considered a relatively slow indicator of changes in emotional state, and a longer emotional-state window might be needed to confirm the overall, more extreme changes in SKT reported in previous studies.
The physiological responses induced by pain included a greatly decreased BVP, a mildly decreased PTT, and mild increases in both SCL and SCR. The BVP is a measure of the amount of blood currently running through the vessels in the finger and reflects vasoconstriction activity. Changes in the BVP signal can indicate relative changes in the vascular bed due to vasodilation or vasoconstriction (increase or decrease in blood perfusion), as well as changes in the elasticity of the vascular walls, reflecting changes in blood pressure [35]. The decrease in BVP amplitude during pain compared to the baseline state may imply peripheral vasoconstriction in the finger associated with arousal [36,37]. The PTT is a measure of the elapsed time between the R-wave of the ECG and the arrival of the pulse wave at the finger [38]; it is affected by changes in the contractile force of the heart and in the mean arterial blood pressure. Since increased PTT is linked to a suppression of sympathetic activation, the decrease of PTT during pain in the study could reflect sympathetic activation. Additionally, increased SCL and SCR mean that the skin is sweaty and the sympathetic nervous system is activated; in particular, SCL and SCR are related to sympathetic-adrenal-medullary (SAM) activation, which indicates the progress of pain. In conclusion, we confirmed that the pain-specific responses were caused by SAM activation and peripheral vasoconstriction. These findings relate to physical pain, since this study focused on physical rather than psychological pain, as described in the introductory section.
The surprise-specific response comprised significantly increased SCL, SCR, and HR, as well as greatly decreased BVP and PTT. In a review article, Kreibig [13] reported that surprise is associated with a short-duration SCR of medium response size, characterized by rapid increase and rapid return [22], increased SCL [39], increased HR [38][39][40][41], and decreased [41] or increased finger temperature [22,39]. Increased SCL and SCR have been proposed to reflect cognitively or emotionally mediated motor preparation [42] and an increase in action tendency [4,43]. HR is controlled by the central nervous system, which varies the impulse traffic in the sympathetic and parasympathetic nerve fibers terminating in the SA node; HR is the result of the intrinsic automaticity of the SA node and the modulating influence of the autonomic nervous system [44,45]. During relatively mild work, HR increases primarily through a withdrawal of parasympathetic restraint on the SA node; at higher levels of work, further withdrawal of parasympathetic restraint occurs, but increases in sympathetic activity become progressively more important in accelerating the cardiac rate. In other words, increased HR indicates sympathetic activation (related to fight/flight) to prepare for action, while decreased HR indicates parasympathetic activation (related to relaxation) as a signal for resting and recovery. In our results, there were large decreases in BVP and PTT during surprise. These responses, derived from vasoconstriction, are considered to result from α-adrenergic stimulation [46]. As noted above, decreased BVP reflects peripheral vasoconstriction in the finger, and a large decrease of PTT implies strong sympathetic activation. In sum, the physiological responses during surprise can be characterized by strong sweat activity, vasoconstriction, and increased heart rate activated by the sympathetic nervous system.
Results of emotion recognition
Our study demonstrated the possibility of emotion recognition for boredom, pain, and surprise based on physiological signals. To find the optimal method to classify these emotions effectively, we used both a statistical method and machine learning algorithms. The results showed that the classification rate by DFA was 84.7 %; DFA was the best recognition method to classify the three emotions. Among the machine learning algorithms, emotion recognition by LDA was the highest, at 74.9 %. As mentioned earlier in the "Emotion recognition methods" section, DFA is a statistical method composed of a discriminant function based on linear combinations of independent variables, and those independent variables provide the best discrimination between classification groups. The LDA algorithm also looks for linear combinations of variables which best explain the classification of the data. Linear methods find the vectors in the underlying space that best discriminate among classes, trying to maximize the between-class differences and minimize the within-class ones; they are also good at discriminating different classes because they are supervised methods. As a result, the emotions appear to be more accurately classified by linear methods than by non-linear methods such as SOM and CART. Linear methods offer many advantages in other pattern recognition problems as well, such as face or speech recognition. On the other hand, the classification rates of the other algorithms ranged from 61.5 to 71.9 %; it is assumed that the overlap between classes could account for the lower classification rates of the non-linear methods.
Conclusion
In conclusion, we examined the differences among three emotions, i.e., boredom, pain, and surprise, based on physiological signals, and the possibility of classifying the three emotions by using six classification methods. This study has a few limitations. Firstly, the physiological and classification results for pain and surprise are of limited generalizability, since the study focused on physical pain to the exclusion of psychological pain, and on the startle reflex rather than surprise responses arising from other psychological processes. As we described in the introduction, psychological pain and surprise are relatively complex (e.g., psychological pain is likely to be a mixture of sadness and fear), making it hard to observe emotion-specific physiological responses. In further studies, it might be necessary to divide pain and surprise in detail, into physical or psychological pain and pleasant or unpleasant surprise. Secondly, the recognition accuracy of the machine learning algorithms used in the study, particularly the non-linear methods such as SOM and CART, was relatively lower than that of the linear methods. We confirmed that for the classification of the three emotions, algorithms based on linear methods were more effective than those based on non-linear methods. To improve the classification accuracy, it might be necessary to extract new physiological features (e.g., changes of SKT over time) and to modify the testing method in order to discriminate the three emotions accurately. Despite these limitations, the findings of the study could contribute to expanding the understanding of human affective states and the underlying physiological mechanisms. The results help us develop and expand emotion models of emotion-specific physiological patterns by adding emotion-specific physiological responses for non-basic emotions to those for the basic emotions observed by previous researchers. If further studies include gender or age effects in the physiological responses of boredom, pain, and surprise, the findings could also contribute to anthropology by revealing the characteristics of humans' emotional responses across different groups. The recognition of various emotions, as well as of different groups, could be applied to develop user-friendly emotional interaction systems between humans and computers or machines in affective computing and HCI.
"Computer Science"
] |
Recent Advances in Non-Precious Transition Metal / Nitrogen-doped Carbon for Oxygen Reduction Electrocatalysts in PEMFCs
The proton exchange membrane fuel cell (PEMFC) has been considered a promising future energy conversion device and has attracted immense scientific attention due to its high efficiency and environmental friendliness. Nevertheless, the practical application of PEMFCs has been seriously restricted by the high cost, low earth abundance, and poor poisoning tolerance of the precious Pt-based oxygen reduction reaction (ORR) catalysts. Noble-metal-free transition metal/nitrogen-doped carbon (M–NxC) catalysts have been proven to be among the most promising substitutes for precious metal catalysts, due to their low cost and high catalytic performance. In this review, we summarize the development of M–NxC catalysts, including the earlier non-pyrolyzed and pyrolyzed transition metal macrocyclic compounds and the recently developed M–NxC catalysts, among which the Fe–NxC and Co–NxC catalysts have gained our special attention. The possible catalytic active sites of M–NxC catalysts towards the ORR are also analyzed here. This review aims to provide some guidelines for the design and structural regulation of non-precious M–NxC catalysts via identifying the real active sites, and thus enhancing their ORR electrocatalytic performance.
Introduction
The worldwide demand for energy is rising drastically with the rapidly increasing global population and the progressing development of society [1][2][3]. However, traditional fossil fuels are in danger of drying up under continuous exploitation and utilization, which has resulted in universal concern about an energy crisis [4,5]. Besides, the urgent environmental issues caused by consuming fossil fuels, such as pollutant emissions and global warming, have also severely threatened the future of human society. These problems have spurred intensive research on the development of sustainable, eco-friendly and highly efficient new energy systems [6,7]. The fuel cell can convert chemical energy directly into electricity and is regarded as one of the most promising energy technologies, due to its high efficiency and eco-friendliness [8][9][10]. Among the various kinds of fuel cells, the proton exchange membrane fuel cells (PEMFCs) have attracted the most attention over the past decades, due to their high energy conversion rate, high reliability, quick startup, low operating temperature, low pollutant emissions, etc. [11][12][13].
However, the widespread application of PEMFCs is greatly hampered by the high cost and low performance of the electrocatalysts needed at the cathode to accelerate the sluggish kinetics of the oxygen reduction reaction (ORR) [14,15]. Typically, there are three main obstacles limiting the mass production of ORR catalysts: (i) the high cost. Currently, the most commonly used and effective catalysts in PEMFCs are still Pt-based catalysts, whose scarcity and high cost have resulted in excessive pricing of PEMFCs [16]; (ii) the low performance. The complex mechanisms and sluggish reaction kinetics of the ORR necessarily lead to high potential demands and low current density outputs [17]. The main research aim is to fabricate suitable catalysts with abundant active sites and high reaction selectivity, thereby greatly decreasing the ORR energy barrier and raising the conversion efficiency; (iii) the insufficient stability. The high stability of catalysts is the guarantee of PEMFCs with excellent durability and lifetime. During the long-term operation of PEMFCs, the catalyst goes through Ostwald ripening, which causes particle agglomeration and growth, surface oxidation state changes, component migration and loss, etc.; these gradually age and inactivate the catalyst, directly degrading the PEMFC's performance [18][19][20][21].
From an economic point of view, noble metal-based (Au, Ag, and platinum group metals: Pt, Pd, Ru, Rh, Ir and Os) electrocatalysts would definitely increase the manufacturing cost of fuel cells, owing to the scarcity and high cost of noble metals [22][23][24]. To solve these problems, numerous efforts have been devoted to exploring non-precious transition metal catalysts, owing to their abundant reserves, economic applicability and potential catalytic activity comparable to noble metals, while most of the main group metals have shown little potential towards ORR catalysis [25,26]. A particularly promising exception is the hybrid of transition metal supported on nitrogen-doped carbon (M-NxC, M = non-precious transition metal, generally x = 2 or 4, corresponding to MN2 or MN4 bonding, respectively), for two possible reasons: (i) the introduction of carbon supports could greatly stabilize and disperse the transition metals, therefore enhancing the ORR catalytic performance; (ii) the synergetic effects between the nitrogen-doped carbon supports and the transition metals could significantly increase the number of active sites and boost the catalytic behavior [17,[27][28][29][30]]. Nevertheless, the reaction kinetics, and in particular the mechanism by which N atoms modulate the surface state and electronic structure of the catalysts, still remain ambiguous.
In this review, we aim to provide an overview of the development of M-NxC ORR catalysts and to summarize recent advances in this area. We offer a brief introduction to the oxygen reduction reaction and its thermodynamic mechanism, and we pay special attention to the research progress in M-NxC ORR catalysts, with emphasis on the choice of carbon sources and nitrogen sources, the state-of-the-art Fe-NxC and Co-NxC catalysts, and the analysis of their catalytic sites.
Brief Introduction of the ORR Mechanism
As shown in Figure 1, there are three sections in the PEMFC: an anode where H2 is oxidized (hydrogen oxidation reaction, HOR), a cathode where O2 is reduced (ORR), and a proton exchange membrane through which H+ is transferred from the anode to the cathode [31][32][33]. The reaction at the cathode, the ORR, is a complex process which involves various elementary and irreversible intermediate reactions [34,35]. It can be seen from Figure 2 that the process can proceed through two pathways, a direct four-electron way or an indirect two-electron way, varying with the electrode material and the pH of the electrolyte. In the two-electron pathway, the O2 molecules first accept two electrons to be reduced to H2O2 (in acid and neutral solution) or HO2− (in alkaline solution), and then accept another two electrons to be transformed into H2O or OH− eventually [36,37]. The generation of H2O2 in the electrolyte surely causes damage to the catalysts and the proton exchange membrane, thus reducing the lifetime of PEMFCs [38]. Hence, the most ideal mechanism for fuel cells is the four-electron pathway, where oxygen molecules accept four electrons in succession to be reduced into H2O (in acid solution) or OH− (in alkaline solution), with no detectable intermediates (H2O2) in the final electrolyte [39,40]. However, in most instances, the ORR proceeds via the two-electron pathway, or via the two-electron and four-electron pathways in parallel on the electrode surface, as the dissociation energy of the O-O bond in O2 (498 kJ/mol) is much higher than that in H2O2 (143 kJ/mol), so partial reduction that preserves the O-O bond proceeds with a lower activation energy. The best solution is to develop effective catalysts that selectively reduce the bond energy of O2, thereby boosting the four-electron process [41].
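For reference, the two pathways in acid solution can be written with commonly cited standard potentials; these values are textbook figures and are not given in the original text:

```latex
% Direct four-electron pathway (acid):
\mathrm{O_2 + 4H^+ + 4e^- \rightarrow 2H_2O}, \qquad E^\circ \approx 1.23\,\mathrm{V}
% Indirect two-electron pathway (acid):
\mathrm{O_2 + 2H^+ + 2e^- \rightarrow H_2O_2}, \qquad E^\circ \approx 0.70\,\mathrm{V}
\mathrm{H_2O_2 + 2H^+ + 2e^- \rightarrow 2H_2O}, \qquad E^\circ \approx 1.76\,\mathrm{V}
```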
Previous M-NxC Catalysts
To explore optimal approaches to produce highly stable and active catalysts with the M-NxC structure, many efforts have been devoted to exploring the synthesis strategies and the nature of the catalytic active sites [42][43][44][45][46][47][48][49]. This section reviews the research progress in transition metal catalysts supported on nitrogen-doped carbon, with emphasis on the choice of carbon sources and nitrogen sources, the control of experimental conditions, and the analysis of catalytic sites. Non-pyrolyzed and pyrolyzed transition metal macrocyclic compounds are discussed first, as they are the pioneers of M-NxC catalysts and provided the fundamental research on which the design of the subsequently discussed, recently developed M-NxC catalysts is based.
Non-Pyrolyzed Transition Metal Macrocyclic Compounds
The earliest discovery of M-NxC catalysts dates back to 1964, when cobalt phthalocyanine (CoPc), a transition metal macrocyclic compound, was investigated as a fuel cell cathode catalyst in alkaline electrolyte by R. Jasinski [50]. Since then, macrocyclic compounds composed of various central transition metal atoms (e.g., Fe, Co, Ni, Mn, Cr) and ligands, such as tetraphenylporphyrin (TPP), tetraazaannulenes (TAA), tetramethoxyphenylporphyrin (TMPP), phthalocyanine (Pc) and tetradithiacyelohexeno-tetraazaporphyrin (TDAP), have entered researchers' field of study [51][52][53], and their catalytic behaviors were subsequently demonstrated in acidic electrolytes [54][55][56]. Figure 3 shows the typical structure of these compounds, in which a chelating group of four nitrogen atoms usually coordinates with a central metal atom. It has been proven that the catalytic activity of these macrocyclic materials is directly related to the central metal ion and the surrounding ligand structure [57,58]. For a given metal center, the ORR catalytic activity is strongly affected by the nature and electron density of the ligand [51,59]. Alt et al. [60] examined the ORR catalytic behavior of several Co-ligand compounds in acidic electrolyte, including Pc, TPP, TAA, TMPP, TDAP and TPAP (tetrapyridino-tetraazaporphyrin), and found an activity order of TAA > TDAP > TMPP > Pc > TPAP > TPP. A similar conclusion was drawn by Song and co-workers in 1998 [61]. Besides, K. Wiesener and colleagues [62] showed that, for a given macrocyclic ligand, the catalytic performance depends greatly on the central metal atom (Fe, Co, Ni, Mn, Cu, etc.), which largely determines whether the ORR follows a four-electron or a two-electron process. The ORR activities of various central transition metals coordinated with the Pc ligand followed the order Fe > Co > Ni > Cu ≈ Mn. Owing to these early explorations, transition metal macrocyclic compounds came to be regarded as a promising class of cathode catalysts to substitute for Pt in the ORR. However, these macrocyclic materials are rarely used directly in PEMFCs because of their poor activity and durability: FePc can facilitate the ORR through a four-electron process but has poor stability, while CoPc has good electrochemical stability but favors the two-electron pathway producing H2O2 [63,64]. Both are thus limited by these inherent drawbacks as cathode catalysts in PEMFCs. Therefore, it is of great significance to design a method that raises the stability and activity of transition metal macrocyclic compounds simultaneously.
Figure 3. The structures of phthalocyanine (Pc), tetraphenylporphyrin (TPP) and tetramethoxyphenylporphyrin (TMPP).
Pyrolyzed Metal Macrocyclic Compounds
In 1973, Alt et al. [65] found that Co- and Fe-TMPP complexes were more stable in 3 N H2SO4 after a high-temperature treatment. In 1976, Jahnke et al. [66] pointed out that the catalytic activity and stability of various N4-chelates in acidic media could be considerably improved through thermal pretreatment in an inert gas atmosphere (e.g., argon). Since then, heat treatment has aroused great research interest and has been viewed as an effective way to improve the catalytic performance of transition metal macrocyclic compounds [52,[67][68][69]]. Studies [70][71][72] have revealed that N4-chelates reach their highest catalytic activity after pyrolysis at 500-700 °C, while a temperature of 800 °C is required to achieve stable catalytic performance in PEMFCs. Moreover, temperatures above 1000 °C lead to an obvious decrease in stability and activity due to (i) the formation and growth of metal particles [73,74] and (ii) a decline in the nitrogen content of the catalyst surface [75].
As for why thermal treatment improves the behavior of macrocyclic compounds, there are three main explanations [76][77][78]: (i) heat treatment improves the dispersion of the coordination compounds, thereby boosting catalytic activity; (ii) heat treatment leads to the polymerization of the macrocyclic compounds, which accounts for the highest activity in the lower temperature range; (iii) heat treatment facilitates the formation of compounds containing M-N4 groups, which are acknowledged to account for the rising activity and long-term stability in the low temperature range (500-700 °C). Although these effects improve the catalysts' electrocatalytic performance and provide reference guidelines for materials design, the material structure and active components of pyrolyzed metal macrocyclic compounds remain controversial.
Recently Developed M-NxC Catalysts
Although the aforementioned non-pyrolyzed and pyrolyzed transition metal macrocyclic compounds have presented impressive ORR electrocatalytic activities, some inherent problems restricting their further development should not be overlooked, including their high cost, complex fabrication process, and low activity and stability in acidic electrolyte [79]. Hence, many efforts have been devoted to developing effective catalysts based on cheap and readily available precursors, whereas in the macrocyclic route a single compound serves as both the nitrogen source and the metal donor on the carbon matrix. Over the past few decades, researchers have attempted to prepare M-NxC catalysts by processing a mixture of an individual metal salt, a nitrogen source and a carbon support at high temperature under a controlled gas atmosphere [80]. For example, Bouwkamp-Wijnoltz et al. [81] prepared catalysts by heat-processing mixtures of cobalt acetate, carbon black and various nitrogen donors, among which the best catalytic results, comparable to those of heat-treated cobalt porphyrin, were obtained with 2,5-dimethylpyrrole. Extended X-ray Absorption Fine Structure (EXAFS) results revealed similar active sites (CoN4) in both types of catalyst. This approach opens up many possibilities for fabricating a variety of catalysts via the flexible permutation and combination of different metals, N donors and carbon sources. Indeed, it has been acknowledged as one of the most cost-effective and straightforward methods to prepare M-NxC catalysts.
Choices of Nitrogen and Carbon Sources
As for the introduction of nitrogen and carbon sources, there are various options. First, as with transition metal macrocyclic compounds such as FePc, the metal, the N atoms and the carbon support can be provided by a single precursor, which is usually expensive and difficult to obtain for practical applications [82]. Second, the N atoms and carbon matrix in M-NxC catalysts can be introduced by using a precursor containing both nitrogen and carbon, such as polyacrylonitrile (PAN) [83], polyaniline (PANI) [84,85], polypyrrole (PPy) [86], polythiophene (PT) [87], ethylenediamine (EDA) [88], cyanamide (CA) [89] and other organic polymers. Most recently, Liu et al. [90] reported a Co/N-doped cross-linked porous carbon (Co/N-CLPC) catalyst prepared from PAN through a one-step, in-situ synthesis method (Figure 4). Co/N-CLPC showed an excellent ORR catalytic activity, with an onset potential of 0.805 V (vs. RHE) and a limiting current density of −5.102 mA cm−2, comparable to those of commercial Pt/C catalysts.
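To put limiting current densities such as the one above in context, the Levich equation relates the diffusion-limited current of a rotating disk electrode to the number of electrons transferred per O2 molecule. The following minimal Python sketch uses typical literature constants for O2-saturated 0.1 M KOH and an assumed rotation rate of 1600 rpm; none of these numbers are taken from [90]:

```python
# Back-of-envelope Levich estimate: how the diffusion-limited ORR current
# density of a rotating disk electrode scales with the electron number n.
# All constants are typical literature values for O2-saturated 0.1 M KOH and
# an assumed rotation rate of 1600 rpm -- none of them come from ref. [90].
import math

F = 96485.0        # Faraday constant, C/mol
C_O2 = 1.2e-6      # O2 solubility, mol/cm^3
D_O2 = 1.9e-5      # O2 diffusion coefficient, cm^2/s
NU = 1.0e-2        # kinematic viscosity of the electrolyte, cm^2/s
OMEGA = 1600.0 * 2.0 * math.pi / 60.0  # 1600 rpm converted to rad/s


def levich_current_density(n: int) -> float:
    """Diffusion-limited current density in mA/cm^2 for n electrons per O2."""
    j = 0.62 * n * F * C_O2 * D_O2 ** (2.0 / 3.0) * NU ** (-1.0 / 6.0) * math.sqrt(OMEGA)
    return j * 1000.0  # convert A/cm^2 to mA/cm^2


for n in (2, 4):
    print(f"n = {n}: j_L ~ {levich_current_density(n):.1f} mA/cm^2")
# n = 4 yields ~5.7 mA/cm^2, the same order as the limiting currents quoted
# above, while a purely two-electron pathway would cap out at about half that.
```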
In most cases, to increase the electrical conductivity or to form a particular structural morphology of the catalysts, researchers introduce extra carbon supports. For example, Wu et al. [91] synthesized PANI-M-C catalysts in which short-chain aniline oligomers polymerize together on the surface of carbon spheres, after which the metal aggregates are encapsulated in carbon shells (Figure 5). Liu et al. [92] developed a catalyst with single iron atoms immobilized on the wall of a carbon nanotube (SAICNT), in which pyrrole polymerizes on an organic template (methyl orange) to fabricate a nanotube structure that favors electron transport and increases the number of active sites (Figure 6). The SAICNT exhibited an ultrahigh ORR activity, with a half-wave potential of 0.93 V and a current density of 59.8 mA cm−2 at 0.8 V, far better than those of commercial Pt/C. Besides, Huang et al. [93] prepared a trifunctional catalyst by electrodepositing Co ions and PPy onto carbon fibers to form a tube-shaped product with abundant active sites (Figure 7).
The third method to introduce nitrogen and carbon is to use two materials that contain the N atoms and the carbon basis, respectively. Generally speaking, the carbon precursors are the same as those used to prepare carbon nanomaterials, such as biomass materials, while the forms of carbon support used for electrocatalyst preparation vary from two-dimensional nanosheets (e.g., graphene and its derivatives [1,[94][95][96][97]]) to three-dimensional carbon nanostructures (e.g., carbon nanotubes [98,99], carbon fibers [100][101][102], carbon spheres [103] and porous carbon [3,104]). As there are already some excellent reviews on the effects of carbon supports in electrocatalysts, we do not repeat them in this article. Nitrogen sources are an essential part of M-NxC catalysts, providing the N atoms that chelate with metal ions to form the MN4 active sites. Apart from the aforementioned organic polymers, there are two types of commonly used individual nitrogen sources: (i) organic monomers or small organic molecules, such as pyrrole, phenanthroline, carbamide and melamine; (ii) inorganic compounds, such as NH3. For instance, Liang et al. [105] chose melamine and an ordered porous resin (OPR) as nitrogen and carbon sources, respectively, to prepare Fe-N-CNT-OPC catalysts, which displayed an ORR electrocatalytic activity similar to that of the 20% Pt/C catalyst. The Fe-N-CNT-OPC possessed abundant Fe-N active sites, a high porosity favorable to electron and mass transport, and ample graphitic CNTs to ensure conductivity (Figure 8). Kim et al. [106] reported an Fe-N-C catalyst made by pyrolyzing a mixture of phenanthroline and Ketjenblack carbon supports. According to X-ray absorption spectroscopy (XAS) and X-ray photoelectron spectroscopy (XPS) analyses, the FeNx active sites preferentially form with pyridinic N stemming from phenanthroline. Compared with organic nitrogen sources, research on inorganic nitrogen-containing compounds for electrocatalyst preparation has mainly focused on the utilization of NH3.
In 2016, Park et al. [107] reported a novel FexNy/NC nanocomposite catalyst with an excellent ORR activity, long-term catalytic stability, and a direct four-electron pathway in alkaline electrolyte. As shown in Figure 9, a mixture of carbon black and an Fe salt was heat-treated in NH3, directly yielding the FexNy/NC catalyst. Besides, some compounds that release NH3 on heating, such as urea, have also drawn researchers' attention. For example, Liu et al. [108] fabricated Fe/NG/C catalysts with an ORR activity comparable to commercial Pt/C catalysts in 0.1 M KOH. The procedure was simply to thermally decompose a hybrid of graphene oxide, urea, carbon black and iron species, where the N-doping was accomplished via the decomposition of urea to release NH3. Usually, a porous structure is generated on the carbon supports as NH3 is blown off by heating the compounds adhering to the carbon precursor; this is known as the activation function of NH3 [109,110].
Types of Metal
Apart from the aforementioned heat treatment and the choice of nitrogen and carbon precursors, the type of transition metal also has an extremely significant impact on the nature and electrocatalytic performance of M-NxC catalysts. Generally, several non-precious transition metals have been investigated for ORR electrocatalysis, including Fe, Co, Ni, Mn, Cu and Zn, among which the most frequently studied and most effective are Fe and Co. The major merit of Fe- and Co-based M-NxC ORR catalysts is not only their low cost but also their excellent behavior in both acidic and alkaline media [111].
Fe-NxC and Co-NxC catalysts have been widely studied since cobalt phthalocyanine was shown to exhibit good oxygen reduction activity in alkaline solution in 1964 [50]. The earliest studies focused on exploring the macrocyclic compounds of these two metals for the ORR, together with performance improvements, active sites and catalytic mechanisms. Subsequently, many efforts were devoted to exploring inorganic and small organic metal donors, designing new synthesis routes, and optimizing the structure and properties of Fe-NxC and Co-NxC.
Recently, researchers have been striving to raise the catalytic activity of Fe-NxC and Co-NxC catalysts across a wide pH range as far as possible by tuning and increasing the active sites [48]. Considerable progress has been achieved in the synthesis and theoretical analysis of highly active M-NxC catalysts toward the ORR in both acidic and alkaline solutions during the past few years.
Very recently, one of the most attractive research topics in the M-NxC field has been the design of single-atom M-NxC, which realizes the utmost utilization of metal sites in the catalyst and meets the requirement for high catalytic activity. For example, Xia et al. [112] prepared carbon nanotubes doped with Fe single atoms (Fe-N/CNT) through direct heat treatment of a mixture of the Fe precursor and a carbon matrix in an NH3 atmosphere (Figure 10). The Fe single atoms were incorporated with N atoms to form Fe-Nx active sites during the growth of the CNTs. As Figure 11 indicates, the single-atom Fe-N/CNT catalyst was more active and stable than the commercial Pt/C catalyst in 0.1 M KOH. In practice, the preparation of Fe or Co single atoms is relatively difficult, as these non-precious transition metal atoms tend to migrate together and aggregate into nanoparticles or compounds at high temperature. To solve this problem, Li et al. [113] developed a secondary-atom-assisted method to prepare monatomic Fe anchored on porous N-doped carbon nanowires (Fe-NCNWs). The Fe ions were surrounded by secondary atoms (Al, Mg, or Zn) that ensured the Fe ions were transformed into single atoms rather than nanoparticles and that produced a porous structure through their pyrolysis at high temperature (Figure 12). Similarly, Yin et al. [114] obtained Co SAs/N-C catalysts through the pre-carbonization of bimetallic Zn/Co metal-organic frameworks, followed by the evaporation of Zn above 800 °C. The prepared Co SAs/N-C showed excellent catalytic performance toward the ORR, with a half-wave potential of 0.881 V, much higher than that of commercial Pt/C (0.811 V). Overall, this approach makes the preparation of single-atom M-NxC catalysts more facile and reliable. Beyond the satisfactory ORR activity of monatomic catalysts, Zeng's group [115] recently found that Fe single atoms together with Fe nanoclusters anchored on a nitrogen-doped carbon support (FeAC@FeSA-N-C) displayed superior ORR catalytic activity (Figure 13). Their work provides a novel insight and pathway for catalyst design.
To increase the active sites in catalysts, some porous organic materials with high surface areas, tunable compositions and various structural topologies, such as metal-organic frameworks (MOFs) [116], zeolitic imidazolate frameworks (ZIFs) and covalent organic frameworks (COFs), are used to serve as carbon supports after heat treatment. For example, Zhang et al. [117] used an MOF material named NH2-MIL-101@PDA to synthesize Fe-N-C catalysts with higher ORR activity and stability than those of the Pt/C catalyst (Figure 14). According to the XPS results, the Fe-N-C had a high content of N (8.07 at.%) and Fe-Nx (1.22 at.%), which is one of the reasons for its high activity. Besides, the large surface area and porous structure are also favorable to the improvement of catalytic behavior. Mao et al. [118] studied the catalytic performance of several Co/N/C catalysts derived from Co-doped ZIF precursors by tuning the experimental conditions, such as the ratio of precursors, reaction temperature and reaction time. The sample of Co/N/C-1000, heated at 1000 °C, displayed a half-wave potential of 0.856 V, higher than that of commercial Pt/C in alkaline conditions, and an outstanding stability in both alkaline and acidic solutions.

Apart from traditional synthesis methods, some novel and effective approaches are emerging for the facile synthesis of M-NxC catalysts. For instance, Peng et al. [119] developed a non-pyrolysis method to prepare pfSAC-Fe catalysts via intermolecular interactions between the Fe-containing COF and the graphene matrix. The prepared catalysts exhibited a superior ORR activity, four times higher than that of commercial Pt/C. This method could also simplify the computational process, as no random structures are involved. Kiciński et al. [120] introduced a permanent magnet to the ORR test system to provide an external magnetic field (Figure 15). This method proved effective in increasing activity and boosting the 4e− pathway, as the catalytic performance of Fe-N-C/S was obviously enhanced by the applied magnetic field in the test.
In practice, M-NxC catalysts are usually consisted of M-Nx sites and graphene encapsulated metal nanoparticles simultaneously, which increases the difficulty to identify the true active sits in M-NxC. Most recently, Feng's group [109] proposed a low-temperature NH4Cl-treatment method to efficiently wipe off the graphene-encapsulated nanoparticles from M-NxC catalysts without destruction of M-Nx groups ( Figure 16). This strategy firmly demonstrates the dominant position of M-Nx sites in catalyzing ORR. Wang et al. [129] used first-principles calculation to build a microkinetic model for ORR on single atom Fe-N-C catalysts, and the modeling results indicated that the real active site of single atom Fe-N-C is the Fe(OH)N4 group, rather than inactive FeN4 center, as the latter is covered with an intermediate OH* which is a part of the active component, and the ΔG values along the associative path on the Fe(OH)N4 center are more favorable to ORR catalysis than that of FeN4 center ( Figure 17). Since the earliest study of M-N x C catalysts, intensive focus and efforts have been attached to the exploration and discussion on the active structure favorable to the ORR. It should be noticed out that the ORR active site of M-N x C catalysts may be an integration of some active components varying with the potential changes, rather than a steadfast concrete structure. Although there are some people favoring of the statements that either some metal-free sites [121] or some carbon layer-coated metal sites [122,123] are the active sites in M-N x C catalysts for the ORR, the most widely accepted theory is still that M-N x sites are the true active components for ORR catalysis [124][125][126]. However, the debate, about whether M-N 2 or M-N 4 is the real active sites, never stop. For example, the majority of researchers believe the Fe-N 4 groups are the active sites towards ORR [127], while the rest of investigators think that Fe-N 2 would play a major part in ORR catalysis [128].
In practice, M-NxC catalysts usually consist of M-Nx sites and graphene-encapsulated metal nanoparticles simultaneously, which makes it difficult to identify the true active sites in M-NxC. Most recently, Feng's group [109] proposed a low-temperature NH4Cl-treatment method to efficiently remove the graphene-encapsulated nanoparticles from M-NxC catalysts without destroying the M-Nx groups (Figure 16). This strategy firmly demonstrates the dominant role of M-Nx sites in catalyzing the ORR. Wang et al. [129] used first-principles calculations to build a microkinetic model for the ORR on single-atom Fe-N-C catalysts; the modeling results indicated that the real active site of single-atom Fe-N-C is the Fe(OH)N4 group rather than the inactive FeN4 center, as the latter is covered with an intermediate OH* that becomes part of the active component, and the ΔG values along the associative path on the Fe(OH)N4 center are more favorable to ORR catalysis than those on the FeN4 center (Figure 17).
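For orientation, the associative path mentioned above is commonly written as four successive proton-coupled electron transfers on an adsorption site (* denotes an adsorbed species); this is the generic textbook scheme, not the specific model of [129]:

```latex
% Elementary steps of the associative four-electron ORR pathway on an M-Nx site
\begin{align*}
\mathrm{O_2 + {*} + H^+ + e^-} &\rightarrow \mathrm{OOH^*} \\
\mathrm{OOH^* + H^+ + e^-}     &\rightarrow \mathrm{O^* + H_2O} \\
\mathrm{O^* + H^+ + e^-}       &\rightarrow \mathrm{OH^*} \\
\mathrm{OH^* + H^+ + e^-}      &\rightarrow \mathrm{H_2O + {*}}
\end{align*}
```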
Wang et al. then applied this conclusion to analyze the mechanism of the ORR catalyzed on single-atom Fe-N-C (shown in Figure 18), and the result was in good accordance with previous experimental results. For Co-NxC, several studies have indicated that both the Co-N2 and Co-N4 sites have ideal catalytic performance toward the ORR, while Co-N4 can be converted at high temperature into Co-N2, which has a stronger interaction with H2O2, thereby boosting the 4e− ORR process [130][131][132][133]. Zhai et al. [134] prepared a cobalt and nitrogen co-doped reduced graphene oxide (Co-N-rGO) catalyst with increased activity, four-electron selectivity and a stability similar to commercial Pt/C. Via the DFT calculations shown in Figure 19, they identified possible active sites in Co-N-rGO for the ORR, including edge-plane CoN2/C and CoN4/C and basal-plane macrocyclic CoN4/C. Most recently, Xiao et al. [135] first reported a novel binuclear active-site structure, Co2N5, with a Co-Co distance of 2.1-2.2 Å. The strategy is to encapsulate the CoN4 and Co particles in a carbon shell through the self-adjustment of a bimetallic organic framework. The mechanism of Co2N5 in catalyzing the ORR is shown in Figure 20. The ORR catalytic activity of Co2N5 was shown to be higher than that of CoN4, which opens a new door to the fabrication of highly effective ORR electrocatalysts.
Conclusions
We have briefly introduced the mechanism of the ORR and summarized the history of M-NxC catalysts, from the earliest non-pyrolyzed transition metal macrocyclic compounds, through pyrolyzed transition metal macrocyclic compounds, to recently developed M-NxC catalysts with simple and general structures. Besides, we have given more detailed information about currently developed M-NxC catalysts from the perspectives of the choice of carbon and nitrogen precursors, the recently emerging Fe-NxC and Co-NxC catalysts, newly developed methods to prepare them, and analyses of their active sites. The intrinsic structure and catalytic performance of M-NxC catalysts are directly influenced by the choice of precursors. In the most recent achievements, porous organic materials with high surface areas, tunable compositions and various structural topologies, such as MOFs, ZIFs and COFs, have become favored precursors. Reducing the size of the metal in M-NxC catalysts to single atoms contributes to enhanced catalytic performance, owing to the utmost metal utilization and the low coordination that raises activity, together with the anchoring effect of the N-containing support that improves stability. The most widely acknowledged active sites of M-NxC catalysts are the M-N2 and M-N4 groups, while there have been attempts to design new kinds of active sites, such as binuclear sites. Overall, the thorough understanding of M-NxC catalysts presented in this review should help provide universal principles for the design of highly effective M-NxC catalysts.
In fact, although M-NxC catalysts with high ORR activities are demonstrated to be among the most promising substitutes for Pt-based catalysts, their performance is still far from the practical requirements of PEMFCs, which demand much higher activity and stability in acidic environments. Based on this review, we see three main directions for the M-NxC ORR catalyst field:
1. The study of the ORR catalytic mechanism. Understanding the nature of M-NxC and the kinetics of the catalytic process helps us regulate the catalyst structure and composition accurately, thereby improving the catalytic performance.
2. The study of active sites. Although there have already been several studies on the identification of active structures, the true catalytic sites in M-NxC catalysts remain unclear, which drastically hinders the development of catalysts with adequate activity and stability. Atom-by-atom structural and chemical analysis in graphene can be achieved by gentle ADF-STEM, a promising pathway to explore the local environment of active sites [136,137].
3. The exploration of new types of active sites. The aforementioned design of binuclear active sites, which proved more active than the traditional M-N2 and M-N4 sites, is a successful example of the search for new kinds of active sites, and it greatly inspires the design of satisfactory ORR catalysts.

| 12,982.2 | 2020-01-20T00:00:00.000 | ["Chemistry", "Engineering", "Materials Science"] |
Herbal Leaves Classification Based on Leaf Image Using CNN Architecture Model VGG16
Herbal leaves are widely used by people in the health sector. The problems faced are a lack of knowledge about the types of herbal leaves and the difficulty, for ordinary people who do not understand plants, of distinguishing between them. If the wrong type of plant is used, it will have a negative impact on health. Automatic classification with the help of technology will reduce the risk of misidentifying herbal leaf types. To make such identification possible, a precise and accurate herbal leaf detection process is needed. This research aims to provide a classification model for herbal leaf images with a higher accuracy value than previous research. The proposed method for this classification process is therefore a Transfer Learning approach: a Convolutional Neural Network (CNN) with a pretrained VGG16 model. This research uses a dataset of herbal leaves with a total of 10 classes: Belimbing Wuluh, Jambu Biji, Jeruk
Introduction
Herbal plants have many benefits for human life besides being foodstuffs, oxygen providers, and so on. They can also be utilized specifically for medical therapy in the health sector [1]. One example is the utilization of bay leaves (Eugenia polyantha Wight) as a food-flavoring spice and as herbal medicine for body health [2]. Herbal plants can be used as medicinal plants or ornamental plants, depending on their utilization. They are identified through observations that begin with human observation. Herbal plants have the function of preventing and curing diseases [3]. They are often used as family medicinal plants because they have a high potential to grow into such plants. Various benefits are obtained from family herbal plants, such as improving nutrition, increasing income, greening the environment, and fulfilling other daily needs. Related interview-based research on 16 types of herbal plants found that many people do not utilize the properties of herbal plants correctly; in the treatment of diseases, there are still misconceptions about their efficacy [1]. Based on research conducted in [4], around 80% of people depend on herbal plants for their health.
To identify the types of herbal plants, several characteristics can be considered, such as pattern, shape, texture, and structural characteristics [5]. The part of the herbal plant most often utilized in medicinal therapy is the leaf [6]. Classification of herbal leaves is generally done based only on observations of leaf shape and color. Leaves play a very important role in herbal plants; in addition, they are more easily available than other parts such as roots. Because there are so many types of herbal plants, it is difficult for ordinary people to distinguish them by identifying leaf shape and color with the naked eye. Distinguishing between types of herbal plants requires information and knowledge in this field, and this manual identification process takes a long time and special knowledge [7]. It takes an expert or experienced person to correctly classify several types of herbal leaves [8]. Misclassification can also result in errors in the composition of herbal leaf concoctions combined with other types of leaves for medicinal purposes. Therefore, a medium is needed to classify various types of herbal plants effectively and accurately, helping people identify herbal leaves without special knowledge. One solution is to provide herbal leaf classification information to the public using technology [7].
One of the artificial intelligence approaches that can classify an object through machine learning is the Deep Learning algorithm. Deep Learning is a development of Machine Learning, and digital image processing often uses it. One of the uses of Deep Learning is image processing: with an image processing system, objects can be classified while processing a lot of data, quickly and accurately [9]. The reason for using a Deep Learning algorithm is to optimize performance on unstructured data. An example of a Deep Learning algorithm that is often used is the Convolutional Neural Network (CNN). A derivative of the Multilayer Perceptron (MLP), the CNN is a method designed to process data in two-dimensional form, such as sound or images [10].
Deep Learning requires many datasets to get good results. In related research, the number of parameters has been reduced by using models already trained on other datasets, so that new datasets can be classified without training from scratch; this method is called Transfer Learning [11]. Transfer Learning is a method that utilizes a Convolutional Neural Network (CNN) with a pretrained model. It does not require training from scratch because the weights from the trained model are applied to the new dataset [12]. The classification process using the Transfer Learning method can improve on the performance of other Transfer Learning models and strategies by applying an end-to-end Convolutional Neural Network (CNN) model [13].
Research related to the classification of herbal leaf types using technology has developed with various models and results. Research using high-resolution leaf images with the Convolutional Neural Network (CNN) method obtained an accuracy of 82% on the testing dataset. That research was used to recognize objects of 5 (five) types of plants; however, there were still prediction errors in the banana plant class, because its geometric correction is almost the same as that of other plant types [14]. A detection method using Deep Learning CNN with 7 convolutional layers obtained accuracy values between 80-100% on 80 images of testing data, with an average accuracy of 0.90296 [15].
Other research has implemented Deep Learning for the classification of plant species in an Android-based application and obtained an accuracy of 86% on the validation dataset [16]. Deep Learning algorithms are also found in leaf image identification using the CNN method, which achieved average accuracies of 85% and 90% using 40 test images [10]. Different research has identified 54 images of three types of Ficus plants utilizing an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) [17].
The CNN method has also been utilized in research on disease detection on potato leaves, with good results of 95% accuracy on training data and 94% on validation data at the 10th epoch with a batch size of 20 [18]. Similarly, in the agricultural industry, a CNN model achieved an accuracy of 85% on testing data and 98.75% on training data when classifying digital images of spices and herbs [19]. The CNN method can classify well, as shown by research on succulent plants: the accuracy obtained was quite high, with 93% on testing data from a dataset of 500 images and 88% on internet data, comparing grayscale and color training datasets [20]. Optimization of a CNN-Transfer Learning model performed on images by reducing the image matrix without reducing information achieved an accuracy of 90.5% [21].
Research comparing datasets and utilizing the Support Vector Machine (SVM) method has classified 240 Chinese herbal leaves into ten classes [22]. The use of a Convolutional Neural Network (CNN) to classify magnolia plants achieved an accuracy of 99.37% on training data and 95.89% on testing data [23]. Based on research relevant to leaf classification that utilizes Artificial Intelligence, it can be concluded that Artificial Intelligence offers an expert-system-like approach with good, successful results on classic problems [24]. One example of a Deep Learning architecture is the CNN, which can reduce the dimensions of a dataset without eliminating its characteristics or features [25]; accordingly, this research builds a model by applying the CNN method, which is considered effective for classifying objects [27]. The architecture model used in this research is VGG16. This choice aims to obtain a higher accuracy value than the reference research [14] and is expected to yield efficient classification results. The dataset used in this research is sourced from the Indonesian Herb Leaf Dataset 3500. Through the classification of herbal leaves, this research hopes to extend herbal leaf classification technology so that it can be implemented in applications that meet the needs of the community. The first stage is inputting herbal leaf images for processing. Next, dataset preprocessing is carried out in the form of training dataset augmentation, after which the data enter the proposed VGG16-CNN method; in this stage, the training data are processed through the Convolutional Layer, Pooling Layer, Flatten Layer, and Dense Layer. The last stage is evaluating the classification performance of the trained method by testing it on the testing dataset.
Dataset
Referring to the reference research [14], the composition of the dataset is divided by percentage into training, validation, and testing data; the details are given in Table 1. Each layer of the proposed model produces an output shape and parameter value. The difference between the previous model [14] and the proposed one is a more complex architecture based on the pretrained VGG16, whose output shapes and parameter counts are higher. In the proposed model, the input image dimensions are changed to 150 x 150, after which the Filter Layer and convolution process are carried out. After convolution, a Pooling Layer follows; once finished, the next convolution continues, so the dimensions change before the next Pooling Layer. The Convolution and Pooling Layer process then repeats. After the final Convolutional and Pooling Layers, the output enters the Flatten Layer, which processes the last Pooling Layer result. The results obtained from the Flatten Layer are then fed into the Dense Layers. The last Dense Layer has 10 units, matching the number of classes to be classified.
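As an illustration of this architecture, the following is a minimal Keras sketch of a VGG16-based transfer-learning model with a 150 x 150 input and a 10-unit output layer. The exact Fully Connected Layers of the proposed model are not fully specified in the text, so the hidden dense-layer size and the frozen-base choice here are assumptions for illustration.

```python
# Minimal sketch of a VGG16 transfer-learning classifier (assumed details noted below).
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Pretrained VGG16 convolutional base; top classifier removed, 150x150 RGB input.
base = VGG16(weights="imagenet", include_top=False, input_shape=(150, 150, 3))
base.trainable = False  # assumption: the pretrained weights are kept frozen

model = models.Sequential([
    base,                                   # stacked Convolution + Pooling blocks
    layers.Flatten(),                       # Flatten Layer on the last pooling output
    layers.Dense(256, activation="relu"),   # assumed hidden Dense Layer size
    layers.Dense(10, activation="softmax"), # 10 units, one per herbal leaf class
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # prints the output shape and parameter count per layer
```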
Data Augmentation
The data augmentation process aims to add images to the dataset. It helps prevent or reduce overfitting of the model and improves classification accuracy [14]. Augmentation in this research uses ImageDataGenerator, a preprocessing class available in the TensorFlow library, with the values rescale = 1./255, shear_range = 0.3, zoom_range = 0.3, rotation_range = 30, horizontal_flip = True, vertical_flip = True, and fill_mode = 'nearest'.
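A minimal sketch of this augmentation setup, using the parameter values stated above; the directory path and batch size are placeholders, not values from the text.

```python
# Sketch of the stated augmentation configuration (path/batch size are placeholders).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    shear_range=0.3,
    zoom_range=0.3,
    rotation_range=30,
    horizontal_flip=True,
    vertical_flip=True,
    fill_mode="nearest",
)

# Images are also resized to 150 x 150, matching the VGG16 input criterion.
train_data = train_gen.flow_from_directory(
    "dataset/train",          # placeholder path
    target_size=(150, 150),
    batch_size=32,            # assumed batch size
    class_mode="categorical",
)
```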
Without augmentation, the accuracy obtained is not as good as with augmentation, and the model can overfit. This can be seen in the comparison of accuracy values in Table 4.
Training and Testing
This research implements a callback function in the model training process. The callback stops training when val_accuracy reaches a specified value. The model is saved at the end of every epoch in which the val_accuracy value has improved. The callback is also used to restore the weights learned at the best epoch as the final weights of the model. In addition, the callback is important for scheduling the learning rate, since an unsuitable learning rate can cause problems as the number of epochs increases.
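A minimal sketch of such a callback setup in Keras; the stopping threshold, patience, and file name are assumptions, since the text does not state the exact values.

```python
# Sketch of the callback setup (threshold, patience, and filename are assumptions).
import tensorflow as tf

# Save the model at the end of each epoch in which val_accuracy improves.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.h5",          # placeholder filename
    monitor="val_accuracy",
    save_best_only=True,
)

# Restore the best-epoch weights as the final model weights.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy",
    patience=10,              # assumed patience
    restore_best_weights=True,
)

# A custom callback that stops training once val_accuracy reaches a target value.
class StopAtTarget(tf.keras.callbacks.Callback):
    def __init__(self, target=0.97):  # assumed target value
        super().__init__()
        self.target = target

    def on_epoch_end(self, epoch, logs=None):
        if logs and logs.get("val_accuracy", 0.0) >= self.target:
            self.model.stop_training = True

# history = model.fit(train_data, validation_data=val_data, epochs=100,
#                     callbacks=[checkpoint, early_stop, StopAtTarget()])
```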
Tests are carried out in several scenarios, such as with augmented and un-augmented data, as well as evaluation of the classification results for each class. These scenarios are expected to demonstrate the reliability of the proposed model.
Results and Discussions
This research uses ten types of herbal plant labels whose dataset properties are unstructured data, including Belimbing Wuluh, Jambu Biji, Jeruk Nipis, and Kemangi, with a division ratio of 70% training data, 20% validation data, and 10% testing data. Several of the ten labels have similar leaf characteristics that are difficult to distinguish for lay people without botanical knowledge, for example Sirih leaves with Jambu Biji, Belimbing Wuluh leaves with Kemangi, and Seledri leaves with Pepaya. Therefore, this research uses Deep Learning algorithms to optimize performance in classifying unstructured data more effectively and accurately.
Data Processing and Augmentation Results
The preprocessing step carried out is splitting the dataset, which divides the "Indonesian Herb Leaf Dataset 3500" into three folders: training, validation, and test data. The results of splitting each group are shown in Table 1. The next step is data processing using augmentation on the training and validation data. This augmentation also resizes the image pixels in the training and validation data to 150 x 150, which is a criterion for using the VGG16 Transfer Learning architecture [28].
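The split itself can be done with a short script; the following is a minimal sketch, assuming a flat source directory with one subfolder per class (the paths, file extension, and random seed are placeholders).

```python
# Sketch of a 70/20/10 per-class directory split (paths and seed are placeholders).
import random
import shutil
from pathlib import Path

src = Path("Indonesian_Herb_Leaf_Dataset_3500")  # placeholder source folder
dst = Path("dataset")
random.seed(42)

for class_dir in src.iterdir():
    if not class_dir.is_dir():
        continue
    images = sorted(class_dir.glob("*.jpg"))     # assumed file extension
    random.shuffle(images)
    n = len(images)
    splits = {
        "train": images[: int(0.7 * n)],
        "validation": images[int(0.7 * n): int(0.9 * n)],
        "test": images[int(0.9 * n):],
    }
    for split, files in splits.items():
        out = dst / split / class_dir.name
        out.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, out / f.name)
```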
Data processing continues by creating a callback function for the model training process. The callback function uses features from the TensorFlow library's Keras module; in this model training, the val_accuracy metric value is used as the criterion for stopping the training process. The model is then trained for up to 100 epochs. Compared with the main reference of this research [14], the accuracy results differ: the accuracy of the proposed method is higher than that of the previous research, as detailed in Table 4. Table 5 shows that using the augmentation process in this research increases the accuracy value compared with not using augmentation; the comparison of accuracy values is 97% with augmentation and 96% without.
Evaluation Result Chart
The Precision, Recall, and F1-Score of the model designed with augmentation all reach 0.97. Precision is the ratio between accurately predicted True Positives (TP) and the total number of positively predicted data. Recall is the ratio between True Positives (TP) and the total amount of data that is actually positive, while the F1-Score is the harmonic mean of precision and recall [25]. A value of 0.97 indicates that 97% of predictions are correct; conversely, a value close to 0 indicates that the model fails in classification. The model designed in this research is thus shown to increase the accuracy of image classification. For more details, see Table 6. It can be concluded that the method used in this research achieves very good results, with a testing-data accuracy higher than that of the main reference journal [14] despite the different types of herbal leaves used.
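These metrics can be computed from the testing predictions with scikit-learn; the following is a minimal sketch, assuming the `model` and a `test_data` generator as sketched above.

```python
# Sketch: precision/recall/F1 on the testing data (assumes model/test_data exist).
import numpy as np
from sklearn.metrics import classification_report

# Note: the test generator should be created with shuffle=False so that
# predictions stay aligned with test_data.classes.
y_prob = model.predict(test_data)    # class probabilities per image
y_pred = np.argmax(y_prob, axis=1)   # predicted class indices
y_true = test_data.classes           # true labels from the generator

# Per-class and averaged precision, recall, and F1-score.
print(classification_report(y_true, y_pred,
                            target_names=list(test_data.class_indices)))
```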
Figure 1. Block Diagram

Based on the block diagram design in Figure 1, there are three main stages in this research: input, process, and output. The dataset, categorized into 10 classes, is first divided into 3 parts: datasets for training, validation, and testing.
Figure 3. Model Accuracy Chart

Figure 3 shows the accuracy graph of the training model using the training and validation data, which yields training accuracy and validation accuracy on a scale from 0 to 1. On the graph, the blue line is training and the green line is validation. The training-data curve reaches 0.9673, equivalent to 96.73%, at the 100th epoch, while the accuracy obtained on the validation data reaches 0.9563, equivalent to 95.63%.
Figure 4. Loss Model Chart

Figure 4 shows the loss values of the training and validation data; the validation data are the green line and the training data the blue line. In this research, the training loss from the training process is 0.0975, equivalent to 9.75%, and the validation loss is 0.1647, equivalent to 16.47%.

Accuracy, Recall, and F1-Score

Accuracy, Recall, and F1-Score are computed by testing the models that have been built. The training results of the model are stored in the history variable. A model evaluation through a Classification Report is used to find out what percentage of all images in the testing data the model classifies correctly [29]. To measure the performance of the classification problem, the combination of predicted and actual values is presented in the form of a Confusion Matrix, as in Figure 5. This Confusion Matrix displays the accuracy and precision values of the testing data.
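A minimal sketch of how such charts can be produced from the `history` variable and the testing predictions; matplotlib and scikit-learn are assumed to be available, and `y_true`/`y_pred` are as in the evaluation sketch above.

```python
# Sketch: accuracy/loss charts from `history` and a confusion matrix (assumed setup).
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

# Training vs. validation accuracy (blue = training, green = validation).
plt.plot(history.history["accuracy"], color="blue", label="training")
plt.plot(history.history["val_accuracy"], color="green", label="validation")
plt.xlabel("epoch"); plt.ylabel("accuracy"); plt.legend(); plt.show()

# Training vs. validation loss.
plt.plot(history.history["loss"], color="blue", label="training")
plt.plot(history.history["val_loss"], color="green", label="validation")
plt.xlabel("epoch"); plt.ylabel("loss"); plt.legend(); plt.show()

# Confusion matrix of predicted vs. actual labels on the testing data.
ConfusionMatrixDisplay.from_predictions(y_true, y_pred)
plt.show()
```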
Table 1. Group and Data Distribution of Sample Data from the Herbal Leaf Dataset (70% training data, 20% validation data, and 10% testing data)

Model Architecture

This research applies the Transfer Learning method in the design of the proposed model. The model architecture used is VGG16, with the addition of a Fully Connected Layer, namely the Output Layer, to match the number of classified dataset classes. The architecture designed for the proposed model is more complex than the model in previous research [14].
Table 2. Model Architecture
Table 3. Proposed Model Architecture
Table 6.
The application of the proposed method, using Transfer Learning with the VGG16 pretrained model and an augmentation process on the training dataset, can recognize and detect the type of herbal leaves correctly. The entire dataset used in this research consists of a collection of herbal leaf images divided into training data, validation data, and testing data. The herbal leaf classification process is influenced by the clarity of the leaf image during testing. Another factor that affects classification accuracy is the amount of training data: the more training data used, the more the model learns and the better the results, but the longer the required training time. The classification results show that implementing the Convolutional Neural Network (CNN) method with augmentation (ImageDataGenerator) and an additional Fully Connected (Dense) Layer in the proposed model increases accuracy to 96.73% on training data and 97% on testing data within 100 epochs. Without the augmentation process, the testing-data accuracy drops to 96%.
"Computer Science",
"Medicine",
"Environmental Science"
] |
Machine Learning-Enabled Internet of Things (IoT): Data, Applications, and Industry Perspective
Machine learning (ML) allows the Internet of Things (IoT) to gain hidden insights from the treasure trove of sensed data and be truly ubiquitous without explicitly looking for knowledge and data patterns. Without ML, IoT cannot withstand the future requirements of businesses, governments, and individual users. The primary goal of IoT is to perceive what is happening in our surroundings and allow automation of decision-making through intelligent methods, which will mimic the decisions made by humans. In this paper, we classify and discuss the literature on ML-enabled IoT from three perspectives: data, application, and industry. We elaborate with dozens of cutting-edge methods and applications through a review of around 300 published sources on how ML and IoT work together to play a crucial role in making our environments smarter. We also discuss emerging IoT trends, including the Internet of Behavior (IoB), pandemic management, connected autonomous vehicles, edge and fog computing, and lightweight deep learning. Further, we classify challenges to IoT in four classes: technological, individual, business, and society. This paper will help exploit IoT opportunities and challenges to make our societies more prosperous and sustainable.
Introduction
The Internet of Things (IoT) is set to become one of the key technological developments of our times, provided we can realize its full potential. IoT is "a global infrastructure enabling advanced services by interconnecting (physical and virtual) things based on existing and evolving interoperable information and communication technologies." IoT was named by the US National Intelligence Council (NIC) in a 2008 report among the six vital civil technologies that could potentially affect US power. IoT is an enabler of the ubiquitous computing envisioned by Mark Weiser. IoT is no longer a technological buzzword as described in [1], but a reality that links the physical world to the digital world, revolutionizing how we look at our surroundings. Currently, IoT is only partially implemented, in bits and pieces, due to a lack of available technology and other constraints on the global scenario. The number of objects connected to the IoT is expected to reach 25-30 billion by 2030 due to the massive influx of proliferating IoT devices [2,3].
The primary purpose of these increasing numbers and types of IoT objects is to produce valuable data about the entities present in the operating environment so that smart decisions can be made. This is achieved by providing access to the environment from which we need information and by analyzing past, present, and future data. These data allow optimal decisions about us and our environments, possibly in real time. This massive, diverse growth in the overall IoT landscape will produce $1-1.5 trillion in revenue annually. The IoT landscape is illustrated in Figure 1. Although Europe is at the forefront of early IoT adoption, South Korea tops the global ranking of connected things, whereas the USA is far behind in this respect [4]. Figure 1 also depicts application areas of IoT: smart homes, warning systems, smart shopping, smart gadgets, smart cities, intelligent roads, health care, fire systems, threat-identification systems, tracking, and surveillance.
Data Production
One of the significant functions of IoT is to provide technological infrastructure to sense different activities and events happening in our surroundings. IoT is expected to produce an enormous amount of data. These data will be created by various vendors, giving rise to data as a service. To power smart cities and societies to their full potential, sharing and collaborating on data and information will be the key to providing sustainable and ubiquitous applications and services. The fusion of various types and forms of data to enhance data quality and decision-making will be of prime importance in ubiquitous environments. Data fusion is "the theory, techniques, and tools used to combine sensor data, or data derived from sensory data, into a common representational format." A timely fusion and analysis of big data (volume, velocity, variety, and veracity) acquired from IoT's sensor networks enables accurate and reliable decision-making. However, managing ubiquitous environments will be a grand challenge for IoT, and various sensors and intelligent algorithms will play a critical role in addressing it.
Machine Learning and IoT
IoT's objectives are to understand what people want and how they think, to predict wanted and unwanted events, and to learn to manage certain situations. For all of this, IoT needs to understand the data produced by millions of objects. This understanding can be gained by using machine-learning algorithms (MLAs). Machine learning (ML) plays an imperative role in the IoT paradigm. IoT is ubiquitous by nature, meaning that being available anywhere is one of its primary goals [5]. ML will contribute significantly by digging out insights from the data produced by thousands and millions of connected devices. ML adds usefulness to IoT devices; only with it can IoT be genuinely ubiquitous [6]. Embedded intelligence (EI) will be at the core of enabling IoT to achieve its objectives. EI is the fusion of product and intelligence to achieve better automation, efficiency, productivity, and connectivity [7,8]. Whether in a physical or a virtual world, intelligence is acquired by learning.
The tendency of ML to find patterns may be the underpinning of human-like intelligence. Further generalization of these patterns into more valuable insights and trends provides an improved understanding of the world around us. The actual objective of ML in IoT is to bring complete automation by enhancing learning that facilitates intelligence through smarter objects [9]. ML gives IoT-enabled systems the potential to mimic human-like decisions after training on data and to further improve their understanding of our surroundings. The influence of information visualization on the human visual system is enormous, making systems better at conveying data and insights [10]. Information visualization brings several advantages to its users: (1) better knowledge without much further analysis of data, and (2) humans can better understand data using their cognitive skills. IoT will replace several currently used systems that are costly to implement and maintain with cheap sensor-based ML systems. For example, around 20,000 people lost their lives in developing countries due to severe weather conditions. Weather monitoring is mostly done by radar-based weather-monitoring systems (WMSs); however, radar WMSs are costly and unavailable in several parts of the world. An ML-enabled IoT system consisting of a cheap sensor network that studies lightning and cloud patterns to predict the weather has been successfully deployed in economically disadvantaged countries like Guinea and Haiti [11].
IoT will not only impact how we see technology but also how technology can bring progress and make our world more prosperous [12,13]. Every day, various aspects of our lives become easier and more connected through the IoT. ML brings intelligence and pervasiveness to IoT. IoT could be an appropriate synonym for the word "heterogeneous," as it consists of various devices, network technologies, protocols, data types, applications, and users. This heterogeneous nature of IoT brings several challenges to ML:
• IoT will produce big data [14][15][16], but are all the data valid, do the data have biases, and is it worthwhile to process all of them? These are some critical questions shaping the reliability, accuracy, and efficiency of MLAs for the IoT domain.
• Not all IoT applications have ample data from which MLAs can learn quickly; a lot of small data are also produced, for which new forms of algorithms that can learn from scarce data are needed [17,18].
• Sensing devices are not always accurate and reliable [19][20][21][22][23][24]. Outlier detection and data imputation are some of the necessary tasks to be performed on data before ML begins.
• The application area of IoT is enormous, as shown in Figure 1. Every application has data with particular properties.
• At Google's Zeitgeist 2011 event, Google's chief scientist Peter Norvig famously said, "We don't have better algorithms than anyone else; we just have more data." However, a few researchers support the opposite view. Which is better: a highly sophisticated MLA [25], more data [26][27][28], or limited but high-quality data [29,30]? This question still has no answer and is one of the significant areas of conflict among ML researchers, as opinions vary.
• Game-changer technologies such as IoT hold ample opportunities for businesses, but also pose high risks, and ML-enabled IoT could end up swallowing millions of jobs [31,32].
Machine learning (ML) gives a brain to IoT-enabled systems to grasp insights from the data produced by millions of IoT objects. In IoT, we see several MLAs learning from diverse data, which makes ML for IoT somewhat distinctive: on the one hand, we will still have traditional MLAs; on the other, we need a completely different set of MLAs. We will have different classes of MLAs, some of which will work based on simple, intuitive insights instead of complex mathematical proofs [33].
Eric Brill and Michele Banko published an interesting paper in early 2001 [34] showing that more training data improves learning more than enhancing and designing new MLAs does. Big data will never be a problem in IoT: billions of connected IoT objects will produce massive data via the internet [35]. As a result, IoT-based ML includes algorithms that learn from colossal amounts of data. However, this view is only partially valid for the IoT domain, as IoT is not all about big data, but also about small data [17,18]. Small data sets contain minimal attributes. Small data can be used to describe the current state, trigger events, and be produced by the aggregation of big data.
Governments, industries, and individuals have a broad spectrum of IoT-enabled applications that leverage ML. Shanthamallu et al. [36] discussed MLAs and their application areas in IoT, while Sharma and Nandal focused on "machine learning as a service" (MLaaS), the fusion of ML and IoT infrastructure [37]. MLAs can perform various tasks in IoT, as illustrated in Figure 2. Broadly, ML tasks can be seen from two IoT perspectives: (1) data quality and (2) pattern recognition. MLAs not only predict from a treasure trove of data but can also be used to enhance data quality, ultimately resulting in better learning. For example, MLAs are used to identify outliers and impute data before training MLAs for prediction. In the following sections, we discuss these in detail.
Contributions
Most R&D endeavors on IoT have focused primarily on object and resource management, object identification, access control, networks, and connecting technologies. Instead of focusing on the major IoT R&D trends mentioned above, in this paper, we attempt to enhance our understanding of how ML plays a critical role in shaping the IoT landscape. This survey will serve as an underpinning for IoT-based ML researchers. In Table 1, we give five major ML-enabled IoT surveys and their objectives. Our intention is not to present a comprehensive review of the literature; rather, this paper attempts to achieve the more significant aim of enhancing the understanding, usefulness, and significance of ML for the IoT domain. The main contributions of this work are fourfold:
• Firstly, we classify IoT-related research and development work into three major perspectives (classes): data, application, and industry.
• Secondly, the paper gives insight into the current state-of-the-art research and developments in IoT, with a specific focus on ML-related developments.
• Thirdly, the paper identifies emerging IoT trends that will use ML at their core to develop futuristic and sustainable solutions.
• Lastly, the paper helps readers identify future opportunities in IoT-based ML research.
Paper Structure
The paper is divided into seven sections. As depicted in Figure 3, in Section 2, we discuss IoT from a data perspective. In Section 3, we critically analyze the role of ML from an application perspective, whereas in Section 4, we discuss IoT's industry perspective. Further, in Section 5, we discuss five emerging trends where the fusion of IoT with ML will play a critical role. In Section 6, we classify challenges to IoT's success into four classes: technological, individual, business, and society. Finally, we conclude in Section 7.
Data Perspective
Data add value to the IoT paradigm and are collected using a variety of sensors, as given in Table 2. IoT has both cheap and expensive sensors in its arsenal; for example, a temperature sensor is far cheaper than lidar, which is very costly. The type of sensor used largely depends on the application of the data. Wild-animal tracking sensors need lifelong battery life, as replacing batteries in such applications is hard, whereas sensors like lidar, cameras, and radar continuously need a power supply to function. Low-cost sensor data also have issues such as outliers and missing values, as their hardware quality is limited. On the other hand, vision sensors produce many features, and selecting only the best features is challenging. In the following subsections, we discuss IoT data sources, data storage platforms, and three types of data challenges with a specific focus on machine learning.
One of the major applications of IoT is sensing our surroundings and communicating those data to a smart application, which uses machine-learning algorithms to predict and forecast. The learning outcome is then used to develop AI for making decisions, and the decision is transformed into mechanical output using actuators [38]. Today, billions of devices with sensors surround our daily lives. IoT produces and will produce an enormous amount of data that need to be stored, processed, and archived for future needs. IoT infrastructures are not yet fully implemented, even in developed economies. Developing economies like India, Malaysia, etc., are slowly working on mega smart-city projects that will use IoT infrastructure. Interesting work has been done by Morais et al. [42], who classify IoT data types into 19 common categories in use; they also classify IoT sensor types. Table 2 depicts the possible kinds of sensors used in IoT, their applications, and the data challenges particular sensors bring. The COVID-19 pandemic expedited the demand for IoT solutions.
Data Storage
IoT means an enormous amount of real-time data. For example, autonomous vehicles alone can contribute colossal amounts of data; Wang et al. proposed HydraSpace [43], a multilayered storage architecture for storing autonomous vehicles' data. The cloud may be more flexible, scalable, and ubiquitous; however, real-time data analytics is not possible on data stored only in clouds. This makes edge- and fog-based IoT data storage critical [44]. ML and AI available on edge devices can give real-time insights from the sensed data, and aggregated data can later be stored in the cloud. Transferring data to the edge first will create a more realistic and valuable IoT landscape. Some popular and widely used IoT-based cloud storage services are AWS S3 (Amazon Web Services, Seattle, WA, USA), IBM Watson IoT Platform (IBM, Armonk, NY, USA), Oracle IoT Cloud Service (Oracle, Austin, TX, USA), and Microsoft Azure (Microsoft, Redmond, WA, USA) [45,46].
Data Issues
Our increasingly connected world through IoT is a delicate blend of low-cost sensors and distributed intelligence that will have a transformative impact on how we see the world. This merger will produce more data than ever, data that hold valuable information. Sensed data have critical quality issues, as sensing devices are not 100% reliable and accurate. Preprocessing of IoT data is required before feeding them to MLAs to gain critical insights. As depicted in Figure 4, the three significant issues with sensed data, outliers, missing values, and feature selection, are discussed in the following sections.
Outlier Detection
Outliers, also known as anomalies, are data patterns that differ from the rest of the data and signify abnormal data behavior [47][48][49]. Outlier observations are commonplace in highly dense sensor environments like IoT due to: (1) low-cost sensors, which mean low quality; (2) weather conditions; (3) electronic interference; and (4) data communication errors [50][51][52]. Outliers must be detected, rather than deleted or replaced by predicted values, which is crucial to maintaining the high data quality from which MLAs ultimately dig out key insights. Modern-day MLAs are used not only for gaining valuable knowledge but also for improving data quality by detecting data aberrations [53]. Significant attention has been given to outlier problems in wireless sensor networks (WSNs) [54][55][56][57][58], which can be seen as a subset of IoT.
Several critical surveys exist that primarily focus on addressing the problem of outliers in the IoT landscape. Alghanmi et al. [59] carried out a general-purpose comprehensive study on ML-powered anomaly detection and discussed the IoT datasets available for this purpose. Cook et al. [60] examined how to detect outliers in IoT-based time series data, whereas Diro et al. [61] viewed outlier detection as a way to make IoT networks more secure. Further, a more recent survey by Samara et al. [62] covered statistics-based, clustering-based, nearest-neighbor-based, classification-based, artificial intelligence-based, spectral decomposition-based, and hybrid methods for outlier detection. Commonly used outlier detection (OD) approaches are based on statistics, distance matrices, and supervised and unsupervised ML. One such MLA is the SVM, which has an explicit mechanism to handle outliers robustly [57,63]. Resource overhead is one major issue with SVM-based OD. An unsupervised centered quarter-sphere SVM with low computational complexity and memory usage for online OD is proposed in [64], which outperforms previous offline SVM-based OD methods [65].
In unsupervised learning, k-means is a simple yet popular choice, along with hierarchical clustering, for OD, as critically analyzed by Garcia-Font et al. [58]. Similarly, Münz et al. [66] focused on k-means clustering for OD in traffic data. One major drawback of k-means is that it computes a set of k centers to reduce the sum of squared distances. Multiple works [67,68] show two issues related to this: (1) outliers can pull these centers, and (2) rather than being rejected, outliers can form their own cluster. As a solution, a robust version of k-means known as k-means+++ is proposed by Statman et al. [69]. Another classifier quite popular for OD problems is naïve Bayes, because of its ease of use and simplicity [70,71]. Parto et al. [72] evaluated classical Bayesian techniques with slight modifications for OD in streaming IoT platforms in the manufacturing industry. Further, similar to [66], Lam et al. [73] addressed the OD issue in traffic data using a fusion of naïve Bayes and Gaussian mixture-model techniques.
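As a concrete illustration of distance-based OD with k-means, the following is a minimal scikit-learn sketch that flags the points farthest from their cluster centers as outliers; the number of clusters, synthetic data, and contamination fraction are illustrative assumptions, not values from the cited works.

```python
# Sketch: k-means distance-based outlier flagging (k and threshold are assumptions).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))   # stand-in for two-feature sensor readings
X[:5] += 8                      # inject a few artificial outliers

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
# Distance of each point to its assigned cluster center.
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)

# Flag the top 1% most distant points as outliers (assumed contamination level).
threshold = np.quantile(dist, 0.99)
outliers = np.where(dist > threshold)[0]
print("flagged indices:", outliers)
```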
C4.5 and its successor C5.0 are highly accurate and efficient modern-day classifiers that outperform the best classifiers in the business, as analyzed in [74]. However, little attention has been given to C4.5 and C5.0 for OD, particularly in the IoT environment, even though they offer high precision, minimal memory usage, and fast processing. Today, in every field, we are witnessing the increasing use of deep-learning algorithms due to their ability to provide highly accurate prediction and forecasting output. These algorithms can understand highly complex datasets, which gives them an edge over others. Luo et al. proposed a distributed outlier detection method for sensor networks that uses deep autoencoders [75]. The technique produces a high detection rate with minimal communication overhead, which is necessary for IoT-based sensor networks. Similarly, Diro and Chilamkurti [76] used deep learning for cybersecurity purposes; their work shows that deep models are more capable of detecting anomalies than shallow learning.
In IoT, MLAs for OD can be divided into three classes. The first class is defined by whether the algorithms execute offline or online. The second class is defined by where the intelligence and data reside. Finally, the third class of OD algorithms is based on the accuracy requirements of IoT applications. A detailed illustration is given in Figure 5.
Data Imputation
The IoT ecosystem relies heavily on hardware like sensors and RFIDs for sensing data. It is an established fact that sensors are not fully reliable [50,51,77], and one consequence is missing values in IoT-based applications. The missing-values problem arises for various reasons, such as synchronization problems, unstable wireless communications, sensor failure, power loss, and weather conditions [78]. Two techniques used to handle missing values in IoT data are (1) deleting the instances with missing data and (2) replacing the missing value with predicted data, a process known as data imputation [79]. Much attention has been given to developing data imputation algorithms in areas such as the natural sciences, census surveys, WSNs, robotics, and scientific applications.
ML algorithms are widely used to impute missing values. The KNN algorithm, known as a lazy learner, is a nonparametric method that is straightforward to implement and simple to understand. KNN is one of the top MLAs [80], and data imputation algorithms based on it are widely used [81][82][83][84]. Supervised learning algorithms that are more complex and computationally extensive than KNN, such as SVM, are also widely used for data imputation and handle both linear and nonlinear data efficiently [80,85]. In various papers, SVM is used in multiple ways to deal with missing values [86][87][88][89]; however, SVM may not be very accurate on problems with more than two classes. A slightly younger supervised MLA than SVM, random forest (RF), based on ensemble learning, was introduced by Leo Breiman and Adele Cutler [90]. RF-based data imputation is widely used in practice; methods based on RF are presented in [91,92], which use the proximity measure from RF to impute missing data values.
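As an illustration of KNN-based imputation, the following is a minimal scikit-learn sketch; the sensor matrix and the choice of k are placeholders for illustration.

```python
# Sketch: KNN-based imputation of missing sensor readings (k is an assumption).
import numpy as np
from sklearn.impute import KNNImputer

# Rows = time steps, columns = sensors; NaN marks a missing reading.
X = np.array([
    [21.0, 55.0, 1012.0],
    [21.5, np.nan, 1011.0],
    [22.0, 54.0, np.nan],
    [21.8, 53.5, 1010.0],
])

imputer = KNNImputer(n_neighbors=2)  # assumed number of neighbors
X_filled = imputer.fit_transform(X)  # each NaN replaced by a neighbor average
print(X_filled)
```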
C4.5, a more advanced, memory-efficient, and fast MLA, is considered one of the best MLAs [80], and Jerzy et al. [93] concluded that it is one of the handiest algorithms for dealing with missing data. C4.5 uses an internal data imputation mechanism based on a probabilistic approach [94]. A few works also indicate that using KNN-based imputation for C4.5, rather than its internal mechanism, results in improved prediction accuracy [94,95]. The data imputation problem has also been unfolded from an unsupervised machine-learning perspective, not just a supervised one. Several novel and hybrid data imputation algorithms have been proposed, such as k-means [96][97][98][99][100][101], fuzzy c-means with support vector regression [102], fuzzy clustering [103], feature selection with cluster analysis [104], and multiple imputation using gray-system theory and entropy based on clustering (MIGEC) [105]. Another interesting line of algorithms is based on ANNs, which mimic the neural system of the human brain and are incredibly efficient at data imputation. Various novel and hybrid ANNs for data imputation have been proposed, such as fuzzy min-max neural networks [106]; particle swarm optimization (PSO), the evolving clustering method (ECM), and the auto-associative extreme learning machine (AAELM) [107]; ANNs with case-based reasoning (CBR) [108]; general regression and auto-associative ANNs [109]; and ANN-based emergent self-organizing maps [110]. Except for the C4.5 MLA, dealing with missing values can be expensive in terms of storage and/or prediction-time computation [111]. The fundamentals of imputing missing values will remain the same in IoT as in other domains. However, we envisage that data imputation will move more towards real-time processing of missing values in the context of IoT's future scope, particularly for IoT applications.
Feature Selection

Guo et al. concluded in [7,8] that IoT will ultimately become EI-enabled, and feature selection (FS) will serve this goal. The FS method is also helpful for data reduction, apart from its other advantages [112][113][114][115][116]. For example, EI-enabled smartphones and home appliances do not have much processing power, their storage is limited, and only the data relevant to predicting an event matter. Most of the ongoing research on FS is based on offline FS methods, which can be suitable for most applications in domains like the natural sciences and geography. However, for IoT, online FS algorithms are required for most applications, as scenarios in the IoT ecosystem change quickly and most decision-making is based on streaming data.
ML will significantly help address the above data issues related to outliers, missing values, and feature selection. Most IoT sensors will be low-cost hardware that tends to malfunction temporarily. In the following section, we discuss IoT applications that use the processed data produced by sensors.
Applications Perspective
IoT has evolved beyond how Atzori et al. [1] defined it. Today, it is seen as a discovery that has the potential to change the world in the same way electricity did for humankind. Xu et al. systematically provide a concise view of current IoT application areas, R&D trends, and challenges for IoT in industries, giving an understanding of IoT developments there. Data are as important for IoT as electrons are for electricity. In this section, we examine ML developments in IoT and classify IoT applications according to [149,150].
According to a United Nations report, more than half the world's population lives in cities, drawn by better jobs, education, health care, and living conditions [151], putting extraordinary pressure on municipalities, urban development departments, and governments to provide sufficient resources. Due to this, the "smart city" concept has recently drawn significant attention from governments worldwide, especially in developed [152,153] and developing economies [154,155]. Smart cities are now an essential part of urban development planning. There is no formal definition of a smart city; however, it can be defined as the product of accelerated development and advanced information technology, aiming to improve citizens' socioeconomic conditions and enhance the overall quality of living.
IoT is about connecting physical devices using the internet to facilitate the smooth exchange of information. The smart city dream would not be possible without the technical support of IoT, which is inevitable for achieving smart city aims; Zanella et al. termed it "urban IoT" [156]. Against the background of urban IoT, an immense amount of data is produced by "things." Gaining key insights from these data is a critical problem that ML can solve. ML in urban IoT is somewhat different from other domains due to its heterogeneous nature in terms of devices, data, and applications, as seen in Figure 7.
Smart Grids
Smart grids enhance energy availability and efficiency, providing an uninterrupted power supply to cities, towns, and businesses by minimizing power wastage, reducing faults, and optimizing the power supply to cope with high energy demands [157,158]. Power-grid failures are rare, but they result in the loss of millions of dollars, blackouts, and social disorder. Smart grids supply power in a more distributed, adaptive manner. Pinning hopes on smart grids for better power management is achievable through IoT. According to Randal Bryant et al., the contribution of ML to the success of smart grids will be enormous and beyond what we see today; they describe the energy-space domain where MLAs are expediting the progress of the "data to knowledge to action" paradigm [159].
Further, Zhang et al. [160] critically examine smart grids' potential applications of deep learning, reinforcement learning, and their integration. However, IoT-based smart grids also bring security challenges. With the availability of treasure-trove data and MLAs [161][162][163][164][165][166], we can find critical power usage patterns and consumer preferences. This will maximize the reliability of power grids and further share essential insights with consumers and power companies to improve or design better power infrastructure for future challenges.
Electricity-demand forecasting (EDF) has gained significant attention and is a critical task in strategic planning for power companies. EDF impacts operational decisions in smart grids, as pointed out in [167]. MLAs, which learn from data and predict, are the primary tools for EDF. In conventional power grids, EDF is based on historical power consumption data. However, smart grids are the end product of the merger of IoT and power grids; as a result, more diverse data are available from various IoT applications, as mentioned in Figure 2, which can be used for highly accurate predictions.
In [166,168,169], the authors examined some of the widely used MLAs based on their effectiveness in facilitating operational decisions in smart grids. Nonlinear MLAs like ANNs and SVMs are the most persuasive for EDF. ANNs are very potent for modeling the nonlinear relationships and complex behaviors of smart grids. Various types of ANNs are used for demand forecasting: back-propagation (BP) [170,171], radial basis function (RBF) [172][173][174], multilayer perceptron (MLP) [175], and optimized and hybrid ANNs [170,176,177,178]. EDF by SVM [179][180][181][182] can handle noise better with minimal overfitting. ANNs and SVMs are highly accurate for conventional EDF; however, a novel deep-learning model known as the factored conditional restricted Boltzmann machine (FCRBM) shows significant improvement in EDF prediction accuracy [183]. ANNs, SVMs, and FCRBMs are computationally expensive for IoT settings. EDF depends not only on conventional grid data but on several evolving factors in IoT ecosystems, like weather, social events, individual preferences, power-grid performance, and maintenance. Smart grids will be essential for human urbanization prospects; nevertheless, their management is critical, and this is achievable with ML-enabled IoT. However, this also brings challenges related to the security of power grids connected by IoT infrastructure [184].
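As a simple illustration of SVM-based EDF, the following scikit-learn sketch fits a support vector regressor on lagged demand values; the synthetic load data, lag window, and hyperparameters are illustrative assumptions, not values from the cited studies.

```python
# Sketch: support-vector regression for demand forecasting (all settings assumed).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)
# Synthetic hourly load: daily cycle plus noise (stand-in for smart-meter data).
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

lags = 24  # assumed lag window: predict the next hour from the previous day
X = np.array([load[i - lags:i] for i in range(lags, load.size)])
y = load[lags:]

split = int(0.8 * len(y))
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))  # assumed params
model.fit(X[:split], y[:split])
print("test R^2:", model.score(X[split:], y[split:]))
```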
Smart Traffic and Transportation
The value IoT brings to traffic solutions through its smart, connected "things" is beyond what we have seen to date. Urban mobility is the critical application of smart traffic and transportation solutions, as it also enhances people's access to other services. Throughout the world, cities are getting bigger, and the challenging issues they face include traffic congestion, increased pollution, and economic losses caused by traffic delays and road accidents. An ML-enabled IoT strategy provides the opportunity to create value from connected data, including better services and accelerated innovation [185,186].
In developed countries, road infrastructure is highly advanced and well maintained. By contrast, road infrastructure in developing economies suffers from maintenance issues. Roadway surface disruptions and obstacles (RSDOs) are widespread, resulting in accidents, driving problems, and traveling and transportation delays. Gónzalez et al. [187] used acceleration sensing data to classify patterns related to speed bumps, potholes, metal humps, and rough roads using logistic regression and ANN MLAs. Another work [188] addressing the same issue identifies RSDOs using a combination of supervised and unsupervised ML with data collected by the Street Bump smartphone application. Traffic monitoring is critical for controlling traffic congestion, which has been achieved by identifying traffic patterns through analysis of vehicle movements using granular classification in [189] and regression analysis in [190]. Other applications of ML include intelligent traffic-light management, achieved using Q-learning [191], and ANNs with reinforcement learning [192].
Autonomous vehicles (AVs) are another area that will revolutionize the transportation industry. AVs depend entirely on ML, eventually developing AI that can drive without human interference. ML algorithms are used for tracking and identifying moving and stationary objects. Alam et al. [193] proposed a method to recognize objects in the driving scene by integrating deep learning and decision fusion. Tesla and Google [194,195] are among the technological titans utilizing ANNs and DL in their AVs to detect objects in the driving scene.
Smart Homes
IoT-enabled smart homes (SHs) are a technology concept that facilitates the complete automation of household devices and home appliances via the internet. Context awareness is an important aspect of smart homes, as it improves user comfort and safety while decreasing direct interaction between the user and the environment. The MavHome (Managing an Intelligent Versatile Home) project couples multiagent systems with probabilistic MLAs to make the home environment respond as a rational agent [196], maximizing inhabitant comfort and minimizing operating cost. A more advanced context-aware model uses a back-propagation ANN for service selection and a temporal-difference class of reinforcement learning algorithm for adaptive context awareness, since user preferences do not remain the same over time. The main advantage of [197] over [196] is that no predefined model is required for the context-aware system; modeling is done automatically based on the user's feedback on the service.
SHs can make rational decisions for automation. This is achieved by tracking and predicting the inhabitants' mobility patterns and usage of devices. The Active LeZi prediction algorithm based on the Markov chain is proposed in [198], which can learn subsequent event patterns. An important area that recently gained attention for enhancing SH automation is human activity recognition (HAR). Human behavior prediction through activity recognition is performed in [199] using deep-learning algorithms. Several comparative analyses of ML algorithms exist that show their performance on IoT-based HAR data. Fahad et al. compared the accuracy of five MLAs for correctly recognizing smart-home activities; SVM and evidence-theoretic KNN showed higher accuracy than probabilistic ANN, KNN, and NB in HAR [200]. In contrast, Alam et al. [74] compared eight ML algorithms and concluded that DL gives the best prediction accuracy. Taiwo et al. [201] proposed a deep-learning model for motion classification using movement patterns, which is used to improve power usage in homes. However, the DL algorithm is computationally expensive, and other work [74] highlights that the C5.0 algorithm performs very close to DL.
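A minimal sketch of such a comparison on a HAR-style feature matrix, using cross-validated accuracy over a few scikit-learn classifiers; the synthetic features and default classifier settings are illustrative assumptions, not the setups of the cited studies.

```python
# Sketch: comparing classifiers on HAR-style data (data and settings are assumed).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

# Stand-in for sensor-derived activity features (e.g., accelerometer statistics).
X, y = make_classification(n_samples=600, n_features=20, n_classes=5,
                           n_informative=10, random_state=0)

classifiers = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```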
HAR is divided into two parts: firstly, clustering of activity patterns, and secondly, deciding the activity type. However, much of the related literature focuses on only one part, which degrades performance. To address this, the unsupervised K-pattern MLA is used to cluster complex user activities, and an ANN is then trained to predict user activities [131]. The K-pattern MLA shows improved accuracy for high-volume IoT data in terms of temporal complexity and cluster-set flexibility. HAR gives more control and automation to smart homes: better power optimization can be achieved by switching lights, fans, and home appliances on and off, and emergency health conditions can be identified so that alerting others can avoid loss of life.
Smart Health Care
IoT is revolutionizing the health-care industry by bringing new and advanced internet-connected sensors that produce essential data in real time. Islam et al. comprehensively explained IoT in health care, covering platforms, applications, and industry trends for smart health-care solutions [202]. The objectives of smart health-care applications are: (1) improved and easier access to care, (2) increased health-care quality, and (3) reduced health-care costs. The key to achieving these objectives is to perceive patterns and critical insights in health-care data [203,204]. Automated assessment of individual well-being and alerting others to any health risk to a patient is a widely researched topic. In [205], an intelligent system is developed to monitor the well-being of individuals in their home environments. An ML-based method automatically predicts activity quality and assesses cognitive health based on it; SVM, principal component analysis (PCA), and logistic regression MLAs are used to quantify activities and predict cognitive health. Dawadi et al. also address automated cognitive health assessment using ML: supervised and unsupervised ML scoring models quantify and determine boundaries between activity performance classes, and cognitive assessments are performed [206]. Cognitive systems can understand, reason, and learn, helping to spur discovery and decreasing the effort required to populate research studies effectively.
Further, in [207,208], solutions for physiological monitoring, weight management, and cardiovascular disease monitoring are proposed. In [207], a wearable armband multisensor system known as BodyMedia FIT performs constant physiological tracking and weight management by exploiting ML; the system has been commercially available since 2001 and uses regression analysis to classify activities. In [208], the mobile machine learning model for monitoring cardiovascular disease (M4CVD) is proposed, which uses mobile devices to monitor heart disease. M4CVD locally analyzes trends in vital health signs by contextualizing them with clinical data sets. An SVM examines features extracted from clinical data sets and wearable sensors to classify a patient as at risk or not at risk of cardiovascular disease, and has shown high accuracy in identifying patients at risk [208]. IBM Watson provides a large-scale IoT-enabled cognitive health-care solution that covers a broader spectrum of patients; it combines the power of health-care data with MLAs to give new insights [209]. ML-enabled IoT health-care solutions enhance proactive and preventive health-care interventions for individuals and reduce health-care costs, whereas cognitive care provides modern mechanisms for health-care specialists to connect with their patients, improving diagnostic certainty and reducing error rates. IoT-based health-care solutions can help find insights that raise the quality of health care across the globe.
Smart Supply Chain and Logistics
We are seeing many IoT applications in industries that are evolving and growing daily. IoT produces enormous amounts of data, coupled with the latest communications technologies. Real-time data analytics helps businesses meet consumers' demands in today's developing economies. Supply-chain management (SCM) epitomizes the impact of IoT in the manufacturing industry. Ellis et al. explained how IoT-enabled analytic applications will revolutionize SCM [210]. Some of the immediate benefits of IoT in SCM, as highlighted by Barun [211], are:
• IoT "things" can communicate promptly, allowing the possibility of knowing where they are at all times.
• Object tracking facilitated by IoT results in improved asset and fleet management, which means well-planned scheduling, better routing, and on-time product deliveries.
• Better control of mobile assets with IoT means knowing where they are and how they are used.
• Downtime will be audited closely in real time.
• It increases logistics transparency.
All these benefits are brought together in the broader scenario to make SCM more efficient and sophisticated.
IoT means more data, more connected "things," and a high degree of automation. With many entities, such as vehicles, shipping containers, packages, and return shipments, as the origins of data, businesses require more advanced and sophisticated methods to ingest and critically scrutinize IoT data. ML-enabled IoT gives SCM automated "sense, decide, and reply" capabilities [212]. One of the crucial determinants of effective SCM is the ability to recognize customer-demand patterns and react to changes in the face of intense competition. MLAs have shown promising results in demand forecasting.
For demand forecasting, MLAs like ANN, recurrent ANN, SVM, NB, and linear regression have been compared, and SVM produces highly accurate forecasts [213]. ML-enabled IoT can significantly enhance the efficiency of logistics and SCM. Zhengxia et al. proposed an advanced logistics monitoring system based on IoT with various functions supporting multiple services in one place. One of its essential services is data acquisition and processing, whose data analysis and forecasting show that MLAs are a must-have in modern logistics [214].
Fraudulent imitation of packaging and products is known as counterfeiting, a severe problem for global supply distribution chains. As a solution, an anticounterfeit deterministic prediction model (ADPM) is proposed in [215]. ADPM identifies counterfeits using the Monte Carlo (MC) MLA, examining product attributes by analyzing and calculating the correlation coefficients among objective features. In other literature [216], the authors applied a machine learning-based approach with statistical techniques to detect counterfeits. In this section, we reviewed how IoT, ML, and the manufacturing industry can come together to take on the challenges presently faced and streamline industry processes with automation. If all the discrete processes that used to take place in silos can be observed and managed through the analysis of data provided to MLAs, the holy grail of proper supply-chain optimization may be within reach.
Smart Social Applications
Apart from the technological aspect, IoT can affect the social aspects of human life more than we can imagine. ML-enabled IoT can be used to find the public's mood on a particular issue and to discover patterns in social application data for event exploration. With the help of connected devices like smartphones and tablets, opinions can be gathered and public perceptions analyzed by exploiting ML.
Opinions are at the core of almost all human activities and are key influencers of our behaviors. Our beliefs, our perceptions of reality, and the choices we make are, to a considerable degree, conditioned upon how others see and evaluate the world. For this reason, when we need to make a decision, we often seek out the opinions of others; this is true not only for individuals but also for businesses. In [217], Liu gives an in-depth introduction to this fascinating problem and presents a comprehensive survey of all possible methods, including ML, that can be potential candidates, in addition to the latest developments in the field. Opinions can be predicted by analyzing public sentiment. In [218], the authors proposed a sentiment-analysis technique that can determine the sentimental orientation of Arabic Twitter posts based on a novel data representation and MLAs. The proposed approach applied many features: lexical, surface-form, syntactic, etc. They also used lexicon features inferred from two Arabic sentiment word lexicons, and built a supervised sentiment-analysis system from several standard classification methods (SVM, KNN, NB, DT, and random forest). Similar to [218], in [219] supervised classification algorithms such as SVM, KNN, and NB are used for Arabic sentiment analysis, whereas in [220], domain-specific sentiment analysis is done using MLAs. Social media analysis can also be used these days to identify threats and unwanted events: in [221], MLAs are used for feature selection, after which only the relevant text in tweets is classified using SVM, NB, and AdaBoost MLAs.
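As a concrete illustration of such a supervised sentiment pipeline, the following is a minimal scikit-learn sketch combining TF-IDF features with a linear SVM; the tiny toy corpus and the linear-kernel choice are illustrative assumptions, not the setup of the cited works.

```python
# Sketch: TF-IDF + linear SVM sentiment classifier (toy data, assumed settings).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "great service, very happy",   # toy examples standing in for tweets
    "terrible delay, very angry",
    "love this product",
    "worst experience ever",
]
labels = ["positive", "negative", "positive", "negative"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["happy with the experience"]))
```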
Similarly, complex events were identified in [222] using adaptive moving window regression (AMWR) for dynamic IoT data streams. The emergence of ML in IoT gives us three main advantages: (1) we are more connected, (2) we are more informed, and (3) actions can be highly automated. ML makes IoT able to think and decide, and the coupling of the two gives us the power to sense, analyze, and predict the events in our social environment.
Smart Environment Control
One of the primary goals of IoT and smart cities is to make our societies more prosperous. Prosperity cannot be achieved unless cities provide a healthy living environment to their residents. Clean water and good-quality air are significant issues for more than half the world's population. IoT with smart applications can greatly change this scenario. For example, IoT-based applications such as eWater and sustainable water-management applications are used to provide clean water in Gambia [223]. The world's supply of clean drinking water is shrinking because of man-made water pollution. The first step in reducing and managing water pollution is identifying where water is polluted and by how much. Shafi et al. [224] proposed a water-pollution detection method based on deep neural networks.
Similarly, Mishra [225] proposed an IoT-based air-quality monitoring system. Several machine-learning algorithms, such as linear regression, random forest, and XGBoost, are used for forecasting and prediction, and the model can be deployed for real-world use. Likewise, Elvitigala and Sudantha in [226] used linear regression to compute pollutant gas levels. Smart cities can leverage the fusion of IoT and machine learning to automate water, land, and air-pollution management operations, providing a safer, healthier living environment that will ultimately result in a more prosperous society.
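A hedged sketch of that kind of forecasting comparison is shown below on synthetic pollutant data; scikit-learn's GradientBoostingRegressor stands in for XGBoost to keep the example self-contained, and the feature names are assumptions.

```python
# Comparing regressors for air-quality (PM2.5) forecasting.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 4))  # e.g., temperature, humidity, wind, traffic
pm25 = 35 + 8 * X[:, 0] - 5 * X[:, 2] + rng.normal(0, 3, 400)

for model in [LinearRegression(), RandomForestRegressor(n_estimators=100),
              GradientBoostingRegressor()]:
    score = cross_val_score(model, X, pm25, cv=5, scoring="r2").mean()
    print(type(model).__name__, "mean R^2:", round(score, 3))
```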
Emergency and Disaster Management
Deployment of IoT infrastructure can significantly enhance our capacity to speed up relief efforts during any emergency or disaster. Ray et al. [227] comprehensively examined the IoT paradigm, from its application areas to data analytics based on machine learning, with a specific focus on disaster management. Forest fires are one area where prompt event prediction is an important application that can leverage IoT infrastructure: reaction time must be very short in the event of a forest fire, as fires propagate very quickly. However, an IoT-based system can predict the wrong event due to outliers. To deal with this problem, Nesa et al. [228] proposed an IoT architecture that detects data errors and events in an IoT-based forest environment using classification and regression trees (CARTs), random forest (RF), gradient boosting machine (GBM), and linear discriminant analysis (LDA); RF outperformed the other three classifiers.
Similarly, Salehi and Rashidi [229] categorized existing unsupervised machine-learning methods for detecting outliers in the real-world application of forest fire prediction. We witnessed during the last decade the destruction a tsunami causes in terms of loss of life and property. IoT infrastructure and machine learning can play a significant role in developing an effective system to warn people of an expected tsunami. Pughazhendhi et al. [230] addressed this issue by developing a tsunami early-warning system in which a tsunami is predicted from earthquake data by an RF classifier. Further work [231] explained in detail how Japan's tsunami warning system, one of the best in the world today, works.
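In the unsupervised setting surveyed by [229], anomalous sensor readings can be flagged without labels; the sketch below uses an isolation forest on invented temperature and humidity readings, with one injected spike resembling a fire signature. This is an illustrative sketch, not the authors' method.

```python
# Unsupervised outlier detection on environmental sensor readings.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
readings = np.column_stack([rng.normal(24, 2, 200),    # temperature (deg C)
                            rng.normal(55, 5, 200)])   # humidity (%)
readings[190] = [85.0, 8.0]  # injected outlier resembling a fire signature

detector = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = detector.predict(readings)  # -1 marks outliers
print("outlier indices:", np.where(flags == -1)[0])
```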
Smart Security and Access Control
In the IoT environment, sensitive data are collected and transferred to their applications with partial or no human interference, which raises the challenge of protecting the security and privacy of millions, even billions, of users. Access-control techniques are needed to limit access to these data and this information [232]. Attacks such as denial of service (DoS) and distributed DoS attacks, spoofing, jamming, and eavesdropping are prevalent and real threats to the security and privacy of user data and applications in the IoT environment. Xiao et al. [233] proposed various classification-, clustering-, and reinforcement learning-based access-control methods with the larger aim of protecting overall user privacy in the IoT environment. In more recent work [234], Hussain et al. systematically reviewed different attacks, current state-of-the-art solutions, and challenges for security in the IoT paradigm, and several critical security gaps were discussed. The authors proposed extending machine-learning and deep-learning techniques, which are currently confined to developing intelligence, into security solutions for the IoT paradigm; [234] also gave future directions for ML- and DL-based access-control solutions. In one such work [235], various types of attacks and anomalies in the IoT environment are predicted using several ML algorithms, such as support vector machine (SVM), logistic regression (LR), random forest (RF), decision tree (DT), and an artificial neural network (ANN); decision trees and the ANN performed better than the others. Similarly, Khalifa et al. [236] critically examined several biometric access-control methods used for feature extraction and classification, such as Fisher discriminant analysis, linear discriminant analysis, learning vector quantization, and ANNs, and discussed their advantages and disadvantages. In [237], a deep learning-based method was introduced for a smart-home application to limit the access of pets and humans to consumer appliances; interestingly, the proposed method uses limited computing resources.
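The supervised attack-detection setting of [235] can be sketched as below: several standard classifiers trained on labeled traffic features. The feature set (packet rate, payload size, payload entropy) and the synthetic DoS-like distribution are assumptions for illustration.

```python
# Comparing standard classifiers for DoS-style attack detection.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
normal = rng.normal([100, 500, 4.0], [20, 80, 0.3], size=(300, 3))
dos = rng.normal([900, 60, 1.5], [150, 20, 0.3], size=(300, 3))  # flood-like
X = np.vstack([normal, dos])
y = np.array([0] * 300 + [1] * 300)  # 0 = benign, 1 = attack
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for clf in [SVC(), LogisticRegression(max_iter=1000), RandomForestClassifier(),
            DecisionTreeClassifier(), MLPClassifier(max_iter=1000)]:
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))
```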
Industry Perspective
Predominantly, IoT remains at the initial stages of development and adoption by the information technology industry (ITI). Slowly but steadily, the future worth of IoT is being recognized by the ITI. Driven by hopes, market trends, and statistics, a great deal of R&D is under way. ITI giants like Cisco, Microsoft, Google, IBM, Oracle, and SAP are at the forefront of making our environment smarter by designing new IoT-enabled software platforms and hardware. Increasing use of IoT infrastructure will significantly enhance and speed up the adoption of Industry 4.0, which will revolutionize industry practices [238].
Digging out key insights, or in simpler words making sense of IoT-generated data, is one of the biggest problems in IoT, and ML can tackle it. Another significant problem, apart from realizing the economic worth that IoT holds, is bringing ML to the masses. In light of these critical facts, the ITI has started adding MLAs to its IoT-based systems as they collect more data. Some popular IoT-enabled ML systems, IBM Watson, Google TensorFlow, Microsoft Azure, and Splunk, are discussed here.
Microsoft Azure is a cloud computing platform created by Microsoft [239,240]. Joseph Sirosh, corporate vice president of ML at Microsoft, says, "Every day, IoT is fueling vast amounts of data from millions of endpoints streaming at high velocity in the cloud . . . in this new and fast-moving world of cloud and devices, businesses can no longer wait months or weeks for insights generated from data." The reflection of his comments is quite evident in Azure's recent developments. The Azure cloud platform added ML with advanced analytics to expand its big data capabilities and be ready to tackle IoT. Services such as Stream Analytics and Azure Event Hubs are intended to help customers process data from devices and sensors in the IoT ecosystem. Scott Hanselman, principal program manager for Microsoft Azure, demonstrated how the platform integrates several "things" and facilitates ML for IoT [241].
Another interesting development came from IBM in the Watson software platform [242,243], initially developed to answer questions on the quiz show Jeopardy!. IBM Watson is a technology platform that uses natural language processing and ML to disclose insights from vast amounts of unstructured data. IBM Watson is more about cognitive IoT computing [244]. For example, a car owner may want to ask about the predictive maintenance date of a particular auto part; Watson answers this by analyzing machinery performance and breakdown times with the help of sensor data gathered over time. Figure 8 illustrates the steps IBM Watson takes.
Watson APIs for IoT help to accelerate the development of cognitive IoT solutions and services on the IBM Watson IoT Platform. By using these ML-enabled APIs, you will be able to build cognitive applications that:
• enable a high degree of interaction with humans with the help of text and voice;
• perceive images and scenes;
• perform ML from sensory inputs;
• establish data correlations with external data sources, such as weather or Twitter.
Guo et al. [7,8] presented ongoing efforts toward EI for smarter objects. Their work also highlights the future transition of today's IoT to EI-enabled IoT. The importance of their work can be seen in the recent announcements of a global collaboration between IBM Watson and Cisco to combine the power of Watson IoT with edge analytics [245,246]. This development also shows IBM's willingness to scale down unnecessary data transfer to the cloud using edge analytics. Cisco's fog-computing endeavors will be highly valuable in distributing intelligence at the edge. Watson's role in this partnership is to provide a small piece of code informing the software of the interesting data for a particular requirement. An exciting development came from search giant Google in the form of TensorFlow (TF), an open-source ML platform [247]. Several Google products now use TF; for example, Google Photos, Gmail, Google Search, and speech recognition all utilize it. A significant advantage of TF is that it is highly scalable and can run on many kinds of systems: servers, personal computers, smartphones, and other mobile devices. Users can execute custom distributed MLAs, and the potential of deep learning can be exploited by using TF. Like Microsoft and Google, another US-based multinational corporation, Splunk, introduced IoT-enabled ML software, also known as Splunk, which excels at gaining fundamental insights from operational data. It handles big data efficiently and augments maintenance and fault diagnosis from IoT-generated data [248]. It ships with around 300 MLAs [249]. Splunk stresses that its ML system will benefit nontechnical users, and it integrates with popular IoT platforms and services, which can be seen as a boost for its broader acceptance.
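To make TF's role concrete, the snippet below uses the public Keras API to train a tiny network on invented sensor readings; the task and data are placeholders, and the point is only that the same model definition can run on servers or be converted for mobile and edge deployment.

```python
# A minimal TensorFlow/Keras model on synthetic 10-channel sensor data.
import numpy as np
import tensorflow as tf

X = np.random.rand(256, 10).astype("float32")   # 10 sensor channels
y = (X.sum(axis=1) > 5).astype("float32")       # toy binary target

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```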
How important ML is becoming for the future of IoT is quite evident in the latest developments of Amazon Web Services (AWS) IoT, which in early 2016 was integrated with Amazon Machine Learning (AML) [250]. As Google, IBM, and Microsoft offered cloud-based machine-learning platforms, Amazon was obliged to step up with its own product to meet market demand. The AWS and AML integration allows users to create ML models without knowing much about ML. The AML platform offers an easy way to do simple data analytics, but this also confines it within a boundary [251]. Several other companies are in the market offering application-specific ML solutions for IoT. Recently, the market research company CB Insights used its Mosaic algorithm to classify promising start-ups using ML and DL algorithms to provide predictive insights from IoT-generated data [252]. The application area of ML in IoT is enormous. Undoubtedly, the challenges and opportunities presented by IoT [4,6,7,19,22,188] are driving the ITI's growing interest in developing ML-enabled IoT.
Emerging Trends
IoT has seen some interesting emerging trends in the last few years, such as edge computing, fog computing, deep learning, and connected autonomous vehicles. In recent months, we have also seen IoT used successfully in managing and controlling the COVID-19 pandemic. All of these emerging trends are discussed in the following subsections.
Internet of Behavior (IoB)
IoT is a fusion of sensors, actuators, and connectivity technologies, whereas the Internet of Behavior (IoB) is a fusion of IoT, intelligence, and behavioral science. IoB can be seen as an extension of IoT. Its goal is to better understand data in a way that facilitates better product development and promotion, with a stronger focus on evolving human psychology [253]. Javaid et al. [254] posited that the inception of IoB can change the dynamics of product or service design, marketing, and customer service due to its ability to understand and modify consumer behaviors based on their comportment, tastes, and imaginations. Pinochet et al. [255] analyzed the power of various "things" in IoT products to enhance purchase intention by improving the functional and emotional experience. Stary [256] stressed that IoB would transform the business and organization space with its choreographic intelligence. IoB is now in its infancy, and its success depends on large-scale IoT deployments with a high level of user acceptance.
Pandemic Management
Today's world is witnessing the devastation of the COVID-19 pandemic. The WHO has said categorically, several times, that the world's response could have been far better than it was and is. Whether in developed countries like Italy and the US or developing countries like India and Brazil, most health-care systems were underprepared and already overburdened. Through the prism of sophisticated technologies, IoT can be used effectively to monitor and control the COVID-19 pandemic. IoT infrastructure coupled with intelligence can be used to address challenges in lockdowns, social distancing, contact tracing, health-care monitoring, prescreening, remote meetings, anytime-and-anywhere accessibility, etc. [257]. IoT can play a significant role in providing virtual (contactless) health-care tools and telemedicine to the masses, which will eventually help in achieving the goals of Healthcare 4.0 [258]. For example, the Smart Field Hospital in Wuhan used IoT and AI-based applications to relieve health-care workers: robots and IoT devices helped to perform contactless body-temperature monitoring, cleaning, disinfecting, etc. [259].
On the other hand, IoT sensors can help to track infection by forming a web of human nodes and their connections. However, this raises serious privacy concerns, which need to be addressed [260]. Why has the world struggled to manage and control COVID-19?
The answer is that human decision-making is slow and biased. To address this issue, Alam et al. [261] proposed iResponse, an intelligent IoT-enabled system for autonomous COVID-19 pandemic management. The authors demonstrated through iResponse that the fusion of IoT and intelligence can help in breaking the chain of infection and in cure development, treatment, resource planning, pandemic analytics, and decision-making. Still, we need to deploy IoT infrastructure on a large scale around the world to exploit its benefits.
Connected Autonomous Vehicles
IoT will connect AVs and will help in developing driving cognition. However, connected-AV development is in its infancy, and much depends on adopters' willingness to accept the change and on the pricing of these vehicles, as examined by Talebian and Mishra [262], as well as on such issues as pedestrian detection, intersection navigation, communications, collision avoidance, and security. One of the first works of this kind was by Alam et al., who developed TAAWUN [263]. It uses connected-vehicle data and prediction to enhance its driving-scene understanding, and the core concept used in TAAWUN can in the future also use IoT infrastructure and sensed data for prediction. AVs have recently suffered deadly crashes during the testing phase [264,265], which shows that the ML algorithms used by AVs are not yet mature enough for real-world challenges. After TAAWUN, there are now a few works examining the benefits of connected AVs. Elliott et al. [266] critically discussed recent advances in connected AVs, focusing on five major areas: intersection navigation, pedestrian detection, collision avoidance, communications, and security. Safety is one of the foremost goals of any autonomous technology. Addressing the safety aspect, Ye and Yamamoto [267] critically analyzed the impact of connected AVs in providing a hassle-free and smooth driving experience with enhanced safety.
Edge and Fog Computing
Edge and fog computing push data intelligence and processing closer to the nodes where data are sensed or required, as depicted in Figure 9. Edge computing brings computational power closer to the sensed data rather than sending the data to a remote cloud [268]. This results in faster processing and enhanced data-transport performance for devices and applications. Fog computing, an emergent architecture, can be termed a subset of the edge computing paradigm [269]. Fog computing enables the cloud to be closer to the smart objects that generate data and the actuators that act on sensed data, and it defines the standards related to edge computing, data transfer, storage, computation, and networking [270]. Edge and fog computing are enabling technologies that will help IoT infrastructure support smart applications and serve the bigger objective of smart cities around the globe.
Lightweight Deep Learning
Deep learning is a representational learning model that mimics the human neural system. It takes raw data as input and automatically discovers the representations required to make predictions. Deep learning tries to model higher-level data abstractions. A deep-learning model can have several layers between input and output, which help it to "think"; an intriguing fact about deep learning is that these layers of features are learned from data automatically. LeCun, the director of AI research at Facebook [271], stated that deep learning would see many near-future successes because of two critical factors: (1) limited engineering by hand is required, and (2) it leverages enhanced computational resources and data availability. However, deep learning has some issues, such as its algorithms consuming too many resources, like processing power and energy. From an IoT perspective, we need to exploit the power of deep learning at several levels: (1) cloud, (2) fog, and (3) edge [39,272]. Plenty of processing resources are available at the cloud level, which deep-learning algorithms can consume. Moving down to the fog, the availability of processing resources decreases significantly, whereas the edge has minimal processing resources and is therefore unsuitable for conventional deep-learning algorithms. In addition, these deep networks must be capable of perceiving the environment from less data. Recent deep-learning trends show a fundamental understanding among researchers of the need for lightweight deep-learning algorithms for IoT [273]. Several advances reflect this trend, such as Alibaba's open-source Mobile Neural Network [274], a lightweight deep-learning model for HAR using smart "things" at the edge [275], MobiFace and ShuffleFaceNet for face recognition on mobile devices [274,276], lightweight machine learning for IoT systems (LIMITS) [277], CardioXNet, a lightweight deep-learning framework for heart-disease prediction [278], lightweight deep learning-based virtual vision sensing technology [279], and lightweight convolutional neural networks (CNNs) [280,281]. In the future, we expect more development in this direction, as all or most IoT devices cannot match conventional deep-learning systems in terms of processing, memory, or power requirements.
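One common ingredient of such lightweight models is the depthwise-separable convolution. The sketch below counts parameters for a standard and a separable convolution of the same output shape using the public Keras API; the tensor shapes are arbitrary examples chosen for illustration.

```python
# Why separable convolutions suit edge devices: far fewer parameters.
import tensorflow as tf

inp = tf.keras.Input(shape=(64, 64, 32))
standard = tf.keras.Model(
    inp, tf.keras.layers.Conv2D(64, 3, padding="same")(inp))
separable = tf.keras.Model(
    inp, tf.keras.layers.SeparableConv2D(64, 3, padding="same")(inp))

print("standard conv params: ", standard.count_params())   # 3*3*32*64 + 64 = 18,496
print("separable conv params:", separable.count_params())  # 3*3*32 + 32*64 + 64 = 2,400
```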
Challenges
The IoT paradigm is a perfect candidate to bridge the gap between the real and digital worlds by developing a hyperconnected world. However, it needs to overcome and manage massive technological and nontechnological challenges. Based on several literature reviews on IoT [282][283][284][285], we identified four classes of challenges: (1) technological, (2) individual, (3) business, and (4) societal. In the following subsections, we discuss them briefly.
Technological Challenges
The extensive deployment of IoT systems is still a distant reality; they are being implemented in bits and pieces around the world. Sensing, connectivity, actuation, and security are the four significant technologies fused to make IoT. The world still faces connectivity issues: mobile connectivity and internet availability are obstacles, particularly in low-income countries. Another problem is that present IoT platforms offer little cross-platform capability, resulting in slow acceptance [286,287]. Nowadays, the IoT landscape is primarily based on a client-server architecture, which is not a feasible option for the future because of increased latency, maximized energy consumption, single-point failure, and security vulnerabilities. To tackle this, edge and fog computing platforms came into being, albeit most of them are in their infancy and face challenges like network bandwidth, latency, accessibility, control, and management [288]. A bigger challenge posing a significant threat to the large-scale acceptance of IoT systems is the security of these "things" [285]. IoT is about decentralized edge devices, where devices will connect with many unfamiliar things and are thus more prone to cyberattacks. Universal standards for IoT device authentication and authorization do not yet exist, which adds further security vulnerabilities. Security is one of the most critical issues IoT needs to address to become a successful and accepted paradigm.
Individual Challenges
Different people will use IoT for a diverse set of needs. The aim is to improve daily life through sophisticated automation and an intelligent environment. Due to IoT's "anytime and anywhere" property, the foremost concern is privacy; as privacy expectations vary from individual to individual, devices and servers must handle increasingly challenging scenarios. Sen et al. [289] critically discussed ways to preserve privacy and related emerging trends in IoT. Another hurdle to IoT's success is its acceptability among the public. IoT will change how we look at our daily lives, and the point of interest is how compatible our cognitive needs are with these changes [290]. Beştepe and Yildirim [291] analyzed how public acceptability is essential for smart cities, which at their core use IoT infrastructure to achieve sustainability. To increase IoT's acceptability, we need to educate and train people to make them aware of the benefits these sophisticated applications and services bring and how they can revolutionize our daily lives toward greater prosperity.
Business Challenges
IoT offers enormous business opportunities in manufacturing, applications, and services. However, it has not lived up to the hype that was created around it. Businesses face challenges including, but not limited to, the lack of universal platforms, the lack of industry standards, compatibility and connectivity problems, data collection, and security issues [292,293]. A shortage of skilled IoT experts is also a major issue. Beyond this, global market anomalies like COVID-19 [294], the global computer chip shortage [295], and the Russia-Ukraine war [296] have slowed the pace of, and reduced the interest of businesses and governments in, IoT development as priorities shifted.
Society Challenges
As a society, we need a prosperous and sustainable living environment, and IoT can help significantly by providing actionable decision-making support [297]. However, the critical questions are: Are we equipped today, as a society, to use IoT? And are we ready to accept the cognitive changes it will bring to our daily lives? These essential questions will shape public acceptance of IoT applications and services [298]. Acceptance will increase the demand side, encouraging industries to put serious effort into the large-scale deployment of IoT, its applications, and its services. Further, the digital divide, caused by the imbalance in global economic growth, is another major challenge our societies suffer from today. If we want to exploit IoT to its full potential, we must address the issues discussed above.
Conclusions
IoT is far more mature now, and more IoT applications are in practical use. Individuals, governments, and businesses have shown a keen interest in leveraging IoT's opportunities. An important question remains: How will IoT learn and think to provide a high degree of automation? The answer comes from the branches of computer science that aim to understand and act like humans with the help of ML. In this paper, rather than conducting a classical literature review, we have highlighted the importance of ML for IoT's success and the diversity of ML-powered IoT applications. We classified ML developments in IoT from three perspectives: data, application, and industry. The literature reviewed is wholly or partially applicable to the IoT ecosystem. Further, we identified and discussed emerging IoT trends, including the Internet of Behavior (IoB), pandemic management, edge and fog computing, connected autonomous vehicles, and lightweight deep learning, with a primary focus on machine learning for developing futuristic and sustainable solutions. Despite IoT's ability to transform our present-day societies into smarter and more sustainable ones, it has to overcome a set of challenges: technological, individual, business, and societal.
We conclude that ML developments in IoT will revolve around currently available and well-established ML methods, at least in the short term. In the longer term, however, we foresee a fully autonomous IoT ecosystem with embedded intelligence capabilities, which will be a tricky development from an ML point of view given device data and processing limitations. With the help of this work, the reader can see what ML means to IoT, how ML is used with IoT, and what the prospects of ML in IoT are.
Figure 2. Machine-learning tasks in the Internet of Things (IoT).
Figure 3. The high-level structure of the paper.
Figure 4. IoT-based data issues and solution landscape.
Figure 5. Classification of future outlier-detection algorithms that adopt machine learning for IoT applications.
Figure 6. Various categories of feature-selection methods.
Figure 8. Steps of how IBM Watson finds critical insights.
Figure 9. Edge and fog computing landscape.
Table 1. Major machine learning-based IoT surveys.
"Computer Science",
"Engineering",
"Environmental Science"
] |
Polarimetric synthetic aperture radar image classification using fuzzy logic in the H/α-Wishart algorithm
Abstract To solve the problem that the H/α-Wishart unsupervised classification algorithm can generate only inflexible clusters due to arbitrarily fixed zone boundaries in the clustering process, a refined fuzzy logic-based classification scheme called the H/α-Wishart fuzzy clustering algorithm is proposed in this paper. A fuzzy membership function was developed to express the degree to which pixels belong to each class instead of an arbitrary boundary. To devise a unified fuzzy function, a normalized Wishart distance is proposed for the clustering step in the new algorithm. Then the degree of membership is computed to implement fuzzy clustering. After an iterative procedure, the algorithm yields a classification result. The new classification scheme is applied to two L-band polarimetric synthetic aperture radar (PolSAR) images and an X-band high-resolution PolSAR image of a field in LingShui, Hainan Province, China. Experimental results show that the classification precision of the refined algorithm is greater than that of the H/α-Wishart algorithm and that the refined algorithm performs well in differentiating shadows and water areas.
Introduction
Land cover classification is one of the major applications of polarimetric synthetic aperture radar (PolSAR). 1 The PolSAR system can work continuously in all weather and can take advantage of four kinds of polarization to obtain a wealth of feature information, which gives it broad application prospects. 2 However, PolSAR images are seriously affected by speckle noise, which leads to major difficulties in image classification. 3 Over the last two decades, researchers have proposed many classification methods for PolSAR images. 4,5 In 1989, Van Zyl suggested that PolSAR data could be classified into four kinds of scattering mechanisms, namely, even number of reflections, odd number of reflections, diffuse (volume) scattering, and nonseparable scattering. 6 Then Freeman and Durden developed a new model to describe the polarimetric signature in terms of volume, double-bounce, and Bragg scattering mechanisms, 7 whose experimental results showed that the approach could deliver clear discrimination between different types of scene pixels. In 1995, scattering entropy, introduced by Cloude, 8 was first used in SAR image classification, and later Cloude and Pottier proposed a classification algorithm based on the H/α decomposition. 9 However, a deficiency of that algorithm is that it cannot effectively distinguish different topographic features whose pixels lie in the same zone. Subsequently, a method combining the H/α decomposition with an iterative Wishart clustering algorithm was proposed by Lee et al. in 1999. 10 Among these, the H/α-Wishart method proposed by Lee has been widely used because it has a more stable performance than the other algorithms. 11 However, because the algorithm uses the hard C-means framework, the clusters are divided in an arbitrary way, which makes the algorithm too sensitive to noise. Because the classification results still have difficulty meeting application requirements, there is still room for improvement in classification accuracy.
The fuzzy set concept can solve a problem with traditional set theory, which performs a rigid division of set elements. 12 In fuzzy set theory, an element can belong to several sets at the same time, with a certain degree of membership in each one. The introduction of fuzzy partitions can overcome the shortcoming that clustering results are too sensitive to isolated points. 13 A PolSAR image consists of the scattering echoes of various surface features in the observation area, 14 and an image pixel often contains a variety of basic scattering types, which means that simply classifying a pixel as a single type is not reasonable. At the same time, this is the main reason why the fuzzy set concept is a convenient classification approach for PolSAR images. Currently, many remote-sensing image classification methods make use of fuzzy set theory. Mao introduced fuzzy set theory into cerebellar model arithmetic computer (CMAC) neural networks and proposed a method that reflects the fuzziness of human cognition and the continuity of fuzzy CMAC neural networks. 15 The results show that the classification accuracy of this approach is significantly higher than that of the traditional maximum likelihood classification method. Liu et al. used semisupervised learning theory and the core theory of the fuzzy C-means (FCM) algorithm to improve classification accuracy. 16 Obviously, fuzzy theory can be applied to improve the accuracy of classification results in two main ways: it can be used directly to improve existing methods, or fuzzy theory or the FCM algorithm can be combined with a novel algorithm. 17 In recent years, some researchers have applied fuzzy theory to image classification and achieved good results for PolSAR images. [18][19][20] However, these methods directly use the FCM algorithm, which does not associate polarization parameters with fuzzy sets. Therefore, in this paper, the fuzzy concept is added to the H/α-Wishart algorithm to enable fuzzy clustering based on the Wishart distance and obtain more reasonable classification results. To verify the effectiveness of the proposed algorithm, three sets of PolSAR image data are experimentally analyzed.
The rest of the paper is organized as follows. Necessary background information and fundamental knowledge are provided in Sec. 2. Details of the proposed unsupervised classification algorithm are described in Sec. 3. Section 4 describes the remote-sensing datasets used, together with experimental results and discussion. The conclusions are presented in Sec. 5.
PolSAR Data
The basic form of PolSAR data is the Sinclair scattering matrix in the horizontal and vertical polarization basis, expressed by Eq. (1):

$$S = \begin{bmatrix} S_{XX} & S_{XY} \\ S_{YX} & S_{YY} \end{bmatrix}, \tag{1}$$

where $S_{XY}$ is the scattering matrix element of horizontal-vertical polarization. Using the complex Pauli spin matrix set, the matrix $S$ can be transformed into the three-dimensional Pauli vector of Eq. (2):

$$k_P = \frac{1}{\sqrt{2}}\,[\,S_{XX}+S_{YY} \quad S_{XX}-S_{YY} \quad 2S_{XY}\,]^{T}. \tag{2}$$

Then the coherency matrix $T = \langle k_P k_P^{H} \rangle$ can be obtained as shown in Eq. (3):

$$T = \begin{bmatrix} \langle |A|^2 \rangle & \langle A B^* \rangle & \langle A C^* \rangle \\ \langle A^* B \rangle & \langle |B|^2 \rangle & \langle B C^* \rangle \\ \langle A^* C \rangle & \langle B^* C \rangle & \langle |C|^2 \rangle \end{bmatrix}, \tag{3}$$

where $A = S_{XX} + S_{YY}$, $B = S_{XX} - S_{YY}$, and $C = S_{XY} = S_{YX}$. The superscript $T$ denotes the transpose, and $*$ is the complex conjugate.
Fuzzy Set Theory
Fuzzy set theory emerged in the twentieth century and transformed classic two-valued logic into a continuous logic over the closed interval [0, 1], which is more in line with the human brain's cognitive and reasoning processes. The main difference between traditional set theory and fuzzy set theory is the mechanism of element membership. In traditional set theory, there are only two membership states {yes, no}, quantitatively described as {1, 0}; that is, an element can either belong to or not belong to a set. In fuzzy set theory, the concept of membership is introduced, and an element can be assigned a grade of membership from 0 to 1. This expansion means that an element can belong to multiple sets at the same time, with a different degree of membership in each. Compared to traditional set theory, fuzzy sets smooth the translation between quantitative and qualitative representations.
H/α Classifier Based on Target Decomposition
In 1996, Cloude and Pottier proposed a target decomposition method based on a coherency matrix. 21 The coherency matrix $T$ is first decomposed into the sum of the products of eigenvalues and eigenvectors, as shown in Eq. (4):

$$T = \sum_{i=1}^{3} \lambda_i\, u_i u_i^{H}, \tag{4}$$

where $\lambda_i$ are the eigenvalues, $u_i$ are the eigenvectors, and the superscript $H$ denotes the conjugate transpose. Then the entropy $H$ and scattering angle $\alpha$ can be computed as in Eq. (5):

$$H = -\sum_{i=1}^{3} p_i \log_3 p_i, \qquad \alpha = \sum_{i=1}^{3} p_i \alpha_i, \qquad p_i = \frac{\lambda_i}{\sum_{j=1}^{3} \lambda_j}. \tag{5}$$

$H$ and $\alpha$ can be used to define a two-dimensional feature space. Cloude and Pottier proposed the classification boundaries of the H/α plane on the basis of a large number of experiments. 9 Using this approach, the PolSAR data can be divided into eight classes based on Fig. 1.
H/α-Wishart Algorithm
According to Goodman, 22 the coherency matrix $T$ obeys the complex Wishart distribution, and its probability density is as shown in Eq. (6):

$$p(\langle T \rangle) = \frac{n^{qn}\, |\langle T \rangle|^{\,n-q}\, \exp[-n \operatorname{Tr}(V^{-1} \langle T \rangle)]}{K(n,q)\, |V|^{\,n}}, \qquad K(n,q) = \pi^{q(q-1)/2}\, \Gamma(n) \cdots \Gamma(n-q+1), \tag{6}$$

where $V$ is the expectation of the coherency matrix $T$, $n$ is the number of looks, $K$ is a normalization coefficient, $\operatorname{Tr}$ represents the trace of the matrix, and $\Gamma$ is the gamma function; $q = 3$ represents the monostatic backscattering case, while $q = 4$ represents the bistatic case. Lee et al. proposed the H/α-Wishart algorithm based on the maximum likelihood decision criterion. 10 The Wishart distance between the coherency matrix $\langle T \rangle$ of a pixel and the cluster center of the target class $m$ can be expressed as in Eq. (7):

$$d(\langle T \rangle, V_m) = \ln |V_m| + \operatorname{Tr}(V_m^{-1} \langle T \rangle), \tag{7}$$

where $V_m$ is the mean of the coherency matrices of all the pixels belonging to class $m$. In the H/α-Wishart algorithm, a pixel is classified into class $m$ when its coherency matrix $T$ satisfies Eq. (8):

$$d(\langle T \rangle, V_m) \le d(\langle T \rangle, V_j), \qquad j = 1, \ldots, M,\ j \neq m. \tag{8}$$

Because the category of an element depends only on the minimum distance between this element and the center of every class, the H/α-Wishart algorithm uses the rigid C-means framework. This means that eight initial classes are obtained based on the two parameters $H$ and $\alpha$, and then iterative clustering is performed using the Wishart distance by means of the classification process shown in Fig. 2(a). As shown in the figure, the new cluster center is calculated by taking the average of the matrices $T$ of all pixels belonging to the corresponding class, as shown in Eq. (9):

$$W_j = \frac{\sum_{i=1}^{N} U_{ij}\, x_i}{\sum_{i=1}^{N} U_{ij}}, \tag{9}$$

where $N$ is the total number of pixels, $W_j$ is the cluster center of class $j$, and $x_i$ represents the coherency matrix $T$ of pixel $i$. When the $i$'th pixel belongs to class $j$, $U_{ij}$ takes on a value of 1; otherwise, it is 0. The end condition is that the number of pixels that change category between two generations is less than a given threshold value or that the number of iterations reaches a given threshold value.
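As a concrete illustration, the distance and assignment rule of Eqs. (7) and (8) can be sketched in a few lines; the 3 × 3 complex coherency matrices and class centers are assumed to be given as NumPy arrays, and this is only an illustrative sketch, not the authors' implementation.

```python
# Wishart distance and hard (minimum-distance) class assignment.
import numpy as np

def wishart_distance(T, V):
    """d(T, V) = ln|V| + Tr(V^{-1} T), following Eq. (7)."""
    _, logdet = np.linalg.slogdet(V)  # log|det V| (V is Hermitian pos. def.)
    return logdet + np.trace(np.linalg.inv(V) @ T).real

def assign(T, centers):
    """Return the index m minimizing the Wishart distance (Eq. (8))."""
    return int(np.argmin([wishart_distance(T, V) for V in centers]))
```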
Fuzzy Clustering Based on Wishart Distance
To solve the problem of inflexible clustering, an H/α-Wishart fuzzy clustering algorithm based on the Wishart distance is proposed in this paper, which allows pixels to belong to more than one class with a certain degree of membership in each. According to fuzzy set theory, the degree of membership of a point in a given class shall be 1 when the distance from this point to the class center is small enough, meaning that the point falls completely into this class and that its degree of membership in every other class must be 0. When no class satisfies this condition, the degree of membership of a given point shall be set according to the distance between that point and the center of every class; the farther away the point, the smaller the degree of membership. In addition, when a point completely belongs to a certain class, the sum of its memberships in every class is 1, and this condition shall still be satisfied when the point belongs to several classes. After introducing the concept of membership, the method of computing the class center is still that of Eq. (9), but the meaning of $U_{ij}$ has been expanded to be the degree of membership of pixel $i$ in class $j$, and its value range has changed from $\{0, 1\}$ to the interval $[0, 1]$. The improved classification process is shown in Fig. 2(b).
H/α-Fuzzy Wishart Classifier
Based on the concept described above, the H/α-fuzzy Wishart classifier was designed in this paper. The concrete implementation of the process is described below.
Image Preprocessing
Due to imaging system complexity and real-world imaging factors, PolSAR images usually contain much noise, but its influence can be reduced by filtering operations. In this research, a refined Lee filter with a 3 × 3 window was applied before the classification experiments.
Initialization
In this initial step, images are classified into eight categories to initialize the cluster centers with the H/α classifier. The initialization process is as follows:
1. Decompose the PolSAR data based on the Pauli decomposition and then work out the coherency matrix.
2. Compute $H$ and $\alpha$ based on the coherency matrix.
3. Divide the fully polarimetric SAR data into eight categories based on the H/α plane.
4. For each category, compute the cluster center according to Eq. (10):

$$c_j = \frac{\sum_{i=1}^{N} U_{ij}\, x_i}{\sum_{i=1}^{N} U_{ij}}, \tag{10}$$

where $N$ is the total number of pixels, $c_j$ is the cluster center of class $j$, and $x_i$ represents the coherency matrix of pixel $i$.
Fuzzification of Wishart Distance State Space
Based on the cluster centers calculated in Sec. 3.2, one of the key points of this paper, the design of the fuzzy membership functions (FMFs), can be addressed. To fuzzify the Wishart distance state space, let $U(x_i, c_j)$ represent the degree of fuzzy membership with which the $i$'th point belongs to the $j$'th class. The FMFs satisfy the following conditions over the entire state space, as in Eq. (11):

$$0 \le U(x_i, c_j) \le 1, \qquad \sum_{j=1}^{M} U(x_i, c_j) = 1, \tag{11}$$

where $M$ is the total number of clusters. For the $i$'th pixel, the Wishart distances to the $M$ clusters can be computed according to Eq. (7); let $d_W(x_i, c_j)$ represent the Wishart distance between the $i$'th pixel and the $j$'th cluster. Because of terrain, sensor type, the wavelength of the electromagnetic waves used for measurement, and various other factors, the values of the Wishart distance vary over a wide range. In fact, because the calculation of the Wishart distance involves a logarithmic operation, as can be seen in Eq. (7), the Wishart distance can even be negative. All of this prevents regular membership calculation and fuzzy partition. Therefore, to make the membership calculation feasible and to solve the problem of setting an appropriate unified threshold for the fuzzy partition of an arbitrary domain, a normalization of the Wishart distance for every pixel is performed as shown in Eq. (12):

$$d_n(x_i, c_j) = \frac{d_W(x_i, c_j) - \bar{d}_W(x_i)}{S_d}, \tag{12}$$

where a sample group is the $M$ Wishart distances from the $i$'th point to each cluster, $M$ is the total number of clusters, $d_n(x_i, c_j)$ is the normalized distance, $S_d$ is the variance of the sample group, and $\bar{d}_W(x_i)$ is the average of the Wishart distances from the $i$'th point to every cluster, namely the sample mean. Then the fuzzy membership $U(x_i, c_j)$ can be specified as follows. For the $i$'th point, there are two different conditions: totally belonging to one cluster or belonging to several clusters at the same time. When the normalized distance between this point and the $k$'th cluster center is small enough, the point shall totally belong to the $k$'th class, as shown by Eqs. (13) and (14):

$$U(x_i, c_k) = 1 \quad \text{if } d_n(x_i, c_k) \le -p_f, \tag{13}$$

$$U(x_i, c_j) = 0, \qquad j \neq k, \tag{14}$$

where $U(x_i, c_j)$ represents the degree of fuzzy membership with which the $i$'th point belongs to the $j$'th class and $p_f$ is a parameter introduced in the fuzzy function design to adjust the range of acceptance of the fuzzy operation. Otherwise, when the distances between the $i$'th point and every cluster center are all larger than $-p_f$, that is, when $d_n(x_i, c_j) > -p_f$ for all $j = 1, \ldots, M$, the point shall belong to several clusters, and the essential clusters are chosen as shown by Eqs. (15) and (16). There, the memberships are computed from $\operatorname{Min}[d_n(x_i, c_j), p_f]$, the minimum value of $d_n(x_i, c_j)$ and $p_f$, which ensures that a pixel far from a cluster center has no effect on the membership calculation; and $U(x_i, c_j) = 0$ when $d_n(x_i, c_j) > p_f$, meaning that the distance between the $i$'th point and the $j$'th cluster is too large, so the membership of the $i$'th point in the $j$'th cluster is set to 0.
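A hedged sketch of this fuzzification step is given below. Because Eqs. (15) and (16) are only partially recoverable from the text, the membership form used here, weights proportional to $p_f - \operatorname{Min}[d_n, p_f]$ renormalized to sum to 1, is one plausible reading rather than the authors' exact formula; likewise, a standard deviation is used for the normalization, where the paper describes $S_d$ as the variance of the sample group.

```python
# Fuzzy memberships from the Wishart distances of one pixel to M clusters.
import numpy as np

def fuzzy_memberships(d_w, p_f=1.0):
    """d_w: array of Wishart distances from one pixel to the M centers."""
    d_n = (d_w - d_w.mean()) / d_w.std()   # normalization, cf. Eq. (12)
    U = np.zeros_like(d_n)
    if (d_n <= -p_f).any():                # Eqs. (13)-(14): full membership
        U[np.argmin(d_n)] = 1.0
        return U
    w = p_f - np.minimum(d_n, p_f)         # zero when d_n > p_f (assumed form)
    return w / w.sum()                     # Eq. (11): memberships sum to 1
```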
Update of Cluster Center and Iterative Refinement of Clustering
After the fuzzy membership degrees $U(x_i, c_j) \in [0, 1]$ have been determined in Sec. 3.3, the new cluster center can be computed based on Eq. (17):

$$W_j = \frac{\sum_{i=1}^{N} U(x_i, c_j)\, x_i}{\sum_{i=1}^{N} U(x_i, c_j)}, \tag{17}$$

where $N$ is the total number of pixels, $W_j$ is the new cluster center of class $j$, and $x_i$ represents the coherency matrix $T$ of pixel $i$. Comparing Eq. (17) with Eq. (9), the major improvement of the fuzzy clustering algorithm is that the weighting parameter $U_{ij}$ in Eq. (9) must be 0 or 1, while $U(x_i, c_j)$ in Eq. (17) can take any value from 0 to 1. With the cluster centers updated, the category of every pixel can be reassigned based on Eqs. (7) and (8), and then Secs. 3.3 and 3.4 are iterated until the terminating condition is met.
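The membership-weighted update of Eq. (17) can be written compactly as below; the array shapes (N pixels, M classes, 3 × 3 coherency matrices) are assumptions chosen to match the notation above.

```python
# Membership-weighted cluster-center update of Eq. (17).
import numpy as np

def update_centers(T, U):
    """T: (N, 3, 3) coherency matrices; U: (N, M) memberships."""
    weighted_sums = np.einsum("nm,nij->mij", U, T)        # sum_i U_ij * x_i
    return weighted_sums / U.sum(axis=0)[:, None, None]   # / sum_i U_ij
```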
Stopping Condition
The stopping condition differs between applications. One option is to set a fixed number of iterations; another is to set a fixed threshold on the number of pixels that change categories between two consecutive iterations. When the maximum iteration number is reached, or the change rate is less than the threshold, the classification result is output; otherwise, the algorithm returns to the step of Sec. 3.3. Finally, the proposed algorithm outputs a classification image.
Parameter Adjustment
Because of variations in the data distribution, the parameter $p_f$ is introduced in the fuzzy function design to adjust the range of acceptance of the fuzzy operation. When the data separate well and are easy to cluster, a small $p_f$ can be used. When the data have a disorderly distribution and are difficult to classify, a larger $p_f$ can be used to expand the range of acceptance of the fuzzy operation and improve algorithm performance. Figure 3 shows how the parameter $p_f$ affects the fuzzification of two categories in a simple case.
Application and Discussion
To verify the effectiveness of the proposed method, experiments were done using three datasets. The first consists of full-polarimetric SAR data for San Francisco Bay, California, obtained from NASA-JPL AIRSAR in 1992. The size of this experimental dataset is 1024 × 900 pixels, and the region includes urban areas, ocean, vegetation, the Golden Gate Bridge, and other targets. The second consists of L-band PolSAR data for the Flevoland region from the NASA/JPL AIRSAR sensor in 1989, with an azimuth resolution of 12.10 m and a range resolution of 6.6 m. The feature types in this experimental area are relatively simple; most are croplands of rectangular shape, including grassland, potatoes, alfalfa, wheat, soybeans, sugar beets, peas, and other target surface features. The size of this experimental dataset is 444 × 471 pixels. The third consists of X-band full-polarimetric high-resolution SAR data for LingShui town in Hainan Province, China, from 2010. The original size of this dataset was 5001 × 7893 pixels, and the region includes airport runways, urban areas, pools, and various kinds of croplands, such as red peppers, betel palms, mangoes, papayas, and rice paddies. Because the original dataset was too large for analysis, a subarea sized 806 × 573 was selected for the experiments. Results were compared between the proposed method and the H/α-Wishart algorithm.
Experiment 1: L-Band PolSAR Image of San Francisco Bay
The first dataset and the classification results are shown in Fig. 4. Several basic features, such as ocean, beach, mountains, vegetation, urban areas, and noise in the upper-right corner, can be clearly seen in Fig. 4(a). The water exhibits low-entropy surface scattering, the bare soil medium-entropy surface scattering, the vegetation medium-entropy vegetation scattering, and the urban areas high-entropy volume scattering, so these major ground features of San Francisco Bay are readily distinguishable. Experiment 1 checks the performance of the proposed algorithm in classifying ground features with different kinds of scattering types.
According to the types indicated in Fig. 4(a), in the classification results of the H/α-Wishart algorithm shown in Fig. 4(b), the first zone corresponds to noise, the second and third zones correspond to ocean, the fourth, fifth, and sixth zones correspond to urban areas, the seventh zone corresponds to vegetation, and the eighth to bare land. The algorithm effectively distinguishes several basic features of the marine, urban, vegetation, and bare-land environments, but some yellow spots representing bare land are evident in the blue zone representing the ocean, which is obviously a misclassification. Moreover, the urban areas are divided into three classes, which seems unreasonable and overly complex because the separate zones have no distinguishing characteristics. Figure 4(c) shows the classification results for H/α-Wishart fuzzy clustering. Similar to the types indicated in Fig. 4(a), the first zone corresponds to noise, the second and third zones correspond to ocean, the fourth and fifth zones correspond to urban areas, and the sixth, seventh, and eighth zones all correspond to bare land. Comparing Fig. 4(c) with Fig. 4(b), it can be seen that in the results of the H/α-Wishart fuzzy clustering method, the spots previously misclassified in the ocean zone are now correctly classified, and urban areas and vegetation are reasonably divided into two classes. Furthermore, there is a large difference between the classification results of the H/α-Wishart and H/α-Wishart fuzzy clustering algorithms in the oval shown in Fig. 5.
In the area in the upper left of Fig. 4(a) marked by a red box, behind a mountain, there is a shadow area in the image: the higher terrain blocks parts of the back slope, which leads to less effective reception of information. The shaded area is similar to a calm water surface in its polarization characteristics; therefore, in the H/α-Wishart classification result, this section is almost completely classified as ocean. In the H/α-Wishart fuzzy clustering result, however, the algorithm is able to find a more reasonable cluster center, which leads to most of those pixels being classified, more reasonably, as bare land. Experimental results show that the classification accuracy of the improved algorithm proposed in this paper is greatly increased over that of the H/α-Wishart method.
To validate the improvements in fuzzy clustering achieved by recalculating the cluster centers, the movement paths of the cluster centers were tracked. Figures 6(a) and 6(b) show the movement paths of the cluster centers as calculated by the H/α-Wishart and the H/α-Wishart fuzzy clustering algorithms. The background is the data distribution in the H/α plane, the transition from red to dark blue represents the density change from high to low, and the hollow points indicate the final positions of the cluster centers. Combining the cluster center locations with the data distribution in the H/α plane reflects the effectiveness and reasonableness of the cluster centers. Figure 6(a) shows that five of the cluster centers calculated by the H/α-Wishart algorithm were located in the medium-entropy multiple-scattering region, with the first and fifth cluster centers slightly distant from the data-intensive region. In Fig. 6(b), the cluster centers calculated by the proposed method eventually moved to the data-intensive region and distributed themselves evenly. The classification results indicate that the clustering density of the proposed method is more reasonable, while that of the H/α-Wishart algorithm is uneven: the urban areas were divided into three classes, and the gray part contained only a few pixels, which seemed insignificant.
Experiment 2: L-Band PolSAR Image of Flevoland
To further verify the validity of the classification algorithm, the second set of PolSAR data was used for another set of experiments, and a quantitative analysis was performed using a confusion matrix. Most feature types in this experimental area are croplands, including grassland, potatoes, alfalfa, wheat, soybeans, sugar beets, peas, and other target ground features, which usually produce medium-entropy vegetation scattering. Experiment 2 checks the performance of the proposed algorithm in classifying different ground features with similar scattering types. Figure 7(a) shows an RGB composite image of the region; the red, green, and blue components were obtained from the three parameters |HH − VV|, |HV|, and |HH + VV| derived from the Pauli decomposition.
Comparing Figs. 7(b) and 7(c), the classification result from the fuzzy clustering method is much smoother than that from the other method. In the results of the H/α-Wishart algorithm, some areas, such as peas, were not distinguished, while some other types, such as lawns and potatoes, were misclassified into several classes. However, the majority of surface features, such as peas and beets, can be identified correctly in the H/α-Wishart fuzzy clustering result. The stretch of road at the bottom of the area and the edges of every field are both clearer in the fuzzy clustering results than in those of the H/α-Wishart algorithm. To evaluate the classification accuracy, Fig. 8 shows the reference image of the real surface features and the six major class samples from the image that were selected to compute the confusion matrices.
As can be seen from Table 1, the accuracy of the H/α-Wishart fuzzy clustering algorithm is greater than that of the H/α-Wishart method, in terms of both overall accuracy and the Kappa coefficient. For some categories, such as rape, bare land, and lawn, the mapping accuracy and precision of the two methods are both >90%. For most classes, the accuracy of the H/α-Wishart fuzzy clustering algorithm is a little better because the fuzzy operation leaves fewer misclassified points in each zone.
Experiment 3: X-Band High-Resolution PolSAR Image of LingShui, Hainan
The third dataset was collected from the farmland surrounding LingShui. Fields and roads can be easily distinguished in Fig. 9(a), where the green lines mark areas where crops grow lushly and almost completely cover the land, appearing as massive objects; the blue lines mark areas where the plants are small and the soil is bare, reflected in the strip shapes in the image; and the black blocks are pools. Experiment 3 checks the performance of the proposed algorithm in classifying high-resolution ground features. Figure 9(b) shows that the H/α-Wishart method was less effective in generating accurate clusters. In the vegetation zone, categories are mixed, creating many small spots; as many as six categories can be seen in a small region. There was obvious misclassification in the region indicated by black circles, where parts of roads, which generate a low-intensity response, were mistakenly classified as water. The H/α-Wishart fuzzy clustering algorithm improved the classification results and yielded a clearer clustering in which few of the pixels representing roads were classified as water. Figure 10 clearly shows that, compared with the H/α-Wishart method, the improved algorithm did a better job of distinguishing shadows from water.
The region in Fig. 10(a) bounded by red lines [shown in Fig. 9(a) in the corresponding red rectangle] is a grove of mango trees. The synthetic map shows that the trees above the ground create shadows, as shown in Fig. 10(a), where the echo intensity is weak in the bottom-right portion of the mango trees, colored black. In the results of the H/α-Wishart method, most of the shadows were mistakenly classified as water, while in the results of the H/α-Wishart fuzzy clustering algorithm, thanks to the fuzzy concept, the cluster centers moved to more reasonable positions, causing the pixels in that region to be reclassified, with only a small group remaining misclassified.
To further evaluate the performance of the new algorithm, the classification results for five typical kinds of land cover according to Fig. 11 are summarized in Table 2. In the results of the H/α-Wishart algorithm, the grassy-plant and woody-plant classes are both split into several classes, which appear rather messy and result in low accuracy; most of the bare soil is classified as plants or paths, with only 35.88% in the correct category. In the results of the H/α-Wishart fuzzy clustering algorithm, >80% of the grassy-plant and woody-plant pixels are in the correct categories, which shows that fuzzy logic can lead to a softer clustering and can merge small groups of isolated pixels into a larger class. While the accuracy of the path class is lower than in the H/α-Wishart result, this seems to be the price of improving the classification accuracy of water and shadow, because the algorithm used more categories to differentiate water, bare soil, paths in fields, and shadows.
Integrating the results of these three sets of experiments shows that the improved algorithm proposed in this paper can effectively improve classification accuracy and achieve a reasonable classification of shadows and water.
Conclusions
To solve the problem of inflexible clustering in the hard C-means clustering model used by the H/α-Wishart algorithm, fuzzy concepts, which can blur pixel class boundaries, have been introduced into the proposed H/α-Wishart fuzzy clustering algorithm. Three kinds of real-world PolSAR images were used in the classification experiments. The results show that the fuzzy clustering algorithm based on fuzzy set theory performed better in terms of classification accuracy than the H/α-Wishart algorithm and can effectively solve the problem of misclassifying shadows and water.
In the algorithm proposed in this paper, further research remains to be done on the automatic setting of the fuzzy parameter and the exact distribution of the normalized data. At the same time, it would also be worthwhile to explore why improving the positions of the cluster centers by fuzzy clustering can correct the misclassification of shadows and water.
"Computer Science"
] |
A Comparative Study Between Human Translation and Machine Translation as an Interdisciplinary Research
The purpose of this research is to determine the difference between human translation and machine translation. The data sources for this article come from several references: books, journals, and articles related to the topic under discussion. The research findings are that human translation is more effective and easier to understand than machine translation, which can only translate literal sentences or words without understanding the intent and purpose of the target language. Machine translation mainly focuses on the source language without paying attention to the target language. In terms of semantic meaning, machine translation is often far from the true meaning when compared with human translation. Machine translation can therefore be said to be a literal translation that applies only in general terms, so the role of human translation remains the larger one in producing correct translations.
Introduction
In the subject of translation, many students and indeed many people are still confused about the difference between human and machine translation, their advantages, and their weaknesses. This is a serious problem that should be discussed to establish the correct answer. It is also necessary to provide this information so that people can make a good choice and obtain good results, especially when doing assignments and business. A variety of definitions exist for machine-aided translation. Between them, these definitions place machine-aided translation on a scale reaching from human translation in the proper sense of the word to fully automatic machine translation (Baker, 2015). Turning to human and machine translation, the mechanization of translation has been one of humanity's oldest dreams. In the twentieth century it became a reality, in the form of computer programs capable of translating a wide variety of texts from one natural language into another. But, as ever, reality is not perfect. There are no 'translating machines' which, at the touch of a few buttons, can take any text in any language and produce a perfect translation in any other language without human intervention or assistance. That is an ideal for the distant future, if it is even achievable in principle, which many doubt.
Machine translation was a matter of serious speculation long before there were computers to apply to it; it was one of the first major problems to which digital computers were turned; and it has been a subject of lively, sometimes acrimonious, debate ever since. Machine translation has claimed attention from some of the keenest minds in linguistics, philosophy, computer science, and mathematics. At the same time it has always attracted the lunatic fringe, and continues to do so today.
Human and machine translation both play an important role in the field of translation. They help the translator produce a good result when rendering a source text into a target text. A good translator should therefore know when machine translation can be used and when human translation is the proper choice. Thus, this research tries to find out the differences between human and machine translation.
A translation is a piece of writing or speech that has been translated from a different language (Collin, 2006). This means that translation involves a process of transferring language from the source language (SL) into the target language (TL). Both the source-language and target-language texts should be understood by the translator, especially their lexical elements and grammatical structures.
According to Newmark, translation theory is concerned mainly with determining appropriate translation methods for the widest possible range of texts or text categories. It also provides a framework of principles, restricted rules, and hints for translating texts and criticizing translations, and a background for problem solving (Newmark, 2000). A rigorous theory of translation would also include something like a practical evaluation procedure with specific criteria (Graham, 1981). A good survey of the theories of translation is perhaps best furnished by E. Nida, who avers that, because translation is an activity involving language, there is a sense in which any and all theories of translation are linguistic (Nida, 1969).
Translation is the activity of changing one language into another while trying to find equivalent ideas between the languages. When translating, the translator not only transfers the message but also pays attention to style, in both expression and language.
Method of Research
This research used a descriptive qualitative approach, in which qualitative description is used to describe natural phenomena or human artifacts. The research investigates forms, activities, characteristics, changes, relationships, similarities, and differences with other phenomena (Sukmadinata, 2009).
The data of this research take the form of words, phrases, clauses, and sentences from texts translated by human and machine translation, collected using a data-collection technique scoped to the research. After collection, the data were analyzed descriptively using a three-step technique: data condensation, data display, and conclusion drawing/verification.
Findings
Machine translation does not render the source text perfectly into the target text. The point is that the translated text still bears many of the traits characterizing the language of the source text; therefore, much should be said about how the use of language, as well as the meaning, is violated. At the same time, some focus must be placed on the extent to which human translation has succeeded in transforming the source text into the target text, and on whether the translated text has the same effect as the source text.
Machine translation is a literal translation, or rather a word-for-word translation; the reader can easily notice that there is no flexibility in machine translation, in that each word in the source text has been substituted, in order, by another in the machine translation.
The human translator is capable of avoiding what has been criticized in machine translation. The human version respects structure, and its focus is on both the source text, in an act of comprehension, and the target text, in an act of producing a well-formed translation. The human translator's flexibility allows them to move from one language to another while bearing in mind the differences in structure between languages.
Machine Translation
The idea of machine translation may be traced back to the 17th century. In 1629, René Descartes proposed a universal language, with equivalent ideas in different tongues sharing one symbol. The field of "machine translation" appeared in Warren Weaver's Memorandum on Translation (1949). The first researcher in the field, Yehoshua Bar-Hillel, began his research at MIT (1951). A Georgetown University MT research team followed (1951), with a public demonstration of its Georgetown-IBM experiment system in 1954. MT research programs appeared in Japan and Russia (1955), and the first MT conference was held in London (1956). Researchers continued to join the field as the Association for Machine Translation and Computational Linguistics was formed in the U.S. (1962) and the National Academy of Sciences formed the Automatic Language Processing Advisory Committee (ALPAC) to study MT (1964).
Real progress was much slower, however, and after the ALPAC report (1966), which found that the ten-year-long research had failed to fulfill expectations, funding was greatly reduced. According to a 1972 report by the Director of Defense Research and Engineering (DDR&E), the feasibility of large-scale MT was reestablished by the success of the Logos MT system in translating military manuals into Vietnamese during that conflict.
The Emergence of Machine Translation and Its Evolution
The competition towards establishing more business with different parts of the world incited countries with advanced technology to look for easy and quick means of communication. Hence there emerged a type of translation known as Machine Translation, in which the process of translation is carried out by machines. The specific date when this type of translation emerged, as stated in Olivia Craciunescu's article "Machine Translation and Computer-Assisted Translation: a New Way of Translating", is believed to be "the beginnings of the Cold War… in the 1950s competition between the United States and the Soviet Union".
Machine Translation, as a newly emerged discipline in the field of translation studies, has come to fill the void created by the small number of good and acknowledged translators. It is an advantageous mode of translation in that it saves both time and money; a large quantity of articles and documents can easily be translated in a short time at low cost.
The fact that machine translation is carried out by machines does not mean that humans are totally absent from the process of translation; there is human intervention, as in the case of Computer-Assisted Translation and in other translating machine programs that are limited in terms of the vocabulary provided by their programmed dictionaries. In this regard, the role of human translators is manifested in what are known as the processes of pre-editing the intended source text to be translated and post-editing the translated version provided by the machine translation.
Approaches
Bernard Vauquois' pyramid shows the comparative depths of intermediary representation, with interlingual machine translation at the peak, followed by transfer-based and then direct translation. Machine translation can use a method based on linguistic rules, which means that words are translated in a linguistic way: the most suitable words of the target language replace the ones in the source language. It is often argued that the success of machine translation requires the problem of natural-language understanding to be solved first.
Generally, rule-based methods parse a text, usually creating an intermediary, symbolic representation, from which the text in the target language is generated.
According to the nature of the intermediary representation, an approach is described as interlingual machine translation or transfer-based machine translation. These methods require extensive lexicons with morphological, syntactic, and semantic information, and large sets of rules.
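The failure mode of the simplest "direct" approach is easy to demonstrate. The toy sketch below performs pure word-for-word substitution with an invented English-to-Indonesian lexicon; real rule-based systems add morphological analysis and reordering rules on top of this, but the mechanical, source-ordered substitution is exactly what the paper later criticizes.

```python
# Toy lexicon, invented for illustration only (not a real MT dictionary).
LEXICON = {"he": "dia", "did": "", "not": "tidak", "look": "melihat",
           "up": "ke atas", "from": "dari", "his": "", "desk": "mejanya",
           "when": "ketika", "i": "saya", "entered": "masuk"}

def word_for_word(sentence: str) -> str:
    tokens = sentence.lower().rstrip(".").split()
    out = [LEXICON.get(t, t) for t in tokens]   # in-order substitution
    return " ".join(w for w in out if w)        # source word order is kept

print(word_for_word("He did not look up from his desk when I entered."))
```

Every output word is dictated by the source word in the same position, so any construction that Indonesian orders differently from English comes out wrong.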
Given enough data, machine translation programs often work well enough for a native speaker of one language to get the approximate meaning of what was written by a native speaker of the other. The difficulty is getting enough data of the right kind to support the particular method. For example, the large multilingual corpus of data needed for statistical methods to work is not necessary for the grammar-based methods. But the grammar-based methods, in turn, need a skilled linguist to carefully design the grammar that they use. To translate between closely related languages, the technique referred to as rule-based machine translation may be used.
Human Translation
Any attempt to replace human translation totally with machine translation would certainly face failure for a simple reason: no machine translation is capable of interpretation. For instance, only the human translator is able to interpret certain cultural components that may exist in the source text and that cannot be rendered in equivalent terms, in the way automatic translation does, into the language of the target text. In addition, it is widely agreed that one of the most difficult tasks in the act of translation is keeping the same effect left by the source text in the target text. Automatic translation, in this regard, has proved its weakness most of the time when compared with human translation. The human translator is the only one in a position to understand the different cultural, linguistic and semantic factors that contribute to reproducing, in the target text, the effect left by the source text.
It is an undeniable fact that automatic translation is regarded as a tool for producing a great number of translated texts quickly; nevertheless, the quality of the translation is still much debated. Automatic translation, for instance, cannot usually provide a definite translation for words that bear different vowel forms, such as the Arabic term /kotob/, which means "books" in English. In many translation programs, when translating from Arabic into English, the term is confused with the other Arabic term /kataba/, which means the verb "to write".
On the other hand, no human translator would make the same mistake, given their ability to read words with different diacritic marks or vowels. In some cases, automatic translation cannot even provide equivalent terms in the target language, leaving them as they are in the source text. This part of the paper has been dedicated mainly to demonstrating some of the general differences between automatic translation and human translation, which make the latter far preferable to the former. The human translation process may be described as decoding the meaning of the source text and re-encoding this meaning in the target language.
Comparison Between Human and Machine Translation
General-use machine-translation engines, like Google Translate, tend to give very literal dictionary translations, and in the case of translating non-Latin alphabets like Arabic, Cyrillic or Chinese into English, can often return complete nonsense.
But some sophisticated translation companies have begun to offer machine-translation engines that are trained by human translators.
In these instances, the translation engine is "fed" with as much of a business's professionally translated content as possible for all of its different language markets, so that the machine begins to recognise the terminology that is particular to its sector and to each of its different markets. To give a very broad example, if a business is in fashion, a machine translation tool designed for English to French would recognise that a specific type of blazer or jacket should be translated as a "smoking jacket", not with the dictionary translation of "veste".
For high-volume, low-stakes translations, it might be sufficient simply to use the machine-translated content (if a few inaccuracies aren't likely to cause major trouble). If a business needs a higher-quality translation, though, the most cost-effective option is post-edited machine translation. This is where the content is fed through the machine-translation engine and then checked afterwards by a human translator to ensure there are no errors, and that the content is correctly localised for its intended region (no jarring cultural references, language and spelling choices).
The professional translator will then feed any adjustments they make back into the translation engine, so it gets more efficient and knowledgeable the more it gets used.
Making multilingual easy
Managing translation and multilingual content by hand can be extremely time-consuming: emailing back and forth, checking content, debating over changes, and so on.
And with an ecommerce website, there's a lot of content to get through, from carefully tailored landing pages through to a constantly changing product catalogue, client communications, front-page updates, and so forth.
But businesses can eliminate that administration time by using an API (an application programming interface) to connect their website with professional human translators and intelligent machine-translation engines, so they can quickly and efficiently translate their website content from popular ecommerce platforms like Magento into different languages. The API connects a site's platform directly with the translation system and allows businesses to set up rules as to what content gets sent automatically for translation and what gets sent individually.
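A minimal sketch of such a connection is shown below. The endpoint, payload shape, and field names are all hypothetical, since the passage names no particular vendor or API; it only illustrates pushing one piece of site content to a translation service over HTTP.

```python
import json
from urllib import request

# Hypothetical endpoint: the article names no real vendor, so this URL
# and the payload fields are illustrative assumptions, not a real API.
API_URL = "https://translation.example.com/v1/jobs"

def submit_for_translation(content_id, text, source="en", target="fr",
                           post_edit=True):
    """Send one piece of site content for (post-edited) machine translation."""
    payload = {"id": content_id, "text": text, "source": source,
               "target": target, "human_post_edit": post_edit}
    req = request.Request(API_URL, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:   # returns the created job record
        return json.load(resp)
```

In a real integration, the platform would call something like this automatically whenever the rules mark a page or product description as translatable.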
In general, the art of translation and translation services don't just depend on word-for-word replacement of a source language text with a target language equivalent. If that were the case, then human translation would be rendered obsolete by machine translation. At the moment though, all professional translation is best done with human translation and the occasional machine translation service.
The Shortcomings of Machine Translation
To illustrate, a computer translation assistant once automatically translated an article about the former First Lady of the United States, Laura Bush, into French.
Although computer translators are programmed to discern certain expressions and figures of speech, each and every instance of "Laura Bush" found in the article ended up being translated as "le buisson de Laura". More to the point, "Bush" was treated by the machine as a common noun instead of a family name, and "buisson" is incidentally French slang for "vagina" to boot.
As long as automatic machine translators lack self-awareness or insight equal to that of a normal human being, human translation will always be needed. At any rate, let's now take a good look at what a high-grade, top-notch professional translation really looks like. To reiterate, it's important to view a translator as an expert craftsman: a linguist, a specialist, and a wordsmith all in one, working across two or more different languages.
What about Human Translation?
So does this spell imminent doom for human multi-linguists? Not quite yet. As mentioned above, for quality translations where errors are likely to cause problems, such as product descriptions, at the very least businesses will want a human translator to post-edit the machine translations, to ensure accuracy.
The US Department of Labor put it best in a 2014 report, writing, "It is seldom, if ever, sufficient to use machine translation without having a human who is trained in translation available to review and correct the translation to ensure that it is conveying the intended message." More importantly, for carefully tailored marketing content, such as a landing page, businesses will definitely need a professional human translator. They should not only be a native speaker of the target language, but also experienced in that business sector, with an in-depth knowledge of marketing and copywriting. This will ensure they get all the colloquialisms and cultural references correct, use the right terminology, and accurately capture the styling and tone of the finely crafted marketing copy.
Businesses will also want to ensure that a native-language search specialist optimises their landing pages for a good result. Then they can be confident that their various pages will perform well with the keywords that are the most used and most relevant in that language and for that region. So no, the machines aren't taking jobs just yet, but they're certainly winning the race in translation.
The Advantages of Human Translation
Even though translators can hardly be compared to, say, writers or journalists when it comes to making stories and articles from scratch, they are still considered experts in their field because of the way they hone a source text to fit a certain audience. In those terms, translators can be compared to editors who constantly shape, mold, and perfect a written piece for better public consumption.
To illustrate, here's the typical way a translator goes about his business: once he has finished a draft of his translation, he checks whether or not his work contains any inconsistencies, misunderstandings, gaffes, and the like, via constant and deliberate proofreading. From there, the translator rewrites his proofread draft so as to hide the marks of translation. Doing so helps make the end product seem less like the result of a translation service and more like an original document.
Comparing Machine and Human Text Translation
In an attempt to shed light on the major practical differences between machine translation and human translation, the paper provides the following text to be translated by the two types of translation. The text is an extract written in English, taken from L.G. Alexander's book. The focus is on depicting the semantic and pragmatic differences manifested in the translated versions. The translation is from English into Indonesian.
The source text:
GOOD NEWS
The secretary told me that Mr. Harmsworth would see me. I felt very nervous when I went into his office. He did not look up from his desk when I entered. After I had sat down, he said that business was very bad. He told me that the firm could not afford to pay such large salaries. Twenty people had already left. I knew that my turn had come.

From the human translation: 'Tuan Harmsworth,' saya ucapkan dengan nada yang pelan ("'Mr. Harmsworth,' I said in a low voice").
By: L.G. Alexander. It is quite obvious, from a first reading of each translation, that machine translation does not render the source text perfectly into the target text: the translated text still bears many of the traits characterizing the language of the source text, so much can be said about how both the use of language and the meaning are violated, while the human translation succeeds far better in transforming the source text into a target text with the same effect.
The use of language
Violating the use of language is one of the main deficiencies that machine translation suffers from. For example, we may compare the two translations of the following source text: "He did not look up from his desk when I entered."
The misuse of language, which is most manifest in machine translation, is mainly due to the literal nature of the translation. In the above example, the machine translation is a literal, word-for-word translation; the reader can easily notice that there is no flexibility in the machine translation, in that each word in the source text has been substituted, in order, by another in the machine translation. Thus, it becomes clear that machine translation is a translation whose focus is the source text rather than the target text. Word order is respected only with regard to the source text; as far as the target text is concerned, no importance is given to its own word order, and the way words are linked simply mirrors the way they are linked in the source text.
Although the meaning can be comprehensible, the structures of languages are different and should be respected for the sake of producing a well-formed translation in the target language. The inability of machine translation to produce a well-structured text is due to its focus, as stated by Olivia Craciunescu, on "comprehension" and not on "the production of a perfect target text".
As far as human translation is concerned, the above example clearly reveals how the human translator is capable of avoiding what has been criticized in the machine translation: the human version respects structure, and its focus is on both the source text, in an act of comprehension, and the target text, in an act of producing a well-formed translation, with the flexibility to move from one language to another while bearing the structural differences between them in mind.
Violation of meaning
No one can deny that the main rationale behind any translation is to transfer, as much as possible, the meaning intended by the source text's writer into the target text. Yet in machine translation this is not always the case: sometimes the achieved meaning is ambiguous or distorted and becomes difficult to grasp, as in the following example. Source text: "Twenty people had already left." Human translation: "Dua puluh karyawan telah di-PHK."
The rendering of "left" in the machine translation is "telah meninggalkan", while in the human translation it is "di-PHK" (laid off). The machine translation's choice of meaning is quite unfit: "telah meninggalkan" does not suit the meaning of the sentence, and the result is ambiguous. This is mainly, as stated before, because machine translation focuses on the language of the source text, in this case English, which is different from Indonesian.
Conclusion
Generally speaking, since it was first acknowledged as an academic discipline, translation studies has seen the emergence of new methods of translation, including so-called Machine Translation. However, its emergence was not at the expense of Human Translation, for the latter proved to be the only one capable of translating not only by substituting words for words, like Machine Translation, but also by respecting linguistic, semantic and, more importantly, cultural differences between languages. This paper has been an attempt to draw a distinction between Machine Translation and Human Translation, shedding light on the different characteristics of each. The focus has been on depicting some of the factors that render Human Translation more effective and flexible in comparison with Machine Translation.
Thus, for the sake of illustration, a practical text has been provided and translated by both Machine Translation and Human Translation.
Entanglement entropy along a massless renormalisation flow: the tricritical to critical Ising crossover
We study the Rényi entanglement entropies along the massless renormalisation group flow that connects the tricritical and critical Ising field theories. Similarly to the massive integrable field theories, we derive a set of bootstrap equations, from which we can analytically calculate the twist field form factors in a recursive way. Additionally, we also obtain them as a non-trivial roaming limit of the sinh-Gordon theory. The Rényi entanglement entropies are then obtained as expansions in terms of the form factors of these branch point twist fields. We find that the form factor expansion of the entanglement entropy along the flow organises into two different kinds of terms: those that couple particles of the same chirality, which reproduce the entropy of the infrared Ising theory, and those that couple particles of different chirality, which provide the ultraviolet contributions. The massless flow under study possesses a global $\mathbb{Z}_2$ spin-flip symmetry. We further consider the composite twist fields associated to this group, which enter the study of the symmetry resolution of the entanglement. We derive analytical expressions for their form factors both from the bootstrap equations and from the roaming limit of the sinh-Gordon theory.
Introduction
Our understanding of many-body quantum systems, both at and out of equilibrium, has been dramatically boosted in recent decades. An important part of this progress is due to the research carried out on entanglement in extended quantum systems. Being the fundamental feature of quantum mechanics, entanglement is responsible for a plethora of quantum phenomena, novel phases of matter, and collective effects [1][2][3][4]. Different quantities have been introduced to characterise and measure the amount of entanglement. The most prominent ones are the Rényi entanglement entropies. If we consider an extended quantum system in a pure state |Ψ⟩ and take a spatial bipartition into subsystems A and B, then the Rényi entanglement entropies are defined as
$$S_n(\rho_A) = \frac{1}{1-n}\,\log \mathrm{Tr}(\rho_A^n), \qquad (1)$$
where ρ_A is the reduced density matrix that describes the state of subsystem A, obtained by taking the partial trace over the complementary subsystem B, ρ_A = Tr_B(|Ψ⟩⟨Ψ|). In the limit n → 1, Eq. (1) gives the von Neumann entanglement entropy S(ρ_A) = −Tr(ρ_A log ρ_A).
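For a concrete handle on Eq. (1), the snippet below evaluates S_n from the eigenvalues of a given reduced density matrix. It is a generic numerical illustration only, separate from the field-theory computation developed in the rest of the paper.

```python
import numpy as np

def renyi_entropy(rho_A, n):
    """Rényi entropy S_n = log(Tr rho_A^n) / (1 - n); n -> 1 gives von Neumann."""
    lam = np.linalg.eigvalsh(rho_A)
    lam = lam[lam > 1e-12]                    # drop numerical zeros
    if abs(n - 1.0) < 1e-9:                   # von Neumann limit
        return float(-(lam * np.log(lam)).sum())
    return float(np.log((lam ** n).sum()) / (1.0 - n))

# Tracing one qubit out of a two-qubit Bell state leaves rho_A = I/2,
# so every Rényi entropy equals log 2.
rho_A = np.eye(2) / 2
print(renyi_entropy(rho_A, 2), np.log(2))
```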
The Rényi entanglement entropies are particularly interesting quantities to study in quantum field theories (QFT). In the ground state of two-dimensional conformal field theories (CFT), they grow logarithmically with the subsystem size, violating the area law, and the coefficient of the logarithm is proportional to the central charge of the theory [5,6]. In general, using the path integral approach, the ground state Rényi entanglement entropies can be cast as partition functions of the field theory on a Riemann surface. Alternatively, they can also be computed as correlation functions of branch point twist fields inserted at the end-points of subsystem A, which are spinless primaries in CFTs [6,7].
In the renormalisation group picture of QFTs as perturbed CFTs, an important result in two dimensions is the Zamolodchikov c-theorem [29,30], which describes the loss of information about short-distance degrees of freedom along the flow. Employing the ∆-sum rule [31], Refs. [14,19] found a function along the renormalisation flow, associated to the branch point twist fields, with the same qualitative behaviour as the Zamolodchikov c-function. This ∆-function monotonically decreases with the distance and equals the scaling dimension of the twist fields at the IR and UV fixed points of the flow. A different c-function can also be constructed directly from the entanglement entropy [32].
In this paper, we investigate the ground state Rényi entanglement entropies in the massless QFT associated to the renormalisation group flow that connects the tricritical and critical Ising theories by perturbing the former with a relevant field. This theory is the simplest member of the well-known family of massless renormalisation group flows that have as UV and IR fixed points two consecutive A-series unitary conformal minimal models [33][34][35][36][37]. The form factor bootstrap program has been successfully applied in Ref. [38] to certain correlators along the tricritical-critical Ising flow. Here we extend it to the branch point twist fields and obtain explicit expressions for the two- and four-particle form factors. To this end, we follow the same strategy as in the massive case: we write the set of form factor bootstrap equations that take into account the particular exchange properties of the twist fields and propose a general ansatz for their solution. Furthermore, we also derive the two- and four-particle form factors along the massless flow from the roaming limit of the sinh-Gordon ones. By analytically continuing the scattering matrix of the sinh-Gordon model, one finds Zamolodchikov's staircase model [39], a two-dimensional integrable scattering theory that describes a renormalisation group flow interpolating between the successive A-series unitary conformal minimal models. It has been shown [40,41] that the form factors of different fields in the tricritical-critical Ising flow can be obtained as roaming limits of certain form factors of the sinh-Gordon theory. We show here that a similar strategy holds for the twist field form factors. We also study the ∆-function associated to the twist fields along the flow, finding that it is monotonic and correctly reproduces their scaling dimension at the fixed points.
In recent years, one of the main research lines in the study of entanglement in extended systems has been its interplay with symmetries. If the system presents a global symmetry, the entanglement entropy can be further decomposed into the contribution of each symmetry sector [42][43][44]. The massless renormalisation flow between the tricritical and the critical Ising theories has a global Z_2 symmetry. Let us denote by Q the charge operator that generates the symmetry and assume that it is the sum of the charges in subsystems A and B, Q = Q_A + Q_B. In that case, the reduced density matrix admits the following decomposition into symmetry sectors,
$$\rho_A = \bigoplus_q p(q)\,\rho_{A,q}, \qquad (2)$$
where ρ_A,q = Π_q ρ_A Π_q / p(q), Π_q is the projector onto the eigenspace of Q_A with eigenvalue q, and p(q) = Tr(ρ_A Π_q) guarantees the correct normalisation of ρ_A,q, i.e., Tr(ρ_A,q) = 1.
The symmetry-resolved entanglement entropies S_n(ρ_A,q) quantify the amount of entanglement in each charge sector. One usual way of calculating them is through the charged moments of ρ_A,
$$Z_n(\alpha) = \mathrm{Tr}\big(\rho_A^n\, e^{i\alpha Q_A}\big), \qquad (3)$$
by applying the Fourier representation of the projector Π_q. For a Z_N symmetry group, q can only take the values q = 0, . . . , N − 1 and
$$\mathcal{Z}_n(q) = \frac{1}{N}\sum_{k=0}^{N-1} e^{-\frac{2\pi i k q}{N}}\, Z_n\!\Big(\frac{2\pi k}{N}\Big). \qquad (4)$$
Therefore, one can write
$$S_n(\rho_{A,q}) = \frac{1}{1-n}\,\log\frac{\mathcal{Z}_n(q)}{\mathcal{Z}_1(q)^n}. \qquad (5)$$
The charged moments (3) were initially introduced in an independent way in Refs. [45,46] in the context of holography. After Ref. [43] pointed out their connection with the symmetry-resolved entropies (5), both quantities have been intensively studied in two-dimensional CFTs [43,44,[47][48][49][50][51][52][53][54][55][56][57][58]] as well as in free and integrable QFTs [59][60][61][62][63][64][65][66]. Symmetry-resolved entanglement has also been investigated in other systems such as lattice and spin models [67][68][69][70][71][72][73][74][75], as well as in ion trap and cold atom experiments [76][77][78][79]. Similarly to the neutral moments Z_n(0) of ρ_A, the charged ones can be expressed as partition functions of the field theory on a Riemann surface, but now in the presence of an external magnetic flux. The charged moments Z_n(α) can also be obtained from a correlator of composite branch point twist fields, which take into account the non-trivial monodromy between the sheets of the Riemann surface due to the magnetic flux. The form factor bootstrap techniques developed for the standard branch point twist fields in integrable QFTs have been extended to the composite ones to study U(1) symmetries in free [60] and interacting QFTs such as the sine-Gordon model [61], the Z_2 symmetry of the massive Ising and sinh-Gordon models [62,63], and the Z_3 symmetry in the 3-state Potts model [64]. In this work, we consider the composite branch point twist fields associated to the Z_2 spin-flip symmetry of the tricritical-critical massless flow, and we apply the bootstrap approach to obtain analytic expressions for their two- and four-particle form factors. Using them, we also investigate the corresponding ∆-function.
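In practice, Eqs. (4) and (5) amount to a discrete Fourier transform of the charged moments followed by a ratio of sector moments. A minimal numerical sketch, with hypothetical input arrays standing in for computed charged moments, is:

```python
import numpy as np

def resolved_moments(Z_alpha, N):
    """Z_alpha[k] = Z_n(2*pi*k/N) for k = 0..N-1 (charged moments of a Z_N
    symmetry); returns the sector moments Z_n(q) via the Fourier sum (4)."""
    k = np.arange(N)
    return np.array([(np.exp(-2j * np.pi * k * q / N) * Z_alpha).sum() / N
                     for q in range(N)]).real

def resolved_entropy(Zn_q, Z1_q, n):
    """Symmetry-resolved Rényi entropy of Eq. (5)."""
    return np.log(Zn_q / Z1_q ** n) / (1.0 - n)
```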
The paper is organized as follows. In Sec. 2 we introduce the system we are interested in, the massless renormalisation flow between the tricritical and critical Ising theories. In Sec. 3, we review how entanglement entropies can be calculated in QFT as correlation functions of branch point twist fields and we discuss their spectral expansion in terms of form factors. In Sec. 4, we obtain the bootstrap equations for the standard branch point twist field form factors, propose a general ansatz for their solution, and derive the explicit expressions for the two- and four-particle form factors. In Sec. 5, we repeat the same reasoning for the form factors of the composite twist fields associated to the Z_2 spin-flip symmetry of the massless flow. In Sec. 6, we rederive the twist field form factors by taking the roaming limit of those of the sinh-Gordon model. In Sec. 7, we first study the ∆-function of the (composite) twist fields along the flow and then derive a cumulant expansion for the entanglement entropies. We end in Sec. 8 with some conclusions and future prospects. We also include two appendices where we discuss in detail the derivation of some of the results presented in the main text.

Table 1: Kac tables of the tricritical (Table 1a) and critical (Table 1b) Ising CFTs [80,81]. In each case, we report the conformal dimension of the primary fields ϕ_{r,s} of the theory. The vertical and horizontal axes correspond to the s and r indices respectively.
2 The massless RG flow from the tricritical to the critical Ising theory

In this paper, we investigate the ground state entanglement entropy along the renormalisation group flow that connects the tricritical and critical Ising CFTs, which are respectively the unitary minimal models M_4 and M_3 [35][36][37], with central charges [80,81]
$$c_{\rm UV} = \frac{7}{10}, \qquad c_{\rm IR} = \frac{1}{2}, \qquad (6)$$
and with Kac tables reported in Table 1. This integrable QFT, usually denoted as A_2, is the simplest member of the infinite family of massless theories A_p that interpolate between two consecutive A-series diagonal conformal minimal models, M_{p+2} → M_{p+1}, with central charges
$$c_{\rm UV} = 1 - \frac{6}{(p+2)(p+3)}, \qquad c_{\rm IR} = 1 - \frac{6}{(p+1)(p+2)}. \qquad (7)$$
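As a quick consistency check of Eqs. (6) and (7), setting p = 2 in the general formulas should reproduce the tricritical and critical Ising central charges:

```python
def c_uv(p): return 1 - 6 / ((p + 2) * (p + 3))
def c_ir(p): return 1 - 6 / ((p + 1) * (p + 2))

# p = 2 is the tricritical-to-critical Ising flow A_2:
print(c_uv(2), c_ir(2))   # 0.7 and 0.5, i.e. c = 7/10 -> 1/2
```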
This family of integrable RG trajectories is obtained by deforming the UV CFT M_{p+2} with its relevant field ϕ_{1,3} [33][34][35][36][37]. In particular, in the tricritical Ising CFT, ϕ_{1,3} corresponds to the vacancy density field with conformal dimension h_{1,3} = 3/5 (see Table 1a) [35][36][37]. In the Euclidean formalism, the action of this flow takes the form
$$\mathcal{A}_{\rm flow} = \mathcal{A}_{\mathcal{M}_4} + \lambda \int d^2x\; \phi_{1,3}(x), \qquad (8)$$
where λ is a dimensionful coupling and, importantly, λ is positive, since for negative coupling a different, massive integrable theory is obtained. Several other families of massless integrable flows have been identified as well [82][83][84][85][86][87][88][89]. The masslessness of the flow described by Eq. (8) can be understood by recalling that the tricritical Ising CFT M_4 is one of the simplest examples of a superconformal theory [90][91][92]. The deformation with the vacancy field ϕ_{1,3} leads to a spontaneous supersymmetry breaking [93], which gives rise to right- and left-moving massless Goldstone fermions ψ, ψ̄ and ensures that the theory has vanishing mass gap.
In Ref. [93], it was shown that the low-energy behaviour of the massless flow is described by the effective Lagrangian
$$\mathcal{L}_{\rm eff} = \mathcal{L}_{\rm Ising} + \frac{g}{M^2}\, T\bar{T} + \cdots, \qquad (9)$$
that is, the TT̄ deformation of the critical Ising model, with T, T̄ the chiral components of the stress-energy tensor. Notice that the Majorana fermions ψ, ψ̄ of the Ising model are now identified with the Goldstone fermions of the massless flow A_2, which are the only stable particles in this theory [94]. It is worth stressing that the massless flow at low energies is described by a TT̄-deformed CFT. Such theories have been studied in great detail [95][96][97][98][99][100][101][102][103][104][105][106][107][108] and hence they provide non-trivial benchmarking for some of our results. The massless flow (8), as well as the effective Lagrangian (9), possesses a mass scale M, which plays the role of the momentum scale at which non-trivial scattering happens between the fermions. We can parameterise the energy and momenta of the right- and left-moving Goldstone fermions in terms of a rapidity variable θ and of this mass scale M as [94]
$$E = p = \frac{M}{2}\, e^{\theta} \quad \text{(right-movers)}, \qquad E = -p = \frac{M}{2}\, e^{-\theta} \quad \text{(left-movers)}. \qquad (10)$$
Since the massless fermions are the only stable particles, they form a complete basis of asymptotic states, which in the rapidity parameterisation read as
$$|\theta_1, \ldots, \theta_r\rangle_R \otimes |\theta'_1, \ldots, \theta'_l\rangle_L, \qquad (11)$$
containing r right-moving and l left-moving fermions. If the rapidities are ordered as θ_1 > θ_2 > . . . > θ_r and θ'_l > . . . > θ'_2 > θ'_1, then the set of states (11) corresponds to in-states, whereas the opposite ordering results in out-states. Different orderings are linked by scattering processes between the particles. Since the theory is integrable, the scattering of particles is completely elastic, preserves particle number and rapidity, and is fully characterised by the two-body S-matrices. Since the scattering of the particles is diagonal, the S-matrices are scalars and functions of the rapidity difference of the particles. In particular [94],
$$S_{RR}(\theta) = S_{LL}(\theta) = -1, \qquad S_{RL}(\theta) = \tanh\!\Big(\frac{\theta}{2} - \frac{i\pi}{4}\Big), \qquad (12)$$
that is, only the scattering between left- and right-movers is non-trivial. The massless flow (8) has been the subject of numerous studies. These involve its description in terms of the thermodynamic Bethe ansatz [35][36][37], and the determination of form factors, i.e., the matrix elements of the off-critical versions of the UV scaling fields, as well as of certain correlation functions [38]. At the level of the free energy and form factors [40,41], the massless flow can also be recovered from the staircase model [39,82], which we shall introduce in Section 6. The model also shows interesting properties in inhomogeneous out-of-equilibrium situations, as studied in [109] via generalised hydrodynamics.
To complete the brief review of this massless flow, we discuss its symmetry properties. Both the UV and IR limiting CFTs enjoy a spin-flip Z_2 symmetry under which the perturbing field transforms trivially. Consequently, the massless flow inherits this symmetry as well. This fact can be made more transparent using the Landau-Ginzburg formalism, which identifies the multicritical Ising CFTs with bosonic Lagrangians [80,81]. In particular, the critical and tricritical Ising models can be described by the following actions in terms of the bosonic field φ,
$$\mathcal{A}_{\mathcal{M}_3} = \int d^2x\,\Big[\frac{1}{2}(\partial\varphi)^2 + g :\!\varphi^4\!:\Big], \qquad \mathcal{A}_{\mathcal{M}_4} = \int d^2x\,\Big[\frac{1}{2}(\partial\varphi)^2 + g :\!\varphi^6\!:\Big], \qquad (13)$$
where : : denotes normal ordering of the fields. The perturbing field ϕ_{1,3} of the UV theory corresponds to :φ⁴: [80,81], which means that the action of the massless flow can be equivalently written as
$$\mathcal{A}_{\rm flow} = \int d^2x\,\Big[\frac{1}{2}(\partial\varphi)^2 + g :\!\varphi^6\!: + \lambda :\!\varphi^4\!:\Big], \qquad (14)$$
in which the invariance under the spin-flip Z_2 symmetry, which maps φ → −φ, is explicit.
Given the presence of a global Z_2 symmetry, a relevant question is whether the ground state entanglement entropy along the massless flow (8) can be resolved with respect to it. It is not immediately obvious whether a reduced density matrix of the ground state of the theory commutes with the charge operator associated with the Z_2 symmetry. While for a continuous symmetry this is ensured by the Noether theorem, in the case of discrete symmetries the existence of a local charge density is not guaranteed. In order to justify the existence of such a local Z_2 charge, we can appeal to the defect line formalism. As understood in recent years, global symmetries in QFT are implemented by topological defects [110,111], which, in the case of CFT minimal models, correspond to the Verlinde line operators. In particular, the spin-flip Z_2 symmetry is implemented by the Verlinde line associated with the primary operator ε. Such a defect line can be restricted to the subsystem A, with two disorder operators µ inserted at the end-points [110,111] (which we discuss in more detail in the next section). This formalism has been very recently used to study the symmetry resolution of entanglement in CFTs with respect to both continuous and discrete finite groups in Ref. [57]. Since the operators ε and µ exist along the entire massless flow, the previous construction may be extended outside the fixed points.
Entanglement entropy and Branch Point Twist Fields in QFT
In this section, we review the computation of the entanglement entropies in QFT as correlators of branch point twist fields. In QFT, the non-trivial task of computing entanglement entropies can be naturally formulated via the path integral approach. The main idea is that the moments of the reduced density matrix, Tr(ρ_A^n), and the charged moments, Tr(ρ_A^n e^{iαQ_A}), can be regarded as partition functions of the QFT on a Riemann surface consisting of n replicas of the space-time, sewn together along the subsystem A in a cyclic way [5,6].
Alternatively, one can take n copies of the QFT under analysis and quotient them by the Z_n symmetry associated to the cyclic exchange of the copies. In (1+1)-dimensional relativistic QFTs, there exist local fields in the n-replica theory, called branch point twist fields (BPTF), that implement the boundary conditions imposed on the fields in the path integral on the n-sheeted Riemann surface. These twist fields can be generalised to cases in which the boundary conditions also involve additional phases, such as in the calculation of the charged moments Tr(ρ_A^n e^{iαQ_A}), in which an Aharonov-Bohm flux is introduced between the sheets of the Riemann surface. In this setup, the corresponding twist fields are called composite BPTFs, and they were originally introduced in a different context [14,15]. Both types of fields are associated with particular symmetries of the replicated theory, which allows us to discuss them on the same footing. For our purposes, it is therefore useful to distinguish the following twist fields: 1) the disorder field µ, associated with the Z_2 spin-flip symmetry of the massless flow; 2) the standard BPTFs, T_n and its conjugate T̃_n, which are associated with the cyclic and the inverse cyclic permutation symmetry Z_n among the copies of the n-replica massless flow; these fields play a central role in the computation of the entanglement entropy; 3) the Z_2-composite BPTFs, denoted as T^µ_n and T̃^µ_n, which are the result of fusing the former fields,
$$\mathcal{T}^{\mu}_n(x) = \,:\!\mathcal{T}_n\,\mu\!:(x), \qquad \widetilde{\mathcal{T}}^{\mu}_n(x) = \,:\!\widetilde{\mathcal{T}}_n\,\mu\!:(x). \qquad (15)$$
Therefore, they are associated both with the Z_n symmetry under the cyclic permutation of the replicas and with the global Z_2 spin-flip symmetry present in the massless flow. These composite fields play a role analogous to that of the BPTFs in the computation of the symmetry-resolved entanglement entropies [43].
These twist fields are typically non-local or semi-local with respect to other quantum fields of the theory, in particular with respect to the fundamental field or to the interpolating field, which is associated with particle creation/annihilation. Non-locality can be formulated through non-trivial equal-time exchange relations between the two fields. Let us first consider the disorder operator µ and an operator O_i living in copy i of the replicated theory. Since the disorder field introduces an Aharonov-Bohm flux in the region y¹ > x¹, the exchange relations of these two operators can be written as
$$\mu(x)\, O_i(y) = \begin{cases} e^{i\pi\kappa_O}\, O_i(y)\, \mu(x), & y^1 > x^1,\\[2pt] O_i(y)\, \mu(x), & y^1 < x^1. \end{cases} \qquad (16)$$
We refer to κ_O as the charge of the operator O with respect to the Z_2 spin-flip symmetry. In particular, the Goldstone fermions ψ, ψ̄ which generate the asymptotic states (11) have charge κ_ψ = 1, i.e., they are odd under the spin-flip transformation. The action of the standard BPTFs when winding around a field is to cyclically map it from one replica to the next, as encoded in the equal-time exchange relation
$$\mathcal{T}_n(x)\, O_i(y) = \begin{cases} O_{i+1}(y)\, \mathcal{T}_n(x), & y^1 > x^1,\\[2pt] O_i(y)\, \mathcal{T}_n(x), & y^1 < x^1. \end{cases} \qquad (17)$$
In the case of the composite BPTFs, the winding around them further adds a phase e^{iκ_O π}. When considering discrete groups, such as the Z_2 spin-flip symmetry of the tricritical-critical massless flow, we must be careful about how we include this phase. Unlike the continuous U(1) symmetry discussed in Refs. [60,61], here we cannot distribute the flux uniformly over all the copies by inserting a phase e^{iκ_O π/n} when moving between replicas, since this operation is not compatible with the properties of the Z_2 field µ. This issue can be addressed in two different ways. The first possibility is to insert a phase e^{iκ_O π} between all the copies; this corresponds to the exchange relation
$$\mathcal{T}^{\mu}_n(x)\, O_i(y) = e^{i\kappa_O\pi}\, O_{i+1}(y)\, \mathcal{T}^{\mu}_n(x), \qquad y^1 > x^1. \qquad (18)$$
This approach was applied in Ref. [63], but it is only legitimate for an odd number of replicas n = 1, 3, 5, 7, . . ., for which the identity e^{iπn} = −1 clearly holds. The other approach consists of introducing the flux only between the last and the first replicas, in such a way that the phase e^{iπκ_O} only appears when a particle moves from the n-th copy to the 1-st one, that is,
$$\mathcal{T}^{\mu}_n(x)\, O_i(y) = \begin{cases} O_{i+1}(y)\, \mathcal{T}^{\mu}_n(x), & y^1 > x^1 \ \text{and}\ i \neq n,\\[2pt] e^{i\kappa_O\pi}\, O_{i+1}(y)\, \mathcal{T}^{\mu}_n(x), & y^1 > x^1 \ \text{and}\ i = n. \end{cases} \qquad (19)$$
This choice introduces a slight asymmetry between the replicas, but it is applicable to any number of replicas n. In Sec. 5, we discuss in more detail the effect of the two conventions (18) and (19), showing that they provide the same results for the correlation functions under analysis. Analogous exchange relations can be formulated for the Hermitian conjugate fields T̃_n and T̃^µ_n, with the difference that they move the field from replica i to i − 1. In the following, whenever we wish to treat both the standard and the composite twist fields at the same time, we use the notation T^τ_n, T̃^τ_n, where τ refers either to 'µ' for the composite or to the identity for the standard BPTF.
Using the (composite) BPTFs, one can switch from a path-integral to an operator formulation of both the neutral and charged moments of ρ_A, which can be defined in terms of multi-point functions of the standard or composite BPTFs inserted at the end-points of subsystem A in the replicated QFT. In particular, when A consists of a single interval, A = [0, ℓ], we have
$$\mathrm{Tr}(\rho_A^n) \propto \langle \mathcal{T}_n(0)\, \widetilde{\mathcal{T}}_n(\ell) \rangle \qquad (20)$$
and
$$\mathrm{Tr}\big(\rho_A^n\, e^{i\pi Q_A}\big) \propto \langle \mathcal{T}^{\mu}_n(0)\, \widetilde{\mathcal{T}}^{\mu}_n(\ell) \rangle. \qquad (21)$$
The twist field formalism is especially useful at criticality, where conformal invariance fixes the properties of both the standard T_n and the composite branch point twist field T^µ_n. In particular, in a unitary CFT with central charge c, the standard twist fields T_n, T̃_n are known to be primary operators with conformal dimension [6,7]
$$\Delta_{\mathcal{T}_n} = \frac{c}{24}\Big(n - \frac{1}{n}\Big). \qquad (22)$$
We remind the reader that, in the A-series diagonal unitary minimal models M_p, the central charge is given by Eq. (7). In order to identify the conformal dimension of the composite twist fields T^µ_n, T̃^µ_n, one can use the fact that they are the fusion of T_n, T̃_n with the disorder field µ, as shown in Eq. (15). In the tricritical and critical Ising models, the field µ is the Kramers-Wannier dual of the spin field σ = ϕ_{2,2} and has the same conformal dimension reported in the Kac table in Table 1 [80,81],
$$h^{\rm UV}_{\mu} = \frac{3}{80}, \qquad h^{\rm IR}_{\mu} = \frac{1}{16}, \qquad (23)$$
where we denote with UV the tricritical and with IR the critical Ising model respectively. Knowing the dimension of the disorder field, that of the composite twist fields T^µ_n, T̃^µ_n is obtained as [43]
$$\Delta_{\mathcal{T}^{\mu}_n} = \Delta_{\mathcal{T}_n} + \frac{h_{\mu}}{n}. \qquad (24)$$
In particular, for the tricritical and critical Ising models, Eq. (24) gives respectively
$$\Delta^{\rm UV}_{\mathcal{T}^{\mu}_n} = \frac{7}{240}\Big(n - \frac{1}{n}\Big) + \frac{3}{80\,n}, \qquad \Delta^{\rm IR}_{\mathcal{T}^{\mu}_n} = \frac{1}{48}\Big(n - \frac{1}{n}\Big) + \frac{1}{16\,n}. \qquad (25)$$
The use of CFT techniques has provided exact results for Tr(ρ_A^n) in many different situations [6,7]. On the other hand, away from criticality, the exact determination of the correlation functions in Eqs. (20) and (21) is known to be an extremely difficult task, except in the case of free theories [17,70]. In integrable QFTs, however, the form factor bootstrap approach provides a powerful tool to systematically investigate and construct (truncated) multi-point functions via form factors, namely the matrix elements of generic local operators between the vacuum and the multi-particle states [12,13]. Although, in principle, all these matrix elements can be computed analytically, their resummation is an unsolved problem. Nevertheless, multi-point correlation functions at large distances are generically dominated by the first few (lower-particle) form factors. For this reason, this technique applies efficiently to the infrared properties of these theories, as was first shown in Ref. [8] in the case of BPTFs and entanglement. As we shall see in this paper, the above considerations do not hold in massless theories, that is, when the IR limit of the QFT is described by a CFT as well. However, we show that it is possible to identify a subset of terms in the form factor expansion whose resummation reproduces the IR CFT results, while the remaining contributions yield non-trivial predictions for the behaviour of the entropies along the flow.
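The dimensions in Eqs. (22)-(25) are simple rational functions of n. The snippet below tabulates them exactly with Python fractions, using the central charges and disorder-field dimensions quoted above:

```python
from fractions import Fraction as F

def delta_T(c, n):
    """Standard BPTF dimension, Eq. (22): (c/24) * (n - 1/n)."""
    return c / 24 * (n - F(1, n))

def delta_T_mu(c, h_mu, n):
    """Composite BPTF dimension, Eq. (24): the disorder field enters as h_mu/n."""
    return delta_T(c, n) + h_mu / n

for label, c, h in [("UV (tricritical)", F(7, 10), F(3, 80)),
                    ("IR (critical)", F(1, 2), F(1, 16))]:
    print(label, delta_T(c, 3), delta_T_mu(c, h, 3))
```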
Form factors and spectral representations of BPTF correlation functions
From the knowledge of the exchange relations (17) satisfied by the BPTFs, one can formulate bootstrap equations for their FF in integrable QFTs [8][9][10], generalising the standard form factor program for local fields [12,13], which for the tricritical-critical Ising flow (8) has been investigated in Ref. [38].
Let us consider the two-point correlation function of the (composite) BPTFs in the ground state of the theory and insert the complete basis of asymptotic states (11),
$$\langle \mathcal{T}^{\tau}_n(x,t)\, \widetilde{\mathcal{T}}^{\tau}_n(0,0)\rangle = \sum_{k=0}^{\infty}\, \sum_{\{\gamma_j,\nu_j\}} \int \frac{d\theta_1\cdots d\theta_k}{(2\pi)^k\,k!}\; \langle 0|\mathcal{T}^{\tau}_n(x,t)|\theta_1,\ldots,\theta_k\rangle_{\gamma,\nu}\; {}_{\gamma,\nu}\langle \theta_k,\ldots,\theta_1|\widetilde{\mathcal{T}}^{\tau}_n(0,0)|0\rangle, \qquad (26)$$
where τ = 0, µ corresponds to the standard or the Z_2-composite BPTF respectively. In the multi-particle states of the n-replica theory,
$$|\theta_1,\ldots,\theta_k\rangle_{\gamma,\nu}, \qquad (27)$$
the subindex γ_i = R, L specifies whether the particle with rapidity θ_i is a right- (R) or left-mover (L). Moreover, each particle is labelled by an extra index ν_i which indicates the copy where the particle lives; it therefore takes values from 1 to n and is identified up to ν_i ∼ ν_i + n.
In the n-replica theory, the S-matrix connects non-trivially only particles living on the same replica, while particles in different copies do not interact and no scattering events occur between them. In light of this, the S-matrix of the replicated model takes the form [8]
$$S^{(n)}_{(\gamma_i,\nu_i),(\gamma_j,\nu_j)}(\theta) = \begin{cases} S_{\gamma_i\gamma_j}(\theta), & \nu_i = \nu_j,\\[2pt] 1, & \nu_i \neq \nu_j, \end{cases} \qquad (28)$$
where S_{γ_iγ_j}(θ) is the S-matrix of the original theory, which for the massless flow (8) is reported in Eq. (12).
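Eq. (28) is straightforward to implement. The helper below returns the two-body amplitude for any pair of particles in the replicated theory, with the RL amplitude taken from the reconstruction in Eq. (12); treating S_LR as equal to S_RL is a simplifying assumption of this sketch.

```python
import numpy as np

def S_replica(gamma_i, gamma_j, nu_i, nu_j, theta):
    """Two-body S-matrix of the n-replica massless flow, Eq. (28):
    particles on different copies do not scatter (S = 1); on the same
    copy, RR and LL scattering is -1 and mixed-chirality scattering
    is the amplitude of Eq. (12)."""
    if nu_i != nu_j:
        return 1.0
    if gamma_i == gamma_j:                        # 'R','R' or 'L','L'
        return -1.0
    return np.tanh(theta / 2 - 1j * np.pi / 4)    # mixed chirality
```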
Since the vacuum of the theory is invariant under space and time translations, we can rewrite the spectral expansion in Eq. (26) as
$$\langle \mathcal{T}^{\tau}_n(x,t)\, \widetilde{\mathcal{T}}^{\tau}_n(0,0)\rangle = \sum_{k=0}^{\infty}\, \sum_{\{\gamma_j,\nu_j\}} \int \frac{d\theta_1\cdots d\theta_k}{(2\pi)^k\,k!}\; e^{-i\sum_{i=1}^{k}(E_i t - p_i x)}\, \langle 0|\mathcal{T}^{\tau}_n(0,0)|\theta_1,\ldots,\theta_k\rangle_{\gamma,\nu}\; {}_{\gamma,\nu}\langle \theta_k,\ldots,\theta_1|\widetilde{\mathcal{T}}^{\tau}_n(0,0)|0\rangle, \qquad (29)$$
where E_i and p_i are the single-particle energies and momenta reported in Eq. (10). The elementary FFs of a generic (semi-)local operator O(x,t) are its matrix elements between the vacuum and the asymptotic multi-particle states (27), i.e.
$$F^{O|\nu}_{\gamma}(\theta_1,\ldots,\theta_k) = \langle 0|O(0,0)|\theta_1,\ldots,\theta_k\rangle_{\gamma,\nu}. \qquad (30)$$
Therefore, as is clear from Eq. (29), the spectral representation of the twist field correlation function can be rewritten in terms of FFs as
$$\langle \mathcal{T}^{\tau}_n(\ell)\, \widetilde{\mathcal{T}}^{\tau}_n(0)\rangle = \sum_{k=0}^{\infty}\, \sum_{\{\gamma_j,\nu_j\}} \int \frac{d\theta_1\cdots d\theta_k}{(2\pi)^k\,k!}\; F^{\mathcal{T}^{\tau}|\nu}_{\gamma}(\theta;n)\,\Big[F^{\widetilde{\mathcal{T}}^{\tau}|\nu}_{\gamma}(\theta;n)\Big]^{*}\, e^{-\ell\sum_{i=1}^{k} E_i}, \qquad (31)$$
where we have switched to the Euclidean formalism for simplicity and ℓ denotes the Euclidean distance. We can see from the above formula that the computation of the twist field correlation functions can be naturally formulated by means of FFs via the insertion of a complete set of asymptotic multi-particle states. Crucially, the form factors in integrable QFTs can often be determined exactly, giving access to the corresponding correlation functions. In the following, we review some basic properties of the twist field FFs and present the bootstrap equations from which their analytic expressions can be obtained.
Form factors of the branch point twist field in the massless flow
Given the exchange properties of the standard BPTFs (17), it is possible to write down the bootstrap equations for the form factors (30) associated with these fields in integrable QFTs. Relying on earlier works [8][9][10], we can immediately specialise these equations to our massless theory (8). If we denote the FFs of T_n by F^{T|ν}_γ(θ; n), then their bootstrap equations can be written as [8][9][10]
$$F^{\mathcal{T}|\ldots\nu_i\nu_{i+1}\ldots}_{\ldots\gamma_i\gamma_{i+1}\ldots}(\ldots,\theta_i,\theta_{i+1},\ldots;n) = S^{(n)}_{(\gamma_i,\nu_i),(\gamma_{i+1},\nu_{i+1})}(\theta_{i,i+1})\; F^{\mathcal{T}|\ldots\nu_{i+1}\nu_i\ldots}_{\ldots\gamma_{i+1}\gamma_i\ldots}(\ldots,\theta_{i+1},\theta_i,\ldots;n), \qquad (32)$$
$$F^{\mathcal{T}|\nu_1\nu_2\ldots\nu_k}_{\gamma_1\gamma_2\ldots\gamma_k}(\theta_1+2\pi i,\theta_2,\ldots,\theta_k;n) = F^{\mathcal{T}|\nu_2\ldots\nu_k(\nu_1+1)}_{\gamma_2\ldots\gamma_k\gamma_1}(\theta_2,\ldots,\theta_k,\theta_1;n), \qquad (33)$$
$$-i\operatorname*{Res}_{\bar\theta\to\theta} F^{\mathcal{T}|\nu\,\nu\,\nu_1\ldots\nu_k}_{\bar\gamma\,\gamma\,\gamma_1\ldots\gamma_k}(\bar\theta+i\pi,\theta,\theta_1,\ldots,\theta_k;n) = F^{\mathcal{T}|\nu_1\ldots\nu_k}_{\gamma_1\ldots\gamma_k}(\theta_1,\ldots,\theta_k;n), \qquad (34)$$
$$-i\operatorname*{Res}_{\bar\theta\to\theta} F^{\mathcal{T}|(\nu+1)\,\nu\,\nu_1\ldots\nu_k}_{\bar\gamma\,\gamma\,\gamma_1\ldots\gamma_k}(\bar\theta+i\pi,\theta,\theta_1,\ldots,\theta_k;n) = -\prod_{j=1}^{k} S^{(n)}_{(\gamma,\nu+1),(\gamma_j,\nu_j)}(\theta-\theta_j)\; F^{\mathcal{T}|\nu_1\ldots\nu_k}_{\gamma_1\ldots\gamma_k}(\theta_1,\ldots,\theta_k;n), \qquad (35)$$
where we introduced the shorthand
$$\bar{\gamma} = (\bar\gamma_1, \bar\gamma_2, \ldots, \bar\gamma_k), \qquad (36)$$
and γ̄_i denotes the anti-particle of γ_i (which coincides with the particle itself in the theory under consideration). Here θ and γ, ν are shorthands for (θ_1, θ_2, . . . , θ_k) and (γ_1, γ_2, . . . , γ_k), (ν_1, ν_2, . . . , ν_k) respectively, with γ_i = R, L and R̄ = R, L̄ = L. In the arguments of the S-matrices, θ_ij = θ_i − θ_j.
In the massless flow (8), two particles of any type cannot form a bound state. It is also easy to see that the one-particle FFs of the BPTFs vanish, the reason being that these fields are neutral with respect to the Z_2 charge. This implies that only FFs with an even number of R particles and an even number of L particles are non-vanishing; consequently, all odd-particle FFs are zero.
Moreover, relativistic invariance imposes that
$$F^{\mathcal{T}|\nu}_{\gamma}(\theta_1+\Lambda,\ldots,\theta_k+\Lambda;n) = e^{\Sigma\Lambda}\, F^{\mathcal{T}|\nu}_{\gamma}(\theta_1,\ldots,\theta_k;n), \qquad (37)$$
where Σ is the Lorentz spin, which is Σ = 0 for the twist field. Another important property of form factors, which will be useful in our analysis, is the cluster property, studied in detail in Ref. [31] and recognised in different models, see e.g. [19,112,113,114,115]. In the limit in which the difference between the particle rapidities diverges, the form factors factorise into products of form factors with a lower number of particles. In our model, the clusterisation of the different particle species can be phrased as
$$\lim_{\Lambda\to\infty} F^{\mathcal{T}|\nu\,\nu'}_{R\ldots R\,L\ldots L}(\theta+\Lambda,\theta';n) = \frac{1}{\langle\mathcal{T}_n\rangle}\, F^{\mathcal{T}|\nu}_{R\ldots R}(\theta;n)\, F^{\mathcal{T}|\nu'}_{L\ldots L}(\theta';n), \qquad (38)$$
where θ and ν stand for the rapidities and replica indices of the 'R' particles, and θ' and ν' for those of the 'L' particles. The cluster property for particles of the same species is instead written as
$$\lim_{\Lambda\to\infty} F^{\mathcal{T}|\nu\,\nu'}_{R\ldots R\,R\ldots R}(\theta+\Lambda,\theta';n) = \frac{1}{\langle\mathcal{T}_n\rangle}\, F^{\mathcal{T}|\nu}_{R\ldots R}(\theta;n)\, F^{\mathcal{T}|\nu'}_{R\ldots R}(\theta';n), \qquad (39)$$
with an analogous expression for the clustering of 'L' particles.
Let us now use the previous axioms to construct a set of solutions of the bootstrap equations (32)-(35) for the BPTF form factors. To fix ideas, we first place every particle on the first replica, ν_i = 1. A convenient ansatz for the form factors is [38]
$$F^{\mathcal{T}}_{r,l}(\theta,\theta';n) = H^{\mathcal{T}}_{r,l}\, Q^{\mathcal{T}}_{r,l}(x,y;n)\, \prod_{i<j\le r} \frac{f_{RR}(\theta_{ij};n)}{(x_i-\omega x_j)(x_j-\omega x_i)}\, \prod_{i<j\le l} \frac{f_{LL}(\theta'_{ij};n)}{(y_i-\omega y_j)(y_j-\omega y_i)}\, \prod_{i=1}^{r}\prod_{j=1}^{l} f_{RL}(\theta_i-\theta'_j;n), \qquad (40)$$
where we have r right-moving and l left-moving particles and we have defined x_i = e^{θ_i/n}, y_i = e^{−θ'_i/n} and ω = e^{iπ/n}. Notice that we simplified the notation by omitting the replica indices. In the ansatz (40), Q^T_{r,l} are polynomials in their variables and f_RR = f_LL and f_RL are the minimal form factors. In Eq. (40), the kinematical singularities of the FF (see Eq. (34)) come entirely from the denominators, and therefore the cyclic permutation and exchange axioms, Eqs. (32) and (33), are automatically satisfied by requiring the following identities for the minimal form factors:
$$f_{RR}(\theta;n) = S_{RR}(\theta)\, f_{RR}(-\theta;n), \qquad f_{RR}(\theta+2\pi i n;n) = f_{RR}(-\theta;n). \qquad (41)$$
By prescribing that the minimal form factor f_RR has no poles and the mildest asymptotic behaviour, we end up with the unique solution
$$f_{RR}(\theta;n) = \sinh\frac{\theta}{2n}, \qquad (42)$$
and f_RR = f_LL, which is identical to the minimal form factor of the massive Ising theory [8].
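Under the reconstruction of Eqs. (41)-(42) given above, both monodromy relations can be verified numerically at a generic complex rapidity:

```python
import numpy as np

# Check that f_RR(theta; n) = sinh(theta / (2n)) obeys the two relations:
#   f(theta) = S_RR * f(-theta) = -f(-theta)     (Watson equation, S_RR = -1)
#   f(theta + 2*pi*i*n) = f(-theta)              (replica periodicity)
n = 3
f = lambda th: np.sinh(th / (2 * n))
theta = 0.7 + 0.2j
print(np.isclose(f(theta), -f(-theta)))               # True
print(np.isclose(f(theta + 2j * np.pi * n), f(-theta)))  # True
```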
For f_RL, the defining equations (43) are the analogues of the relations (41), with the non-trivial amplitude S_RL in place of S_RR; their solution can be given explicitly thanks to the knowledge of the Fourier representation of S_RL in Eq. (12). In particular, the solution can be written either through an integral representation, Eq. (44), or through a mixed product-integral representation, Eq. (45), which is more convenient for numerical evaluation. Notice that, for n = 1, the minimal form factor (44) reduces to the known result for a single replica obtained in Ref. [38]. In order to fix the normalisation along the massless flow, we compute the value N_n^{-1} of f_RL in the limit θ → ∞, Eq. (47), in terms of a sequence, defined in Eq. (48), which equals Catalan's constant G for n = 1, thereby recovering the normalisation for f_RL found in the non-replicated theory in Ref. [38]. With the choice of normalisation in Eq. (49), we fix all the constants in the form factors for the massless flow. An important property satisfied by the f_RL minimal form factor, Eq. (50), shall be very useful in the rest of the section. The ansatz (40), with the definitions of the minimal FFs f_RR (42) and f_RL (45), satisfies all the axioms for the BPTF FFs. The eventual determination of F^T_{R,L}(θ, θ'; n) can then be done recursively. In fact, by applying the residue axiom in Eq. (34) to the ansatz (40), one can derive recursive equations for the unknown functions Q^T_{r,l}(x, y; n) that relate Q^T_{r+2,l}(x, y; n) or Q^T_{r,l+2}(x, y; n) to Q^T_{r,l}(x, y; n), that is, to Q^T functions with fewer particles. In the next subsections and in App. A.1, we explicitly demonstrate how the determination of higher-particle FFs is carried out by solving the recursive equations for the polynomials Q^T_{r,l}.
Two-particle form factors and form factors with only one species
Since the Lorentz spin of the BPTFs is zero, their two-particle FFs depend only on one rapidity variable (37), namely the difference θ_1 − θ_2. Recall that, because of the spin-flip symmetry, we can only have 'RR' and 'LL' two-particle form factors, which means that these quantities coincide with those of the massive Ising QFT (cf. Eqs. (40) and (42)) up to the vacuum expectation value ⟨T_n⟩. These quantities can nevertheless also be easily obtained from the bootstrap equations (32), (33). For the two-particle form factors, they imply that
$$F^{\mathcal{T}|11}_{RR}(\theta;n) = -F^{\mathcal{T}|11}_{RR}(-\theta;n), \qquad F^{\mathcal{T}|11}_{RR}(\theta+2\pi i n;n) = F^{\mathcal{T}|11}_{RR}(-\theta;n). \qquad (51)$$
In this case, the kinematic residue equation (34) connects the two-particle FFs and the vacuum expectation value of the twist field. We can therefore write
$$F^{\mathcal{T}|11}_{RR}(\theta;n) = \frac{-i\,\langle\mathcal{T}_n\rangle\,\cos\frac{\pi}{2n}}{n}\;\frac{\sinh\frac{\theta}{2n}}{\sinh\frac{i\pi+\theta}{2n}\,\sinh\frac{i\pi-\theta}{2n}}. \qquad (52)$$
If this formula is recast in the form of the ansatz (40), then we have the equivalent expression
$$F^{\mathcal{T}}_{2,0}(\theta_{12};n) = H^{\mathcal{T}}_{2,0}\;\frac{Q^{\mathcal{T}}_{2,0}(x_1,x_2;n)\; f_{RR}(\theta_{12};n)}{(x_1-\omega x_2)(x_2-\omega x_1)}, \qquad (53)$$
in which we identify
$$Q^{\mathcal{T}}_{2,0}(x_1,x_2;n) = \sigma_2, \qquad (54)$$
with H^T_{2,0} fixed by matching with Eq. (52), and where σ_j is the fully symmetric polynomial of degree j in the variables x_1 and x_2. Since in the formula above both particles live in the first replica, we slightly change the notation: we denote the form factor F^{T|11}_{RR} of two right-moving particles living in the first replica by F^T_{2,0}. In the following, we use this convention whenever all the particles are on the first replica. The 'LL' form factor can be obtained by replacing x_1 and x_2 with y_1 and y_2 in Eq. (54).
From $F^{T|11}_{\gamma\gamma}(\theta; n)$, we can obtain the form factors $F^{T|jk}_{\gamma\gamma}(\theta; n)$ corresponding to particles in different replicas from the relation of Ref. [8]. The form factors $\tilde{F}$ of the anti-twist field $\tilde{T}_n$ can be simply obtained from those of $T_n$ through a simple relation. As we already said, the only non-vanishing FFs with higher particle number are those containing an even number of 'R' and 'L' particles. It is easy to see that, in the particular case of form factors only containing an even number of particles of the same type, that is, the 'RR...RR' and 'LL...LL' form factors, they exactly coincide with the standard BPTF FFs of the massive Ising theory up to the vacuum expectation value $\langle T_n \rangle$, similarly to the two-particle case discussed above. These form factors can be easily obtained from the two-particle ones. In particular, the form factor with $2k$ particles of the same type is given by Eq. (59) for $\nu_1 \geq \nu_2 \geq \ldots \geq \nu_{2k}$. If the ordering of the indices $\nu_i$ is not the canonical one, using the exchange axiom (32) one can reshuffle the particles and their rapidities to satisfy $\nu_1 \geq \nu_2 \geq \ldots \geq \nu_{2k}$ and apply (59). In particular, for the 'RRRR' or 'LLLL' FFs with all the particles in the same replica, we have the simple formula (61).
Solution for the four-particle 'RRLL' form factor
The first non-vanishing form factors that contain both 'R' and 'L' particles appear at the four-particle level: $F^{T|\nu_1\nu_2\nu_3\nu_4}_{RRLL}$ with any permutation of 'R' and 'L'. Similarly to the other FFs previously discussed, it is sufficient to determine only the 'RRLL' form factor with all the particles on the first replica. Using the exchange relation (32), we can then readily obtain any other sequence of the particle species and, applying the cyclic permutation axiom (33), we can obtain FFs for particles living on different replicas. Following the notation introduced for the form factors with all the particles on the first replica, we will denote $F^{T|1111}_{RRLL}$ as $F^T_{2,2}$. In this case, the ansatz (40) takes the form of Eq. (62). Applying now the residue axiom (34) to Eq. (62), we can derive recursive equations for the normalisation factor $H^T_{2,2}$ and the function $Q^T_{2,2}$. The detailed solution of this equation for the four-particle ('RRLL') case is presented in App. A.1; here we report the results of the calculation.
The normalisation factor reads as in Eq. (63), where $N_n$ is given by Eq. (47), while for the polynomial $Q^T_{2,2}$ we obtain
$$Q^T_{2,2}(x_1, x_2, y_1, y_2; n) = \sigma_2(x_1, x_2)\,\sigma_2(y_1, y_2) - \frac{\sigma_1(x_1, x_2)\,\sigma_1(y_1, y_2)}{2\cos\frac{\pi}{2n}} + 1, \quad (64)$$
where $\sigma_i$, $i = 1, 2$, denotes the fully symmetric polynomial of degree $i$ in two variables. Using these results, the final solution for the full FF is Eq. (65), which we can also rewrite as Eq. (66). We remark that the form factor in Eq. (65) is one of the main results of this paper. As we will show in Sec. 7, it will provide the leading correction to the IR expressions for the entanglement entropy.
Form factors of the $Z_2$-composite branch point twist field in the massless flow
In this section, we derive the bootstrap equations for the form factors of the $Z_2$-composite BPTFs associated to the disorder field $\mu$ along the massless flow (8), and we obtain their explicit solution for the two- and four-particle cases. Similarly to the standard BPTFs discussed in Sec. 4, from the exchange properties of the $Z_2$-composite twist fields (16), we can easily write down their form factor bootstrap equations. Importantly, these equations include the non-trivial phase $e^{i\pi\kappa_O}$ in the monodromy properties due to the insertion of the disorder field $\mu$. The asymptotic states (27) that enter in the definition of the twist field FFs are constructed from the fields $\psi, \bar\psi$, which are odd under the $Z_2$ transformation, i.e. $\kappa_\psi = 1$, and therefore we must take into account a phase $e^{i\pi}$ when moving between replicas. However, as we discussed around Eqs. (18) and (19), we have two different ways to introduce it: either as a whole phase $e^{i\pi}$ in each replica, which is valid only for odd $n$, or inserting it only in the last one. These two approaches lead to slightly different form factor bootstrap equations. In this section, we comment on both choices. In particular, we will show that the two conventions give the same result for the form factors up to some $(-1)$ factors which do not influence the final physical result.
Let us denote by $F^{T_\mu|\nu}_{\gamma}(\theta; n)$ the form factors of the composite twist fields $T^\mu_n$. If we introduce the phase $e^{i\pi}$ on the last replica only, that is, taking the exchange relations (19), the bootstrap equations take the form of Eqs. (67)-(70). On the other hand, if we introduce the same flux between all the copies, we obtain Eqs. (71)-(74), where the notation is the same as for the standard BPTFs discussed in Sec. 4; in particular, we recall that $\gamma_i = R, L$. Both the Lorentz spin and the $Z_2$ charge of the composite BPTFs are zero. Observe that the phase $(-1)$ in Eqs. (72) and (74), as well as in Eqs. (68) and (70), is due to the non-trivial monodromy of the fields $\psi, \bar\psi$ with $T^\mu_n$ (compare with the analogous axioms for the standard BPTF in Eqs. (33) and (35)).
Similarly to the standard BPTF, only FFs with an even number of 'R' and 'L' particles are non-vanishing and, consequently, the odd-particle FFs are zero. Additionally, the FFs of the composite BPTF satisfy the momentum space clustering property in the same form as the FFs of the standard BPTF in Eqs. (38) and (39).
Analogously to what we have done in Sec. 4, let us assume the ansatz (75) for the composite twist field FFs, in which, for simplicity, we place every particle on the first replica, where we have $r$ right-moving and $l$ left-moving particles, and $x_i = e^{\theta_i/n}$, $y_i = e^{-\theta'_i/n}$ and $\omega = e^{i\pi/n}$ as previously. The cyclic permutation and the exchange axioms are automatically satisfied if the equalities (76) are imposed, that is, if the minimal form factors satisfy the non-trivial monodromy due to the insertion of the external flux. The solution of Eq. (76) can be easily obtained from the standard minimal form factor in Eq. (42) by simply introducing a factor $2\cosh(\theta/2n)$ which changes the monodromy properties [63], $f^\mu_{\gamma\gamma}(\theta; n) = 2\cosh(\theta/2n)\, f_{\gamma\gamma}(\theta; n)$ (77). For $f^\mu_{RL}(\theta; n)$, instead, we have two possible choices: we might either choose the unaltered equation without the '$-1$' monodromy, with the solution $f^\mu_{RL} = f_{RL}$, or we can introduce the monodromy such that the solution becomes Eq. (80). As we will later see in Sec. 6, the exponential factor in Eq. (80) also appears in the roaming limit approach. Importantly, the two choices for the minimal form factor $f^\mu_{RL}$ in Eqs. (78) and (80) are completely equivalent, because for the composite BPTFs the number of 'R' and 'L' particles is always even. This implies that, in a FF, we always have a product of an even number of $f^\mu_{RL}$ terms, so that the $(-1)$ phases always mutually cancel. In order to connect in a clearer way with the roaming limit that we discuss in Sec. 6, we choose Eq. (80) as the minimal form factor in the ansatz (75) for the composite BPTF. If we had taken Eq. (78), we would have obtained different expressions for the functions $Q^{T_\mu}_{r,l}$, which would differ only by products of $x_i$ and $y_j$ with the same integer powers.
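The monodromy-changing role of the $2\cosh(\theta/2n)$ factor can be made explicit: since $\cosh(\theta/2n + i\pi) = -\cosh(\theta/2n)$, the factor picks up a sign under a shift $\theta \to \theta + 2\pi i n$, i.e. one full turn through the $n$ replicas. A one-line numerical check (the values of $n$ and $\theta$ are arbitrary):

```python
import numpy as np

n, theta = 3, 0.8 + 0.3j                       # arbitrary replica number and rapidity
factor = lambda th: 2 * np.cosh(th / (2 * n))  # monodromy-changing factor
print(factor(theta + 2j * np.pi * n) / factor(theta))  # -> (-1+0j)
```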
We remark that, in contrast to what happened in Sec. 4, the ansatz (75) does not guarantee that $Q^{T_\mu}_{r,l}$ is actually a polynomial. In fact, as we will explicitly show, this function is in general a rational function. The reason for this is the monodromy-changing factor introduced in the minimal form factors in Eqs. (77) and (80). These terms possess additional zeros that cancel out with the denominator of the function $Q^{T_\mu}_{r,l}$, guaranteeing that the pole structure remains compatible with the bootstrap axioms.
Two-particle form factors and form factors with only one species
Similarly to the standard BPTFs, for the composite BPTFs the only non-vanishing form factors at the two-particle level are those containing a pair of 'R' or 'L' particles, which coincide with the analogous expressions of the massive Ising QFT [8]. Alternatively, they can easily be obtained from the bootstrap equations, either from Eqs. (67), (68) or from Eqs. (71), (72). The kinematic residue equations (69) or (73) relate the FFs to the vacuum expectation value of $T^\mu_n$. The solution of these equations can be immediately written by inserting into the two-particle FF of the standard twist field (53) the minimal form factor of Eq. (77), which takes into account the non-trivial monodromy of $T^\mu_n$, obtaining Eq. (83), where, for simplicity, we have placed every particle on the first replica. Notice that Eq. (83) is not in the form of our ansatz (75), but it can be recast accordingly as Eq. (84), where an analogous expression for the 'LL' form factor holds upon replacing $x_i$ with $y_i$. Since in the above formula each particle lives on the first replica, we again used the simplified notation for the FF, namely we write $F^{T_\mu}_{2,0}$, indicating two right-moving particles on the first replica. In the following, we shall use this convention whenever all the particles are on the first replica.
The two-particle FFs with arbitrary replica indices can be straightforwardly obtained from the result (84) with all the particles on the first replica. Importantly, the two flux conventions, Eqs. (67)-(70) or Eqs. (71)-(74), only differ in some $(-1)$ factors. The FFs of the anti-twist field $\tilde{T}^\mu_n$ can be simply written following Ref. [60]. If the flux is instead introduced on each replica, the two-particle FFs acquire an additional sign, while the FFs of the anti-twist field $\tilde{T}^\mu_n$ still satisfy Eq. (86). In the computation of the symmetry resolved entropy, the additional factor $(-1)^{k-j}$ always cancels out, leading to the same value for both choices.
Similarly to the treatment of the standard BPTFs, it is easy to see that, in the particular case of form factors that only contain an even number of particles of the same type, that is, the 'RR...RR' and 'LL...LL' form factors, they exactly coincide with the $Z_2$-composite BPTF FFs of the massive Ising theory [8], as occurs in the two-particle case discussed above. Assuming that $\nu_1 \geq \nu_2 \geq \ldots \geq \nu_{2k}$, they can be written in terms of a Pfaffian involving the two-particle FFs, Eq. (88), where $W^\mu$ is an anti-symmetric matrix whose entries are built from the two-particle FFs. For a different ordering of the replica indices $\nu_i$, we can apply the exchange axiom (67) to reorder them in the form $\nu_1 \geq \nu_2 \geq \ldots \geq \nu_{2k}$ and then use Eq. (88). In particular, for the four-particle 'RRRR' or 'LLLL' FF with all particles on the same replica, Eq. (88) takes the form of Eq. (90).
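To make the Pfaffian structure of Eq. (88) concrete, the sketch below assembles a four-particle 'same species' combination from a placeholder two-particle building block; the pairing pattern is the Pfaffian of a $4\times4$ antisymmetric matrix, while the input function `F2` is a stand-in and not the actual Ising expression.

```python
import numpy as np

def F2(theta):
    # Placeholder for the two-particle form factor F(theta_i - theta_j);
    # any function odd in theta illustrates the antisymmetric pattern.
    return np.tanh(theta / 2.0)

def pfaffian4(W):
    # Pfaffian of a 4x4 antisymmetric matrix:
    # pf(W) = W01*W23 - W02*W13 + W03*W12.
    return W[0, 1] * W[2, 3] - W[0, 2] * W[1, 3] + W[0, 3] * W[1, 2]

def F4_same_species(thetas):
    W = np.zeros((4, 4))
    for i in range(4):
        for j in range(i + 1, 4):
            W[i, j] = F2(thetas[i] - thetas[j])
            W[j, i] = -W[i, j]
    return pfaffian4(W)

print(F4_same_species(np.array([0.3, 0.9, -0.5, 1.7])))
```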
Solution for the four-particle 'RRLL' form factor
To obtain the first non-zero form factors that couple right- and left-moving particles, we have to move to the four-particle level, at which we find $F^{T_\mu|\nu_1\nu_2\nu_3\nu_4}_{RRLL}$ and all the possible permutations of 'R' and 'L'. As for the standard BPTFs, it is sufficient to determine only the 'RRLL' form factor with all the particles on the first replica. In fact, using the exchange relation (67) we can directly get any other sequence of the particle species and, applying the cyclic permutation axiom (68), we can find the FFs for particles living on different replicas. If we denote the form factor $F^{T_\mu|1111}_{RRLL}$ as $F^{T_\mu}_{2,2}$, then it reads as in Eq. (91), according to the ansatz (75).
Applying now the residue axiom to Eq. (91), we can derive recursive equations for the normalisation factor $H^{T_\mu}_{2,2}$ and the function $Q^{T_\mu}_{2,2}$, in a similar way as we did for the standard BPTFs in Sec. 4.2. In App. A.2, we find the solutions (92) and (93) for $H^{T_\mu}_{2,2}$ and $Q^{T_\mu}_{2,2}$, respectively. As we explain in App. A.2, the set of equations that allows one to obtain $Q^{T_\mu}_{2,2}$ recursively is under-determined. This ambiguity in the solution can be fixed by requiring that the form factor $F^{T_\mu}_{2,2}$ reduces to the one of the disorder field $\mu$ in the single-replica limit $n \to 1$. One can further check that the normalisation term $H^{T_\mu}_{2,2}$ also matches the one of $\mu$ in that limit. In Sec. 7, we use the ∆-sum rule to provide an additional test of the validity of our solution.
Roaming limit of twist field form factors
In the previous sections, we computed the form factors of the twist fields along the tricritical Ising massless flow directly from the solution of their bootstrap equations. In this section, in order to provide a non-trivial check of our expressions, we present an alternative derivation based on the roaming limit of the sinh-Gordon model. After reviewing the general notions of this approach, we will then use it to recover the form factors in the massless flow as the limit of those in the sinh-Gordon theory.
Let us first briefly introduce the sinh-Gordon (ShG) model. This theory is defined via the Euclidean action (94). It is the simplest interacting integrable relativistic QFT and has been the subject of intense research activity for many decades, see, e.g., [81,116-123]. The spectrum of the model consists of multi-particle states of a massive bosonic particle with the dispersion relation $E = m\cosh\theta$, $p = m\sinh\theta$, where $m$ is the particle mass. The two-particle S-matrix is given by [116]
$$S_{\rm ShG}(\theta) = \frac{\tanh\frac{1}{2}\left(\theta - \frac{i\pi B}{2}\right)}{\tanh\frac{1}{2}\left(\theta + \frac{i\pi B}{2}\right)},$$
where $B$ is defined in terms of the coupling $g$ appearing in the action in Eq. (94). For the ShG model, the form factors of various operators are known [112,117,118], including those of the standard and the $Z_2$-composite BPTFs in the $n$-replica theory [8,19,63]. It was observed in Ref. [39] that the S-matrix of the sinh-Gordon model can be analytically continued from the self-dual point $B = 1$ to complex values $B(\theta_0)$, parameterised by a real number $\theta_0$, and that the resulting S-matrix defines a new, perfectly valid scattering theory, which has been called the staircase or roaming trajectories model. Using the Bethe ansatz, it was found that, as the real parameter $\theta_0$ increases, the c-function shows a 'staircase' of well-defined plateaux with values equal to the central charges of the $M_p$ unitary diagonal minimal models and, in the intervals between the plateaux, the flow was found to approximate the $A_p$ massless crossovers $M_{p+2} \to M_{p+1}$ generated by the perturbing field $\phi_{1,3}$ discussed in Sec. 2. Therefore, in the roaming limit $\theta_0 \to \infty$, the staircase model describes a renormalisation group flow that passes by the successive minimal models $M_p$. The final point of the flow is a massive Ising theory. In another work [40], it was shown that the c-function defined by the c-theorem [29,30] using a spectral series in terms of the form factors of the trace of the stress-energy tensor $\Theta$ [112,117] presents the same behaviour. In addition, it was explicitly demonstrated that the FFs of the stress-energy tensor for the $A_p$ massless flows can be reconstructed from those of the ShG model. Importantly, for this construction to work, the rapidities in the FFs also have to be shifted by $\pm k\theta_0/2$ with specific integers $k$. A follow-up publication targeted specifically the $A_2$ tricritical-critical Ising flow (8) and showed that the form factors of the order and disorder operators along the flow can also be obtained via the roaming limit of the appropriate ShG FFs; although not published, the correspondence holds for the $\varepsilon$ field of the flow as well. As we have said, the staircase model also incorporates the massive Ising field theory, which is regarded in this context as a flow from the critical Ising fixed point to a massive one, and in which the consecutive RG flows between the multicritical Ising CFTs terminate. Accordingly, it was demonstrated in [119] that the FFs of the massive Ising theory can be obtained from the ShG FFs by merely taking the limit of Eq. (97), i.e., scaling the rapidity variables within the FFs. In contrast, for flows other than the $A_2$ flow and the massive Ising QFT, only the FFs of the field $\Theta$ were found to be reproduced by the roaming limit procedure; hence the validity of this approach is not a priori obvious.
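Two basic properties of this S-matrix, unitarity $S(\theta)S(-\theta) = 1$ and unimodularity $|S(\theta)| = 1$ for real $\theta$, can be checked numerically, and both survive the staircase continuation. In the sketch below, the parametrisation $B(\theta_0) = 1 + 2i\theta_0/\pi$ is one common convention for the roaming trajectories and is an assumption here; it should be checked against Eq. (97).

```python
import numpy as np

def S_shg(theta, B):
    # Sinh-Gordon S-matrix: tanh((theta - i*pi*B/2)/2) / tanh((theta + i*pi*B/2)/2).
    a = 1j * np.pi * B / 2
    return np.tanh((theta - a) / 2) / np.tanh((theta + a) / 2)

thetas = np.linspace(-5, 5, 201)

# Unitarity S(theta) S(-theta) = 1 at the self-dual point B = 1.
print(np.max(np.abs(S_shg(thetas, 1.0) * S_shg(-thetas, 1.0) - 1)))

# Roaming continuation (convention assumed): B = 1 + 2i*theta_0/pi.
theta0 = 10.0
B_roam = 1.0 + 2j * theta0 / np.pi
print(np.max(np.abs(np.abs(S_shg(thetas, B_roam)) - 1)))  # |S| = 1 survives
```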
Regarding the replicated staircase model, in Ref. [19] the form factors of the standard BPTFs in the sinh-Gordon model have been computed up to the four-particle order. While the explicit roaming limit of these form factors was not carried out, they were used in the computation of the conformal dimension of the BPTFs by applying the ∆-sum rule [31], which we discuss in more detail in Sec. 7. In particular, it was found that the two-particle contribution correctly reproduces the first 'step' of the staircase, from the critical Ising CFT to the massive theory, while the four-particle one gives the result for the massless flow $A_2$ from tricritical to critical Ising [19]. This result reveals that the roaming limit also holds for the branch point twist fields of the replicated theory. In the following, we take a step further by explicitly performing the roaming limit of the form factors of both the standard and the composite BPTFs up to the four-particle order, showing that they reduce to the exact expressions in the $A_2$ flow (8) obtained via the bootstrap program in Secs. 4 and 5. Proving the correspondence at the first few non-trivial particle levels provides strong evidence that the roaming limit is valid for any (composite) BPTF form factor in the $A_2$ massless flow. In the ShG model, the $k$-particle form factors of the BPTFs can be parameterised in the usual fashion, Eq. (98) [19], where each particle is put on the first replica and the superscript $\tau = 0, \mu$ denotes the standard or the composite BPTF, respectively. The minimal form factor for the standard twist field, $f_{\rm ShG}(\theta; n)$, is given by Eq. (99), while the one for the composite field is obtained by including an appropriate monodromy-changing factor, analogously to what we have done in Eq. (77) for the massless flow. The minimal FF $f_{\rm ShG}$ in Eq. (99) is normalised in such a way that $f_{\rm ShG}(\pm\infty, B; n) = 1$. The roaming limit construction of the twist field FFs in the massless flow can then be formulated as in Eq. (101), where we split the rapidities in the sinh-Gordon FF into $r$ right-moving ($\theta$) and $l$ left-moving ($\theta'$) ones, which we shift by $\theta_0$ and $-\theta_0$ respectively. The function $B(\theta_0)$ is defined in Eq. (97).
In the rest of this section, we explicitly demonstrate the validity of the limit in Eq. (101) up to the four-particle level. Let us first focus on the ShG minimal FFs. Based on Ref. [40], it can be shown that, in the roaming limit (101), the minimal form factor in Eq. (99) reduces to Eq. (102), where $f_{RL}$ is the minimal form factor (45) in the massless flow and the normalisation constant $N_n$ was found in Eq. (47). Similarly, for the composite twist field one has Eq. (103), with $f^\mu_{RL}$ given by Eq. (80). Note that, according to the definition (101), only the above cases are the relevant limits for the minimal form factor. Some of them involve an exponential factor $e^{\pm\theta_0/(2n)}$, but we anticipate that similar factors originate from other terms of the entire FF and eventually cancel. From Eqs. (102) and (103), it is easy to see that the roaming limit in the two-particle case correctly reproduces the form factors of the massless flow. This is clearer when the two-particle ShG FF is rewritten as a function of the rapidity difference. In the limit (101), this expression reproduces either Eq. (53) (for the standard BPTF) or Eq. (83) (for the composite BPTF) in the 'RR' and 'LL' cases, while it vanishes in the 'RL' case because of the diverging denominator.
Roaming limit of the four-particle FFs of the standard BPTF
It is also not difficult to show that the four-particle 'RRRR' and 'LLLL' FFs are reproduced by the roaming limit (101). If we consider the standard BPTFs, using as $Q^T_4$ the polynomial determined in [19] and reviewed in App. B.1, we can proceed in the following way. According to Eq. (101), the 'RRRR' or 'LLLL' form factors, which only contain right or left movers, are given by the limit (105). In this limit, the denominator of the ShG FF (98) does not change its form but acquires the diverging factor $e^{6\theta_0/n}$, Eq. (106), whereas for the polynomial $Q^T_4$ we obtain the lengthy expression (107), which diverges exponentially as $e^{8\theta_0/n}$ when $\theta_0 \to \infty$. In addition, taking into account the limit of the minimal form factor reported in Eq. (102), we have Eq. (108), while the normalisation factor $H^T_n$ behaves as
$$H^T_n \longrightarrow \frac{e^{\theta_0/n}}{4}\, \frac{e^{i\frac{6\pi}{n}}}{n^2\cos^2\frac{\pi}{2n}}. \quad (109)$$
Counting the divergent factors $e^{\theta_0/n}$ in the final expressions of Eqs. (106), (107), (108) and (109), we can conclude that the 'RRRR' ('LLLL') roaming limit form factor of the ShG twist field is finite. In fact, putting all the above results together, it is straightforward to check that the limit (105) works and Eq. (61) is exactly reproduced.
Turning to the case of the 'RRLL' form factor, we have to consider the limit (110). For the denominator, the limit gives Eq. (111), whereas for the polynomial $Q^T_4$ we obtain the expression (112), which we can rewrite as Eq. (113). For the product of the minimal FFs, we find Eq. (114). The limit of the normalisation factor $H^T_n$ gives the same result as in Eq. (109). Combining (111), (113), (114) and the normalisation (109), it is immediate to see that the divergent exponential factors $e^{\theta_0}$ mutually cancel and that the roaming limit yields Eq. (65), confirming the validity of Eq. (110).
Roaming limit of the four-particle FFs of the composite BPTF
Unlike the four-particle form factor of the standard twist field, the corresponding one of the composite twist field was not previously known in the sinh-Gordon theory. In App. B.2, we compute this form factor by constructing and solving the bootstrap equations, starting from the usual ansatz in Eq. (98). Notice that, as we discuss in App. B.2, now the function $Q^{T_\mu}_4$ is not a polynomial but a rational function. At the four-particle level, the explicit expressions of the normalisation $H^{T_\mu}_n$ and of the function $Q^{T_\mu}_4$ are reported in Eq. (205) and Eqs. (213), (214), respectively. Let us first consider the form factors 'RRRR' and 'LLLL', containing only either right- or left-moving particles. Following Eq. (101), we see that we need to compute the limit of $F^{T_\mu}_4(\theta + \theta_0/2, B(\theta_0); n)$. As in Sec. 6.1 for the standard twist field, we study separately this limit for the different terms that constitute the composite ShG form factor in Eq. (98). Applying the limit of the minimal composite form factor reported in Eq. (103), we obtain an expression which we rewrite in terms of $x_i = e^{\theta_i/n}$. It is convenient to take the limit of the function $Q^{T_\mu}_4$, reported in Eqs. (213), (214), together with that of the denominator of the ansatz (98) and of the minimal form factor. We find that this limit reproduces the form factor in Eq. (90) up to a multiplicative exponential factor $e^{-\theta_0/n}$, Eq. (116). Finally, we see that the normalisation term $H^{T_\mu}_n$, whose explicit expression is given in Eq. (205), cancels precisely the multiplicative factor in Eq. (116), such that the roaming limit correctly reproduces the 'RRRR' (or 'LLLL') form factor in Eq. (90), as expected. Considering now the 'RRLL' form factor, we can see that it can be obtained from the limit of Eq. (101) in the corresponding particular case. The joint limit of the denominator of the ansatz (98) and of the polynomial $Q^{T_\mu}_4$ reported in Eqs. (213), (214) reproduces the polynomial $Q^{T_\mu}_{2,2}$ of the massless flow, Eq. (93). For the normalisation $H^{T_\mu}_n$ and, using Eq. (103), for the minimal form factors, we find the analogous limits. Putting everything together, we find that the limit of the 'RRLL' form factor is again finite, as expected, confirming the validity of the roaming limit also for the composite twist field $T^\mu_n$.
Standard and symmetry resolved entropies for the massless flow
In this section, we use the form factors computed in the previous sections to study the behaviour of the correlation functions of the standard and composite twist fields. After calculating the running dimension of the field along the renormalisation flow, we investigate the entanglement entropy, comparing it with expected results.
Running dimension from the ∆-sum rule
As we discussed in Sec. 2, the model under examination interpolates between the tricritical Ising CFT $M_4$ in the UV and the Ising CFT $M_3$ in the IR, providing the simplest example of a massless renormalisation flow between two A-series diagonal minimal models [35][36][37]. At both fixed points, the properties of the standard twist field $T_n$ and of the $Z_2$-composite one $T^\mu_n$ are known from conformal invariance [6], as we reviewed in Sec. 3. In particular, the conformal dimension of the standard twist field is given by Eq. (22), while the dimension of the composite one is in Eqs. (24), (25) for the fixed points of interest. The knowledge of the exact conformal dimensions of the fields at the IR and UV fixed points of the massless flow provides a non-trivial check of the correctness of the form factors via the ∆-sum rule [31]. Let us start by considering the twist field $T_n$. Along a renormalisation group flow, the difference of the conformal dimensions of the field $T_n$ in the IR and in the UV, $h^{\rm UV} - h^{\rm IR}$, is given by an integral of the two-point function of $T_n$ with the trace of the stress-energy tensor $\Theta$, Eq. (122) [31].

Table 2: Comparison of the difference of the conformal dimensions at the UV and IR fixed points, $h^{\rm UV} - h^{\rm IR}$, with the results of the ∆-sum rule, for both the standard twist field $T_n$ and the composite one $T^\mu_n$. The 'CFT' columns collect the exact result fixed by conformal invariance in Eqs. (22), (25), while '∆-sum rule' is the result of the ∆-sum rule truncated at four-particle order, reported in Eq. (125). The column '$n$' indicates the number of replicas. We can see that at the four-particle order we already find good agreement for all the numbers of replicas considered.

In order to compute the ∆-sum rule (122), we need the form factors of $\Theta$,
which in the case of the (non-replicated) massless tricritical flow have been obtained in [38]. In particular, in a massless model, all the form factors of $\Theta$ containing either only left-movers ('L') or only right-movers ('R') identically vanish. When considering the replicated theory, we have to take the sum of $\Theta$ over each copy. Therefore, the only non-vanishing form factors are the ones with identical replica indices, $F^{\Theta|11\ldots1}_{r,l} = F^{\Theta}_{r,l}$. After integrating out the distance $t$ in the spectral expansion of the ∆-sum rule (122), we finally find Eq. (123) [8,31], where $E$ is the energy (reported in Eq. (10) for a massless model).
The leading non-trivial form factor of $\Theta$ is the four-particle 'RRLL' one, coupling two right- and two left-movers, Eq. (124) [38], where $\gamma$ is the Euler-Mascheroni constant and $f_{RL}(\theta) = f_{RL}(\theta; n = 1)$ is the minimal form factor in Eq. (45) for a single replica $n = 1$. Since all form factors have an even number of left- and right-moving particles, Eq. (124) is the only contribution at the four-particle level [38]. We can then consider the approximation (125), where $F^\Theta_{2,2}$ is given in Eq. (124) and $F^{T|1111}_{2,2}$ is the twist field FF that we obtained in Eqs. (65), (66). Analogous expressions hold for the ∆-sum rule of the composite twist field $T^\mu_n$, replacing $F^{T|1111}_{2,2}$ with the form factor $F^{T_\mu|1111}_{2,2}$ of the composite field reported in Eqs. (91), (92), (93). In Table 2, we compare the exact difference of the conformal dimensions of both the standard and the composite twist fields with the result of the ∆-sum rule at the four-particle order (125) for $n = 2, 3, 4$ replicas. The integral in Eq. (125) has been computed numerically using the Divonne routine of the library Cuba [124] for the software Mathematica, with a cut-off $\theta_j \in [-60, 60]$ for the rapidities. Already at the four-particle order we find a good agreement between the exact CFT result and the ∆-sum rule, confirming the correctness of the form factors computed in Sec. 4 and the relatively small weight carried by the higher-order FFs. This is consistent with Ref. [19], where it was found that, for the staircase model (reviewed in Sec. 6), the four-particle contribution obtained in the roaming limit reproduces the difference of the conformal dimensions of the standard twist field along the massless flow (8).
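For readers without access to Mathematica/Cuba, a rough Python stand-in for this kind of four-dimensional rapidity integral is sketched below with plain Monte Carlo over the same cut-off box. The integrand here is an invented placeholder with the qualitative exponential suppression in the total energy, not the actual product of $F^\Theta_{2,2}$ and $F^{T|1111}_{2,2}$ entering Eq. (125), and in practice an adaptive routine such as Divonne is needed for acceptable accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def integrand(th):
    # Stand-in spectral density: two right-movers th[:, :2], two left-movers
    # th[:, 2:], massless energy E (M = 1), and an invented smooth profile.
    E = 0.5 * (np.exp(th[:, :2]).sum(1) + np.exp(-th[:, 2:]).sum(1))
    return np.exp(-E) / (1.0 + E) ** 2

N = 200_000
box = rng.uniform(-60.0, 60.0, size=(N, 4))   # same rapidity cut-off as in the text
vals = integrand(box)
vol = 120.0 ** 4
print(f"estimate = {vol * vals.mean():.3e} +/- {vol * vals.std() / np.sqrt(N):.3e}")
```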
The ∆-sum rule (122) can be modified to give a running dimension of the (composite) twist fields along the flow, Eq. (126) [19,89], where now the integral over the distance $t$ starts from a finite length $\ell$. As we did before in Eq. (123), we expand the two-point function in form factors and we integrate over the distance $t$, obtaining an expression [19,89] in which, in the massless flow, the leading contribution is given by the 'RRLL' form factors, Eq. (128). A running ∆-theorem (126) can also be formulated for the composite twist field by considering the appropriate form factor $F^{T_\mu|1111}_{2,2}$ of that operator, obtained in Eqs. (91)-(93). In Ref. [14], it was argued that the running dimension $h(\ell)$ of the branch point twist field $T_n$ provides an entropic c-function which is monotonically decreasing along the flow. In Fig. 1a, we report the result of the numerical integration of the running ∆-theorem in Eq. (128) for the standard branch point twist field $T_n$ with $n = 2, 3, 4$ replicas. We observe that, already at the four-particle order, the running dimension monotonically decreases with $\ell$ for all the numbers of replicas considered. In Fig. 1b, we plot the running dimension of the composite twist field $T^\mu_n$ at four-particle order. In particular, for $n = 2$ replicas, the dimensions of the twist field and of the charge operator conspire to give the same ultraviolet and infrared conformal dimensions for the composite twist field. Remarkably, we see that along the flow the running dimension varies and is not monotonic in $\ell$, differently from the standard twist field. In the inset, we zoom in on the region of small running dimension, which shows that the behaviour is non-monotonic also for larger numbers of replicas $n$.
Cumulant expansion of the entanglement entropy
As a main result of this paper, in this section we discuss the form factor expansion of the entanglement entropy along the massless renormalisation group flow. As we will show, the formal expressions require a suitable regularisation, after which the form factors containing particles with the same chirality reproduce the logarithmic entanglement entropy of the infrared Ising CFT, while those that include particles of different chirality provide the corrections along the flow.

Figure 1: Semi-logarithmic plot of the running conformal dimension $h_{T^\tau}$ obtained using the ∆-sum rule at four-particle order, Eq. (128). The left panel shows the result for the standard twist field $T_n$, the right panel for the composite one $T^\mu_n$. The dotted gray lines indicate the exact difference between the UV and IR conformal dimensions obtained from Eq. (22) (left) and from Eqs. (25) (right). As expected, for small distances $\ell$, the running conformal dimension approaches the exact UV results, as also reported in Table 2. On the left, we see that the running dimension of the standard twist field decreases monotonically, consistently with its behaviour as an entropic c-function. On the right, the inset zooms in on the running dimension of the composite twist field, which is not monotonic along the flow.

Instead of studying the correlator of the twist field, we find it more convenient to directly apply its form factor expansion to the Rényi entanglement entropies defined in Eq. (1). Plugging the spectral series (31) of the twist field correlator into Eq. (20) and expanding the logarithm of the Rényi entropy order by order in the number of particles, we obtain the cumulant expansion (129) [18,125], where, in analogy with Ref. [125], we have introduced the cumulants (130), obtained by subtracting all the possible clusterisations at large rapidities [18,125]. Recall from the discussion around Eqs. (38), (39) that, due to the clustering property, at large rapidity differences between the particles the form factor factorises into a product of form factors with fewer particles. For example, up to the four-particle level, the connected components take the form given in Eqs. (131)-(133). By definition, the connected form factors $f^{T|j_1\ldots}_{r,l}$ vanish at large rapidities. As we will see, this improves the convergence of the integral in Eq. (130).
In the expansion (129), we recognise two different kinds of cumulants: those containing only form factors diagonal in the chiralities, $c^T_{r,0}$, $c^T_{0,l}$, which we will call non-interacting cumulants, and the ones that couple left- and right-movers, which we will call interacting. In the rest of the section, we treat the two kinds of terms separately since, as we will see, they give different contributions to the entanglement entropy (129).
Non-interacting cumulants
Let us first focus on the non-interacting cumulants. As we saw in Sec. 4, in the massless flow, the form factors containing either only right- or only left-movers are identical to those of the massive Ising theory except for the vacuum expectation value $\langle T_n \rangle$, implying that their connected components are identical as well. Given this identity, we can analyse them by applying the same strategy as in Ref. [18] for the massive Ising theory, which we also report in App. C.
Using the Pfaffian structure of the form factors in Eq. (59), it was shown that the non-interacting cumulants $c^T_{r,0}$ have the general expression of Eq. (134) [10,17,18,62], where $\theta_{ij} = \theta_i - \theta_j$, we have summed over $j_1$, and we have introduced the notation of Eq. (135). From the form of Eq. (134), with all terms cyclically connected [18,62], it is clear why they are known as fully connected. Importantly, Eq. (134) holds both for the massless tricritical-critical flow and for the massive Ising theory. The only difference between the cumulants of these two models is the form of the energy $E$ appearing in the exponential factor. This difference has, however, a major effect on the integral in Eq. (134). For simplicity, we can start by analysing the two-right-mover cumulant $c^T_{2,0}$; the generalisation to higher particle numbers will be straightforward. In our massless flow, as already recognised in Ref. [38] for a different correlation function, the two-fold integral in Eq. (134) is IR divergent due to the absence of a mass gap. In fact, in the relative and center-of-mass coordinates, $\theta_{12} = \theta_1 - \theta_2$ and $A = (\theta_1 + \theta_2)/2$, the energy (10) of two right-moving particles takes the form $E = M e^{A} \cosh(\theta_{12}/2)$. For right-movers, the IR region $E \to 0$ corresponds to large and negative center-of-mass rapidity, $A \to -\infty$. Since the form factors do not depend on $A$, the integrand of Eq. (134) tends to a non-zero constant for $A \to -\infty$, leading to a divergence when the integral over $A$ is performed.
In order to cure this IR divergence, we introduce a cut-off $\Lambda$ in the center-of-mass rapidity $A$. Since the form factors do not depend on $A$, the resulting integral can be cast in terms of the exponential integral function ${\rm Ei}(x)$, Eq. (137). We see that $\Lambda$ plays the role of a cut-off at large distances, with $M\ell \ll \Lambda$. In this limit, using the expansion (138), we obtain a logarithmic dependence on the interval length $\ell$, with $z_2$ the function defined in Eq. (140). Remarkably, up to an additive constant and the large-distance cut-off $\Lambda$, the sum of the left- and right-moving two-particle cumulants in our massless flow, $c^T_{2,0} + c^T_{0,2}$, is equal to the UV limit of the two-particle cumulant of the massive Ising model (cf. Eq. (221) in App. C). This is consistent with the expectation that, in the IR, the contributions of the interacting cumulants vanish because the flow leads to the critical Ising fixed point. As such, we expect that at large distances the non-interacting cumulants completely reproduce the logarithmic entanglement entropy of the Ising CFT.
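The regularised integral over the centre-of-mass rapidity can be verified directly. With the two-right-mover energy $M\ell\, e^{A}\cosh(\theta_{12}/2)$ in the exponent, the substitution $u = M\ell\cosh(\theta_{12}/2)\, e^{A}$ maps $\int_{-\Lambda}^{\infty} dA\, e^{-u(A)}$ onto the exponential integral $E_1(x) = -{\rm Ei}(-x)$, whose small-argument expansion $E_1(x) \approx -\gamma - \log x$ produces both the $\Lambda$ cut-off term and the $\log M\ell$ dependence. A minimal numerical check (prefactors of the full cumulant omitted):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

Ml, th12, Lam = 0.01, 0.7, 12.0           # M*ell, rapidity difference, cut-off
c = Ml * np.cosh(th12 / 2.0)              # E = c * exp(A) for two right-movers

# Direct integral over the centre-of-mass rapidity A (upper limit effectively infinite).
direct, _ = quad(lambda A: np.exp(-c * np.exp(A)), -Lam, 60.0)

# Substitution u = c*exp(A):  E_1(c e^{-Lambda}) = -Ei(-c e^{-Lambda}).
closed = exp1(c * np.exp(-Lam))

# Small-argument expansion: E_1(x) ~ -gamma - log(x) = Lambda - gamma - log(c).
gamma = 0.5772156649015329
approx = Lam - gamma - np.log(c)

print(direct, closed, approx)             # three nearly equal numbers
```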
Moving to higher-particle cumulants $c^T_{r,0}$, we expect a similar structure. In the presence of more than two particles, a convenient set of coordinates is again provided by the center-of-mass rapidity $A = \frac{1}{k}\sum_{j=1}^k \theta_j$ and the differences between the rapidities, $\theta_{j,j+1} = \theta_j - \theta_{j+1}$, with Jacobian equal to one. For convenience, we further define the rapidities $\xi_j$ in the center-of-mass frame of reference, Eq. (141), which can be shown to depend only on the rapidity differences. In the massless flow, the $r$-fold integral in Eq. (134) is divergent in the region of large negative center-of-mass rapidity, $A \to -\infty$.
It is important to stress a subtle point. Due to the clustering property (see Eqs. (38) and (39)), the integral of the form factor is divergent in the direction of the sum of any two rapidities $\theta_j$. However, in the cumulants, the non-connected factorised component is subtracted, as in, e.g., Eqs. (132) and (133), guaranteeing that the integral of the connected part converges in those directions. The only remaining divergence is the one in the direction of large negative center-of-mass rapidity $A$, as happens for the two-particle cumulant (137).
In the center-of-mass coordinates defined before Eq. (141), the energy of $r$ right-moving particles in the massless flow takes the form $E = \frac{M}{2}\, e^{A} \sum_{j=1}^r e^{\xi_j}$. As already done in Eq. (137) for the two-particle case, we again introduce a cut-off $\Lambda$ on the large negative center-of-mass rapidity $A$ and write the integral over it in terms of the exponential integral function ${\rm Ei}(x)$. In the large cut-off limit $\Lambda \gg M\ell$, we can approximate the cumulant using the expansion of the exponential integral in Eq. (138), obtaining the expected logarithmic behaviour, Eq. (144), with $z_k(n)$ given by the expression in Eq. (145), which involves products of the form $w(-\theta_{2l,2l+1} + 2\pi i (j_{2l} - j_{2l+1}))\, w(\theta_{2l+1,2l+2} + 2\pi i (j_{2l+1} - j_{2l+2}))$.
As happens for the two-particle cumulants, the sum $c^T_{r,0} + c^T_{0,r}$ in the massless flow coincides with the UV limit of the $r$-particle cumulant of the massive Ising theory up to additive constants (see Eq. (225) in App. C). In Ref. [18], the resummation of the $z_r(n)$ terms was carried out. Taking Eq. (144) and applying their result, we find the resummed expression for $\sum_{r\ {\rm even}} c^T_{r,0}(M\ell, \Lambda; n) + \sum_{l\ {\rm even}} c^T_{0,l}(M\ell, \Lambda; n)$. This shows that, up to additive constants, the sum of the non-interacting left- and right-mover contributions to the entanglement entropy (129) in the massless flow gives the entropy of the Ising CFT at the IR fixed point.
Interacting cumulants
As shown in the previous discussion, the non-interacting cumulants contribute the entropy of the IR Ising CFT; hence the corrections at smaller distances are provided by the interacting cumulants $c^T_{r,l}$, which couple left- and right-movers. In this section, we study the only interacting cumulant at the four-particle level, namely $c^T_{2,2}(M\ell; n)$, defined in Eq. (148), where the 'RRLL' form factor $F^T_{2,2}$ is given in Eq. (65). As for the non-interacting cumulants, the integral of $F^T_{2,2}$ is divergent in the IR limit. However, unlike the previous case, the subtraction of the clusterisations in the connected component $f^{T|1j_2j_3j_4}_{2,2}$ now ensures that the cumulant is convergent and no regularisation is needed.
In Fig. 2, we report the result of the numerical integration of the cumulant $c^T_{2,2}$ for different values of $M\ell$ and for $n = 2, 3$ replicas, performed using the Divonne routine of the library Cuba [124]. In the UV region $M\ell \ll 1$, we expect a leading logarithmic behaviour, since the sum of the interacting and non-interacting form factors should reproduce the logarithmic entanglement entropy of the tricritical Ising UV fixed point. In Fig. 2a, we plot the interacting cumulant $c^T_{2,2}$ in Eq. (148) for $n = 2, 3$ replicas and we compare it with a fit of the numerical points to a logarithmic function $-\alpha_n \log M\ell + C_n$. We perform the fit for $M\ell \leq 2 \times 10^{-4}$ when $n = 2$ and for $M\ell \leq 6 \times 10^{-4}$ when $n = 3$, obtaining the best-fit parameters $\alpha_n$ and $C_n$. From Fig. 2a, we see that for $M\ell \ll 1$ the cumulant is in good agreement with the expected logarithmic behaviour.
To understand the behaviour in the IR ($M\ell \gg 1$), recall from the general introduction of Sec. 2 that, near the IR fixed point, the effective theory describing the massless flow is the $T\bar{T}$ deformation of the critical Ising CFT, as shown in Eq. (9). The entanglement entropies of generic $T\bar{T}$-deformed CFTs have been heavily studied in recent years, see e.g. [95-108]. In particular, in Ref. [97], the entropy of an interval of length $\ell$ in a system of finite size $L$ has been computed perturbatively for a generic $T\bar{T}$-deformed CFT. Let the deformed action be as in Eq. (151), where the $T\bar{T}$ coupling $g$ has dimensions of an inverse mass squared, $g \propto M^{-2}$. The first perturbative correction to the Rényi entanglement entropy of the IR CFT, $(1 - n)\, \delta S^{(1)}_n(\ell, L, g)$, was found in [97] and is reported in Eq. (152), where $\epsilon$ is a non-universal UV cut-off. Here we are interested in the thermodynamic limit $L \to \infty$ of Eq. (152), given in Eq. (153). Comparing the effective Lagrangian (9) with the generic one in Eq. (151), and taking into account that in our case $T = -\frac{1}{2}\psi\partial\psi$ and $\bar{T} = -\frac{1}{2}\bar\psi\bar\partial\bar\psi$, we can conclude that $g = -\frac{4}{\pi^2 M^2}$. Therefore, since the central charge of our IR point is $c = \frac{1}{2}$, Eq. (153) specialised to our massless flow gives Eq. (154). Observe that in the prediction of Eq. (154) the leading correction is of the form $A_n \ell^{-2} + B_n \ell^{-2}\log\ell$. The coefficient $A_n$ is not universal, due to the presence of the UV cutoff $\epsilon$, while the factor $B_n$ is. In particular, for $n = 2, 3$ replicas, its numerical values are reported in Eq. (155); for instance, $B_3 = 0.02096\ldots$ for $n = 3$.
Note also that the leading correction in Eqs. (152), (153), (154) is non-zero only for the $n \geq 2$ Rényi entropies, while it vanishes in the replica limit $n \to 1$ [97]. It is worthwhile to compare the first-order perturbative prediction in Eq. (154) with the leading correction that we obtain here from the form factor cumulant expansion (129), which is given by the interacting cumulant $c^T_{2,2}$. In Fig. 2b, we study this cumulant for $n = 2, 3$ replicas as a function of $M\ell$ and we perform a best fit of the numerical points to a function $A_n \ell^{-2} + B_n \ell^{-2}\log\ell$, for $M\ell \geq 50$ when $n = 2$ and for $M\ell \geq 20$ when $n = 3$. At large distances, we find a good qualitative agreement with the functional form predicted in Eq. (154). However, while for $n = 2$ replicas the numerical result of the fit for $B_2$ is close to the predicted value in Eq. (155), this is not the case for $n = 3$. A possible explanation of this discrepancy is that the higher-particle interacting cumulants $c^T_{r,l}$, which we are neglecting, also contribute to the term $\log\ell/\ell^2$, and their contribution depends on the number of replicas $n$.
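A sketch of the IR fitting procedure just described: since the model $A_n\ell^{-2} + B_n\ell^{-2}\log\ell$ is linear in $(A_n, B_n)$, the coefficients can be extracted by ordinary least squares. The data below are synthetic stand-ins for the numerically integrated cumulant, with invented coefficient values, purely to illustrate the fit:

```python
import numpy as np

rng = np.random.default_rng(1)
ell = np.linspace(20.0, 200.0, 40)                 # IR window, as in the text
A_true, B_true = -0.15, 0.02                       # invented stand-in values
data = (A_true + B_true * np.log(ell)) / ell**2
data += 1e-7 * rng.standard_normal(ell.size)       # mock integration noise

# Linear least squares for c(ell) = A/ell^2 + B*log(ell)/ell^2.
X = np.column_stack([1.0 / ell**2, np.log(ell) / ell**2])
(A_fit, B_fit), *_ = np.linalg.lstsq(X, data, rcond=None)
print(f"A = {A_fit:.4f}, B = {B_fit:.4f}")         # recovers ~(-0.15, 0.02)
```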
Entanglement entropy
Finally, we can put together the results obtained in Secs. 7.2.1 and 7.2.2 to get the total entanglement entropy. In Fig. 3, we consider the Rényi entanglement entropy in the massless flow as a function of $M\ell$ for $n = 2$ and $3$. The results in this figure (represented as symbols) have been obtained with the cumulant expansion (129), including the first 30 non-interacting cumulants $c^T_{r,0}$ and $c^T_{0,l}$ and the leading interacting cumulant $c^T_{2,2}$. In the plot, we also report the expected behaviour of the entanglement entropy when approaching the UV (dashed curves) and IR (dotted curves) fixed points. When approaching the IR, $M\ell \gg 1$, we find a very good agreement between the truncated cumulant expansion and the expected IR asymptotics for both values of $n$. In the UV, $M\ell \ll 1$, the truncated expansion presents a behaviour compatible with a logarithmic divergence in $M\ell$, but it does not quantitatively agree with the UV entropy. This is expected, since the UV limit is the regime where the higher-particle interacting cumulants $c^T_{r,l}$ contribute most significantly, and here we only consider the four-particle one, $c^T_{2,2}$. It would be interesting to include higher-order terms, but this is a challenging task due to the difficulty of computing higher-particle cumulants, which involve an increasing number of multidimensional integrals. Nevertheless, in the light of Fig. 3, we can conclude that including only the leading interacting cumulant is enough to qualitatively observe the crossover between the IR and UV regimes.
Before concluding this section, let us comment on the symmetry resolved entanglement entropy. Also in this case, a cumulant expansion analogous to Eq. (129) holds, upon replacing the form factors with the appropriate ones for the composite twist field that we determined in Sec. 5. In particular, the expansion of the symmetry resolved entanglement entropy contains both the non-interacting cumulants reproducing the Ising CFT and the interacting ones providing the corrections, analogously to what happens for the standard entropy. The symmetry resolved entropy in the massive Ising model has recently been studied in Ref. [62], where it was found that (differently from what happens for the standard BPTF) the cumulants of the composite twist fields are divergent even though the theory is massive; consequently, a regularisation was required. This fact suggests that, also for the massless flow, the regularisation employed for the total entropy is not sufficient to obtain a finite result for the composite twist field. Resolving such a regularisation issue goes beyond the scope of this paper, and we hope to return to it in the future.
Conclusions
In this paper, we investigated the ground-state Rényi entanglement entropies of a single interval in the massless QFT associated to the renormalisation group flow connecting the tricritical and critical Ising CFTs. The corresponding two-point correlation function of branch point twist fields admits a form factor expansion along the flow. We showed that these form factors can be calculated in two different and independent ways. On the one hand, we directly applied the bootstrap approach of Ref. [8] for massive integrable QFTs: based on the symmetries of the theory and the exchange properties of the twist fields, we obtained a set of equations for the form factors and we found a general ansatz that solves them. Alternatively, we obtained the form factors using Zamolodchikov's staircase model, an extension of the sinh-Gordon theory with complex couplings that includes the tricritical-critical renormalisation flow. In this framework, the form factors of several fields in this massless flow have been obtained as the roaming limit of those in the sinh-Gordon theory. We showed that the same strategy works for the branch point twist fields; we derived explicit expressions for the two- and four-particle form factors, from which the higher-particle ones can be recursively derived. As expected, the two approaches gave identical results.
The form factor expansion of the entanglement entropy can be rearranged order by order in the number of particles in terms of cumulants, which are given by the connected parts of the form factors. In this cumulant expansion, we distinguished non-interacting and interacting cumulants. The former only contain particles of the same chirality and give the entropy of the IR Ising CFT; in fact, we found that, after a proper regularisation, they are equal to those that appear in the massive Ising theory. The interacting cumulants, on the other hand, which contain particles of different chiralities, describe the behaviour of the entanglement entropy along the flow. In particular, we checked that, in the UV limit, the lowest-particle interacting cumulant yields the expected logarithmic behaviour in the subsystem size. The IR limit can be described by a $T\bar{T}$ deformation of the Ising CFT, and we showed that, approaching the IR point, the lowest-particle interacting cumulant qualitatively reproduces the first-order result for a generic $T\bar{T}$ perturbation [97].
The massless flow (8) that we studied here is also connected with the $SU(3)_2$-homogeneous sine-Gordon (HSG) model [87]. As shown in Refs. [19,88,89], along the renormalisation group flow, the central charge and the twist field dimension of this theory present two plateaux, analogously to the behaviour of the staircase model considered here. For certain values of the parameters, it was shown that one of these plateaux corresponds to the massless flow from tricritical to critical Ising [19,88]. Since the form factors of the standard twist field in the $SU(3)_2$-HSG model have been obtained in Ref. [19] up to the four-particle order, it would be interesting to recover our results from an appropriate limit of the HSG expressions.
The massless flow connecting the tricritical and critical Ising CFTs enjoys a global $Z_2$ symmetry. In this work, we also considered the composite branch point twist fields associated to this symmetry. Their two-point functions give the charged moments of the reduced density matrix, from which one can determine the symmetry-resolved entanglement entropy. Similarly to the standard twist fields, we obtained their bootstrap equations, which now include the non-trivial monodromy due to the insertion of the charge, and we found a general ansatz for their solution, which allows one to obtain the higher-particle form factors recursively. We further derived them as the roaming limit of the composite twist field form factors of the sinh-Gordon theory. Remarkably, the latter were not previously known in the literature, a gap which we have also filled here.
Our goal now is to identify the four-particle 'RRLL' form factors using the well-known two-particle quantities. For simplicity, we place every particle on the first replica and specify Eq. (159) to the case of interest, Eq. (160). Applying now the residue axiom in Eq. (34) to the ansatz (160), we can derive recursive equations for the normalisation factor $H^T_{2,2}$ and the function $Q^T_{2,2}$. Let us first recall that the minimal form factor $f_{RL}$ satisfies the identity (161). The residue of the denominator of the ansatz (160) takes the form of Eq. (162), from which we can obtain the residue of the entire expression (160), where we used Eqs. (161) and (162). Via algebraic manipulations, we can simplify this formula to Eq. (164). Following the residue axiom in Eq. (34), the residue of the kinematical pole in Eq. (164) has to reproduce the two-particle form factor in Eq. (53). We first recast the latter in the shape of our ansatz as Eq. (165), where we have defined $Q^T_{0,2}(y_1, y_2) = \sigma_2(y_1, y_2) = y_1 y_2$.
Comparing the residue in Eq. (164) with the two-particle form factor in Eq. (165) leads to the recursion equations for $H^T_{2,2}$ and for the polynomial $Q^T_{2,2}$, the latter of which reads
$$Q^T_{2,2}(\omega x, x, y_1, y_2; n) = (\omega^{1/2} x y_1 - 1)(\omega^{1/2} x y_2 - 1) = 1 - \omega^{1/2} x (y_1 + y_2) + \omega x^2 y_1 y_2. \quad (170)$$
We postulate a solution of Eq. (170) that is fully symmetric in $x_1, x_2$ and in $y_1, y_2$, and hence write
$$Q^T_{2,2}(x_1, x_2, y_1, y_2; n) = \sigma_2(x_1, x_2)\,\sigma_2(y_1, y_2) + A\,\sigma_1(x_1, x_2)\,\sigma_1(y_1, y_2) + 1,$$
which is the most general expression compatible with the fact that the form factor has zero Lorentz spin and that the entire FF converges to zero when each rapidity is sent to $\pm\infty$. Setting $x_1 = \omega x$, $x_2 = x$, we get a unique solution for the unknown constant $A$, namely
$$A = -\frac{1}{2\cos\frac{\pi}{2n}}. \quad (172)$$
This means that the entire solution can be written as in Eq. (173), which we can also rewrite as in Eq. (174).
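The determination of the constant $A$ can be reproduced symbolically: impose the kinematic condition (170) on the symmetric ansatz above and solve for $A$. A sketch with sympy, where the ansatz and Eq. (170) are exactly the expressions written above:

```python
import sympy as sp

x, y1, y2, A = sp.symbols('x y1 y2 A')
n = sp.Symbol('n', positive=True)
w = sp.exp(sp.I * sp.pi / n)                      # omega = e^{i pi / n}

# Ansatz Q = sigma2(x)*sigma2(y) + A*sigma1(x)*sigma1(y) + 1 at x1 = w*x, x2 = x.
Q = (w * x**2) * (y1 * y2) + A * (w * x + x) * (y1 + y2) + 1

# Right-hand side of Eq. (170).
rhs = sp.expand((sp.sqrt(w) * x * y1 - 1) * (sp.sqrt(w) * x * y2 - 1))

A_sol = sp.solve(sp.Eq(sp.expand(Q), rhs), A)[0]
print(sp.N(A_sol.subs(n, 3)))                     # ~ -0.57735 = -1/(2 cos(pi/6))
print(sp.N(-1 / (2 * sp.cos(sp.pi / 6))))
```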
A.2 Form factors of the $Z_2$-composite BPTF
Once again, we start our derivation by recalling the ansatz (175) for the $Z_2$-composite BPTF, where we have $r$ right-moving and $l$ left-moving particles and, again, $x_i = e^{\theta_i/n}$ and $y_i = e^{-\theta'_i/n}$. The cyclic permutation and the exchange axioms are already satisfied, since the required monodromy is fulfilled via
$$f^\mu_{\gamma\gamma}(\theta; n) = 2\cosh(\theta/(2n))\, f_{\gamma\gamma}(\theta; n) = \sinh(\theta/n); \quad (177)$$
for $\gamma'$ different from $\gamma$, we similarly satisfy the corresponding equation. In full analogy with what we have done in the previous section for the standard twist field, we apply the residue axiom, Eq. (69), to the ansatz (175) in order to derive recursive equations for the normalisation factors $H^{T_\mu}_{r,l}$ and the functions $Q^{T_\mu}_{r,l}$. Since the denominator of the ansatz (175) is the same as the one for the standard twist field, we can reuse the residue computed in Eq. (162). Using again the property (161) of the minimal form factor, the residue of the four-particle ansatz can be obtained. From the residue axiom (69), this expression must then be compared with the two-particle FF, which we can rewrite using $\frac{y_2^2 - y_1^2}{2 y_1 y_2} = \sinh((\theta'_1 - \theta'_2)/n)$. We then end up with an equation for $Q^{T_\mu}_{2,2}$, Eq. (184), as well as one for the normalisation. Notice that, differently from what happened in Eq. (170) for the standard twist field, the function $Q^{T_\mu}_{2,2}$ is now not a polynomial but a rational function. We write the solution of Eq. (184) as the most general expression compatible with (i) the form factor having zero Lorentz spin and (ii) the entire FF converging to a constant when each rapidity is sent to $\pm\infty$. Setting $x_1 = \omega x$ and $x_2 = x$, we obtain the same solution for $A$ as in the case of the standard BPTF, namely $A = -\frac{1}{2\cos\frac{\pi}{2n}}$, and hence the corresponding rational function $Q^{T_\mu}_{2,2}(x_1, x_2, y_1, y_2; n)^{(0)}$. The ansatz with this $Q^{T_\mu}_{2,2}{}^{(0)}$ satisfies all the FF axioms. Notice that, while the $n \to 1$ limit of the standard BPTF is not well defined, the FFs of the composite twist field reduce in this limit to those of the disorder field, with $Q^{T_\mu}_{2,2} \to (1 + x_1 x_2 y_1 y_2)/(x_1 x_2 y_1 y_2)$. As pointed out in the main text, the solution of the bootstrap equation is in general not unique, since we can often add to our polynomial $Q$ a (non-trivial) kernel solution, that is, another polynomial (or ratio of polynomials) $Q^{(k)}$ which satisfies the homogeneous equation. Polynomial kernel solutions at the two- and four-particle level have been identified in [19]. In particular, the two-particle kernel solution is given in Eq. (189), from which the required four-particle kernel solution for the flow can be constructed by squaring the expression, due to the anticipated symmetry between the variables of the 'RR' and 'LL' particles. Based on the above considerations, we can write the eventual kernel as
$$Q^{T_\mu}_{2,2}(x_1, x_2, y_1, y_2; n)^{(k)} = \frac{\omega^2 n^2}{8\cos^3\frac{\pi}{2n}}\, \frac{\left[x_1 x_2 - \frac{1}{4}\sec^2\frac{\pi}{2n}\,(x_1 + x_2)^2\right] \left[y_1 y_2 - \frac{1}{4}\sec^2\frac{\pi}{2n}\,(y_1 + y_2)^2\right]}{(x_1 x_2 y_1 y_2)(x_1 + x_2)(y_1 + y_2)}, \quad (190)$$
that is, we take the product of two copies of (189) and additionally renormalise the expression by $(x_1 x_2 y_1 y_2)(x_1 + x_2)(y_1 + y_2)$, which does not spoil the kernel property. We chose the pre-factor in such a way that the entire expression $Q^{T_\mu}_{2,2}(x_1, x_2, y_1, y_2; n) = Q^{T_\mu}_{2,2}(x_1, x_2, y_1, y_2; n)^{(0)} + Q^{T_\mu}_{2,2}(x_1, x_2, y_1, y_2; n)^{(k)}$ gives $(1 + x_1 x_2 y_1 y_2)/(x_1 x_2 y_1 y_2)$ in the $n \to 1$ limit, which reproduces $Q^\mu_{2,2}$. The normalisation factors match as well, since $H^\mu_{2,2} = -4 N_1^4 = 2\, e^{-4G/\pi}$.
B Form factor bootstrap for branch point twist fields in the sinh-Gordon model
In this appendix, we first report the known results for the four-particle form factor of the standard twist field in the sinh-Gordon model and we then derive the previously unknown form factor of the composite one.
B.2 Form factors of the $Z_2$-composite BPTF
As we mentioned in the main text, differently from the four-particle form factor of the standard twist field $T_n$, the corresponding form factor of the composite field $T^\mu_n$ in the sinh-Gordon model was not previously known in the literature. In this appendix, we compute this form factor by constructing and solving the bootstrap equations, in full analogy with what we have done in Sec. 5 and in App. A.2 for the case of the massless flow.
Plugging the function $Q^{T_\mu}_4$ of Eqs. (212)-(214) and the normalisation $H^{T_\mu}_4$ of Eq. (205) into the ansatz (196), we finally obtain the four-particle form factor of the composite twist field that we used in Sec. 6.2.
C Cumulant expansion of the entanglement entropy in the massive Ising theory
In this appendix, we review the known results for the form factor expansion of the entanglement entropy in the massive Ising model, obtained in Ref. [18]. In particular, we find a direct relation between the UV limit of the cumulant expansion of the entropy in the massive Ising theory and the non-interacting part of the expansion in the massless flow, studied in Sec. 7.2.1.
In the massive Ising theory, if we denote by $m$ the mass gap, the ground-state Rényi entanglement entropy admits a cumulant expansion [18]. These cumulants can be re-expressed as in Eq. (134) and, therefore, the $k$-particle cumulant $c^T_{k,\rm Ising}$ is similar to the $k$-right- or $k$-left-mover non-interacting cumulants $c^T_{k,0}$, $c^T_{0,k}$ in the massless flow (130), differing only in the energy $E$ in the exponential factor. In the massive Ising theory, the energy of $k$ particles is
$$E(\theta_1, \ldots, \theta_k) = \sum_{i=1}^k m\cosh\theta_i. \quad (219)$$
As shown in Ref. [8], in the UV limit $m\ell \ll 1$, the expansion of the Bessel function $K_0$ reproduces the expected UV logarithmic behaviour of the entanglement entropy up to an additive constant,
$$c^T_{2,\rm Ising}(m\ell; n) \underset{m\ell \ll 1}{\approx} -z_2(n)\log m\ell + {\rm const}, \quad (221)$$
where the function $z_2(n)$ was introduced in Eq. (140). We can now investigate the higher-particle cumulants $c^T_{k,\rm Ising}$. If we write them in terms of the center-of-mass coordinates, we can apply an integral identity and the fact that the form factors only depend on the relative rapidities to integrate out the center-of-mass rapidity $A$; the $\xi_j$ appearing in the result are defined in Eq. (141). In the UV limit $m\ell \ll 1$, by expanding the Bessel function using Eq. (220), we get at leading order the logarithmic behaviour of the entropy expected in the Ising CFT, up to an additive constant,
$$c^T_{k,\rm Ising}(m\ell; n) \underset{m\ell \ll 1}{\approx} -z_k(n)\log(m\ell) + {\rm const}, \quad (225)$$
where the coefficient $z_k$ is the same as in Eq. (145). Comparing Eq. (225) with the analogous formula in Eq. (144), we can immediately see that the UV limit of the $k$-particle massive Ising cumulants is twice the $k$-right-mover cumulants of our massless flow. Notice that the factor 2 comes from the expansion of the Bessel function, and ultimately its origin is the difference in the energy of the two models. Before concluding this appendix, let us make a remark on the computation of the coefficients $z_k(n)$. The expression in Eq. (145) contains $k - 1$ integrals and, therefore, it is not practical for numerical calculations. In Ref. [18], the analytic continuation of Eq. (145) was carried out for $n \geq 1$ replicas, writing $z_k(n)$ as a single integral for any $k$ (see also [17,62]), Eq. (226), with different expressions for $k = 2p$ depending on the parity of $p$, and where $w(\theta; n)$ is given in Eq. (135). Eq. (226) is efficient for numerical calculations; we employed it to compute the first 30 non-interacting cumulants in the truncated expansion of the entropies of the massless flow plotted in Fig. 3.
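The small-argument expansion of $K_0$ invoked here, $K_0(x) \approx -\log(x/2) - \gamma$, which is the source both of the UV logarithm and of the relative factor 2 with respect to the massless right-mover computation, is easy to check numerically:

```python
import numpy as np
from scipy.special import k0

gamma = 0.5772156649015329
for x in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"x = {x:.0e}:  K0(x) = {k0(x):.8f},  -log(x/2) - gamma = {-np.log(x / 2) - gamma:.8f}")
```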
"Physics"
] |
Robust Maximum Lifetime Routing and Energy Allocation in Wireless Sensor Networks
We consider the maximum lifetime routing problem in wireless sensor networks in two settings: (a) when nodes' initial energy is given, and (b) when it is subject to optimization. The optimal solution and objective value provide the optimal flows and the corresponding predicted lifetime, respectively. We stipulate that there is uncertainty in various network parameters (available energy and energy depletion rates). In setting (a) we show that, for specific yet typical network topologies, the actual network lifetime will reach the predicted value with a probability that converges to zero as the number of nodes grows large. In setting (b) the same result holds for all topologies. We develop a series of robust problem formulations, ranging from pessimistic to optimistic. A set of parameters enables tuning the conservatism of the formulation to obtain network flows with a desirably high probability that the corresponding lifetime prediction is achieved. We establish a number of properties of the robust network flows and energy allocations and provide numerical results to highlight the tradeoff between the predicted lifetime and the probability of achieving it. Further, we analyze an interesting limiting regime of massively deployed sensor networks and essentially solve a continuous version of the problem.
Introduction
Wireless sensor networks (WSNETs) have emerged as an exciting new paradigm of inexpensive, easily deployable, completely untethered device networks that enable the automated and intelligent monitoring and control of physical systems. WSNET nodes can be equipped with a variety of sensors, have a built-in radio to communicate with each other, are powered by batteries, and have limited information storage and processing capabilities. WSNETs can be useful in a plethora of applications including industrial and building automation, health monitoring, wildlife monitoring, and asset and personnel tracking [1]. Battery technology, however, remains a critical bottleneck. In many applications one would like to use the WSNET for long periods, often years, without changing batteries. As a result, energy conservation is a primary concern and aggressive optimization becomes indispensable.
In this paper, we focus on the problem of selecting an optimal strategy for routing packets from data-collecting sensor nodes to a set of gateways (or sinks) in order to minimize the rate at which energy is consumed or, equivalently, to maximize the lifetime of the network. We consider two situations: (i) when the initial energy of every node is given and (ii) when it is also subject to optimization given an overall energy budget. Routing, of course, has received quite a bit of attention in WSNETs. Various aspects of the problem have been considered in [2][3][4][5][6][7][8][9][10][11], which mostly focus on finding a single path from origin to destination. A more static view is adopted in [12], followed by [13], and [14], which provide a linear programming formulation for optimizing average flows between nodes.
Our starting point is the flow optimization formulation of [12,14]. A different but equivalent formulation using optimal control ideas appears in [15]. Key data for solving this problem include the total available energy at the nodes and the energy consumption rates. These quantities are hardly known with any degree of certainty or accuracy, yet they affect both the optimal flows and the corresponding optimal objective value, that is, the predicted network lifetime. The latter value will in fact be equal to the actual network lifetime if all problem data are known with certainty. We note that both these quantities are quite important for the network designer: the predicted network lifetime is useful for planning purposes, and the optimal flows indicate how routing should be done to achieve such a lifetime.
Uncertainty, though, renders the predicted lifetime overly optimistic. For the case without energy allocation, we show that for specific, yet typical, topologies including linear and two-dimensional grid-like networks, the actual lifetime will reach the predicted value with a probability that converges to zero as the number of nodes grows large. This suggests that the predicted network lifetime is not a particularly useful estimate under uncertainty.
For the energy allocation case, we show the same result without any topological assumptions. We also find that uncertainty impacts the optimal policy as well, and one needs to use a different set of "robust" flows to protect against uncertainty. To that end, we develop a series of alternative robust problem formulations, ranging from pessimistic to optimistic. A set of parameters enables tuning the conservatism of the formulation so that the corresponding lifetime prediction is achieved with a desirably high probability, which we call the lifetime guarantee probability. Our robust formulations are based on recent work in robust linear programming in [16,17]. However, the problem we consider has special structure which we exploit to establish a number of interesting properties. Robust optimization has in general received a lot of attention lately and has found applications in many areas. It started with [18], with more recent contributions in [16,19].
To gain more insight, we consider maximum lifetime routing with energy allocation in a continuous setting of massively dense WSNETs. Related limiting regimes have previously been considered in [8,20,21]. For a single point source and a single point sink, we show that the optimal route is a straight line from the source to the sink. For multiple sources and sinks, we show that sources send their flows to the closest sink, again over a straight line.
The rest of the paper is organized as follows. In Section 2, we tackle the maximum lifetime routing problem without energy allocation, introducing robust formulations and characterizing their solutions. Section 3 incorporates the energy allocation into the problem. In Section 4, we develop the continuous version of the problem with energy allocation. Numerical examples are in Section 5. Conclusions are in Section 6.
Maximum Lifetime Routing without Node Energy Allocation
We represent a WSNET as a directed graph $G(\mathcal{N}, \mathcal{A})$, where $\mathcal{N}$ is the node set and $\mathcal{A}$ is the set of directed links $(i, j)$; $S_i$ denotes the set of nodes that can be reached by $i$. Each node $i$ has an initial battery energy of $E_i$ and consumes $e^t_{ij}$ per data unit to transmit to $j$, while $j$ consumes $e^r_{ij}$ per data unit to receive from $i$. We assume that the nodes are able to relay packets and to adjust the transmit power level to the minimum required to reach the intended receiver. Origin nodes (or sources) $O$ include all $i \in \mathcal{N}$ with a positive (constant) information generation rate $Q_i$. $D$ is the set of sink nodes (or sinks) responsible for collecting all data. We assume $O \cap D = \emptyset$ and refer to nodes in $\mathcal{N} \setminus D$ simply as sensor nodes.
Every source node seeks to send its data to one of the sinks, not necessarily the same one for each data unit generated. To that end, node $i$ may use multiple other nodes as relays. Let $q_{ij}$ be the information transmission rate from $i$ to $j$, and write $\mathbf{q}$ for the vector of all $q_{ij}$'s. (We use bold letters to denote vectors, and all vectors are assumed to be column vectors unless explicitly stated otherwise.) Note that routing and power control are intrinsically coupled, since the power level is adjusted depending on the choice of the next hop.
In the sequel, we only consider the energy spent on communications, since this is the dominant energy consumption term in WSNETs (see [22]). Additional energy consumption terms could be incorporated into $e^t_{ij}$, $e^r_{ij}$; for example, a sensing/processing energy cost per data unit at transmission or reception can be absorbed into $e^t_{ij}$ and $e^r_{ij}$. We also assume that $e^t_{ij}$ is monotonically increasing in the distance between $i$ and $j$. Finally, sink nodes are assumed to be powered by line power.
The lifetime of a sensor node $i$ under a given set of flows $\mathbf{q}$ is given by
$$T_i(\mathbf{q}) = \frac{E_i}{\sum_{j \in S_i} e^t_{ij}\, q_{ij} + \sum_{j \,|\, i \in S_j} e^r_{ji}\, q_{ji}}. \qquad (1)$$
Define the network lifetime under flow $\mathbf{q}$ as the minimum lifetime over all nodes, that is,
$$T_{net}(\mathbf{q}) = \min_{i \in \mathcal{N} \setminus D} T_i(\mathbf{q}). \qquad (2)$$
The network lifetime is thus the earliest time at which a sensor node runs out of energy.
Problem Formulations.
The maximum lifetime routing problem without node energy allocation is the problem of selecting flows $\mathbf{q}$ to maximize $T_{net}(\mathbf{q})$. Letting $\bar q_{ij} = q_{ij} T$ denote the amount of information transmitted from $i$ to $j$ over the lifetime $T$, [12] formulated the problem as the linear program
$$\max \; T \qquad (3)$$
subject to
$$\sum_{j \in S_i} \bar q_{ij} - \sum_{j \,|\, i \in S_j} \bar q_{ji} = Q_i T, \quad \forall i \in \mathcal{N} \setminus D, \qquad (4)$$
$$\sum_{j \in S_i} e^t_{ij}\, \bar q_{ij} + \sum_{j \,|\, i \in S_j} e^r_{ji}\, \bar q_{ji} \le E_i, \quad \forall i \in \mathcal{N} \setminus D, \qquad (5)$$
$$\bar q_{ij} \ge 0, \quad \forall (i, j) \in \mathcal{A}, \qquad (6)$$
where the decision variables are $T$ and the $\bar q_{ij}$'s. As a notational remark, we will use $\bar{\mathbf{q}}$ to denote flow over the lifetime $T$ and $\mathbf{q}$ to denote flow per unit of time. Thus, when we refer to an optimal solution $\mathbf{q}^*$ (resp. $\bar{\mathbf{q}}^*$) of (3), we mean the optimal flow per unit of time (resp. over the lifetime). The first set of constraints corresponds to flow conservation, and the second set follows from the definition of lifetime. We note that this formulation can also account for the energy consumed while the node's radio is listening: specifically, we can add $e^{ON}_i \lambda_i T$ to the left-hand side of (5), where $e^{ON}_i$ is the energy consumption rate of the radio while listening and $\lambda_i$ is the fraction of time node $i$ is "awake" and listening. We refer to (3) as the nominal problem. Note that it is always feasible if for every sensor node there exists a path to a sink node; we assume that this is always the case. We also note that problem (3) can be solved in a distributed manner using subgradient optimization techniques for the dual [23], which is appealing for WSNET applications. Here, however, we concentrate on the impact of uncertainty and do not focus on distributed solution approaches. It can also be argued that in several application contexts a distributed approach is not critical, since (3) is solved during the planning/deployment stage of the WSNET.
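As a concrete illustration, the following Python sketch assembles and solves a toy instance of (3)-(6) with SciPy's LP solver; the four-node topology, rates, energies, and per-bit costs are illustrative placeholders rather than data from the paper.

```python
# A small, self-contained instance of the nominal LP (3)-(6), solved with
# SciPy. All numbers below are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

arcs = [(0, 1), (0, 2), (1, 2), (2, 1), (1, 3), (2, 3)]  # node 3 is the sink
Q = {0: 500.0, 1: 0.0, 2: 0.0}                # bits/sec per sensor node
E = {0: 10.0, 1: 10.0, 2: 10.0}               # initial energy, J
e_t = {a: 1.0e-7 for a in arcs}               # transmit cost, J/bit
e_r = {a: 1.5e-7 for a in arcs}               # receive cost, J/bit

sensors = sorted(Q)                           # non-sink nodes
nA = len(arcs)
c = np.zeros(nA + 1)
c[-1] = -1.0                                  # linprog minimises, so min -T

# Flow conservation (4): out(i) - in(i) - Q_i * T = 0 for each sensor i.
A_eq = np.zeros((len(sensors), nA + 1))
for r, i in enumerate(sensors):
    for k, (u, v) in enumerate(arcs):
        A_eq[r, k] += (u == i) - (v == i)
    A_eq[r, -1] = -Q[i]

# Energy (5): transmit + receive drain over the lifetime <= E_i.
A_ub = np.zeros((len(sensors), nA + 1))
for r, i in enumerate(sensors):
    for k, a in enumerate(arcs):
        if a[0] == i:
            A_ub[r, k] += e_t[a]
        if a[1] == i:
            A_ub[r, k] += e_r[a]

res = linprog(c, A_ub=A_ub, b_ub=[E[i] for i in sensors],
              A_eq=A_eq, b_eq=np.zeros(len(sensors)),
              bounds=[(0, None)] * (nA + 1))   # qbar >= 0 (6), T >= 0
print(f"predicted lifetime T* = {res.x[-1]:.1f} s")
```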
The data for the nominal problem are $e^t_{ij}$, $e^r_{ij}$, and $E_i$, and these affect both the optimal solution and the optimal value. As these may be uncertain, we model them as symmetrically bounded nonnegative random variables (r.v.'s) with ranges
$$e^t_{ij} \in [\bar e^t_{ij} - \Delta e^t_{ij},\, \bar e^t_{ij} + \Delta e^t_{ij}], \quad e^r_{ij} \in [\bar e^r_{ij} - \Delta e^r_{ij},\, \bar e^r_{ij} + \Delta e^r_{ij}], \quad E_i \in [\bar E_i - \Delta E_i,\, \bar E_i + \Delta E_i].$$
We will call $\bar e^t_{ij}$, $\bar e^r_{ij}$, and $\bar E_i$ the nominal values and assume that they are the means of the corresponding r.v.'s. The values $\Delta e^t_{ij}$, $\Delta e^r_{ij}$, and $\Delta E_i$ represent the maximum deviations from the mean, assumed identical to the left and right of the mean (hence the term symmetrically bounded r.v.'s). These deviations are defined so that all r.v.'s have positive support. We also define the uncertainty index sets $J^t_i$ and $J^r_i$, collecting the uncertain transmit and receive coefficients associated with node $i$. Due to data uncertainty, the optimal solution of (3) may not be feasible. It can be easily seen that the following worst-case formulation guarantees feasibility for any realization of the data:
$$\max \; T \quad \text{s.t. (4), (6), and} \qquad (7)$$
$$\sum_{j \in S_i} (\bar e^t_{ij} + \Delta e^t_{ij})\, \bar q_{ij} + \sum_{j \,|\, i \in S_j} (\bar e^r_{ji} + \Delta e^r_{ji})\, \bar q_{ji} \le \bar E_i - \Delta E_i, \quad \forall i \in \mathcal{N} \setminus D. \qquad (8)$$
We refer to the above as the fat problem. By construction, its optimal solution is feasible for any data realization, but it may be overly conservative. Intuitively, the probability that all parameters take their "extreme" values should be small, motivating a less conservative formulation.
We view the uncertainty budget as an $\ell_1$-norm constraint on the vector of scaled parameter deviations. The robust maximum lifetime routing problem (11) is formulated so that we can guarantee feasibility for all data realizations in restricted uncertainty sets in which, at each node $i$, the cumulative scaled deviations of the energy coefficients and of the initial energy are bounded by budgets $\Gamma^e_i$ and $\Gamma^E_i$, respectively. In the Appendix, we show that this problem is equivalent to a linear programming problem.
The resulting LP (13) retains constraints (4) and (6), adds nonnegativity requirements $p_i \ge 0$ for all $i \in \mathcal{N} \setminus D$ on the auxiliary variables, and replaces each energy constraint by its robust counterpart. Furthermore, solving (13) one obtains an optimal solution $(\bar{\mathbf{q}}_R, T_R, \mathbf{p}_R, \boldsymbol{\omega}_R, \boldsymbol{\nu}_R)$ such that $(\bar{\mathbf{q}}_R, T_R)$ is feasible for (11) and $T_R$ is equal to the optimal value of (11).
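The explicit form of (13) is not reproduced in this extract. A plausible reading of the auxiliary variables, consistent with the budgeted-uncertainty model of [16,17], is the standard dualisation of the protection term: a constraint $\sum_j a_j x_j \le b$ with $a_j \in [\bar a_j - \Delta a_j, \bar a_j + \Delta a_j]$, $x_j \ge 0$, and budget $\Gamma$ is replaced by
$$\sum_j \bar a_j x_j + \Gamma p + \sum_j \nu_j \le b, \qquad p + \nu_j \ge \Delta a_j x_j \ \ \forall j, \qquad p \ge 0, \ \nu_j \ge 0,$$
which is linear in $(x, p, \nu)$ and enforces feasibility whenever at most $\Gamma$ of the coefficients deviate maximally.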
Properties of Optimal Solutions.
Next, we study the relationships between the three formulations and establish properties of the optimal solutions. We also introduce a metric, the lifetime guarantee probability, to quantify how likely it is for the predicted lifetime to be achieved.
Optimal Lifetime.
Let $T^*_N$, $T^*_F$, $T^*_R$ denote the optimal values of the nominal, fat, and robust problems, respectively. Let $\Gamma^e = (\Gamma^e_1, \dots, \Gamma^e_{|\mathcal{N} \setminus D|})$ and $\Gamma^E = (\Gamma^E_1, \dots, \Gamma^E_{|\mathcal{N} \setminus D|})$. Note that $T^*_R$ depends on $\Gamma^e$ and $\Gamma^E$; to express this dependence, we write $T^*_R(\Gamma^e, \Gamma^E)$. The following proposition is almost immediate. It simply states that by adjusting the uncertainty budgets one can generate a continuum of formulations whose predicted lifetimes range from the fat to the nominal.
Proof. Let $\mathbf{q}_2$ be an optimal flow for the robust routing problem under budgets $(\Gamma^{e2}, \Gamma^{E2}) \ge (\Gamma^{e1}, \Gamma^{E1})$; since larger budgets correspond to larger uncertainty sets, $\mathbf{q}_2$ is a feasible flow vector for the robust routing problem under $(\Gamma^{e1}, \Gamma^{E1})$. It follows that $T^*_R(\Gamma^{e2}, \Gamma^{E2}) \le T^*_R(\Gamma^{e1}, \Gamma^{E1})$. Next, notice that when $\Gamma^e = 0$ and $\Gamma^E = 0$, the uncertainty set becomes $R_i(\Gamma^e_i) = \{e^t_{ij}, e^r_{ji} \mid e^t_{ij} = \bar e^t_{ij},\, e^r_{ji} = \bar e^r_{ji}\}$ and the robust routing problem (11) reduces to the nominal routing problem (3). Conversely, when the budgets take their maximum values for all $i$, the robust routing problem (11) reduces to the fat one (7).
Standard sensitivity analysis results from linear programming yield the following corollary.
Observe now that at optimality at least one of the energy constraints (5), (8), and (12) will be active. This is stated in the following proposition. We will call dead the nodes that correspond to active constraints at optimality; the lifetime of a dead node equals the lifetime of the network.

Proposition. At optimality, at least one energy constraint in each of the nominal (3), fat (7), and robust (11) formulations will be active.
Optimal Flows.
Consider an optimal flow vector $\bar{\mathbf{q}}$ obtained by solving one of the three formulations. Recall that $\bar{\mathbf{q}}$ denotes total flow over the lifetime and $\mathbf{q}$ flow per unit of time. We associate a directed graph (a subgraph of $G$) $G_{\bar{\mathbf{q}}} = (\mathcal{N}, \mathcal{A}_{\bar{\mathbf{q}}})$ with $\bar{\mathbf{q}}$, where $\mathcal{A}_{\bar{\mathbf{q}}}$ contains all $(i, j)$ with $\bar q_{ij} > 0$. We say that a flow $\bar{\mathbf{q}}$ is acyclic (resp., cyclic) if $G_{\bar{\mathbf{q}}}$ contains no cycles (resp., otherwise).
Theorem 5. For all three routing formulations (3), (7), and (11), there exists an acyclic optimal flow.

Proof. Given a cyclic optimal flow, reduce all flows along a cycle $i_1, \dots, i_k, i_1$ by the minimum flow on the cycle, so that at least one flow becomes zero and all other flows remain nonnegative. Because both the inflow and outflow at each node are reduced by the same amount, the flow conservation condition for all the nodes $i_1, \dots, i_k$ still holds. Since the above operation only reduces flows, all the energy constraints remain satisfied. Hence, the reduced flow remains optimal. We can repeat the same process to eliminate any other cycle.
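The cycle-elimination argument is constructive; a minimal sketch (a hypothetical helper, using networkx for cycle detection) reads:

```python
# Cycle cancellation as in the proof: subtract the minimum flow on a
# directed cycle from every arc of the cycle until no cycle remains.
# Node balances are unchanged and no energy term increases.
import networkx as nx

def cancel_cycles(qbar, tol=1e-12):
    """qbar: dict mapping arcs (i, j) to nonnegative flows."""
    while True:
        G = nx.DiGraph((i, j) for (i, j), f in qbar.items() if f > tol)
        try:
            cycle = nx.find_cycle(G)        # list of arcs on a cycle
        except nx.NetworkXNoCycle:
            return qbar
        delta = min(qbar[a] for a in cycle)
        for a in cycle:
            qbar[a] -= delta                # at least one arc drops to 0
```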
Since $(i, j, i)$ is a trivial cycle, we obtain the following corollary.

Corollary 6. For all three routing formulations (3), (7), and (11), there exists an optimal flow $\bar{\mathbf{q}}$ which satisfies $\bar q_{ij}\, \bar q_{ji} = 0$ for all possible links $(i, j)$ and $(j, i)$.

Corollary 7. For all three routing formulations (3), (7), and (11), there exists an optimal flow $\bar{\mathbf{q}}$ satisfying $\bar q_{ij} = 0$ for all $i \in D$, which means no flow out of sinks.
Proof. Let $\bar{\mathbf{q}}^*$ be an acyclic optimal flow (cf. Theorem 5). Suppose there are sinks with positive flows emanating from them. Pick a sink $i \in D$ and a link $(i, j)$ with $\bar q^*_{ij} > 0$; we can set $\bar q^*_{ij}$ to zero by proportionally allocating this flow reduction to all outflows from node $j$. To be specific, for all $k_0 \in S_j$ we set the new reduced flow as $\bar q^*_{jk_0} := \bar q^*_{jk_0} - \bar q^*_{ij} \big( \bar q^*_{jk_0} / \sum_{k \in S_j} \bar q^*_{jk} \big)$, which maintains the nonnegativity of the resulting flow. The flow reduction $\bar q^*_{ij} \big( \bar q^*_{jk_0} / \sum_{k \in S_j} \bar q^*_{jk} \big)$ can be propagated to the nodes downstream from $j$ in a similar way. Since $\bar{\mathbf{q}}^*$ is acyclic and the network is finite, propagating the flow reduction as described above terminates at some other sink nodes. During this process, flow conservation and the energy constraints are maintained. This yields a new optimal flow vector with no flows out of sinks.
Lifetime Guarantee Probability.
Consider one of the three formulations (3), (7), and (11) and let $(\bar{\mathbf{q}}^*, T^*)$ be an optimal solution. We will refer to the probability $P[T \ge T^*]$, evaluated under the distributions of the r.v.'s $E_i$, $e^t_{ij}$, $e^r_{ji}$, as the lifetime guarantee probability. This is the probability that the actual lifetime obtained by applying the optimal flow $\mathbf{q}^*$ achieves the predicted optimal lifetime. We denote by $P_N$, $P_F$, $P_R$ the lifetime guarantee probabilities for the nominal (3), fat (7), and robust (11) formulations, respectively. By design, the fat formulation provides an "absolute" guarantee, $P_F = 1$; we omit the proof.
Moreover, as the budgets tend to zero, $P_R \to P_N$. Now let $A_N$ be the set of nodes having active energy constraints at optimality in the nominal formulation (3), and let $(\bar{\mathbf{q}}^*_N, T^*_N)$ be an optimal solution to the nominal problem (3). For $i \in A_N$, because $\bar{\mathbf{q}}^*_N$ is feasible for the nominal problem, it holds that $\bar E_i = \sum_{j \in S_i} \bar e^t_{ij}\, \bar q^*_{N,ij} + \sum_{j \,|\, i \in S_j} \bar e^r_{ji}\, \bar q^*_{N,ji}$. Since $E_i$, $e^t_{ij}\, \bar q^*_{N,ij}$, $e^r_{ji}\, \bar q^*_{N,ji}$ are independent symmetrically distributed r.v.'s with means $\bar E_i$, $\bar e^t_{ij}\, \bar q^*_{N,ij}$, $\bar e^r_{ji}\, \bar q^*_{N,ji}$, respectively, the energy constraint at node $i$ holds under the random data with probability at most $1/2$. By independence across the nodes in $A_N$, we have $P_N \le (1/2)^{|A_N|}$ (e.g., with $|A_N| = 10$ active nodes, $P_N \le 2^{-10} \approx 10^{-3}$).
Linear and Square Arrays.
In this section, we study two regular network topologies: linear and square arrays. Linear arrays appear, for instance, in pipeline monitoring applications and square arrays are applicable in environmental monitoring applications.
Linear Arrays.
We consider a linear array segment where one sink node is at the center and an equal number k of sensor nodes are aligned one by one on both sides of the sink. The distance between neighboring nodes is d.
The radio range is in $[2d, 3d)$, that is, every node can communicate only with its nearest 4 neighbors. Lining up multiple such segments, we can build a linear array network. We grow the network in this manner since one would need a sink per given number of sensor nodes. We assume that all sensor nodes have identical characteristics, that is, $E_i$ has the same distribution for all $i$, $e^t_{ij}$ and $e^r_{ij}$ have the same distribution among equidistant nodes, and the information generation rate $Q_i$ is identical for all $i$. The network we described is motivated by oil or gas pipeline monitoring applications. The following theorem establishes a decomposition property.

Theorem 10. The maximum lifetime routing problem under either the nominal (3), fat (7), or robust (11) formulation for a linear array network as described above can be decomposed into the corresponding subproblems for each one of its segments.
Proof. Without loss of generality, consider a linear array network $L$ consisting of two segments $L_1$ and $L_2$. Consider any of the three routing formulations and let $T^*_{L_1}$, $T^*_{L_2}$, $T^*_L$ be the optimal values for networks $L_1$, $L_2$, and $L$, respectively. Clearly, $T^*_{L_1} = T^*_{L_2} \le T^*_L$, since by combining the optimal flow vectors for $L_1$ and $L_2$ we obtain a feasible flow vector for $L$.
Due to homogeneity and symmetry in $L$, there exists an optimal flow vector which is symmetric about the center of $L$. Flows at the interface between the two segments $L_1$ and $L_2$ can fall into one of the two possible cases shown in Figure 1 (top), labeled Case I and Case II. In each case, we can reconstruct the optimal flows between nodes $k$ and $k - 1$ of $L_1$ and nodes $-k$ and $-k + 1$ of $L_2$ as shown in Figure 1 (bottom). This flow reconstruction process maintains feasibility and eliminates any communication between segments $L_1$ and $L_2$. Then $T^*_L \le T^*_{L_1} = T^*_{L_2}$, and together with our earlier observation it follows that $T^*_L = T^*_{L_1} = T^*_{L_2}$, which establishes the result.
The following theorem establishes that the nominal formulation (3) is not particularly useful, since its predicted lifetime will be achieved with a diminishing probability as the size of the network increases.

Theorem 11. Consider a linear array network $L$ consisting of $2^n$ identical segments, and assume that the r.v.'s are nondegenerate (i.e., not equal to a constant). Then, as $n \to \infty$, $P_N \to 0$.
Proof. By applying Theorem 10 $n$ times, we decompose the network $L$ into $2^n$ identical segments. With this decomposition, we have identical optimal flows in all $2^n$ linear segments. As we have seen before, each segment has at least one node with a binding energy constraint. Let $K$ denote a set which contains one such node from each segment. It follows that $P_N \le \prod_{k \in K} P[E_k \ge \bar E_k]$, where the product form follows from independence and the fact that every $k \in K$ corresponds to a binding energy constraint. Notice that $P[E_k \ge \bar E_k] < 1$ for nondegenerate r.v.'s and that $|K| = 2^n$. Hence, as $n \to \infty$, $P_N \to 0$.
Square Arrays.
A square array network consists of square array segments. Each segment is a two-dimensional (square) grid of a given dimension, with a node at each point of the grid and a sink node located at the center point. The vertical and horizontal distance between neighboring nodes is $d$, and we assume that the radio range is slightly less than $\sqrt{5}\, d$. As with linear arrays, we assume that all sensor nodes have identical characteristics, that is, $E_i$ has the same distribution for all $i$, $e^t_{ij}$ and $e^r_{ij}$ have the same distribution among equidistant nodes, and the information generation rate $Q_i$ is identical for all $i$. We grow a square network in both dimensions by stitching together segments. As an example, a network $S$ with four segments $S_1, \dots, S_4$ can be formed by placing segment $S_1$ in the northeast quadrant, $S_2$ in the southeast quadrant, $S_3$ in the southwest quadrant, and $S_4$ in the northwest quadrant. The following result is analogous to Theorem 10. Analogous to the linear array case, we can then show that the nominal formulation does not provide a useful lifetime prediction; we omit the proof as it is similar to the proof of Theorem 11.
Uncertainty Only in $E_i$.
Here we focus on the case where uncertainty appears only in the initial available energy $E_i$. Namely, for all results in this subsection we assume that the $e^t_{ij}$'s and $e^r_{ji}$'s are known with certainty. We define a global robustness budget $\Gamma = \sum_{i \in \mathcal{N} \setminus D} \Gamma_i$ and incorporate the allocation of $\Gamma$ to the individual $\Gamma_i$'s into the robust formulation (19), where the decision variables are $T$, the $\bar q_{ij}$'s, and the $\Gamma_i$'s. The following monotonicity property is immediate; concavity follows from the fact that (19) maximizes a concave (linear) objective over linear constraints and $\Gamma$ appears on the right-hand side of these constraints.
Proposition 14.
The optimal value $T^*_R$ of (19) is monotonically nonincreasing and concave as a function of the global robustness budget $\Gamma$.
Optimizing $P[T \ge T^*]$ over the Optimal Flows $\bar{\mathbf{q}}^*$.
When the uncertainty is only in the $E_i$'s, we can maximize the lifetime guarantee probability $P[T \ge T^*]$ over the set of optimal flows $\bar{\mathbf{q}}^*$ while guaranteeing that we achieve the corresponding predicted lifetime. One can think of this optimization as maximizing "robustness" while guaranteeing the same objective (predicted lifetime). We next show that this problem is a well-structured concave optimization problem. We only treat the robust case; for the fat case we have already shown that $P_F = 1$, and the nominal case is similar to the robust one.
Assume that only the $E_i$'s are uncertain, and let $(T^*_R, \mathbf{s}^*, \bar{\mathbf{q}}^*, \Gamma^*)$ form an optimal solution of the robust formulation (19), where $\mathbf{s}^*$ denotes the vector of slack variables corresponding to the energy constraints. Suppose all $E_i$'s are independent; then $P_R$ factors into a product of per-node probabilities. Taking the $E_i$'s to be uniformly distributed in $[\bar E_i - \Delta E_i, \bar E_i + \Delta E_i]$, each factor becomes a linear function of the slack and budget variables. To maximize $P_R$ while achieving the optimal lifetime $T^*_R$, we can equivalently maximize $\ln(P_R)$, which yields a concave optimization problem.
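The problem itself is not reproduced in this extract; under the stated assumptions, a plausible reconstruction of the per-node factors (treating the robust energy constraint at node $i$ as $\mathrm{drain}_i \cdot T^*_R = \bar E_i - \Gamma_i \Delta E_i - s_i$) is
$$P_R = \prod_i P\big[E_i \ge \bar E_i - \Gamma_i \Delta E_i - s_i\big] = \prod_i \min\left\{1,\ \frac{\Delta E_i + \Gamma_i \Delta E_i + s_i}{2 \Delta E_i}\right\},$$
so that $\ln P_R$ is a sum of concave (logarithmic) terms in $(\mathbf{s}, \Gamma)$, to be maximized subject to the optimality conditions of (19).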
Maximum Lifetime Routing with Energy Allocation
In this section, we consider the problem of maximizing the WSNET lifetime by jointly optimizing the routing decisions and the initial energy allocated to the nodes. Suppose $E$ is the total available energy for the WSNET. Similar to formulation (3), we have the nominal problem
$$\max \; T \quad \text{s.t. (4), (5), (6)}, \quad \sum_{i \in \mathcal{N} \setminus D} E_i \le E,$$
where the $E_i$'s (appearing in (5)) are now decision variables, together with $T$ and the $\bar q_{ij}$'s. As before, the robust problem (26) can be shown to be equivalent to a linear programming problem; we omit the details for brevity. From the structure of the formulation with energy allocation, we have the following result.
Proposition 15. At optimality, all the energy constraints for nonsink nodes are active and the total energy constraint is also binding. This holds for all three formulations.
Proof. Consider first the robust problem (26); we argue by contradiction. Assume that at optimality the energy constraint (27) for some nonsink node $k$ is not active. Notice that we can then decrease $E_k$ and increase all the other $E_i$'s while maintaining their sum; this improves the lifetime, which contradicts optimality. Similarly, the total energy constraint is binding at optimality: if not, we could increase all the $E_i$'s to achieve a better lifetime, which again contradicts optimality. The nominal and fat cases are almost identical.
Proposition 16. $T^*_R(\Gamma^e)$ is a nonincreasing function of $\Gamma^e$.
As in Section 2.2, one associates a directed graph $G_{\mathbf{q}} = (\mathcal{N}, \mathcal{A}_{\mathbf{q}})$ with a feasible flow vector $\mathbf{q}$, where $\mathcal{A}_{\mathbf{q}}$ contains all $(i, j)$ with $q_{ij} > 0$. Recall that we call $\mathbf{q}$ acyclic when $G_{\mathbf{q}}$ contains no cycles. The following results are similar to Theorem 5 and Corollary 7; we omit the proofs.
Lifetime Guarantee Probability.
The development in this section is similar to that of Section 2. We have the following results; we omit the details in the interest of brevity. It follows that as $|\mathcal{N} \setminus D| \to \infty$ we have $P_N \to 0$, and this now holds for all topologies.
Routing and Energy Allocation in Massively Dense WSNETs
It is straightforward that the joint problem of routing and energy allocation (24) is equivalent to finding paths from sources to sinks with lowest energy consumption rate. If we consider the energy consumed by both the sender and the receiver over a link as the cost (or length) of the link, the problem is reduced to finding shortest paths between sources and sinks. Imagine now that the WSNET is scaled by uniformly deploying an increasing number of nodes while decreasing their radio range in order to maintain a fixed density of one-hop-reachable neighbors. Although the approach we developed so far scales well since we are dealing with linear programming problems, it is of interest to consider whether the scaled problem exhibits, in the limit, a structure that simplifies its solution and deepens our understanding. In particular, we will consider a limiting regime of massively dense WSNETs and study maximum lifetime routing formulations with energy allocation. Such WSNETs can only be described by macroscopic parameters, such as the information generation and energy distribution densities.
Problem Formulation.
Let $M$ be the planar area where a massively dense WSNET is deployed. Mathematically, $M$ is a convex set in $\mathbb{R}^2$. We assume that the WSNET is uniformly deployed over $M$.
Let $Q(x, y)$ represent the information generation density function defined on $M$, whose units are bits/(sec·m²). We assume $Q(x, y)$ is known. Denote by $S(x, y)$ the information consumption density function defined on $M$, whose units are bits/(sec·m²). In the next subsection we will consider the special cases of "point" sources and sinks, where $Q(x, y)$ and $S(x, y)$ become Dirac functions on the plane. Let $e(x, y)$ be the energy density function defined on $M$, whose units are J/m²; it characterizes the distribution of the globally available energy $E$ over $M$. Define the information traffic flow function as $\mathbf{q}(x, y) = (q_x(x, y), q_y(x, y))$. The interpretation of $\mathbf{q}(x, y)$ is as follows: $|\mathbf{q}(x, y)|$ is the rate at which information crosses a linear segment of infinitesimal length centered at $(x, y)$ and perpendicular to $\mathbf{q}(x, y)$ (see Figure 3). The units of $\mathbf{q}$ are bits/(sec·m).
The continuous maximum lifetime routing problem with energy allocation can be formulated as (28)-(32), with the global flow conservation constraint reading $\int_M \big( Q(x, y) - S(x, y) \big)\, d\sigma = 0$; here $S(x, y)$, $e(x, y)$, $\mathbf{q}(x, y)$, and $T$ are the decision functions and variables. Using an argument in [21], (29) states that the divergence of the traffic flow function measures the degree to which the traffic increases or decreases; we can think of this as a detailed flow conservation equation. (31) is a global energy constraint, while (32) can be seen as a global flow conservation constraint. As for (30), consider a point $(x, y) \in M$ and let $\Omega(\epsilon)$ denote an infinitesimal square centered at $(x, y)$ with side length $\epsilon$ and one of its sides parallel to $\mathbf{q}(x, y)$. Let $\alpha$ (in J/(bit·sec)) be a constant indicating how much energy is consumed per unit of transmitted information per second. Then, (30) expresses the fact that the total energy consumed when the traffic flow $\mathbf{q}(x, y)$ passes through $\Omega(\epsilon)$ during a period of time $T$ should be no more than the total energy available in this area.
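The display equations (28)-(32) are not reproduced in this extract; based on the descriptions above, a plausible reconstruction is
$$
\begin{aligned}
\max_{\,\mathbf{q},\, e,\, S,\, T}\;\; & T && (28)\\
\text{s.t.}\;\; & \nabla \cdot \mathbf{q}(x, y) = Q(x, y) - S(x, y), \quad (x, y) \in M, && (29)\\
& \alpha\, |\mathbf{q}(x, y)|\, T \le e(x, y), \quad (x, y) \in M, && (30)\\
& \int_M e(x, y)\, d\sigma \le E, && (31)\\
& \int_M \big( Q(x, y) - S(x, y) \big)\, d\sigma = 0. && (32)
\end{aligned}
$$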
In this section, we are only interested in the structure of the optimal solutions to (28), hence we only consider the nominal version of the problem. Uncertainty in E can be easily incorporated as we have done with the discrete instances. This will only change the right hand side of the total energy constraint and would not affect the optimal solution structure. Uncertainty in e(x, y) can also be incorporated but that is beyond the main focus of this section.
From the structure of (28), we have the following results. The proof is immediate: whenever $\mathbf{q}(x, y) = 0$ and $e(x, y) > 0$, we can reduce $e(x, y)$ to zero while maintaining feasibility, and the energy savings can be reallocated to other points, resulting in a potential increase of the lifetime.
Similarly, we define the information consumption density function $S_s(x, y)$ for a point sink at $(x_s, y_s)$ with a sink rate equal to $S$. These are Dirac impulse functions on $\mathbb{R}^2$.
In the single point source and single point sink case, let $o = (x_o, y_o)$ and $s = (x_s, y_s)$ be the source and sink locations, respectively, and denote by $Q_o(x, y)$ and $S_s(x, y)$ the corresponding information generation/consumption density functions. (Figure 4 depicts a curve $C$ and its $\epsilon$-tube.) We note that the argument above can be extended to handle an infinite number of (forked and merged) paths. The key idea is the same: one can show that any solution using an infinite number of paths is no better than the straight line connecting $o$ with $s$; we omit the details to avoid obfuscating the discussion. The result implies that the sinks generate a Voronoi tessellation of the deployment area, and the sources send their flows over straight lines to the sink of the cell they reside in, resulting in a star-like network within each cell.
Numerical Experiments
In this section, we present a set of numerical examples. For all examples we adopt the communication energy consumption model from [12].
Let $d_r$ be the transmission range of each node. Then $j \in S_i$ if and only if $d_{ij} \le d_r$, where $d_{ij}$ is the distance between nodes $i$ and $j$. The energy expenditure per data unit transmitted from $i$ to $j$ satisfies $e^t_{ij} = e_\circ + \epsilon_{amp}\, d^4_{ij}$ and $e^r_{ij} = e_R$, where $e_\circ = 50$ nJ/bit and $e_R = 150$ nJ/bit denote the energy consumed in the transceiver circuitry at the transmitter and the receiver, respectively, and $\epsilon_{amp} = 100$ pJ/bit/m⁴ is the energy consumed at the output transmitter antenna for transmitting a bit over one meter. The receiver circuitry is in general more complex and consumes more energy than the transmitter circuitry, within the same order of magnitude. The path loss exponent of four is chosen to account for multipath reflections. In all the numerical experiments, $P_R$ is estimated by Monte Carlo simulation with $10^6$ samples; thus $P_R$ is accurate to within a $\pm 0.005$ error with 99% confidence (by Chebyshev's inequality).
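For illustration, the following sketch implements the stated energy model together with a simplified Monte Carlo estimator of the lifetime guarantee probability; the flow vector, distances, energies, and deviation level are hypothetical inputs (and the default sample count is kept below the paper's $10^6$ for brevity).

```python
# Radio energy model with the stated constants, plus a simplified
# Monte Carlo estimate of P[T >= T*] under uniformly distributed data.
# The flow q, distances, energies, and deviation level are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def e_t(d):
    """Transmit energy per bit (J/bit) over distance d metres."""
    return 50e-9 + 100e-12 * d**4          # e0 = 50 nJ/bit, 100 pJ/bit/m^4

E_R = 150e-9                                # receive energy per bit, J/bit

def guarantee_prob(arcs, q, dist, E_bar, T_star, dev=0.1, n=10**5):
    """arcs[k] = (i, j); q[k] = flow on arcs[k] in bits/sec; dist[k] = arc
    length in metres; E_bar[i] = nominal initial energy (sensor nodes only,
    sinks are line-powered); dev = max relative deviation of every datum."""
    et_bar, hits = e_t(np.asarray(dist)), 0
    for _ in range(n):
        et = et_bar * rng.uniform(1 - dev, 1 + dev, len(arcs))
        er = E_R * rng.uniform(1 - dev, 1 + dev, len(arcs))
        drain = {i: 0.0 for i in E_bar}
        for k, (i, j) in enumerate(arcs):
            drain[i] += et[k] * q[k]        # sender pays transmit energy
            if j in drain:                  # receiver pays, unless a sink
                drain[j] += er[k] * q[k]
        E = {i: E_bar[i] * rng.uniform(1 - dev, 1 + dev) for i in E_bar}
        T = min(E[i] / drain[i] for i in E_bar if drain[i] > 0)
        hits += T >= T_star
    return hits / n                         # vectorise for speed in practice
```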
A 4-Node WSNET.
We start with a toy example to give some intuition about the routing policies produced by each formulation. The WSNET consists of one origin node $O$, two relay nodes $R_1$ and $R_2$, and one sink node $S$, where $Q_O = 500$ bits/sec and the radio range is 30 m. The origin node $O$ has to use relays $R_1$ or $R_2$ to reach the sink. In Figure 6(a), the red (dot-dash), black (dash), and green (solid-star) lines with arrows represent the nominal, fat, and robust optimal flows, respectively. Note the difference in the selected routes: the nominal picks the shorter path $O - R_1 - S$, the fat picks the more "stable" but slightly longer path $O - R_2 - S$, while the robust balances the two, maintaining a relatively high lifetime guarantee probability while not suffering too much in predicted lifetime.
As we adjust $\Gamma^e_i / (|J^t_i| + |J^r_i|) = \Gamma^E_i$, $P_R$ and $T^*_R$ change accordingly. The solid blue curve in Figure 6(b) describes the relationship between $P_R$ and $(T^*_R - T^*_F)/T^*_F$ (the percentage predicted-lifetime gain of the robust formulation over the fat one). It can be seen that there is a significant predicted-lifetime gain (e.g., 15%) while the lifetime guarantee probability remains high (e.g., close to 0.8). The red dashed curve represents $\Gamma^e_i / (|J^t_i| + |J^r_i|) = \Gamma^E_i$ versus $P_R$. It can be seen that as we protect more against the randomness, the predicted lifetime $T^*_R$ goes down and the lifetime guarantee probability $P_R$ increases. The two extreme cases of no protection and full protection correspond to the nominal and fat formulations.
To gain further insight into the impact of uncertainty on the nominal formulation, consider the probability distribution of the actual lifetime $T$ achieved by applying the nominal optimal policy $\bar{\mathbf{q}}^*_N$ to random instances (where $e^t_{ij}$, $e^r_{ji}$, and $E_i$ are randomly drawn). Figure 7 shows the histogram of $T$ generated from a million instances. We can see that $T$ can be substantially smaller than $T^*_N$, and in fact most of the probability mass corresponds to such $T$'s. The nominal lifetime guarantee probability $P_N = P[T \ge T^*_N]$ would be fairly low, but it does not even capture how far from $T^*_N$ the actual lifetime $T$ can be.
Routing with Energy Allocation.
If energy allocation is an option, we set the globally available energy to $E = 30$ J. As before, Figure 8(a) presents the nominal, fat, and robust optimal flows and energy allocations. The situation is very similar to before, but energy allocation improves the predicted lifetime since no energy is wasted. Optimal values for a number of nominal, fat, and robust cases, with and without energy allocation, are listed in Table 1.
A Randomly Deployed WSNET.
In this case, we have 20 nodes (4 sinks, 10 origins, 6 relays) uniformly deployed on a 50 × 50 m² square, with $d_r = 25$ m and $Q_i = 500$ bits/sec for all $i \in O$. All $E_i$, $e^t_{ij}$, $e^r_{ji}$ are uniformly distributed, with $\bar E_i = 10$ J and the remaining parameters as listed in Table 2. Again, adjusting $\Gamma^e_i / (|J^t_i| + |J^r_i|) = \Gamma^E_i$ or $\Gamma^e_i / (|J^t_i| + |J^r_i|)$, respectively, for the two cases changes $P_R$ and $T^*_R$ accordingly (see Figures 9(a) and 9(b)). It can be seen that as we protect more against the randomness, the predicted lifetime $T^*_R$ goes down and the lifetime guarantee probability $P_R$ increases. For energy allocation problems, since at optimality all energy constraints are active, the lifetime guarantee probability is reduced, but the gain over the fat formulation is still nonnegligible.
As we did in the 4-node example, we plot in Figure 10 the histogram of $T$ achieved by $\bar{\mathbf{q}}^*_N$, computed from a million random instances of the problem (without energy allocation). It is clear that as the number of nodes grows, the probability mass of $T$ shifts away from $T^*_N$, and the actual $T$ is typically substantially smaller than $T^*_N$. This is consistent with our result that $P_N = P[T \ge T^*_N] \to 0$.
Conclusions
We presented a new framework for accommodating uncertainty in the design of maximum lifetime routing policies for WSNETs. We considered two scenarios: one (Scenario A) assuming that energy is already allocated to the various nodes, and the other (Scenario B) where this allocation is also subject to optimization. We formulated a worst-case (fat) problem and compared it with the nominal problem that makes certainty-equivalence assumptions and ignores uncertainty. As a compromise between the two, we also devised a robust formulation. We established, analytically and numerically, that the nominal solutions are always too optimistic. Specifically, for common Scenario A topologies (like regular linear arrays and grid-like WSNETs), the nominal formulation predicts a lifetime that is (almost) never achieved in the presence of uncertainty. In Scenario B, the same result holds for all topologies. The robust solutions, on the other hand, provide a useful and practical way to trade off performance against robustness. We extended our analysis to massively dense WSNETs and characterized optimal solutions of the routing problems.
| 9,279.2 | 2012-08-01T00:00:00.000 | [ "Computer Science", "Engineering", "Environmental Science" ] |
Validation Strategy as a Part of the European Gas Network Protection
The European gas network currently includes approximately 200,000 km of high-pressure transmission and distribution pipelines. The needs and requirements of this network are focused on risk-based security asset management and on the impacts and cascading effects of cyber-physical attacks on interdependent and interconnected European gas grids. The European SecureGas project tackles these issues by implementing, updating, and incrementally improving extended components, which are contextualized, customized, deployed, demonstrated and validated in three business cases, according to scenarios defined by the end-users. Validation in particular is considered a key final activity, the essence of which is the evaluation of the proposed solution to determine whether it satisfies the specified requirements. The chapter therefore deals with the validation strategy that can be implemented for the verification of these objectives and the evaluation of technology-based solutions which aim to strengthen the resilience of the European gas network.
Introduction
The European gas network is an important and irreplaceable subsector of European Critical Infrastructure (ECI) [1]. The functioning of this network is constantly affected by threats with a direct but also cascading or synergistic effect [2]. These threats can be of various natures, e.g. meteorological, geological, process-technological, cascading, personnel-related, cyber or physical [3]. Their impact can result in serious disruption or even failure of regional parts of the gas network. For this reason, it is necessary to continuously improve the protection system of the European gas network, in particular through risk analysis and the consequent strengthening of resilience via the identification and elimination of weaknesses.
One of the main means of enhancing resilience is technological solutions, which should address the operational and technical needs of the infrastructure and the requirements of the end user, i.e. the infrastructure operator [4]. The chapter therefore deals with the validation strategy [5] that can be implemented for the verification of these objectives and the evaluation of technology-based solutions which aim to strengthen the resilience of the European gas network. The main objective of the proposed validation plan, as part of an overall evaluation process, is to study the acceptance of a designed security system aiming to promote the resilience [6] of gas critical infrastructures (at the strategic, tactical and operational levels). For this purpose, it is necessary to collect qualitative information concerning some key criteria of the system which define its performance in operations. The primary focus of the validation strategy is to assess the functionality and effectiveness of the proposed system. However, the intuitiveness of the individual components, as well as the overall exploitation and operationalization potential of the developed solution, should also be evaluated.
The aforementioned validation plan has been developed and verified through continuous interaction with critical infrastructure (CI) operators within the SecureGas project [7]. The project aims to improve the resilience capabilities of gas CIs. The methodology uses a gas-CI-contextualized Panarchy loop [8] reflecting a disaster life-cycle management process. The objective is to reduce foreseen risk, optimize the monetary investment, and reduce uncertainties. Providing CI operators with a detailed validation methodological procedure to assess the added value of security solutions added to their infrastructure is of high value. Within the context of the SecureGas validation and evaluation, the aspects that are addressed include: performance versus expectation, ease of use, understandability, reliability of operations, completeness and reliability of output, functionality, man-machine interface, and efficiency. The criteria for validation, i.e. Key Performance Indicators (KPIs) [9], can be clustered into two categories: (1) general criteria that apply to the whole SecureGas system, and (2) specific criteria that apply to individual components of the system. Such a validation plan is fully transferable to other CI operators, both in the gas sector and in other sectors (e.g. power, telecommunications). With a slight adjustment of the identified KPIs, it can provide valuable information on the applicability and usefulness of a security solution for risk mitigation, prevention and response purposes within a CI.
Validation, verification and evaluation
In order to understand the activities to be implemented from the validation point of view, definitions of the basic concepts used are given and analyzed below, together with several methodological approaches. This section therefore provides both a background analysis of the validation-verification-evaluation processes and an adequate methodology.
The validation process involves the collection and evaluation of data, from the process design stage through the commercial production phase, which establishes scientific evidence that a process meets determined requirements. Process validation involves a series of activities taking place over the lifecycle of the process. Regulatory authorities such as the European Medicines Agency and the Food and Drug Administration have published guidelines relating to process validation [10]. The purpose of process validation is to ensure that varied inputs lead to consistent and high-quality outputs. Process validation is an ongoing process that must be frequently adapted as manufacturing feedback is gathered. End-to-end validation of production processes is essential in determining product quality, because quality cannot always be determined by a finished-product inspection. Process validation can be broken down into three steps: (1) process design, (2) process qualification, and (3) continued process verification.
The Guide to the Project Management Body of Knowledge (PMBOK Guide), a standard adopted by the Institute of Electrical and Electronics Engineers, defines validation and verification as follows [5]:
• Validation: The assurance that a product, service, or system meets the needs of the customer and other identified stakeholders. It often involves acceptance and suitability with external customers. Contrast with verification.
• Verification: The evaluation of whether or not a product, service, or system complies with a regulation, requirement, specification, or imposed condition. It is often an internal process. Contrast with validation.
These terms generally apply broadly across industries and institutions. In addition, they may have very specific meanings and requirements for specific products, regulations, and industries. Some examples: Software [11], Food and drug, Health care [12], Greenhouse gas [13], Traffic and transport [14], Simulation models [15], ICT industry, Civil engineering [16], Economics, Accounting, Agriculture, Arms control.
In the context of the above, validation can generally be classified into five basic categories:
• Prospective validation comprises the activities conducted before a new item is released, to make sure that the item functions properly and meets safety standards [17]. Some examples could be legislative rules, guidelines or proposals [18][19][20][21][22][23][24][25].
• Retrospective validation is a process for items that are already in use in distribution or production. The validation is performed against the written specifications or predetermined expectations, based upon historical data/evidence that is documented/recorded. If any critical data is missing, the work cannot be processed or can only be completed partially [10]. Retrospective validation is used for facilities, processes, and process controls in operational use that have not undergone a formally documented validation process. Validation of these facilities, processes, and process controls is possible by using historical data to provide the necessary documentary evidence that the process is doing what it is believed to do. Therefore, this type of validation is only acceptable for well-established processes and would be inappropriate where recent changes in the composition of the product, operating processes, or equipment have occurred [26].
• Concurrent validation is used for establishing documented evidence that a facility and its processes do what they purport to do, based on information generated during the actual implementation of the process [26]. This approach involves monitoring of critical processing steps and end-product testing of current production to show that the manufacturing process is in a state of control.
• Cross-validation is an approach by which the sets of scientific data generated using two or more methods are critically assessed [27].
• Re-validation is carried out for an item of interest that is dismissed, repaired, integrated/coupled, or relocated, or after a specified time lapse. Examples of this category could be relicensing/renewing a driver's license, recertifying an analytical balance that has expired or been relocated, and even revalidating professionals [28]. Re-validation may also be conducted when a change occurs during the course of activities, such as scientific research or phases of clinical trial transitions.
In contrast, evaluation is a systematic assessment of a subject's qualities, using criteria governed by a set of standards. Evaluation involves tests or studies conducted to investigate and determine the technical suitability of an equipment, material, product, process, or system for the intended objective. Evaluation can be formative, taking place during the development of a concept or proposal, project or organization, with the intention of improving its value or effectiveness. It can also be summative, drawing lessons from a completed action or project, or from an organization at a later point in time or circumstance [29]. According to the way the evaluation is conducted, we can distinguish the following types [30]:
• Internal evaluation, carried out by organizations, groups or stakeholders directly involved in the implementation of the project solution.
• External evaluation, carried out by specialists outside the development team, who are not employed within the organization responsible for the project under evaluation and who have no personal, financial or direct interest in the project.
Evaluation can be characterized as being either formative or summative. Broadly (and this is not a rule), formative evaluation looks at what leads to an intervention working (the process), whereas summative evaluation looks at the short-term to long-term outcomes of an intervention on the target group [31]:
• Formative evaluation takes place in the lead-up to the project, as well as during the project, in order to improve the project design as it is being implemented (continual improvement). Formative evaluation often lends itself to qualitative methods of inquiry.
• Summative evaluation takes place during and following the project implementation, and is associated with more objective, quantitative methods.
Process evaluation is an inductive method of theory construction, whereby observation can lead to identifying strengths and weaknesses in program processes and recommending needed improvements [32]. For this purpose, qualitative methods are most often used; in the context of evaluation, these are defined as research methods that emphasize depth of understanding, that attempt to tap the deeper meaning of human experience, and that intend to generate theoretically richer observations which are not easily reduced to numbers [32]. The most used qualitative evaluation methods include [33]: content analysis, situational analysis, in-house surveys, and interviewing.
Content analysis involves studying documents and communication artifacts, which might be texts of various formats, pictures, audio or video [34]. Quantitative content analysis highlights frequency counts and objective analysis of these coded frequencies [35]. Additionally, quantitative content analysis begins with a framed hypothesis with coding decided on before the analysis begins. These coding categories are strictly relevant to the researcher's hypothesis. Quantitative analysis also takes a deductive approach [36].
Situation analysis refers to a collection of methods that managers use to analyze an organization's internal and external environment in order to understand the organization's capabilities, customers, and business environment. It comprises several methods of analysis: the 5Cs analysis, SWOT analysis, and Porter's five forces analysis [37]. These methods shed light on the analytical processes by which managers assess their own organization, their consumers, and the marketplaces in which they compete. SWOT analysis is a strategic planning technique used to help a person or organization identify strengths, weaknesses, opportunities, and threats related to business competition or project planning [38]. It is designed for use in the preliminary stages of decision-making processes and can be used as a tool for evaluating the strategic position of an organization. It is intended to specify the objectives of the project and to identify the internal and external factors that are favorable and unfavorable to achieving those objectives. Users of a SWOT analysis often ask and answer questions to generate meaningful information for each category, to make the tool useful and to identify their competitive advantage.
An interview is essentially a structured conversation in which one participant asks questions and the other provides answers. Interviews can range from unstructured interviews, free-wheeling and open-ended conversations in which there is no predetermined plan with prearranged questions [39], to highly structured conversations in which specific questions occur in a specified order [40].
Other commonly used tools and techniques for evaluation purposes [41] can include especially observation, survey questionnaires, case studies, analytical models, expert panel's consultation, cost-benefit analysis (CBA), and multi-criteria analysis (MCA).
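As one concrete illustration of the last of these, a simple weighted-sum scoring is a common variant of multi-criteria analysis; the criteria names, weights, and scores in the following Python sketch are invented placeholders, not values from the SecureGas project:

```python
# Minimal weighted-sum MCA sketch: score alternatives against weighted
# criteria and rank them. All names and numbers are illustrative.
criteria_weights = {"effectiveness": 0.4, "ease_of_use": 0.2,
                    "reliability": 0.25, "cost": 0.15}

# Scores on a 1-5 scale gathered, e.g., from end-user questionnaires.
scores = {
    "solution_A": {"effectiveness": 4, "ease_of_use": 3,
                   "reliability": 5, "cost": 2},
    "solution_B": {"effectiveness": 3, "ease_of_use": 5,
                   "reliability": 4, "cost": 4},
}

def weighted_score(s):
    return sum(criteria_weights[c] * v for c, v in s.items())

for name, s in sorted(scores.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(s):.2f}")
```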
Normally, validation, verification and evaluation are performed in sequence, allowing one to estimate the completeness and consistency of the system and to examine its technical appropriateness, as depicted in Figure 1.
To sum up, verification and validation rely heavily on the earlier phases of the project. Verification is a rather technical process in which the main question is whether the system works properly. The validation process covers not only the demonstrations but also earlier meetings and discussions in which the requirements are refined. As already mentioned, verification of a developed tool/solution is the process of determining that the system is built according to its specifications. Validation is the process of determining that the system actually fulfills the purpose for which it was intended. Evaluation reflects the value and acceptance of the system by the end users, and its performance.
Concept of creating a validation plan
Following the analysis and presentation of the validation, verification and evaluation processes, in this section a holistic validation plan (covering all three processes) is analyzed. In principle, an effective validation and evaluation plan needs to seek answers, as clearly as possible, to the following questions:
1. What has to be evaluated?
2. Who is interested in the validation/evaluation?
3. What critical issues have to be tackled?
4. What has to be measured?
5. How does the validation/evaluation have to be performed?
6. Who is involved in the evaluation?
7. How will results be reported?
All these questions have been taken into consideration and are answered and described in detail as part of the SecureGas validation-evaluation methodological approach. In this four-step methodology (Figure 2), a set of business cases (BCs) is used to support the validation, verification and evaluation of the SecureGas solution. Three BCs, addressing relevant issues for the gas sector (the production, transport and distribution phases of the gas lifecycle, including different infrastructures for each phase), have been identified to ensure the delivery of solutions and services to the end-users. During BC implementation, tailor-made scenarios for the CIs will be used for demonstrations on actual sites. The technical components involved will be assessed quantitatively (by measuring the foreseen KPIs) and qualitatively (by using a set of questionnaires and interviews with the participants in the demonstrations).
Set the context
This kick-off step entails all the discussions and reviews with relevant stakeholders for the exact identification of the gaps and the existing capabilities. This step also sets the scope and the objectives of each BC for the SecureGas solution to provide differentiation from current practices and added value to the operational environment of a gas CI.
Identify end users/teams
Within the SecureGas framework, the end-user team consists of the gas CI operators participating in the project (DEPA, EDAA, AMBER, ENI). Beyond them, the SecureGas technical component providers are actively engaged and directly involved in all phases of the validation plan. External stakeholders have been identified and will be involved only in the BC implementation phase; they will participate and provide feedback for evaluation purposes. The stakeholders/actors participating in the pilot activities may vary among the different BCs; however, they belong to one of the following groups:
1. CI operators, managers and administrators, security liaison officers (also from interconnected, interdependent or similar CIs);
2. Emergency response authorities (police, fire brigade, civil protection, etc.);
3. National authorities (CI regulatory authorities, ministries, etc.);
4. Security service providers;
5. Secondary/other security professionals and practitioners (e.g. policy makers, other EU research projects, etc.).
Identify requirements and processes
The SecureGas validation and evaluation process is an essential part of the project's development cycle. The development cycle is user-oriented, which means it relies on the perceptions, needs and responses of end users. Based on this development cycle, in SecureGas phase 1 ("construct/develop"), user requirements and specifications are identified, leading to a conceptual model (CM), concept of operations (ConOps) and high-level reference architecture (HLRA). The CM, ConOps and HLRA will be implemented and demonstrated in phase 2 ("demonstrate") and finally validated in phase 3 ("validate & exploit"). Initial and crucial substeps for achieving an efficient planning and implementation of the BCs are to: 1. Identify CI assets, threats, vulnerabilities, requirements, procedures, etc., in order to prepare the scenario, including the CI's specific security issues and addressing end users' actual needs.
2. Identify legacy systems and existing infrastructures, integration-data sharing, possible limitations, etc., and collaborate with the technical team to develop a SecureGas solution tuned to the project's BCs.
For the execution of these substeps, some may choose from a set of existing tools and frameworks, e.g. risk and vulnerability assessment and penetration testing (see Section 4).
Define the objective of the validation-evaluation process
The main objectives of the evaluation process will be to study the acceptance of the SecureGas system (at the strategic, tactical and operational levels), assess the performance of its components and the operational potential of the developed solution.
The beneficiaries of the validation and evaluation process are both the technical component providers and the CI operators. The technical providers will receive valuable feedback on technical development, component adaptation and implementation, system integration, cooperation with legacy systems, etc. The CI operators will receive the performance assessment analysis of the SecureGas solution, the extracted lessons, recommendations and conclusions, and all knowledge that can be transferred to their operations.
Identify adequate criteria
The criteria for validation can be clustered into two categories, further analyzed in Section 4: • General criteria, which apply to the whole SecureGas system (Cross-KPIs), and • Specific criteria, which apply to individual components of the system.
As such, the validation process will generate feedback during the pilot demonstrations on the following dimensions: functional, interface, security, operational, design, and implementation.
When it comes to the specific criteria, the SecureGas partners will make use of the lists of user (organizational, operational and regulatory) and technical (and standards-related) requirements already defined, in order to determine whether the SecureGas system offers what it was designed to offer. As far as verification is concerned, the system specifications developed by the technical partners will play the same role as user requirements do in validation (see Figure 1). The evaluation process will also assess whether the SecureGas system complies with the technical requirements developed in phase 1 of the project.
Plan the business case
This second part consists of a number of substeps that will lead to the realization of the BC implementation.
Type, location and schedule
In each SecureGas BC, an operations-based demonstration will take place in the field (for the production, transport and distribution phases of the gas lifecycle), aiming to simulate scenarios as realistically as possible in a controlled environment. This method of BC implementation offers the advantage of real-time decisions and actions by the end users and other participating actors, generating responses and leading to consequences that depend on the participants' actions and on system performance. In addition, for the strategic level of the gas lifecycle, a discussion-based approach will be followed through the organization of a workshop/tabletop exercise, during which key personnel of the CI will have the chance to discuss scenarios that involve strategic threats and to assess policies, procedures, standard operating procedures and potential mitigation measures.
The locations may be related to the assets involved, the objectives and requirements of the validation, etc. Within SecureGas, the CI operators' sites in Greece, Lithuania and Italy have been selected and included in the scenarios based on the type of their installations.
Within the SecureGas project, project partners will customize, integrate and deploy the provided technical components into each BC. The deployment of the extended and integrated components in the BCs will be tested through piloting activities over a period of almost one year, with the last months focusing on the evaluations that lead to an overall report based on the data and information collected.
Define scenarios
BCs are based on scenarios that correspond to a sequence of facts occurring in a specific space-time framework. Scenarios should be structured in a logical way that is readily accessible to the pilot actors. Within the SecureGas BCs, scenarios consist of events designed to guide the actors towards achieving the BC objectives. Six specific methodological substeps have been specified to define the scenarios:
Analyze criteria
The criteria used for the validation/evaluation of the SecureGas system and each component consist of Cross-KPIs and specific KPIs (all linked with the end-user requirements and technical specifications). These criteria are discussed in detail in Section 4.1.
Select validation/evaluation method and tools
In the framework of the validation plan, the methods and tools for the evaluation needs have been selected. Thus, the following substeps are executed for each BC: 1. Define what has to be measured, based on the applicable KPIs.
2. Define how: through a discussion-based workshop/tabletop exercise for the strategic level, and operations-based simulations/field pilots for the tactical/operational level.
3. Define who is involved in the frame of the evaluation, sorted into three main groups as follows: • CI operators, security liaison officers, administrators and managers, who can provide input from an operational, policy and technical point of view, and evaluate the overall performance based on their experience.
• First responders, who can provide input regarding the information sharing and community awareness during an incident.
• Security practitioners and stakeholders, who, depending on their expertise, will provide information concerning the potential exploitation and use of the SecureGas solution. They may provide feedback on their willingness to use or adopt the system, as well as other technical/operational comments, etc.
In order to achieve an effective evaluation outcome, the selection of stakeholders must be based on requirements such as relevance to the scenario, adequate qualification, objectivity, and previous experience. 4. Define the tools to be used to collect the results and feedback, comprising: • KPIs and the respective traceability matrices, for validation purposes (a sketch of such a traceability record follows this list), and • survey questionnaires, focus groups, interviews and brainstorming, for evaluation purposes.
5. Define how the results will be reported.
The results will be presented in a suitable style and form, according to the reporting target audience and the selected tool. All reporting activities will be planned accordingly, paying attention to the most suitable communication means for the specific audience in terms of content presentation, type of language, level of detail and so on. For example, the elaboration of the questionnaires, the feedback from the interviews of the focus groups and the conclusions of the debriefing sessions (hot and cold washes) of the BCs will be documented in standardized feedback sheets, which will be analyzed to improve the overall specification and development processes and their outcomes.
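To make the traceability-matrix idea concrete, here is a minimal Python sketch of a single traceability record; the identifiers (CROSS-KPI-03, UR-12, TR-07) are hypothetical placeholders, not actual SecureGas deliverable IDs.

```python
# A minimal sketch of a KPI traceability record, with hypothetical identifiers.
from dataclasses import dataclass, field

@dataclass
class TraceabilityEntry:
    kpi_id: str             # e.g. a Cross-KPI identifier
    requirement_ids: list   # user/technical requirements the KPI traces to
    evidence: list = field(default_factory=list)  # observations collected during a pilot

    def is_covered(self) -> bool:
        # A KPI counts as validated once at least one piece of evidence exists.
        return len(self.evidence) > 0

# Example: one row of the matrix filled in during a business-case pilot.
entry = TraceabilityEntry(kpi_id="CROSS-KPI-03", requirement_ids=["UR-12", "TR-07"])
entry.evidence.append("Alert localized within target accuracy during field pilot")
print(entry.kpi_id, "covered:", entry.is_covered())
```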
Business case implementation
The third part of the validation plan is that of the BC pilots' execution, including both preparatory meetings and the actual field testing, and consisting of the following three substeps.
Plan the business case
1. End users (internal and external) are identified specifically for each BC.
2. The place and date are identified, the budget is estimated and the logistics are planned.
3. Invitations are sent and information about the pilot is shared with the involved stakeholders.
4. Before the pilot, a training course is organized so that the participants have the opportunity to familiarize themselves with the SecureGas solution.
5. The scenario (depending on the area of application) is presented to the end users and its details are discussed.
6. All necessary adaptations, installations and integrations are completed, and the system is ready to be used, demonstrated and evaluated.
Conduct validation exercise
Following the specific BC scenario storyline, the involved actors are guided and supported by the capabilities of the SecureGas system in order to respond to a security incident.
Assess data quality
Following the BC pilots' implementation, the participants are asked to use the validation/evaluation tool/method (e.g. fill in a specifically designed questionnaire, see Section 4). In some cases, interviews are held.
The assessment of results and feedback gathered leads to a holistic evaluation outcome, respective lessons identified and recommendations for further analysis.
Assess results
This last step of the methodology contains the analysis of the gathered evaluation results as well as an assessment of the SecureGas solution. The results of this step will be presented in the overall SecureGas evaluation and lessons identified report.
Assess results
The results assessment aims to collect valuable feedback from the end users' interactions during the pilots (via questionnaires, described in detail in Section 4.4) and from opinions and comments expressed through focus groups and end-of-session interviews. The purpose of this substep is to indicate, among other things, whether the SecureGas solution performs well, provides useful information, is easy to understand, reliable, ergonomic, efficient, etc.
Prepare validation and evaluation report
The final step in each BC pilot demonstration will summarize and present all the activities realized and the responses of the involved actors (both consortium partners and external experts). Based on these outcomes, an overall performance evaluation of the SecureGas solution will be reported; lessons, recommendations and conclusions will be extracted; and content for knowledge transfer will be structured.
Validation and evaluation tools
Within the SecureGas framework, and specifically in the third phase of the project, that of validation and exploitation, several tools will be used to support the efficient implementation of the validation plan described in Section 3 above. These tools consist of: (a) an initial assessment tool, used as a decision support tool to carry out a self-assessment identifying the level of intrusiveness and level of maturity of the CI; (b) a penetration testing tool/methodology for identifying vulnerabilities and assessing performance; (c) the KPIs, used as benchmarks to assess the project's efficiency in reaching its key objectives and to evaluate the quality of the proposed technical solution; and finally (d) questionnaires and interviews, the two main instruments for evaluation purposes.
Initial assessments
In the first step of the validation plan, the context is set as described in Section 3.1. The initial assessment follows the same approach as a pre-attack phase: gathering as much information as possible on the target systems and planning the activities to be performed during the tests. Assessment frameworks such as [42,43] can be used to identify the level of intrusiveness and level of maturity.
The substeps that are performed comprise: 1. Identify and prioritize assets: A list of assets should be compiled, indicating the importance of each one (e.g. software, hardware, data, interfaces, security governance, security controls and components, etc.).
2. Identify threats: A threat is anything that could exploit a vulnerability to breach security and cause harm to a CI. General threat categories are: physical adversarial threats and acts of terrorism, political/geopolitical/social threats, natural hazards, technological and accidental hazards, indirect threats and cyber threats.
3. Identify vulnerabilities: Identify a list of known vulnerabilities across the asset list and analyze the impact on the system/infrastructure if these are not correctly treated and mitigated. The impact on the system shall be considered in terms of e.g. economy, reputation, and safety of people.
4. Analyze measures: Analyze the measures that are either in place or in the planning stage to minimize or eliminate the probability that a threat will exploit a vulnerability in the system.
5. Determine the likelihood of an incident: The possibility of a vulnerability being exploited in an incident should be quantified, based on historical/statistical data, user experience and knowledge, or any other sources available (e.g. studies, estimations/information produced by authorities, etc.).
6. Assess the impact a threat could have, including factors such as the mission, the criticality and the sensitivity of the system and its data.
7. Prioritize the security risk: For each threat/vulnerability pair, determine the level of risk for the system/infrastructure, based on the likelihood and the impact of the threat and on the adequacy of the existing or planned system/infrastructure security controls for eliminating or reducing the risk (a minimal scoring sketch follows this list).
8. Recommend controls: Using the risk level from the previous step, determine the actions that the senior management of the CI and other personnel holding key positions must take to mitigate the risk to an accepted residual risk level.
9. Document the results to support management in making appropriate decisions on budget, policies, procedures, and so on.
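As a concrete illustration of substeps 5-7, the following minimal sketch scores and ranks threat/vulnerability pairs with a simple likelihood × impact scheme; the scales and the example pairs are illustrative assumptions, not part of the SecureGas methodology.

```python
# A minimal likelihood x impact risk-prioritization sketch; real assessments
# may use richer scales and control-adequacy adjustments.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
IMPACT = {"minor": 1, "moderate": 2, "severe": 3}

def risk_level(likelihood: str, impact: str) -> int:
    return LIKELIHOOD[likelihood] * IMPACT[impact]

threat_vuln_pairs = [
    ("unauthorized site access", "possible", "severe"),
    ("sensor spoofing",          "rare",     "moderate"),
]

# Rank pairs by descending risk so controls target the highest risks first.
ranked = sorted(threat_vuln_pairs, key=lambda p: risk_level(p[1], p[2]), reverse=True)
for name, lik, imp in ranked:
    print(f"{name}: risk={risk_level(lik, imp)}")
```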
Penetration testing
Following the above assessment, another process that can be used as a tool for identifying vulnerabilities and assessing performance is penetration testing (PT). PT is a security testing process in which experts execute real yet controlled attacks on systems and services to identify methods for circumventing the security features of an application, system, or network [44]. PT methodologies divide the process into four generic phases: 1. A planning phase, which focuses on gathering available information on the target systems as well as on potential methods of attack, obtaining management approval, and setting the groundwork for attack strategies and attack scenarios; 2. A discovery phase, which is broken down into two parts: information gathering and scanning, and vulnerability analysis; 3. An attack phase, where the tester applies the knowledge acquired in the previous phase. This phase contains the following substeps: (a) gaining access, (b) escalating privileges, (c) system browsing, and (d) installing additional tools; 4. A reporting phase, where experts evaluate the findings and propose corrective actions.
Key performance indicators
KPIs steer the realization of technical systems towards tangible goals while serving as a benchmark for internal quality assurance. Indeed, KPIs provide a measurable way to assess the project's efficiency in reaching its key objectives and to evaluate the quality of the proposed technical solution(s). Through well-defined KPIs, the main areas to be tested, measured and validated during the piloting activities are established.
The SecureGas KPIs were defined at an early stage of the project so that they could guide its targeted implementation. Preliminary activities regarding user and system requirements identification, as well as the ConOps and HLRA definition, had already been completed, providing valuable input to the KPI definition task.
For the purposes of the SecureGas project, the KPIs were classified along two main indicator types: a. SecureGas component KPIs, which reflect the key characteristics and functionalities offered by each SecureGas component and are applied for their performance evaluation; b. SecureGas Cross-KPIs, which reflect the key functionalities and the expected quality of the entire SecureGas solution.
Both the SecureGas component KPIs and the SecureGas Cross-KPIs establish the validation criteria to be measured during the SecureGas pilot demonstrations. Although both KPI categories are equally important for evaluating the fulfillment of objectives, this section focuses on the KPIs defined for the integrated SecureGas system (i.e. the SecureGas Cross-KPIs).
The methodology adopted for the definition of the KPIs was built on a bottom-up rationale. The SecureGas component KPIs (low-level KPIs) were defined first. Then, drawing on that information, the SecureGas Cross-KPIs (high-level KPIs) were derived. The procedural pathway followed for the identification of the KPIs is depicted in Figure 3.
Considering that KPIs depend on the end users and stakeholders interested in the SecureGas system, the first step of the adopted methodology concerned their active engagement in the KPI definition activities. This engagement had already started through the definition of the user requirements (i.e. end users' needs and expectations from an integrated security system such as the SecureGas system), as well as through dedicated stakeholder workshops organized for the validation of the user requirements. The user requirements, together with their external validation results, shed light on those characteristics of the system that are deemed important by the end users. In addition, information on the KPIs already applied by the end users to assess the performance of their daily gas network operations allowed consortium partners to draft the broad areas in which evaluations are performed. This information also enabled the consortium to examine how the SecureGas solution could contribute and add value to the resilience of the end users' infrastructure.
In parallel, drawing on the already defined technical requirements of the SecureGas components, consortium technical partners defined the key capabilities, characteristics and functionalities offered by every technical subsystem. The so-called SecureGas component KPIs enable components' development and implementation.
The next step regarded the definition of the SecureGas Cross-KPIs which reflect the most important features and characteristics offered by the entire (i.e. all subsystems integrated into one system) SecureGas solution. The end-users KPIs, the SecureGas component KPIs and the already defined SecureGas system specifications (Cross-Requirements), provided the baseline for the extraction of a list of eleven SecureGas Cross-KPIs ( Table 1) that are key to performance success.
As presented in Table 1, the SecureGas Cross-KPIs were classified into specific Fields that outline the general domain categories where the impacts are going to exert their effect. Those Fields are as follows: • Reliability, i.e. the capability of the system to function in a correct manner within the given timeframe. This includes high accuracy of alert localization, avoidance of any delays in data provision, and a low rate of false alerts or errors.
• Autonomy, i.e. the level of independence of the system. An autonomous system is capable of operating (detecting and processing incidents) without human supervision (human in the loop only when deemed necessary).
• Interoperability, i.e. the ability of the system to work with new products (i.e. sensors or sub-systems) without special configurations. • Usability, i.e. the set of attributes covering the effort needed to use a solution, and the individual assessment of the use of the solution by a stated or implied set of users.
• Resilience, i.e. the ability of the SecureGas system to adapt to and recover from a disruption. This means that the system is able to identify potentially disruptive events and adapt to the evolving circumstances.
Each of the aforementioned Fields was linked to a set of Indicators, each one being assigned a Description, Metric and Target Value.
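The Field/Indicator/Description/Metric/Target Value structure lends itself to a simple record type. The sketch below uses hypothetical values for illustration; the real entries are those of Table 1.

```python
# A minimal sketch of a Cross-KPI record mirroring the structure described above.
from dataclasses import dataclass

@dataclass(frozen=True)
class CrossKPI:
    field: str        # e.g. Reliability, Autonomy, Interoperability, Usability, Resilience
    indicator: str
    description: str
    metric: str
    target_value: float

    def satisfied(self, measured: float, higher_is_better: bool = True) -> bool:
        return measured >= self.target_value if higher_is_better else measured <= self.target_value

kpi = CrossKPI(
    field="Reliability",
    indicator="False alert rate",
    description="Share of alerts not corresponding to a real event",
    metric="percent of alerts",
    target_value=5.0,
)
print(kpi.satisfied(3.2, higher_is_better=False))  # True: 3.2% is below the 5% target
```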
Following the main principles of the SecureGas project, the SecureGas Cross-KPIs were designed to address all the Risk and Resilience phases. Those phases reflect the activities that need to be conducted before, during and after disruptive events, as part of a comprehensive risk and resilience management procedure. The Risk and Resilience phases are as follows: Prepare, Detect, Prevent, Absorb, Respond, Recover, Learn and Adapt. The ultimate goal of developing Cross-KPIs for all those phases was to showcase how the core functionalities and performance indicators of the SecureGas system can add value to the resilience of gas critical infrastructure networks. Figure 4 presents the Risk and Resilience phases that are affected by each SecureGas Cross-KPI. Some of the Cross-KPIs are linked to one phase, others to more, while the Cross-KPI "Multilingual Interface" is related to all seven Risk and Resilience phases, since the enhancement of the usability parameters of a system has the potential to affect the entire security and resilience status of a CI network. Figure 5 shows the distribution of the KPIs over the activities taking place before, during and after incidents. In general, the SecureGas Cross-KPIs are mostly linked to the activities/phases taking place before the occurrence of an incident (prepare, detect, prevent; approx. 47.1% of KPIs), although the SecureGas system does have performance parameters related to the post-incident activities (respond, recover, learn and adapt; approx. 32.4%).
Questionnaires and interviews
Within the context of the evaluation of SecureGas components and solution, two main instruments will be used: questionnaires and interviews.
Regarding the first instrument, two types of questionnaires will be used for evaluation purposes: a generic one that can be distributed to all participants (during testing, demonstrations, workshops) and a specific one to be filled in by targeted participants within the audience, as further described below: 1. Questionnaire 1 (generic): This will be addressed to all participants of the BC demonstrations and is based on the System Usability Scale (SUS), developed by John Brooke in 1986 [45]. Questionnaire 1 provides a "quick and dirty" though reliable tool for measuring the usability of tested systems. The SUS consists of a 10-item questionnaire with five response options for respondents, from strongly agree to strongly disagree (scored as in the sketch after this list). It makes it possible to gather evaluation feedback on a wide variety of products, systems and services, including hardware, software, mobile devices, websites and applications. The SUS has become an industry standard, with references in several articles and publications.
2. Questionnaire 2 (specific): The second questionnaire aims to extract end users' assessments of indicators such as intuitiveness, usability, performance, etc. of the proposed solution. The end users will fill in this specific questionnaire after they have experienced the capabilities and the use of the system during the BC demonstration. This questionnaire is divided into seven main sections (i.e. general information, ease of installation, facilitation of user learning, data requirements, integrity, usability, usefulness), each one aimed at examining a different aspect of the end users' view on the SecureGas components. Regarding the second instrument for evaluation, indicative topics that may be used for discussion during the interviews comprise: 1. Experience and comments on the parallel processing, dataflow and cooperating applications within the SecureGas system.
2. Integration and interoperability of components, input/output and automatic/ manual procedures for components.
3. Evaluation of the SecureGas solution as a whole for the identification, detection, assessment and mitigation of threats and risks.
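For reference, Questionnaire 1 can be scored with the standard SUS procedure: odd items contribute (response − 1), even items (5 − response), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch:

```python
# Standard SUS scoring for a 10-item questionnaire with 1-5 responses.
def sus_score(responses: list) -> float:
    if len(responses) != 10:
        raise ValueError("SUS expects exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items: response - 1; even items: 5 - response.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```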
Conclusions
The validation framework is a key activity of every project, which broadly includes the validation of the proposed solution to determine whether it satisfies specified requirements, the verification of the system specifications, and the evaluation of the developed solution, all further analyzed as processes in Section 2. In the framework of the SecureGas project, the developed solution is a set of technological components and practical tools which aim to strengthen the resilience of the European gas network.
The envisaged validation framework (Section 3) mainly includes two types of assessment (Section 4): (a) quantitative assessment, using a series of KPIs to validate the components and the solution as a whole, and (b) qualitative assessment, based upon dedicated questionnaires and interviews, to obtain feedback from the participants in the BC implementations.
The methodological procedure described in Section 3 of this chapter is undoubtedly necessary for any technological team providing a solution, in order to identify potential gaps and needed updates. Furthermore, it is also valuable for end users, in order to recognize the suitability of the proposed solution with respect to their requirements and specific security issues, and to appreciate the added value offered. Such a validation framework is applicable, at least as a concept, to all projects offering technological solutions to CI operators (or other types of end users) and can be adapted and tailored to each case, leading to valuable feedback. On the other hand, the proposed methodology may need some adjustments in order to cover the needs of an end user who would like to assess and validate a process or procedure already in hand or under proposal (e.g. KPI redefinition, questionnaire restructuring, etc.).
The next steps of this research include the implementation of the BCs based on this validation plan and the documentation of the results of each BC, consolidating them into an overall validation and performance evaluation, which may lead to lessons identified, best practices and recommendations for the interested stakeholders.
| 9,086 | 2020-11-24T00:00:00.000 | [ "Engineering", "Environmental Science", "Computer Science" ] |
Internalization and Down-regulation of Human Muscarinic Acetylcholine Receptor m2 Subtypes ROLE OF THIRD INTRACELLULAR m2 LOOP AND G PROTEIN-COUPLED RECEPTOR KINASE 2*
Sequestration/internalization of β2-adrenergic receptors seemed to be independent of phosphorylation by GRK2 on the basis of results with β2-adrenergic receptor mutants lacking phosphorylation sites or GRK-specific inhibitors (16-20). On the other hand, the agonist-induced sequestration of hm2 receptors expressed in HEK293 cells is hampered by deletion of the third intracellular loop (I3-loop), which includes the GRK2 phosphorylation sites (21,22). Moreover, agonist-dependent phosphorylation and sequestration of m2 receptors expressed in COS-7 cells are facilitated by coexpression of GRK2 and attenuated by coexpression of a dominant-negative mutant of GRK2 (DN-GRK2) that lacks kinase activity (23). Recently, Ferguson et al. have reexamined the relationship between phosphorylation by GRK2 and sequestration of β2-adrenergic receptors, demonstrating that phosphorylation by GRK2 (24) or other GRKs (25) facilitates sequestration of β2-adrenergic receptors. Phosphorylation facilitates β-arrestin binding to β2-adrenergic receptors (26) and thereby appears to enhance sequestration, possibly through interaction with clathrin (27), a major protein of coated pits. Pals-Rylaarsdam et al. (8,28) have provided results showing that the phosphorylation by GRK2 of m2 receptors is involved in their internalization as well as in their uncoupling from G proteins in HEK293 cells. These results suggest that the phosphorylation by GRK2 of m2 muscarinic and β2-adrenergic receptors may be involved in both internalization and uncoupling through facilitation of their interaction with β-arrestin/arrestin 3.
No studies have been carried out on the relation between down-regulation and phosphorylation of G protein-coupled receptors, except that down-regulation of β2-adrenergic receptors has been reported to be independent of their phosphorylation by GRK2 (16,18). It is also unclear whether the cellular pathway leading to down-regulation is distinct from that of internalization. If a portion of receptors in clathrin-coated vesicles translocates into lysosomes and is down-regulated, their phosphorylation by GRKs or the deletion of the I3-loop should also affect down-regulation. However, if down-regulation occurs by a distinct pathway, receptor phosphorylation may not play a role. Alternatively, both phosphorylated and non-phosphorylated receptors may enter the clathrin-dependent internalization pathway, albeit at different rates. Finally, receptor phosphorylation could affect the rate of translocation between endosomes and lysosomes, or recycling to the cell surface.
Here, we provide evidence that down-regulation as well as internalization of hm2 receptors is facilitated by coexpression of GRK2. Moreover, deletion of the I3-loop, which contains the GRK2 phosphorylation sites (22), suppressed rapid internalization and markedly reduced the rate of down-regulation.
Materials—[3H]NMS (specific activity of 71.3 Ci/mmol) and [3H]QNB (specific activity of 36.4 Ci/mmol) were purchased from NEN Life Science Products; restriction enzymes were from Toyobo Corp. and Takara Shuzo Co., Ltd.; Cy3-conjugated goat anti-mouse IgG antibody was from Jackson Laboratories. cDNA of GRK2 was kindly donated by Dr. R. J. Lefkowitz, the mammalian expression vector for the hygromycin-resistant gene (pSV-hygro) was from Dr. H. Okayama, and the mammalian expression vector with the neomycin-resistant gene (pEF-neo) and the mammalian expression vector pEF-BOS were from Drs. S. Nagata and T. Shimizu. Hybridoma cells expressing 9E10 were obtained from the American Type Culture Collection; Chinese hamster ovary CHO-K1 cells were from the Japanese Cancer Research Resources Bank.
Construction of Stable Transfectants Expressing hm2 Receptors and GRK2—The construction of mammalian expression vectors for the c-Myc epitope-tagged hm2 receptor (pEF-Myc-hm2) and GRK2 (pEF-GRK2) was described previously (23). CHO-K1 cells (5 × 10⁴ cells) were transfected with 18 μg of pEF-Myc-hm2 and 2 μg of pEF-neo by the calcium phosphate precipitation method (29). Stable transfectants were selected in the presence of 400 μg/ml Geneticin (Life Technologies, Inc.) and were subcloned by limiting dilution. Expression of receptors was detected by [3H]QNB binding. The [3H]QNB binding sites in these cells were estimated to be 165 fmol/mg of protein in total homogenate. The transfectants were cultured in F-12 nutrient mixture (Ham's) (Life Technologies, Inc.) supplemented with 10% fetal bovine serum (Cansera International Inc.), 40 units/ml penicillin G (Meiji Seika, Kaisha Ltd.), 40 mg/ml streptomycin sulfate (Meiji Seika, Kaisha Ltd.), and 100 μg/ml Geneticin at 37°C in 95% air and 5% CO2. One of the CHO cell clones expressing hm2 receptors was transfected with 18 μg of pEF-GRK2 and 2 μg of pSV-hygro, and stable transfectants were selected in the presence of 300 μg/ml hygromycin B (Boehringer Mannheim) and subcloned by limiting dilution. Expression of GRK2 was detected by Western blotting as described previously (23). The [3H]QNB binding sites of these cells were estimated to be 330 fmol/mg of protein in total homogenate, and the expressed amounts of GRK2 were estimated to be 300-600 fmol/mg of protein in the supernatant by immunostaining with anti-GRK2 antibodies. The transfectants were cultured in F-12 nutrient mixture (Ham's) supplemented with 10% fetal bovine serum, 40 units/ml penicillin G, 40 mg/ml streptomycin sulfate, and 100 μg/ml hygromycin B at 37°C in 95% air and 5% CO2. A mammalian expression vector for an m2 receptor mutant that lacks the central part of the third intracellular loop (I3-del m2 receptor) was constructed by inserting the NheI-XhoI fragment of pSG5/Hm2(d234-381) (21) into the NheI/XhoI site of pEF-Myc-hm2. I3-del m2 receptors were stably expressed in CHO-K1 cells as described above, and the [3H]QNB binding sites of these cells were estimated to be 260 fmol/mg of protein in total homogenate.
Sucrose Density Gradient Centrifugation Experiments—Sucrose density gradient centrifugation was carried out as described by Harden et al. (30). Semiconfluent CHO cells cultured in a 15-cm diameter dish were treated with 10⁻⁵ M carbamylcholine for 20 min and then washed three times with 10 ml of ice-cold phosphate-buffered saline (PBS; 137 mM NaCl, 2.7 mM KCl, 8.1 mM Na2HPO4, 1.5 mM KH2PO4, pH 7.5). Washed cells were incubated with 10 ml of serum-free F-12 medium containing 50 μg/ml concanavalin A for 20 min on ice, then washed with 10 ml of lysis buffer (1 mM Tris, 2 mM EDTA, pH 7.4), and hypotonically lysed by incubation in 10 ml of lysis buffer for 20 min on ice. After removing the lysis buffer, cells were collected in a small volume of lysis buffer with a rubber policeman. [...] and 1 mM NaH2PO4, pH 7.4; 0.5 ml/well) at 4°C for 4 h. After incubation, cells were washed three times with 1 ml of ice-cold PBS per well. After washing, cells were dissolved in 0.3 ml of 1% Triton X-100 (w/v), mixed with 4.5 ml of a Triton-toluene mixture containing 0.4% 2,5-diphenyloxazole and 0.01% 1,4-bis-2-(methyl-5-phenyloxazolyl)benzene, and the radioactivity was measured. Quadruplicate samples were assayed for each point. In some experiments, cells were treated with carbamylcholine in hypertonic medium containing 0.32 M sucrose in addition to the normal constituents. Down-regulation in the hypertonic medium was examined for cells treated with carbamylcholine for 1-4 h, because incubation for longer than 4 h in the hypertonic medium caused the CHO cells to deteriorate.
Immunofluorescence Confocal Microscopy of hm2 Receptors—CHO cells expressing c-Myc-tagged human m2 receptors were grown overnight on plastic chamber slides (Nunc Inc.). Treatment with various concentrations of carbamylcholine was carried out at 37°C for 10 min. At the end of drug treatment, cells were washed twice with PBS, fixed for 10 min at room temperature with 3.7% paraformaldehyde in PBS, and permeabilized in PBS containing 0.25% fish gelatin, 0.04% saponin, and 0.05% NaN3. After permeabilization, cells were labeled with anti-Myc monoclonal antibody (9E10) (31) for 1 h, washed four times with PBS, incubated with Cy3 (indocarbocyanine)-conjugated goat anti-mouse secondary antibody, and then washed four times with PBS and once with water. Slides were mounted using Fluoromount G (Fisher Scientific) containing a trace amount of phenylenediamine and stored at 4°C. Samples were visualized by laser scanning confocal microscopy with a krypton-argon laser coupled to a Bio-Rad MRC-600 confocal head attached to an Optiphot II Nikon microscope with a Plan Apo 60× objective lens with 1.4 numerical aperture. Cy3 emission was detected with a yellow high-sensitivity filter block.
Sequestration of hm2 Receptors as Assessed by Loss of [3H]NMS Binding Sites from the Cell Surface—CHO cells expressing hm2 receptors with or without GRK2 were treated with carbamylcholine for various times, and then the [3H]NMS binding activity of intact cells was measured. [...] greater (21). It should be noted that the portion of sequestered m2 receptors was higher for CHO cells (80%) than for COS-7 cells (40%) or BHK-21 cells (20-25%).
Many membrane proteins, including G protein-coupled receptors, have been shown to be internalized through coated vesicles (26,27,32,33), whereas some receptors, including m2 muscarinic receptors, have also been reported to be internalized via caveolae (34,35). To determine which process is involved in the sequestration of hm2 receptors expressed in CHO cells, we examined the effect of hypertonic medium on sequestration, because hypertonic medium is known to inhibit internalization through clathrin-coated vesicles but not internalization through caveolae (33,34,36,37). Sequestration of hm2 receptors in the presence of 10⁻⁴ M or lower concentrations of carbamylcholine was completely suppressed in hypertonic medium containing 0.32 M sucrose (Fig. 1E), indicating that hm2 receptors are internalized through clathrin-coated vesicles. The inhibition of sequestration by hypertonic medium was observed whether or not GRK2 was coexpressed, excluding the possibility that coexpression of GRK2 facilitated internalization of hm2 receptors through a pathway different from the coated vesicle-mediated pathway.
Assessment of Internalization of hm2 Receptors by Sucrose Density Gradient Centrifugation and Confocal Microscopy-
Sequestration of muscarinic receptors as assessed by the loss of [3H]NMS binding sites from the cell surface is generally thought to represent internalization of receptors in the form of endocytosed vesicles. We confirmed internalization of hm2 receptors expressed in CHO cells with two different methods: sucrose density gradient centrifugation and confocal microscopy. Sucrose density gradient centrifugation was carried out as described by Harden et al. (30). The carbamylcholine-treated cells were incubated with concanavalin A, hypotonically lysed, and then subjected to centrifugation, which resulted in the separation of two fractions: a heavy membrane fraction containing cell surface membranes and a light fraction containing intracellular vesicles (endosomes). As shown in Fig. 2, the peak of [3H]QNB binding sites shifted from the heavy to the light fraction upon treatment of cells with 10⁻⁵ M carbamylcholine for 20 min. This result is consistent with the interpretation that the sequestered [3H]NMS binding sites, corresponding to approximately 50% of total hm2 receptors, were transferred from cell membranes to light vesicle fractions.
We have also followed internalization using laser scanning confocal microscopy. CHO cells expressing Myc-tagged hm2 receptors alone or Myc-tagged hm2 receptors together with GRK2 were labeled with anti-Myc monoclonal antibody (9E10) as described previously by Tolbert and Lameh (32). In the absence of agonist, hm2 receptors can be observed only at the cell surface (Fig. 3, A and C). When the cells were treated with 10⁻⁶ M carbamylcholine for 10 min, vesicles containing hm2 receptors were observed only in cells coexpressing GRK2 (Fig. 3D). In the cells expressing only hm2 receptors, no intracellular vesicles containing hm2 receptors were observed after agonist treatment (Fig. 3B).
These results provide evidence that the sequestration/internalization observed as the loss of [3H]NMS binding sites and the transfer of [3H]QNB binding sites represents the translocation of hm2 receptors from plasma membranes into cytoplasmic vesicles. [...] (Fig. 4B). In contrast, down-regulation was undetectable in the presence of 10⁻⁶ M carbamylcholine without GRK2, whereas significant down-regulation occurred with GRK2 coexpression (Fig. 4A). Apparent EC50 values of carbamylcholine for the down-regulation of hm2 receptors after 16 h of treatment were estimated to be 0.7 and 6 μM for cells with or without coexpression of GRK2, respectively (Fig. 4D). These results provide the first evidence that the down-regulation of G protein-coupled receptors is facilitated by coexpression of GRK2 and suggest that phosphorylation by GRK2 of hm2 receptors is directly or indirectly linked to their down-regulation.
Down-regulation of hm2 Receptors as Assessed by the Decrease in [3H]QNB Binding Sites—The down-regulation of hm2 receptors was assessed as the agonist-induced decrease in [3H]QNB binding sites.
Down-regulation of hm2 receptors, as well as their sequestration, was markedly inhibited in the hypertonic medium (Fig. 4E). When cells were treated with 10⁻⁴ M carbamylcholine for 4 h, the proportions of down-regulated receptors were 17-18% in the hypertonic medium, in contrast with 39-49% in the normal medium. The inhibition was observed irrespective of the coexpression of GRK2. The finding that both sequestration and down-regulation were commonly inhibited in the hypertonic medium supports the idea that both involve the same event, e.g. internalization through coated vesicles.
Sequestration and Down-regulation of I3-del m2 Receptors—We have stably expressed I3-del m2 receptors in CHO cells (22); I3-del m2 receptors are not phosphorylated by GRK2 (4). I3-del m2 receptors transiently expressed in HEK293 cells have been shown to sequester much less than hm2 receptors (21). Similarly, I3-del m2 receptors in CHO cells failed to sequester significantly upon treatment with carbamylcholine for 1 h (Fig. 5A). The [3H]NMS binding sites gradually decreased upon prolonged incubation with carbamylcholine, but the rate of loss was much lower for I3-del m2 receptors (t½ = 8.4 h) than for wild-type hm2 receptors (t½ = 9.5 min) (Fig. 5C). Fig. 5B shows changes in [3H]QNB binding sites after incubation of cells expressing wild-type and I3-del m2 receptors for 16 h with different concentrations of carbamylcholine. Unexpectedly, appreciable loss of [3H]QNB binding sites was observed even for I3-del m2 receptors, although the extent of loss was less for I3-del m2 receptors (44%) than for hm2 receptors (60%). The rate of loss was also much slower for I3-del m2 receptors than for hm2 receptors (t½ = 9.9 versus 2.3 h) (Fig. 5C). These results indicate that the presence of the I3-loop is not required for agonist-induced down-regulation, although it may accelerate the rate of down-regulation. The loss of [3H]QNB binding sites in the presence of 10⁻⁴ M carbamylcholine occurred in parallel with the loss of [3H]NMS binding sites for I3-del m2 receptors (t½ = 9.9 and 8.4 h for the loss of [3H]QNB and [3H]NMS binding sites, respectively), in sharp contrast to the rates for wild-type hm2 receptors (t½ = 2.3 h and 9.5 min, respectively) (Fig. 5C). These results indicate that I3-del m2 receptors are down-regulated as soon as they are lost from the cell surface and that no appreciable amounts of I3-del m2 receptors exist in an internalized form, whereas 40-60% of hm2 receptors exist in an internalized form (Figs. 5C and 6).

DISCUSSION

In previous studies (23), we have shown that sequestration of m2 receptors transiently expressed in COS-7 and BHK-21 cells was facilitated by coexpression of GRK2, an effect that was evident only at low concentrations of carbamylcholine. In the present study, a similar effect of coexpression of GRK2 was observed for the sequestration of hm2 receptors stably expressed in CHO cells. Furthermore, the sequestration assessed as the loss of [3H]NMS binding sites from the cell surface was confirmed to represent the internalization of hm2 receptors from plasma membranes into cytoplasmic vesicles by analyses involving sucrose density gradient centrifugation of membrane fractions and confocal microscopy. The fact that a similar effect was observed in three different cell lines suggests that facilitation by GRK of the internalization of hm2 receptors is a general phenomenon independent of cell species. On the other hand, Pals-Rylaarsdam et al. (8) have argued against the involvement of GRK2 in the internalization of hm2 receptors, based on the finding that the level of sequestration was not affected by coexpression of GRK2 or a DN-GRK2 in a clone of HEK293 cells. They measured the sequestration of hm2 receptors in cells treated with only a high concentration of carbamylcholine (1 mM), and therefore could have missed the effect of GRK2 coexpression.
Very recently, these authors have shown that an hm2 receptor mutant with alanine residues in place of the serine/threonine residues in the GRK2 phosphorylation sites was sequestered to a lower extent than the wild-type receptor, and concluded that sequestration of hm2 receptors is promoted by their phosphorylation (28). As for the effect of coexpression of DN-GRK2, we have also failed to detect any effect on the sequestration of m2 receptors in CHO cells and BHK-21 cells (23). In contrast to wild-type hm2 receptors, I3-del m2 receptors (deletion 234-381), which lack the phosphorylation sites for GRK2, failed to internalize rapidly. The simplest interpretation of this finding is that phosphorylation by GRK2 of serine or threonine residues in the I3-loop is a necessary step for rapid internalization. We cannot exclude, however, the possibility that the I3-loop may have other functions. Pals-Rylaarsdam et al. reported that an hm2 mutant with a deletion (252-327) in the I3-loop was not phosphorylated by GRK2; yet 50% of the mutant receptors stably expressed in HEK293 cells were sequestered upon treatment with 10⁻³ M carbamylcholine for 2 h, although the sequestration of the mutant was smaller in extent and slower in rate than that of wild-type hm2 receptors. Possibly, internalization depends on phosphorylation-independent sites that were deleted from our mutant but not from the 252-327 deletion mutant. Ferguson et al. (24) have shown that overexpression of β-arrestin rescues sequestration of a β2-adrenergic receptor mutant lacking phosphorylation sites, and proposed that the interaction between β-arrestin and receptors is essential for internalization and that internalization is facilitated by, but does not require, phosphorylation by GRK2. Both phosphorylation sites and phosphorylation-independent sites in the I3-loop might be involved in the interaction with β-arrestin, which accelerates internalization.
We have found in the present study that coexpression of GRK2 facilitates the down-regulation of hm2 receptors by reducing the effective concentrations of carbamylcholine. As the effects of GRK2 coexpression on internalization and down-regulation of hm2 receptors were similar to each other, it is tempting to speculate that both internalization and down-regulation involve the same event, e.g. the phosphorylation by GRKs of agonist-bound receptors. To our knowledge, a positive relationship between down-regulation and phosphorylation by GRK2 has not been reported for any G protein-coupled receptor. As for β2-adrenergic receptors, receptor mutants lacking phosphorylation sites for GRK2 have been shown to down-regulate normally (16,18). It should be noted, however, that these authors did not examine the effect of different concentrations of agonist, and therefore the ability of GRK2 to reduce the effective concentration might not have been noticed.
When hm2 receptor-expressing cells were treated with 10⁻⁴ M carbamylcholine, hm2 receptors were rapidly internalized with a t½ of 9.5 min and slowly down-regulated with a t½ of 2.3 h. Thus, approximately 60% of receptors were down-regulated, 30% were in an internalized form, and 10% remained at the cell surface after a 16-h incubation (see Fig. 5C). In contrast, I3-del m2 receptors were lost from the cell surface and down-regulated with slower rates of t½ = 8.4 and 9.9 h, respectively, so that approximately 60% of receptors were down-regulated, no appreciable receptors were detectable in an internalized form, and 40% remained at the cell surface after a prolonged incubation (see Fig. 5C). These results indicate that down-regulation may occur without the I3-loop. However, the I3-loop is necessary for rapid internalization and accumulation of internalized receptors.
In Fig. 6, we present a tentative schema for the relationship between internalization and down-regulation of hm2 receptors. We assume in this schema that agonist-bound receptors are rapidly internalized and that internalized receptors are slowly down-regulated. This schema explains the present findings that both rapid internalization and down-regulation in the presence of low concentrations of carbamylcholine are accelerated in parallel by coexpression of GRK2; this explanation is based on the assumptions that the amounts of phosphorylated hm2 receptors are increased by coexpression of GRK2, that the rate of internalization is limited by the concentration of phosphorylated receptors, and that the rate of down-regulation is limited by the concentration of internalized receptors. The finding that both sequestration and down-regulation are inhibited in hypertonic medium supports the scheme and suggests that the rapid internalization occurs via coated vesicles. We cannot, however, exclude the possibility that down-regulation occurs through multiple pathways. In contrast to hm2 receptors, the I3-del m2 receptors are lost from the cell surface and down-regulated at similar slow rates (see Fig. 5C), indicating that no appreciable amounts of receptors exist in an internalized form. It is possible that hm2 receptors down-regulate via two independent pathways, an I3-loop-requiring and an I3-loop-independent pathway, which do and do not involve rapid internalization, respectively. The I3-loop-requiring and I3-loop-independent pathways may represent the coated vesicle-mediated and coated vesicle-independent pathways, respectively. This interpretation is consistent with the results that the internalization of hm2 receptors caused by 10⁻⁴ M carbamylcholine was completely suppressed in hypertonic medium while down-regulation was only partly suppressed, and that the proportions of down-regulated hm2 receptors in hypertonic medium were similar to those of down-regulated I3-del m2 receptors in normal medium (compare Figs. 4E and 5C). Another interpretation is that down-regulation of hm2 receptors occurs through a single pathway involving internalized vesicles and that the internalization step proceeds rapidly for hm2 receptors with an intact I3-loop but greatly slows down and becomes the rate-limiting step for down-regulation of I3-del m2 receptors. At present, the question remains open whether down-regulation of hm2 receptors occurs through a single route via internalized receptors or through multiple independent pathways.
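The two-step schema of Fig. 6 can be made quantitative with simple first-order kinetics. The sketch below uses the wild-type half-lives reported above (internalization t½ = 9.5 min, down-regulation t½ = 2.3 h) in an irreversible surface → internalized → down-regulated model; because it omits recycling to the cell surface, it over-predicts down-regulation at 16 h relative to the reported ~60/30/10 distribution, and is intended only to illustrate the model's qualitative behavior.

```python
# A minimal first-order kinetics sketch of the Fig. 6 schema (no recycling).
import math

k1 = math.log(2) / 9.5          # surface -> internalized, per minute (t1/2 = 9.5 min)
k2 = math.log(2) / (2.3 * 60)   # internalized -> down-regulated, per minute (t1/2 = 2.3 h)

surface, internal, degraded = 1.0, 0.0, 0.0
dt = 0.1  # time step in minutes
for _ in range(int(16 * 60 / dt)):  # simulate 16 h of agonist treatment
    flux_in = k1 * surface * dt
    flux_deg = k2 * internal * dt
    surface -= flux_in
    internal += flux_in - flux_deg
    degraded += flux_deg

# Without recycling, nearly all receptors end up down-regulated by 16 h.
print(f"surface={surface:.2f}, internalized={internal:.2f}, down-regulated={degraded:.2f}")
```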
In the present study, we have shown that both internalization and down-regulation of hm2 receptors are facilitated by coexpression of GRK2, and that the I3-loop is necessary for rapid internalization but not for down-regulation, although the rate of down-regulation is reduced in its absence.
FIG. 6. Schema for a possible relationship between internalization and down-regulation of hm2 receptors. The rates of internalization (9.5 min and 8.4 h) and down-regulation (2.3 and 9.9 h), and the proportions of cell surface receptors, internalized receptors, and down-regulated receptors were estimated for cells treated with 10⁻⁴ M carbamylcholine for 16 h or more (Fig. 5C). R, receptor.
| 5,196.8 | 1998-02-27T00:00:00.000 | [ "Biology", "Chemistry" ] |
DFKI: Multi-objective Optimization for the Joint Disambiguation of Entities and Nouns & Deep Verb Sense Disambiguation
We introduce an approach to word sense disambiguation and entity linking that combines a set of complementary objectives in an extensible multi-objective formalism. During disambiguation the system performs continuous optimization to find optimal probability distributions over candidate senses. Verb senses are disambiguated using a separate neural network model. Our results on noun and verb sense disambiguation as well as entity linking outperform all other submissions on the SemEval 2015 Task 13 for English.
Introduction
The task of assigning the correct meaning to a given word or entity mention in a document is called word sense disambiguation (WSD) (Navigli, 2009) or entity linking (EL) (Bunescu and Pasca, 2006), respectively. Successful disambiguation requires not only an understanding of the topic or domain a document is dealing with (global), but also an analysis of how an individual word is used within its local context. E.g., the meanings of the word "newspaper" as the company or the physical product, often cannot be distinguished by the topic, but by recognizing which type of meaning fits best into the local context of its occurrence. On the other hand, for an ambiguous entity mention such as "Michael Jordan" it is important to recognize the topic of the wider context to distinguish, e.g., between the basketball player and the machine learning expert.
The combination of the two most commonly used reference knowledge bases for WSD and EL, e.g., WordNet (Fellbaum, 1998) and Wikipedia, by BabelNet (Navigli and Ponzetto, 2012) has enabled a new line of research towards the joint disambiguation of words and named entities. Babelfy (Moro et al., 2014) has shown the potential of combining these two tasks in a purely knowledge-driven approach that jointly finds connections between potential word senses in the global context. On the other hand, typical supervised methods (Zhong and Ng, 2010) trained on sense-annotated corpora are usually quite successful in dealing with individual words in a local context. Hoffart et al. (2011) recognize the importance of combining both local context and global context for robust disambiguation. However, their approach is limited to EL, where optimization is performed in a discrete setting. We present a system that combines disambiguation objectives for both global and local contexts into a single multi-objective function. In contrast to prior work, we model the problem in a continuous setting based on probability distributions over candidate meanings. Our approach exploits lexical and encyclopedic knowledge, local context information and statistics of the mapping from text to candidate meanings. Furthermore, we introduce a deep learning approach to verb sense disambiguation based on semantic role labeling.
Approach
The SemEval-2015 task 13 (Moro and Navigli, 2015) requires a system to jointly detect and disambiguate word and entity mentions given a reference knowledge base. The provided input to the system is tokenized, lemmatized and POS-tagged documents; the output is sense-annotated mentions.
Our system employs BabelNet 1.1.1 as reference knowledge base (KB). BabelNet is a multilingual semantic graph of concepts and named entities that are represented by synonym sets, called Babel synsets.
Mention Extraction & Entity Detection
We define a mention to be a sequence of tokens in a given document for which there exists at least one candidate meaning in the KB. The system considers all content words (nouns, verbs, adjectives, adverbs) as mentions, including multi-token words of up to 5 tokens that contain at least one noun. In addition, we apply a pre-trained stacked linear-chain CRF (Lafferty et al., 2001) using version 1.1 of the FACTORIE toolkit (McCallum et al., 2009) to identify named entity (NE) mentions. In our approach, we distinguish NEs from common nouns and treat them as two different classes, because many common nouns also refer to NEs, which would make disambiguation unnecessarily complicated.
Candidate Search
After potential mentions are extracted, the system tries to identify their candidate meanings, i.e., the appropriate synsets. Mentions without such candidates are discarded. The mapping of candidate mentions to synsets is based on similarities of their surface strings or lemmas. If the surface string or lemma of a mention matches the lemma of a synonym in a synset that has the same part of speech, the synset is considered a candidate meaning. We allow partial matches for BabelNet synonyms derived from Wikipedia titles or redirections. A partial match allows the surface string of a mention to differ by up to two tokens from the Wikipedia title (excluding everything in parentheses) if the partial string was used at least once as an anchor for the corresponding Wikipedia page. For example, for the Wikipedia title Armstrong School District (Pennsylvania), the following surface strings would be considered matches: "Armstrong School District (Pennsylvania)", "Armstrong School District", "Armstrong", but not "School", since "School" was never used as an anchor. If there is no match, we try the same procedure applied to the lowercased text or lemma.
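A minimal sketch of this partial-match rule, assuming a hypothetical anchor-string set; the real system consults BabelNet/Wikipedia anchor statistics:

```python
# A sketch of partial matching against a Wikipedia title and its anchor strings.
def strip_parenthetical(title: str) -> str:
    return title.split(" (")[0].strip()

def partial_match(mention: str, wiki_title: str, anchors: set) -> bool:
    core = strip_parenthetical(wiki_title)
    if mention in (wiki_title, core):
        return mention in anchors
    core_tokens, mention_tokens = core.split(), mention.split()
    # The mention may differ from the title core by at most two tokens...
    if abs(len(core_tokens) - len(mention_tokens)) > 2:
        return False
    if not set(mention_tokens) <= set(core_tokens):
        return False
    # ...and must have been used at least once as an anchor for the page.
    return mention in anchors

anchors = {"Armstrong School District (Pennsylvania)",
           "Armstrong School District", "Armstrong"}
print(partial_match("Armstrong School District",
                    "Armstrong School District (Pennsylvania)", anchors))  # True
print(partial_match("School",
                    "Armstrong School District (Pennsylvania)", anchors))  # False
```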
Because of the distinction between nouns and named entities we treat NE as a separate POS tag. Candidate synsets for NEs are Babel synsets considered NEs in BabelNet, and additionally Babel synsets of all Wikipedia senses that are not considered NEs. Similarly, candidate synsets for nouns are noun synsets that are not considered NEs in addition to all synsets of WordNet senses in BabelNet. We add synsets of Wikipedia senses and WordNet senses, respectively, because the distinction of NEs and simple concepts is not always clear in BabelNet. For example the synset for "UN" (United Nations) is considered a concept whereas it could also be considered a NE. Finally, if there is no candidate for a potential noun mention we try to find NE candidates for it and vice versa.
Disambiguation of Nouns and Named Entities
We formulate the disambiguation problem in a continuous setting by using probability distributions over candidates. This has several advantages over a discrete setting. First, we can exploit well-established continuous optimization algorithms, such as conjugate gradient or L-BFGS, which are guaranteed to converge to a local optimum. Second, by optimizing over probability distributions we optimize the actually desired result, in contrast to densest-subgraph algorithms where such probabilities need to be calculated artificially afterwards, e.g., Moro et al. (2014). Third, discrete optimization usually works on a single candidate per iteration, whereas in a continuous setting probabilities are adjusted for every candidate, which is computationally advantageous for highly ambiguous documents. Given a set of objectives $\mathbb{O}$, the overall objective function $\mathcal{O}$ is defined as the sum of all normalized objectives $O \in \mathbb{O}$ given a set of mentions $M$:

$$\mathcal{O}(M) = \sum_{O \in \mathbb{O}} \frac{O(M)}{O_{\max} - O_{\min}} \qquad (1)$$

We normalize each objective using the difference of its maximum and minimum value for the given document. For disambiguation we optimize the multi-objective function using conjugate gradient (Hestenes and Stiefel, 1952) with up to 1000 iterations per document.
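A minimal sketch of the per-document normalization in Eq. (1), assuming each objective reports its maximum and minimum value for the document:

```python
# Sum of objectives, each scaled by its (max - min) range for the document.
def normalized_sum(objective_values: dict, bounds: dict) -> float:
    total = 0.0
    for name, value in objective_values.items():
        lo, hi = bounds[name]
        total += value / (hi - lo) if hi > lo else 0.0
    return total

values = {"coherence": 12.0, "type": 0.8}
bounds = {"coherence": (0.0, 20.0), "type": (0.0, 1.0)}
print(normalized_sum(values, bounds))  # 0.6 + 0.8 = 1.4
```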
Coherence. Jointly disambiguating all mentions within a document has been shown to have a large impact on disambiguation quality. We adopt the idea of semantic signatures and the idea of maximizing the semantic agreement among selected candidate senses from Moro et al. (2014). We define the continuous objective function based on probability distributions $p_m(c)$ over the candidate set $C_m$ of each mention $m \in M$ in a document as follows:

$$O_{\text{coh}}(M) = \sum_{m \in M} \sum_{c \in C_m} \sum_{m' \in M \setminus \{m\}} \sum_{c' \in C_{m'}} p_m(c)\, p_{m'}(c')\, \mathbb{1}\big[(c, c') \in S\big]$$

where $S$ denotes the semantic interpretation graph, $\mathbb{1}$ the indicator function, and $p_m(c)$ is a softmax function. The only free, optimizable parameters are the softmax weights $\lambda_{m,c}$. This objective can be interpreted as finding the densest subgraph of the semantic interpretation graph where each node is weighted by its probability and therefore each edge is weighted by the product of its adjacent vertex probabilities.
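The coherence objective can be sketched directly from this definition: softmax distributions over free weights λ, and a sum of products of adjacent candidate probabilities over the edges of S. The toy graph below is illustrative, not BabelNet data:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Two mentions, each with two candidate senses (global candidate ids 0..3).
lambdas = {"m1": np.zeros(2), "m2": np.zeros(2)}   # free softmax weights
candidates = {"m1": [0, 1], "m2": [2, 3]}
edges = {(0, 2), (1, 3)}  # semantic-graph connections between candidates

def coherence(lambdas):
    p = {m: softmax(w) for m, w in lambdas.items()}
    score = 0.0
    for m1, m2 in [("m1", "m2")]:  # all mention pairs of this toy document
        for i, c1 in enumerate(candidates[m1]):
            for j, c2 in enumerate(candidates[m2]):
                if (c1, c2) in edges or (c2, c1) in edges:
                    score += p[m1][i] * p[m2][j]  # edge weight = product of probs
    return score

print(coherence(lambdas))  # 0.5 for uniform distributions on this toy graph
```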
Type Classification. One of the biggest problems of supervised approaches to WSD is the size and synset coverage of training corpora such as SemCor (Miller et al., 1993). One way to circumvent this problem is to use a coarser set of semantic classes that groups synsets together. Previous studies on using semantic classes for disambiguation showed promising results (Izquierdo-Beviá et al., 2006). WordNet provides a mapping, called lexnames, of synsets into 45 types based on the syntactic categories of synsets and their logical groupings. A multi-class logistic (softmax) regression model was trained that calculates a probability distribution $q_m(t)$ over lexnames $t$ given a potential WordNet mention $m$. The features used as input to the model are the following: the embedding of the mention's text, the sum of the embeddings of all sentence words, the embedding of the dependency-parse parent, collocations of surrounding words (Zhong and Ng, 2010), surrounding POS tags and the possible lexnames. We used pre-trained embeddings from Mikolov et al. (2013).
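A minimal sketch of such a type classifier using scikit-learn, assuming pre-computed 50-dimensional embeddings; the feature set is abbreviated (collocation and POS features are omitted) and all training data below are randomly generated placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
DIM = 50  # embedding size (illustrative)

def features(mention_vec, sentence_vecs, parent_vec):
    """Concatenate the embedding features described above: the mention's
    own embedding, the sum of all sentence-word embeddings, and the
    embedding of its dependency-parse parent."""
    return np.concatenate([mention_vec, sentence_vecs.sum(axis=0), parent_vec])

# Hypothetical training data: 200 mentions labelled with lexname ids.
X = np.stack([features(rng.normal(size=DIM),
                       rng.normal(size=(8, DIM)),
                       rng.normal(size=DIM)) for _ in range(200)])
y = rng.integers(0, 45, size=200)  # 45 WordNet lexnames

# Multinomial (softmax) logistic regression yields q_m(t) over lexnames.
clf = LogisticRegression(max_iter=1000).fit(X, y)
q = clf.predict_proba(X[:1])  # probability distribution over types
print(q.shape)  # (1, number of lexname classes seen in training)
```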
Type classification is included in the overall objective in the following form, rewarding candidates whose lexname $t(c)$ receives high probability from the classifier:

$$O_{type}(M) = \sum_{m \in M} \sum_{c \in C_m} q_m(t(c)) \, p_m(c)$$

Priors. Another advantage of working with probability distributions over candidates is the easy integration of prior information. For example, the word "Paris" without further context has a strong prior on its meaning as a city rather than a person. Our approach utilizes prior information in the form of frequency statistics over candidate synsets for a mention's surface string. These priors are derived from annotation frequencies provided by WordNet for Babel synsets containing the respective WordNet sense, and from occurrence frequencies in Wikipedia extracted by DBpedia Spotlight (Daiber et al., 2013) for synsets containing only Wikipedia senses. Laplace smoothing is applied to all prior frequencies. This prior is used to initialize the probability distribution over candidate synsets. Note that the priors are used "naturally", i.e., as actual priors and not during the context-based optimization itself. Furthermore, because candidate priors for NE mentions can be very high, we add an additional L2-regularization objective for NE mentions with λ = 0.001, which we found to work best on development data. Finally, named entities were filtered out if they were included in another NE, had no connection in the semantic interpretation graph to another candidate sense of the input document, or overlapped with another NE but were connected more weakly.
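A small sketch of the prior computation, assuming raw frequency counts have already been collected (the counts below are hypothetical):

```python
from collections import Counter

def smoothed_priors(freqs: Counter, alpha: float = 1.0) -> dict:
    """Turn raw annotation/occurrence frequencies over candidate synsets
    into a Laplace-smoothed prior distribution, used to initialize the
    probability distribution over candidates."""
    total = sum(freqs.values()) + alpha * len(freqs)
    return {c: (freqs[c] + alpha) / total for c in freqs}

# Hypothetical counts for the mention "Paris": the city sense dominates.
counts = Counter({"Paris(city)": 9_500, "Paris(person)": 40, "Paris(genus)": 3})
priors = smoothed_priors(counts)
print(max(priors, key=priors.get), round(priors["Paris(city)"], 3))
```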
Disambiguation of Verbs
The disambiguation of verbs requires an approach that focuses more on the local context and especially the usage of a verb within a sentence. Therefore, we train a neural network based on semantic role labeling (SRL) and sentence words. Figure 1 illustrates an example network. The input is composed of the word embeddings (Turian et al., 2010) for each feature (the word itself, its lemma, SRLs and a bag of sentence words).

[Figure 1: Disambiguation neural network for "won" in the sentence "Obama won the Nobel Prize." An input layer of feature embeddings feeds a hidden layer, which feeds the output layer of candidate synsets.]

All individual input embeddings are 50-dimensional and connected to a 100-dimensional hidden layer. The output layer consists of all candidate synsets of the verb. The individual output weights $W_c$ are candidate-specific. To ensure better generalization and to deal with the sparseness of training corpora, $W_c$ is defined as the following sum:

$$W_c = V_{s(c)} + \sum_{s' \in P_{s(c)}} V_{s'} + \sum_{s' \in E_{s(c)}} V_{s'}$$

where $s(c)$ is the respective synset of $c$, $P_s$ is the set of all hypernyms of $s$ (transitive closure), $E_s$ are the synsets entailed by $s$, and $V_s$ denotes a synset-specific weight vector. We used ClearNLP (Choi, 2012) for extracting SRLs.
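A minimal numpy sketch of the forward pass; the embeddings and the tiny synset hierarchy are hypothetical, and only the weight-sharing scheme of $W_c$ is the point here (the real model is trained, not randomly initialized):

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, HID = 50, 100

# Hypothetical feature embeddings for "won" in "Obama won the Nobel
# Prize": the word itself, its lemma, an SRL feature, and the bag of
# sentence words (each 50-dimensional, as described above).
feats = [rng.normal(size=EMB) for _ in range(4)]
x = np.concatenate(feats)                    # input layer (4 * 50 dims)
W_h = rng.normal(size=(HID, x.size)) * 0.01  # input-to-hidden weights
h = np.tanh(W_h @ x)                         # hidden layer activations

# Toy candidate synsets with hypothetical hypernym/entailment closures.
closure = {"win.v.01": ["gain.v.h"], "acquire.v.01": ["get.v.h"]}
V = {s: rng.normal(size=HID) * 0.01
     for s in ["win.v.01", "acquire.v.01", "gain.v.h", "get.v.h"]}

def W_c(candidate: str) -> np.ndarray:
    """Candidate-specific output weights: the synset's own vector plus
    the vectors of its hypernyms and entailed synsets, so that sparse
    training signal is shared along the WordNet hierarchy."""
    return V[candidate] + sum(V[s] for s in closure[candidate])

scores = np.array([W_c(c) @ h for c in closure])
probs = np.exp(scores - scores.max())
probs /= probs.sum()                         # softmax over candidates
print(dict(zip(closure, np.round(probs, 3))))
```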
Results
The results of our system are shown in Table 1.
Conclusion
We have presented a robust approach for disambiguating nouns and named entities as well as a neural network for verb sense disambiguation that we used in the SemEval 2015 task 13. Our system achieved an overall F1 score of 70.3 for nouns, 88.9 for NEs and 57.7 for verbs across different domains, outperforming all other submissions for these categories of English. The disambiguation of nouns and named entities performs especially well compared to other systems and can still be extended through the introduction of additional, complementary objectives. Disambiguating verbs remains a very challenging task and the promising results of our model still leave much room for improvement. | 2,742.6 | 2015-06-01T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Research on Cooperative Planning of Distributed Generation Access to AC/DC Distribution (Micro) Grids Based on Analytical Target Cascading
With the wide application of distributed generation (DG) and the rapid development of alternating current/direct current (AC/DC) hybrid microgrids, the optimal planning of distributed generation connecting to AC/DC distribution networks/microgrids has become an urgent problem to resolve. This paper presents a collaborative planning method for distributed generation access to AC/DC distribution (micro) grids. Based on the grid structure of the AC/DC distribution network, the typical interconnection structure of the AC/DC hybrid microgrid and AC/DC distribution network is designed. The optimal allocation models of distributed power supply for the AC/DC distribution network and microgrid are established based on analytical target cascading. The power interaction between the distribution network and microgrid is used to establish a coupling relationship, and the augmented Lagrangian penalty function is used to solve the collaborative programming problem. The distributed power supply allocation results are obtained, resolving the problem of distributed generation with different capacity levels being connected to the power grid system in a single, undifferentiated form.
Introduction
An alternating current-direct current (AC-DC) hybrid microgrid provides an effective way to solve the problems caused by large-scale distributed generation and DC load access, and this microgrid has become the mainstream of distribution network terminal development [1][2][3]. With the large-scale operation of distributed generation, the progress of power electronics technology and the large amount of DC load access, the traditional AC distribution network has been unable to meet the demand for power system development. The DC distribution network has certain advantages in energy transmission and fast control, which can improve system stability and reduce the utilization of power electronic devices such as converters. Therefore, to meet the demand for a high proportion of distributed generation access and a large amount of DC load access, collaborative optimization planning between AC/DC hybrid microgrids and AC/DC distribution networks has become a research hotspot in recent years [4][5][6]. However, research on the interconnection structure, operation control and fast protection of the two hybrid power grids by domestic and foreign scholars is still in the initial stage, and a large number of coordinated planning and control problems need to be solved urgently.
When an AC/DC hybrid microgrid is interconnected with an AC/DC distribution network, the system has a flexible network structure and multiple connection and operation modes, which effectively improves the reliability of the AC/DC hybrid distribution (micro) grid system. Faced with the changeable structure and wide application of a hybrid power grid, the design of a reasonable interconnection structure is particularly important. As shown in Figure 1, AC and DC distribution networks are interconnected by AC/DC converters. Under this structure, large-scale and centralized distributed generators are connected to AC or DC distribution network systems through converters, which reduces the use of converters and improves the access capacity and generation efficiency of distributed generators [9][10][11]. The microgrid in the system can exist in many forms, mainly depending on the type of load and load demand. For pure AC or DC load systems, AC or DC microgrids can be established to supply power. If AC and DC loads need a power supply at the same time and the load cannot be transferred, it is the most economical choice to construct an AC/DC hybrid microgrid, which can effectively reduce system costs and losses and improve the power supply capacity of the system.
The typical structure of the AC/DC hybrid microgrid is powered by both ends of the AC and DC distribution networks; the DC bus of the hybrid microgrid is interconnected with the DC distribution network, and the AC bus is interconnected with the AC distribution network. Therefore, the structure has many operation modes, including ring network operation, DC distribution network operation, AC distribution network operation, hybrid microgrid islanding operation, AC/DC sub-microgrid islanding operation, and AC/DC sub-microgrid disconnection operation, which greatly improve the reliability and flexibility of hybrid microgrids. When the size of a sub-microgrid in the AC/DC hybrid microgrid is small, it can be disconnected from the corresponding distribution network and supplied by a single distribution network that can meet the stability requirements of the system.
At present, scholars at home and abroad have carried out a great deal of research on DC distribution networks and even AC/DC distribution networks [12][13][14][15][16]. One study presented a mathematical method to determine the minimum required efficiency of power electronic converters in a DC distribution network and concluded that a DC system can only be considered when the minimum required efficiency can be economically achieved [12]. In Reference [13], the economic efficiency of hybrid AC/DC distribution systems was evaluated and compared with conventional AC systems, and the proposed methodology determined the optimal AC/DC distribution substation location and size and AC/DC feeder routing, as well as the length and capacity of AC/DC feeders on both the low-voltage and medium-voltage sides. One paper presented an AC/DC hybrid smart power system in which the DC bus voltage was maintained within an acceptable range by applying power consumption control with the droop characteristic [16].
How to connect a DC microgrid or AC/DC hybrid microgrid to the distribution network has been discussed in [17][18][19][20][21][22]. A method of forming a DC network by replacing some AC lines with DC lines is proposed in [17]; compared with the pre-engineered project, the construction of the DC microgrid significantly reduces the transmission cost of the AC/DC hybrid microgrid and further optimizes the grid loss and voltage stability indicators. A hybrid planning model of distributed energy and power generation systems is proposed in [18], where the type of microgrid is selected according to economic factors. In Reference [19], considering the impact of line investment cost and the interaction power cap on the planning results, the capacity and location of distributed power resources are optimized. For distributed grid technology [20], the topology of a synchronous AC/DC hybrid microgrid and the basic working principle of the microgrid under different operation modes are proposed; combined with power electronics technology, the modular multi-interface structure of the power router is applied to the AC/DC hybrid microgrid, and a corresponding control strategy is proposed. In the above research results, there are few studies on the optimization planning of distributed generation access to AC/DC distribution networks. In the established mathematical models, the interaction between the microgrid and the distribution network was not considered, and how to select the distributed generation access mode in the variable hybrid mode was not analysed.
When researching the optimal planning method of distributed generation access to AC/DC distribution (micro) grids, it is necessary to coordinate the resource requirements of microgrids and distribution networks, so a hierarchical programming model is needed to solve the problem [23][24][25][26][27]. A multi-agent system was introduced to deal with the problem of source-network-load coordination caused by a high proportion of renewable energy access to a distribution network. Optimization models of the distribution network layer, the direct coordination layer and the indirect coordination layer were constructed, and coordination was carried out among the different levels through price leverage [23].
In Reference [24], aiming at the randomness of distributed generation output, a two-level programming model for an active distribution network was designed; this model considered the influence of energy storage access to determine the optimal installation capacity of distributed generation. To solve this problem, this paper establishes two hierarchical distributed generation planning models for AC/DC distribution networks and microgrids, respectively, uses the coupling variables between them to solve iteratively, and obtains the coordinated optimal planning results.
In view of the above problems, this article proposes a collaborative optimization planning method for distributed generation access to AC/DC distribution (micro) grids, realizing the collaborative optimal allocation of distributed generation in microgrids and distribution networks and improving the economy and reliability of hybrid systems. Based on the grid structure of the AC/DC distribution network, this paper designs a typical interconnection structure between the AC/DC hybrid microgrid and the AC/DC distribution network. The optimal allocation models of distributed generation connecting to AC/DC distribution networks and AC/DC hybrid microgrids are established. The analytical target cascading (ATC) method is used to establish the interactive power coupling relationship between the microgrid and distribution network, and the parallel collaborative optimization planning calculation is carried out. Finally, an example is given to verify the accuracy and efficiency of the proposed method and model.
The remainder of this work is organized as follows: Section 2 introduces the optimization planning mathematical model for distributed generation access to AC/DC distributed (micro) grids. Section 3 presents the method for solving the optimization model. Section 4 employs an actual example to analyse the optimal planning results. Section 5 presents the conclusions.
Optimization Planning Model for Distributed Generation Access to Alternating Current/Direct Current (AC/DC) Distributed (Micro) Grids
In the AC/DC hybrid distribution (micro) grid system, the AC/DC hybrid microgrid, as a local unit, closely combines distributed power generation and low-voltage AC/DC loads to realize the local absorption of renewable energy. When the capacity of the microgrid is excessive or insufficient, system stability is maintained through power interaction with the distribution network. Therefore, in the planning stage, it is necessary to plan the distributed power supply capacity connected to the distribution network and the microgrid so as to minimize the cost of the microgrid system. At the same time, the distribution network must support the stable operation of the microgrid while reducing its own operating costs. As different stakeholders, the microgrid and distribution network have different economic indicators, but there is a certain power interaction between them, which creates strong coupling in actual operation. Therefore, the AC/DC hybrid distribution (micro) grid planning model can be established through this coupling relationship.
Based on the objective cascade analysis method, this paper studies the cooperative optimization planning of distributed generation access to AC/DC distribution (micro) grids. The optimal allocation models of the distributed generation supply for the AC/DC distribution network and AC/DC hybrid microgrid are established, and then the coupling relationship between the microgrid and distribution network is realized through the interactive power between the two grids. The mathematical model of the specific optimization planning is described below.
Optimal Model of Distributed Generation Access to AC/DC Distribution Network
Large-scale distributed generations (such as photovoltaic plants, wind farms, and energy storage systems) are connected to the distribution network and have a great impact on the distribution network system. Therefore, AC/DC distribution network planning is mainly aimed at optimizing the operation cost of access to distributed generations. This paper chooses the operation cost and the electricity purchase and sale cost of distributed generation as the objective function and considers power balance, power output constraints, tie-line transmission power constraints, etc., to optimize the planning of distributed generation access to the AC/DC distribution network.
(1) Objective function

Aiming at minimizing the cost of operation and maintenance and the cost of purchasing and selling electricity, this paper takes the distributed generation capacity and the real-time electricity price of the access distribution network as decision variables and establishes an optimal planning model.
$$\min F_{DS} = \sum_{m=1}^{M} \sum_{i=1}^{N_m} \sum_{t=1}^{T} k_i \, p_{ibt} + \sum_{j=1}^{J} \sum_{t=1}^{T} \lambda(t) \, p_j(t) \quad (1)$$

where $F_{DS}$ is the comprehensive cost of the AC/DC distribution system; $M$ is the number of types of distributed generation connected to the AC/DC distribution network; $N_m$ is the number of power supplies of class $m$; $T$ is the running time; $k_i$ is the operation and maintenance coefficient of distributed generation in group $i$; $p_{ibt}$ is the power access capacity; $J$ is the number of microgrids; $\lambda(t)$ is the electricity price at time $t$; and $p_j(t)$ is the energy interactive power between the distribution network and microgrid at time $t$. When the distribution network transfers energy to the microgrid, this value is positive; conversely, it is negative.
(2) Constraints

For the AC/DC distribution system, the constraints include system energy conservation, power quality and the access characteristics of renewable energy.

1) System power balance constraints

$$\sum_{m=1}^{M} \sum_{i=1}^{N_m} p_{ibt} + \sum_{j=1}^{J} p_j(t) = P^{load}_{DC\text{-}DN}(t) \quad (2)$$

where $P^{load}_{DC\text{-}DN}(t)$ is the load value at time $t$.

2) Tie-line load level constraints

$$P^{min}_{L} \le P^{j}_{L}(t) \le P^{max}_{L} \quad (3)$$

where $P^{min}_{L}$ and $P^{max}_{L}$ are the lower and upper limits of the tie-line load level and $P^{j}_{L}(t)$ is the tie-line power flow at time $t$.
3) Distributed generation output constraints

The distributed generation in the distribution network system consists of centralized power stations, such as photovoltaic power stations and wind farms. Therefore, the constraints at the distribution network layer are centralized.

Photovoltaic power plant output constraints:

$$P_{PV}^{min}(t) \le P_{PV}(t) \le P_{PV}^{max}(t) \quad (4)$$

Wind-turbine output constraints:

$$P^{min}_{Wi} \le P_{Wit} \le P^{max}_{Wi} \quad (5)$$

where $P_{PV}(t)$ is the output of photovoltaic power plants at time $t$; $P_{Wit}$ is the wind turbine output at time $t$; $P_{PV}^{min}(t)$ and $P_{PV}^{max}(t)$ are the minimum and maximum output of the photovoltaic power station; and $P^{min}_{Wi}$ and $P^{max}_{Wi}$ are the minimum and maximum output of the wind farm.

4) Distributed generation capacity constraints for access to the distribution network

When distributed generation is connected to the AC/DC distribution network in the form of a large-scale, high-capacity connection, its installed capacity and voltage level must also meet certain requirements. Distributed power sources such as photovoltaic power plants and wind farms connected to distribution networks need to meet certain upper and lower capacity limits. Only distributed generation within these capacity limits is allowed to access the distribution network; otherwise, it can only access the microgrid level or be abandoned as a power supply.

$$P_{DN\text{-}min} \le P_{DG} \le P_{DN\text{-}max} \quad (6)$$

where $P_{DG}$ is the distributed generation capacity for planned access to the power grid, and $P_{DN\text{-}min}$ and $P_{DN\text{-}max}$ are the lower and upper limits of distributed generation capacity allowed to access the distribution network.
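Purely as an illustration, a one-period toy instance of this distribution-network model as a linear program in Python; all numbers, the variable split and the cost coefficients are hypothetical, not data from the case study:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: x = [P_PV, P_W, P_grid], i.e., PV output, wind
# output and power bought over the tie-line (one time step, in kW).
# Objective: O&M coefficients k_i on DG output plus the electricity
# price lambda(t) on purchased power, as in formula (1).
k_pv, k_w, lam = 0.05, 0.08, 0.60   # hypothetical cost coefficients
load = 1200.0                        # hypothetical load (kW)
c = np.array([k_pv, k_w, lam])

# Power balance (2): P_PV + P_W + P_grid = load.
A_eq, b_eq = np.array([[1.0, 1.0, 1.0]]), np.array([load])

# Output and tie-line limits, standing in for constraints (3)-(5).
bounds = [(0, 800),    # PV plant output limits
          (0, 300),    # wind farm output limits
          (0, 500)]    # tie-line transmission limit

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, round(res.fun, 2))  # optimal dispatch and minimum cost
```

As expected, the solver fills the cheapest sources first (PV, then wind) and buys the remaining 100 kW over the tie-line.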
Optimal Model of Distributed Generation Access to AC/DC Hybrid Microgrid
For an AC/DC distribution network system, a microgrid has more flexibility when it is connected to a distribution network, and there is more choice in access mode and location. In the AC/DC distribution network, a DC-dominated AC/DC hybrid microgrid or DC microgrid can be connected to the DC distribution network; an AC microgrid or AC-dominated AC/DC hybrid microgrid can be connected to the AC distribution network. Aiming to minimize the investment cost, operation cost and electricity purchase and sale costs of microgrids, this paper establishes an optimal configuration model of distributed generation at the microgrid level, taking into account the constraints of microgrid system power balance, distributed generation output, battery charge and discharge, and distributed generation access capacity.
(1) Objective function

In the objective function at the microgrid level, this paper takes into account the economic indicators of distributed generation in the microgrid system. The distributed generation cost per unit of electricity and the electricity price at different times will have a great impact on the planning results and play a decisive role.
$$\min F = \sum_{i=1}^{K} \left( F^{i}_{ins} + F^{i}_{op} + F^{i}_{buy} \right) \quad (7)$$

where $F^{i}_{ins}$ is the microgrid investment cost in group $i$; $F^{i}_{op}$ is the microgrid operation cost in group $i$; $F^{i}_{buy}$ is the microgrid cost of purchasing and selling electricity from the distribution network in group $i$; and $K$ is the number of microgrids.
The expression of each cost is shown in formulae (8)-(10).
$$F_{ins} = \sum_{m=1}^{M} N_z C_{DGz} \frac{\delta (1+\delta)^{Y_m}}{(1+\delta)^{Y_m} - 1} + K C_{con} \frac{r (1+r)^{Y_{con}}}{(1+r)^{Y_{con}} - 1} \quad (8)$$

where $F_{ins}$ is the microgrid investment cost; $M$ is the number of new distributed generation types; $N_z$ is the number of installations of distributed generation in group $m$; $C_{DGz}$ is the purchasing cost of distributed generation in group $m$; $Y_m$ is the lifetime of distributed generation in group $m$ ($Y_{con}$ that of the converters); $\delta$ is the discount rate, taken as 10%; $K$ is the number of installed converters; $C_{con}$ is the purchase cost of converters; and $r$ is the discount rate, taken as 10%.
$$F_{op} = \sum_{m=1}^{M} \Omega_m E_{DGm} + \varphi P_{con} \quad (9)$$

where $F_{op}$ is the microgrid operation cost; $\Omega_m$ is the unit operation and maintenance cost of distributed generation in group $m$; $E_{DGm}$ is the total annual power generation of distributed generation in group $m$; $\varphi$ is the unit-power converter operation and maintenance cost; and $P_{con}$ is the total power of the installed converters.
$$F_{buy} = \sum_{t=1}^{T} (-1)^n k_t P_{gridt} \quad (10)$$

where $F_{buy}$ is the microgrid cost of purchasing and selling electricity from the distribution network; $n$ is a constant: when the microgrid purchases electricity from the distribution network, $n = 0$, and when the microgrid transmits power to the distribution network, $n = 1$; $k_t$ is the electricity price at time $t$; and $P_{gridt}$ is the interactive power between the microgrid and the distribution network at time $t$.
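The extracted form of formula (8) is partially lost; assuming the conventional capital-recovery-factor annualization implied by the lifetime $Y_m$ and discount rate $\delta$, a small Python sketch of this cost bookkeeping (all figures hypothetical):

```python
def crf(rate: float, years: int) -> float:
    """Capital recovery factor: converts a one-off purchase cost into
    an equivalent annual cost over `years` at discount rate `rate`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def annual_costs(units, price, lifetime, om_per_kwh, energy, delta=0.10):
    """Annualized investment cost (cf. formula (8)) plus O&M cost
    (cf. formula (9)) for one distributed-generation type."""
    f_ins = units * price * crf(delta, lifetime)
    f_op = om_per_kwh * energy
    return f_ins, f_op

# Hypothetical PV figures: 100 units at 5,000 each, 20-year lifetime,
# 0.02 per kWh O&M, 1.5 GWh generated per year.
print(annual_costs(100, 5_000, 20, 0.02, 1_500_000))
```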
(2) Constraints

The optimal configuration of an AC/DC hybrid microgrid connected to a distribution network needs to meet the system power balance constraints, distributed generation output constraints, and battery charge and discharge constraints.
1) System power balance constraints [19]
$$P_{load} + P_{loss} = P_{wind} + P_{pv} + P_{bat} + P_{grid} \quad (11)$$

where $P_{load}$ is the system load power consumption, $P_{loss}$ is the system loss, $P_{wind}$ is the wind power, $P_{pv}$ is the photovoltaic (PV) power, $P_{bat}$ is the energy storage battery power (negative when the battery stores energy), and $P_{grid}$ is the interactive power between the AC/DC hybrid microgrid system and the distribution network (positive when the grid transmits energy to the microgrid system, and negative when the microgrid system sends energy to the distribution network).
2) Distributed generation output power constraints

In an AC/DC hybrid microgrid, different types of distributed generation need to be connected to the system in order to meet the load demand. The uncertainty of distributed generation output will affect the system, so the power output should be constrained.
Photovoltaic and wind output power constraints:

$$0 \le P_{wind} \le P_{wind.max}, \qquad 0 \le P_{pv} \le P_{pv.max} \quad (12)$$

where $P_{wind}$ is the output power of wind; $P_{pv}$ is the output power of the photovoltaic panels; $P_{pv.max}$ is the maximum photovoltaic output power; and $P_{wind.max}$ is the maximum wind output power.
3) Battery charge and discharge constraints

$$SOC_{min} \le SOC(t) \le SOC_{max}, \quad 0 \le P_{char.bat}(t) \le P_{char.max}, \quad 0 \le P_{dischar.bat}(t) \le P_{dischar.max}$$
$$SOC(t+\Delta t) = SOC(t) + \frac{\left( \eta_C P_{char.bat}(t) - P_{dischar.bat}(t)/\eta_D \right) \Delta t}{R_{bat}} \quad (13)$$

where $SOC_{min}$ and $SOC_{max}$ are the lower and upper limits of the state of charge, respectively; $P_{char.bat}(t)$ and $P_{dischar.bat}(t)$ are the charging and discharging power of the energy storage device at time $t$; $P_{dischar.max}$ and $P_{char.max}$ are the maximum discharging and charging power of the energy storage device, respectively; $\eta_C$ is the energy conversion efficiency when the storage charges; $\eta_D$ is the energy conversion efficiency when the storage discharges; $R_{bat}$ is the energy storage capacity; and $\Delta t$ is the time step. (A small numerical sketch of these constraints appears after this list.)
4) Distributed generation capacity constraints for accessing the microgrid

The installed capacity of distributed generation connected to the microgrid is also limited and needs to fall within a certain capacity range. If the installed capacity is too large and exceeds the overload capacity of the microgrid line, the distributed generation needs to be connected to the distribution network instead. If the capacity of the distributed generation is too small, it is not appropriate for controlling the voltage level, which will affect the stability of the power grid; therefore, this kind of distributed generation should be centralized or abandoned. The specific threshold is shown below.

$$P_{MG\text{-}min} \le P_{DG} < P_{MG\text{-}max} \quad (14)$$

where $P_{DG}$ is the capacity of the distributed generation accessing the microgrid, and $P_{MG\text{-}min}$ and $P_{MG\text{-}max}$ are the lower and upper limits of distributed generation capacity allowed to access the microgrid.
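As a concrete illustration of formula (13), a minimal Python sketch that advances the state of charge while enforcing the charge/discharge limits; the efficiencies, limits and schedule are hypothetical, and the SOC update form is a standard modelling assumption:

```python
def step_soc(soc, p_char, p_dischar, r_bat, dt=1.0,
             eta_c=0.95, eta_d=0.95, soc_min=0.2, soc_max=0.9,
             p_char_max=100.0, p_dischar_max=100.0):
    """Advance the state of charge one time step while enforcing the
    charge/discharge constraints of formula (13); power values in kW,
    r_bat (storage capacity) in kWh."""
    p_char = min(max(p_char, 0.0), p_char_max)
    p_dischar = min(max(p_dischar, 0.0), p_dischar_max)
    # Energy balance: charging loses a factor eta_c; discharging draws
    # p_dischar / eta_d from the cells.
    soc += (eta_c * p_char - p_dischar / eta_d) * dt / r_bat
    return min(max(soc, soc_min), soc_max)

soc = 0.5
for p_c, p_d in [(80, 0), (0, 60), (120, 0)]:  # hypothetical schedule
    soc = step_soc(soc, p_c, p_d, r_bat=500.0)
    print(round(soc, 3))
```

Note how the third step's requested 120 kW charge is clipped to the 100 kW limit before the SOC update.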
Analytical Target Cascading
First proposed by Professor Kim, ATC has become part of the family of multidisciplinary optimization design methods [28]. Its main principle is to use parallel structures to realize the design of complex programs and to solve them synchronously. When using the ATC method for optimization, ATC decomposes the system into several levels and solves the objectives of each level separately, which greatly reduces the calculation time of the system. In addition, each hierarchical function receives the element values from its superior function and then optimizes the objective of the function. At this point, multiple solving processes can be computed in parallel, which effectively improves the efficiency of the system.
ATC is classified according to the object, target or module and other aspects to be solved. Mathematical models are established according to the different functions of the different levels. The hierarchical structure of ATC is shown in Figure 2. The set of elements at each level contains all the elements at that level; at the same time, a set at a higher level contains all the factors of its sub-levels; for example, the factors in C_j are included in C.
ATC is a modular and hierarchical optimization algorithm. Each level is composed of an optimization design module P and an analysis calculation module Q, as shown in Figure 3. The main purpose of the optimization design module P is to optimize the objective function, while the main purpose of the analysis calculation module Q is to calculate the elements. The information allocated by the level above is used as input, and the output information of the Q module is transmitted to the P module. The main idea of ATC is to distribute optimization objectives at different levels and then feed information back from each sub-level system to the upper level, alternately optimizing until convergence is achieved.
[Figure 3: The modular structure of ATC across hierarchy levels i-1, i and i+1, with an optimal design module P_ij and an analysis calculation module at each level.]

By describing the mathematical model of the coordinated planning of AC/DC distribution (micro) grids with distributed generators, it can be seen that the optimal planning of distributed generators for AC/DC hybrid microgrids and AC/DC distribution networks can be solved independently. In addition, for the whole power system, the power interaction between the distribution network and microgrid is realized through the connection line, so there must be a certain coupling relationship between them. We can use this coupling variable to obtain the optimal result of the system objective through the optimization algorithm. The main idea of the ATC method is consistent with the cooperative optimization planning model of AC/DC distribution (micro) grids connected by distributed generators. Therefore, the mathematical model of cooperative optimization can be solved by the ATC method, and the optimal planning of distributed generators in a distribution network/microgrid system can be obtained.
Solving Process
When using the ATC method to optimize AC/DC distribution (micro) grid planning, the system should be divided into two levels, the microgrid and the distribution network, and the coupling relationship between the two levels should be established as the connecting factor between the upper and lower levels. From Section 2.1 of this paper, we can see that there is a coupling relationship between the microgrid and the distribution network, namely, the power interaction between them; therefore, this element is regarded as the transfer variable of the system.
For the AC/DC distribution network, the coupling variable P_DM can be treated as a load connected to the distribution network that interacts with it to achieve energy exchange; conversely, for the AC/DC hybrid microgrid, the coupling variable P_MD is equivalent to a power source. When the distribution network planning is solved independently, an optimization result can be obtained that fixes a quantitative value for the coupling variable P_DM. At this point, the fixed value should be transferred to the microgrid level as a parameter of microgrid optimization planning. Then, when optimizing the configuration of the AC/DC hybrid microgrid, the coordination between P_DM and P_MD should be considered at the same time. The goal is to obtain values of the two coupling variables that are approximately equal.
There are many methods to constrain coupling variables. Penalty function methods are mainly used today, including the quadratic penalty function form, the Lagrange form [29], the augmented Lagrange form [30], the second-order diagonal form [31] and the Lagrange dual form [32]. The augmented Lagrange penalty function has high accuracy and can achieve fast convergence. Therefore, the objective function of the AC/DC hybrid microgrid in this paper is adjusted as follows:

$$\min F'_{MG} = F + \sum_{t=1}^{T} \left[ \lambda_j(t) \left( P_{MD}(t) - P_{DMj}(t) \right) + \left( \pi_j(t) \left( P_{MD}(t) - P_{DMj}(t) \right) \right)^2 \right] \quad (15)$$

where $\lambda_j(t)$ and $\pi_j(t)$ are the weight coefficients of the first-order and second-order Lagrange terms at time $t$, respectively; $P_{MD}(t)$ is the power transferred from the microgrid to the distribution network at time $t$; and $P_{DMj}(t)$ is the interactive power passed down to microgrid $j$ by the distribution-network-level function.
Similarly, for the AC/DC distribution network, when K microgrids are connected, the objective function of the distribution network needs to introduce K penalty functions, and its expression is revised as follows:

$$\min F'_{DS} = F_{DS} + \sum_{j=1}^{K} \sum_{t=1}^{T} \left[ \lambda_j(t) \left( P_{DM}(t) - P_{MDj}(t) \right) + \left( \pi_j(t) \left( P_{DM}(t) - P_{MDj}(t) \right) \right)^2 \right] \quad (16)$$

where $P_{DM}(t)$ is the power transferred from the distribution network to the microgrid at time $t$, and $P_{MDj}(t)$ is the interactive power value passed up by the level-$j$ microgrid function.
Therefore, for the collaborative planning model of the AC/DC hybrid microgrid/distribution network based on the ATC method, the optimal planning model of the AC/DC distribution network is composed of formula (16) and formulae (2)-(6), and the optimal allocation model of the AC/DC hybrid microgrid is composed of formula (15) and formulae (11)-(14). To solve the above model, iteration is carried out alternately until the convergence condition shown in formula (17) is reached.
$$\left| P^{k}_{DM}(t) - P^{k}_{MD}(t) \right| \le \varepsilon \quad (17)$$

where $P^{k}_{DM}(t)$ is the power transferred from the distribution network to the microgrid in group $k$ at time $t$, $P^{k}_{MD}(t)$ is the power transferred from the microgrid to the distribution network in group $k$ at time $t$, and $\varepsilon$ is the convergence tolerance.
The flow chart of collaborative planning for distributed power supply access to the AC/DC distribution (micro) power grid based on the ATC method is shown in Figure 4, and the specific solution process is as follows: (1) Collect the system raw data, set the coupling variables between the microgrid and distribution network and the initial values of the Lagrange multipliers, and set the iteration number k = 1.
(2) Build the mathematical model of optimal planning for the AC/DC hybrid microgrid, solve the optimization model, and transfer the virtual power P_DM to the DC distribution network.
(3) After receiving the data transmitted by the hybrid microgrid, optimize the planning of the distribution network according to the improved objective function and constraints, and transfer the resulting interactive power back to the microgrid level.
(4) Check whether the convergence condition meets the requirement.If it is satisfied, stop the iteration process, obtain the optimal planning results, and output them to the outside; otherwise, increase the iteration number k, update the Lagrange multiplier, and return to step (2) for re-solving.
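To make this alternating procedure concrete, a minimal Python sketch of the ATC coordination loop with augmented-Lagrangian penalties; the two sub-problem solvers below are hypothetical closed-form stand-ins for the real microgrid and distribution-network planning models, chosen only so the loop runs end-to-end:

```python
import numpy as np

def atc_coordinate(solve_mg, solve_dn, T=24, max_iter=200, eps=1e-3):
    """Alternating ATC iteration (cf. formulae (15)-(17)): the microgrid
    and distribution-network sub-problems are solved in turn, coupled
    only through the interactive power profiles P_MD and P_DM."""
    lam = np.zeros(T)       # first-order multipliers lambda_j(t)
    pi = np.full(T, 0.1)    # second-order (quadratic) weights pi_j(t)
    p_dm = np.zeros(T)      # power offered by the distribution network
    for k in range(1, max_iter + 1):
        p_md = solve_mg(p_dm, lam, pi)   # microgrid level, eq. (15)
        p_dm = solve_dn(p_md, lam, pi)   # distribution network, eq. (16)
        if np.abs(p_dm - p_md).max() <= eps:  # convergence test (17)
            return p_dm, k
        # Standard method-of-multipliers update on the coupling residual.
        lam += 2.0 * pi * (p_md - p_dm)
    return p_dm, max_iter

# Hypothetical quadratic sub-problems: each pulls its coupling variable
# towards a preferred profile while paying the penalty for disagreement.
def solve_mg(p_dm, lam, pi):
    target = 300.0  # microgrid would like 300 kW of support
    return (2 * pi * p_dm - lam + 0.02 * target) / (2 * pi + 0.02)

def solve_dn(p_md, lam, pi):
    target = 200.0  # distribution network prefers to supply 200 kW
    return (2 * pi * p_md + lam + 0.05 * target) / (2 * pi + 0.05)

p_star, iters = atc_coordinate(solve_mg, solve_dn)
print(round(float(p_star[0]), 1), "kW after", iters, "iterations")
```

The two stand-in solvers are the first-order optimality conditions of quadratic costs plus the penalty terms, so the loop drives P_MD and P_DM to a common compromise value between the two targets.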
Example System Description

Based on the aforementioned collaborative planning model of distributed generation access to AC/DC distribution (micro) grids, this paper takes the actual AC/DC distribution grid system in a certain area as an example and carries out the optimization analysis of distributed generation access to the AC/DC hybrid microgrid/distribution network according to the load demand, power distribution, and other factors. The basic architecture of the example system is shown in Figure 5. For each level of the power grid in the example system, the AC microgrid is the original structure of the system and needs to be expanded according to the load demand; because the system adds many DC loads, a new AC/DC hybrid microgrid system needs to be built to meet the load demand. For the microgrid system, the accessible distributed generation includes distributed PV, wind and small-scale energy storage systems; for the distribution network system, the accessible distributed generation includes centralized photovoltaic power stations and large-scale energy storage systems.

Figure 5 shows that the DC distribution network of the system is connected to a photovoltaic power station, an energy storage system and the AC/DC hybrid microgrid, and the AC distribution network provides power to AC loads and the AC microgrid at the same time. The new load of the system is mainly concentrated in the DC system, including a 1 MW high-voltage DC load, a 400 kW low-voltage DC load and a 450 kW AC load. Therefore, for the distribution network level, it is necessary to optimize the capacity of the PV power plants and energy storage systems, taking into account the AC load demand. For the AC/DC hybrid microgrid level, two microgrids are connected to the system, and the distributed generation in each microgrid should be optimized separately. In this paper, the upper limit of interactive power between the microgrid and distribution network is set at 500 kW. The distributed generation parameters [33], annual load curve and hourly tariff data [34] are shown in Table 1, Figure 6 and Table 2, respectively.

On the basis of the abovementioned example system, this paper carries out research on the collaborative planning of distributed generation access to AC/DC distribution (micro) grids based on the ATC method. The program is built on the MATLAB platform; it runs on an Intel i5 3.4 GHz machine with 8 GB RAM under Windows 7 and calls MATLAB functions to solve the model.
Analysis of Optimal Configuration Results
The coupling variables between the AC/DC hybrid microgrid and the distribution network are updated alternately by the ATC method. After convergence, the optimal planning results of distributed generation are obtained, as shown in Table 3. This table shows that the capacity of the PV power station accessing the DC distribution network is larger than that accessing the AC distribution network, and a certain capacity of energy storage is installed on the DC distribution network side. The output power of photovoltaics is DC, and this power can be connected to a DC distribution network through a DC-DC converter, which has certain cost advantages. The main purpose of the photovoltaic power station connected to the AC distribution network is to match the AC load and realize local absorption as much as possible. For energy storage devices, the current distribution network is mainly AC, while the DC grid is comparatively weak; therefore, the charging and discharging of energy storage is connected to the DC distribution network to reduce the power supply pressure on the DC side and the system loss. For the AC distribution network, when the PV power station generates more power than the AC side consumes, it sells electricity at any time; otherwise, the AC distribution network supplies electricity directly and reduces the system cost by reducing the energy storage configuration.
For the microgrid system, the regional distribution network is connected to two microgrids, namely, an AC/DC hybrid microgrid and an AC microgrid. Both microgrids are equipped with a certain amount of PV, wind and energy storage. Because of the weak wind resources in this area, the amount of wind access is small. For the PV power supply, the AC/DC hybrid microgrid supplies power not only to DC loads but also to some AC loads, which can increase the installed capacity and improve the system absorption capacity. The energy storage device can realize the storage of electric power and the island operation of the microgrid.
After the optimal planning of the AC/DC hybrid distribution (micro) power grid, the whole system meets the stability requirements. The variation of the voltage amplitude of the main nodes is shown in Figure 7. Figure 7 shows that the voltage amplitude of each node changes within the normal range, and the system remains stable. The voltage amplitude is stable in the range of 0.98-1.0, and the voltage amplitude of some nodes is 0.97, which is within the permissible range of the system. Therefore, the stability of the AC/DC distribution (micro) grid after optimization planning meets the requirements.

At present, energy storage is a contradictory factor in microgrid planning because of its high cost and short service life, and its assembly capacity needs to be limited economically. However, the uncertainty of the output of distributed generation inevitably requires the operation of a certain capacity of energy storage to maintain stability and improve the level of local absorption. By changing the installation cost and service life of the energy storage system, the impact of energy storage on the optimal planning results of the AC/DC hybrid distribution (micro) power grid is analyzed, as shown in Table 4. The concept of PV rejection rate is introduced in the table, which means the ratio of abandoned photovoltaic energy to total photovoltaic power generation throughout the year. It can be seen from the table that when the installation cost of energy storage is reduced or the service life is increased, the distributed power supply capacity accessed on the microgrid side increases, and the storage capacity increases accordingly. At this time, the penetration of distributed generation increases, and the PV rejection rate decreases significantly.

Comparisons of Algorithms in Advantages and Disadvantages

In this chapter, the ATC method is used to model and solve the AC/DC hybrid microgrid and AC/DC distribution network, respectively. The parallel computation of distributed generation optimal allocation is realized, which has significant advantages in the efficiency and stability of optimal planning. This paper discusses the advantages and disadvantages of the ATC method by comparing it with a traditional independent optimization method for microgrids and distribution networks.
As shown in Table 5, by changing the number of access microgrids, the independent optimization and ATC methods are both used for planning, and the advantages and disadvantages of these two methods are analysed and compared. Table 5 shows that when the number of microgrids is small, distributed planning has a small advantage in computing time. However, when the number of microgrids increases gradually, the computing time of ATC increases slowly, while that of the distributed method increases greatly. Therefore, the ATC method is more advantageous for dealing with complex multi-microgrid systems.
Additionally, this paper compares the ATC method with the distributed and bi-level programming methods and discusses the effectiveness of the ATC method in terms of the number of iterations. The analysis and comparison results are shown in Figure 8.
Figure 8 shows that the optimization performance of the distributed algorithm and the bi-level programming algorithm is similar: these algorithms converge prematurely after approximately 9 iterations and fall into a local optimum. Compared with these two methods, the ATC method has strong climbing ability; its convergence curve decreases quickly in the early stage of iteration, falls into a local optimum after approximately 2 iterations, jumps out of local optima after 5 iterations and again after 17 iterations, and continues to search iteratively. The final optimization result reduces the cost compared with that of the other two methods. In conclusion, the ATC method balances global and local search performance well and has obvious advantages in solving the co-optimization planning problem of AC/DC hybrid microgrids and AC/DC distribution networks.

At present, there is much research on the interconnection between microgrids and distribution networks, but most of it focuses on AC systems. In this paper, the target cascade analysis method is used to study an AC/DC hybrid distribution (micro) power grid, which has certain advantages in both model and algorithm. This paper chooses the literature [35] as a comparison to analyse the advancement of the method proposed in this paper. The comparison results are shown in Table 6. From the table, we can see that the ATC method does not dominate when the number of microgrids is small, but as the number of microgrids connected to the distribution network increases, the calculation time of the method used in this paper is greatly reduced, and the advantage is obvious.
Analysis of Coupled Element Impact
The AC/DC hybrid microgrid and AC/DC distribution network are coupled by interactive power, and the ATC method also achieves parallel calculation of the two-part planning through the interactive power. Therefore, the size of the interactive power has a great impact on the planning results. Table 7 lists the cost calculation results for distribution networks and microgrids under different interactive power ratios. The interactive power ratio represents the upper limit of interactive power as a percentage of its initial value.
Table 7 shows the following:

(1) When the upper limit of interactive power increases, the costs of the DC and AC distribution networks increase slowly. The increase in interactive power has little influence on the distributed generation connected to the AC/DC distribution network, because the main responsibility of this power is to supply the load directly connected to the distribution network, so the distribution network level is less affected by the interactive power.

(2) For the AC microgrid and the AC/DC hybrid microgrid, the cost decreases first and then increases with an increasing upper limit. When the interactive power value is small, the microgrid needs to rely on its own distributed power generation to meet the load demand, so system stability can only be achieved by increasing the cost of the power supply; when the interactive power value is large, more energy interaction can occur between the microgrid and the distribution network, so the microgrid will invest in a large number of distributed generators and gain benefits by selling electricity to the grid, and the cost of the microgrid increases accordingly.

(3) As far as the total cost of the system is concerned, the cost of the microgrid reaches its minimum when the ratio of interactive power is 60%. At this point, the total cost is the lowest and the system is optimal. When the upper limit of power interaction takes a middle value, the power allocation of the microgrid is low and is likely to be met by local absorption; at the same time, the power interaction with the distribution network is small, and the cost of power purchase is greatly reduced. Meanwhile, the distribution network does not need to configure a large number of power sources to meet the needs of the microgrid, which also reduces the cost of the distribution network, so the system is in the optimal state.
Analysis of the Capacity Limitation Effect of Distributed Generation
For distributed generation, different power scales are connected to different levels of the power grid. Low-capacity renewable energy sources, such as rooftop photovoltaics and small wind turbines, feed the microgrid; large-capacity renewable energy sources, such as photovoltaic power plants and wind farms, can be interconnected directly with the distribution network to achieve grid connection. In this paper, the power capacity accessing the microgrid and the distribution network is constrained in the optimization planning model, and the upper and lower limits of the installed capacity of distributed generation are discussed below.
(1) In the case of P_MG-max = P_DN-min = P_IC (P_IC is constant), Figure 9 shows that:
1) As the upper and lower limit values change from small to large, the cost of the distribution network decreases gradually. Because the lower limit of PV installed capacity in the distribution network is raised, the capacity of distributed generation connected directly to the distribution network is reduced, which directly lowers the cost of the distribution network.
2) For the microgrid level, as the P_IC value changes, the cost of the microgrid first decreases and then increases. When the upper limit of the microgrid power supply is small, the renewable energy capacity of the microgrid is small, and the microgrid must increase its purchased power to maintain the stability of the system. When the upper limit of the microgrid increases, the installed capacity of the power supply connected to the microgrid grows greatly, and the investment cost of the microgrid increases accordingly.
3) For the total cost of the AC/DC hybrid microgrid and AC/DC distribution network system, when the constant term is small, distributed generation is mainly connected to the distribution network, and the electricity purchase cost of the microgrid is higher. With increasing P_IC, the installed capacity of the distributed power supply in the microgrid increases, the cost of purchasing electricity and investing in the microgrid decreases, and the cost of the distribution network decreases as well. However, the full-power operation of a microgrid with a large capacity of renewable energy cannot be realized, which causes the cost of the system to decrease first and then increase.
(2) In the case of P_MG-max ≥ P_DN-min
When defining the capacity range of distributed generation in the power grid, the boundary is not always clear-cut: a power supply of intermediate capacity can be connected either to the distribution network or through the microgrid. The upper limit of the microgrid and the lower limit of the distribution network may therefore satisfy P_MG-max ≥ P_DN-min, but P_MG-max < P_DN-min is not permitted, because in that case a distributed power supply of intermediate capacity would be unable to access the system. By changing the values of P_MG-max and P_DN-min, this paper analyses the impact of the upper and lower capacity limits on the results of the optimal planning. The change in the total system cost is shown in Figure 10. It can be seen from the figure that the system cost is higher when the upper limit value and the lower limit value of the distribution network are small, because at that point the distributed generation mainly connects at the microgrid level, which raises the investment cost of the system. When the two thresholds are similar and float around 400 kW, the system cost changes little and the economics tend to be stable.
Conclusions
With a large number of distributed power supplies being connected and growing DC load requirements, a single microgrid cannot meet the system requirements; moreover, power electronic equipment is constantly being updated, and DC distribution networks are developing rapidly. In this context, based on the ATC method, this paper studies the optimal configuration scheme of distributed generation when an AC/DC hybrid microgrid and an AC/DC distribution network are interconnected.
This paper proposes a collaborative optimization planning method for distributed generation accessing an AC/DC hybrid distribution (micro)grid; based on the ATC method, the coupling relationship between the AC/DC hybrid microgrid and the AC/DC distribution network is established. The mathematical model is solved iteratively with an augmented Lagrangian penalty function, and the distributed generation configuration results are obtained at the two levels of the microgrid and the distribution network. The optimal access location and capacity of distributed generation are planned, so the local absorption level of the system is improved and the cost is reduced. Through the analysis of practical engineering examples, the advantages and disadvantages of different optimization algorithms and the influence of system parameters on the planning results are compared.
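To make the two-level coordination concrete, the following is a minimal sketch of an ATC-style iteration with an augmented Lagrangian penalty on the shared interactive power. The cost functions, bounds, and update factors are illustrative assumptions for exposition, not the paper's actual formulation:

```python
# A minimal sketch of ATC coordination with an augmented Lagrangian penalty.
# The quadratic cost models below are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

def dn_cost(p):   # hypothetical distribution-network cost vs. interactive power
    return 0.02 * (p - 300.0) ** 2 + 50.0

def mg_cost(p):   # hypothetical microgrid cost vs. interactive power
    return 0.03 * (p - 500.0) ** 2 + 80.0

lam, w = 0.0, 1.0          # Lagrange multiplier and penalty weight
t = r = 400.0              # upper-level target and lower-level response

for k in range(50):
    # Upper level: distribution network chooses its target t for the shared power
    t = minimize_scalar(lambda p: dn_cost(p) + lam * (p - r) + (w * (p - r)) ** 2,
                        bounds=(0.0, 1000.0), method="bounded").x
    # Lower level: microgrid chooses its response r to the shared power
    r = minimize_scalar(lambda p: mg_cost(p) + lam * (t - p) + (w * (t - p)) ** 2,
                        bounds=(0.0, 1000.0), method="bounded").x
    c = t - r                  # consistency gap between the two levels
    lam += 2.0 * w ** 2 * c    # multiplier update (method of multipliers)
    w *= 1.1                   # gradually tighten the penalty
    if abs(c) < 1e-3:          # converged: both levels agree on the shared power
        break

print(f"iterations={k}, shared power ~ {t:.2f} kW, gap = {c:.2e}")
```

Each level minimizes its own cost plus a penalty on the consistency gap c = t - r, and the multiplier update progressively forces the microgrid and the distribution network to agree on the shared variable, which is the essence of the ATC coordination described above.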
The main contribution of this paper is that an interaction mechanism between AC/DC microgrids and distribution networks is established, and the collaborative planning of distributed generation at the two system levels is realized. The ATC method can achieve the goal of fast convergence.
Figure 1. Typical structure of alternating current/direct current (AC/DC) hybrid microgrid connected to AC/DC distribution network.
Figure 3. Interactive schematic diagram at each ATC level.
Figure 4. Cooperative planning process of a microgrid and distribution system based on ATC.
Figure 5. Basic structure of a regional power system.
Figure 6. Parameters of load in a year.
Figure 9. Cost changes under different P_IC conditions.
Figure 10. System cost change trend under different capacity upper and lower limits.
Table 2. Time-of-day tariff data.
Table 3. Configuration results of microgrid and distribution network DG collaborative planning.
Table 4. Effect of energy storage factor on optimal configuration results.
Table 5. Analysis of advantages and disadvantages between the ATC and distributed methods.
Table 6. Comparative analysis with other literature.
Table 7. Cost calculation results under different interactive power ratios. | 13,400.8 | 2019-05-15T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
FIVE DIMENSIONAL COSMOLOGICAL MODEL IN THE FORM OF TSALLIS HDE
Here, in the context of Tsallis holographic dark energy, the Kaluza-Klein five-dimensional metric is explored. The time-dependent deceleration parameter is found by solving the field equations using the hybrid scale factor. It depicts the universe from the initial stages of deceleration to the current state of acceleration. The model's physical features are also addressed.
INTRODUCTION
Recent astronomical experiments, such as Type Ia supernovae [1], the CMB [2], and LSS [3], strongly suggest that the universe is governed by a non-positive pressure component known as dark energy (DE) [4]. As per Ade et al. [5], the universe's present matter energy density is close to its critical value, with DE accounting for 68.3%, cold dark matter for 26.8%, and conventional baryonic matter for only 4.9%. As a result, one of the most intriguing and difficult questions in modern cosmology is clarifying the character of DE and cosmic growth.
The cosmological constant (Λ, Lambda) is the simplest theoretical choice for DE, and it closely reflects the facts. However, it faces fine-tuning and cosmic coincidence issues [6]. As a consequence, a dynamically evolving entity is preferable to Λ. Apart from dynamical models, numerous alternative DE models have been proposed to tackle the problem in the last decade, notably quintessence [7], k-essence [8], tachyon [9], phantom [10], Chaplygin gas [11], quintom [12], agegraphic DE [13], and many more.
The holographic DE (HDE) [14] model of the cosmos has also become increasingly popular for comprehending cosmic growth. In the field of black hole physics, the HDE model is founded on the holographic principle, first presented by G. 't Hooft [15], with the energy density defined as $\rho_{\Lambda} = 3c^{2}M_{p}^{2}L^{-2}$ (where $M_{p}$, $L$, and $c$ are the Planck mass, the infrared cut-off radius, and a constant, respectively). Tsallis HDE (THDE) is a novel version of the HDE model that incorporates the generalized entropy $S_{\delta} = \gamma A^{\delta}$ to explain the universe's growth, where $\gamma$ is an unknown constant and $\delta$ signifies the non-additive parameter. Cohen et al. [16] used the holographic principle to formulate a relationship among the system entropy (S), the IR cut-off (L), and the UV cut-off (Λ) as $L^{3}\Lambda^{3} \leq S^{3/4}$, which, when joined with $S_{\delta} = \gamma A^{\delta}$, leads to $\Lambda^{4} \leq \gamma (4\pi)^{\delta} L^{2\delta-4}$. The THDE density can then be computed as $\rho_{\Lambda} = B L^{2\delta-4}$ from this inequality, where B is an unknown parameter [17]. The standard HDE is recovered from this expression when $B = 3c^{2}M_{p}^{2}$ and $\delta = 1$. By accepting L as the future horizon in HDE, Saridakis et al. [18] produced a coherent formulation of THDE. THDE has been investigated by numerous researchers [19,20,21].
It is widely recognized that, within 4-dimensional space-time, a merger of gravitational forces with the other natural forces is unfeasible. Due to recent advances in supergravity and superstring theory, research into higher-dimensional models has gained significance. Kaluza [22] and Klein [23] sought to combine electromagnetic and gravitational forces, leading to the development of the Kaluza-Klein 5-dimensional theory. This theory is appealing because of its elegant geometric presentation. The 5-dimensional Kaluza-Klein metric [24,25,26] is now extensively employed to explore the character of DE in many scenarios.
As a consequence of the foregoing discussion, we have constructed THDE in the Kaluza-Klein metric, adopting the Hubble horizon as the IR cut-off and a hybrid scale factor. The current work is distinct from earlier research. The paper is arranged as follows: Section 2 presents the metric and field equations. Section 3 describes the solutions and the model. Section 4 presents the physical features of the model. In Section 5, we sum up our findings.
METRIC AND FIELD EQUATIONS
The 5-dimensional Kaluza-Klein metric is expressed in a form in which the fifth dimension is considered to be a space-like coordinate.
We presume that the universe is made up of DM and THDE components, and Einstein's field equations are $R_{ij} - \frac{1}{2}Rg_{ij} = -(T_{ij} + \bar{T}_{ij})$, where $R_{ij}$ and $R$ denote the Ricci tensor and scalar, respectively.
For the physical interpretation, the matter energy-momentum tensor is $T_{ij} = \rho_{m}u_{i}u_{j}$, where $\rho_{m}$ is the matter energy density.
The THDE energy-momentum tensor is $\bar{T}_{ij} = \mathrm{diag}[\rho_{\Lambda}, -p_{x}, -p_{y}, -p_{z}, -p_{\psi}]$, where $\rho_{\Lambda}$ is the THDE density, $p_{x}$, $p_{y}$, $p_{z}$, $p_{\psi}$ are the pressures in the x, y, z, and ψ directions, respectively, and $\omega_{\Lambda} = p_{x}/\rho_{\Lambda} = p_{y}/\rho_{\Lambda} = p_{z}/\rho_{\Lambda} = p_{\psi}/\rho_{\Lambda}$ is the EOS parameter of THDE.
The field equations (2) for the metric (1), with the help of (3) and (4), yield equations (5)-(7). For the metric (1), we now define some cosmological parameters that are vital for solving the field equations: the spatial volume (V) and the mean scale factor (R), the mean Hubble parameter (H), the expansion scalar (θ) and shear (σ²), and the deceleration (q) and anisotropic (Δ) parameters.
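For reference, the following is a minimal sketch of the standard forms these definitions usually take for a Kaluza-Klein metric, assuming metric functions A(t) for the ordinary spatial sections and B(t) for the fifth dimension; this notation is an assumption, since the original equations did not survive extraction:

\[
ds^{2} = dt^{2} - A^{2}(t)\left(dx^{2} + dy^{2} + dz^{2}\right) - B^{2}(t)\,d\psi^{2},
\]
\[
V = R^{4} = A^{3}B, \qquad H = \frac{\dot{R}}{R} = \frac{1}{4}\left(3\frac{\dot{A}}{A} + \frac{\dot{B}}{B}\right),
\]
\[
\theta = 3\frac{\dot{A}}{A} + \frac{\dot{B}}{B}, \qquad q = -\frac{R\ddot{R}}{\dot{R}^{2}}.
\]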
SOLUTIONS AND THE MODEL
Subtracting (6) from (7), we obtain an equation which, with the help of (9), can be rewritten and then integrated (Equation (15)).
The conservation of energy, $T^{ij}{}_{;j} = 0$, gives the continuity equation (19). The continuity equation (19) is applied to each fluid independently because the two fluids examined here are non-interacting; therefore, each component satisfies its own conservation equation. We have added two more criteria to solve the field equations (5)-(7) adequately.
Firstly, we take the THDE density ($\rho_{\Lambda}$) with the Hubble horizon as the IR cut-off, together with the hybrid scale factor, where k and l are non-negative constants.
PHYSICAL FEATURES OF THE MODEL
The physical properties of the model (Equation (27)) are examined here, and the graphical illustrations of the relevant parameters are given below. Figure 2 shows how q (the deceleration parameter) is positive at first and then becomes negative, signifying that the universe is transitioning from a stage of deceleration to one of acceleration.
Both $\rho_{m}$ (matter energy density, red line) and $\rho_{\Lambda}$ (THDE density, blue line) gradually decline with time (t), as shown in Figure 3. At late times, the THDE density approaches a constant, while the matter energy density reaches 0.
In Figure 4, Δ (the anisotropic parameter) tends to zero as t → ∞. As a consequence, our model approaches isotropy at a later point in time.
CONCLUSION
The focus of this research was to give new solutions to the field equations acquired using the hybrid scale factor for the Kaluza-Klein metric filled with dark matter and THDE. We noticed that the model starts out with zero volume and afterwards expands at an unbounded rate. The deceleration parameter (q) is positive at first and then turns negative as time progresses. As t → ∞, the anisotropic parameter (Δ) approaches 0; it is a decreasing function of time.
The THDE density tends to a constant value, while the matter energy density becomes zero at a later point in time. The value of the EOS parameter of THDE is also determined to be -1, demonstrating that THDE behaves like a cosmological constant. In the accelerated model, the THDE was used as the DE for the Kaluza-Klein metric, at least as a mathematical abstraction. The above results indicate that our model accurately reflects contemporary observations.
CONFLICT OF INTERESTS
The author(s) declare that there is no conflict of interests. | 1,649.2 | 2021-09-08T00:00:00.000 | [
"Physics"
] |
Exploring the Spatiotemporal Patterns of Residents’ Daily Activities Using Text-Based Social Media Data: A Case Study of Beijing, China
Social media data provide powerful support for revealing the spatiotemporal characteristics and mechanisms of human activity, as they integrate rich spatiotemporal and textual semantic information. However, previous research has not fully utilized this semantic and spatiotemporal information, due to technical and algorithmic limitations, and the efficiency of the deep mining of textual semantic resources has been low. In this research, a multi-class text classification model based on natural language processing technology and the Bidirectional Encoder Representations from Transformers (BERT) framework is constructed. The residents' activities in Beijing were then classified using Sina Weibo data from 2019. The results showed that the accuracy of the classification was more than 90%. The types and distribution of residents' activities were closely related to the characteristics of the activities and holiday arrangements. From the perspective of a short timescale, the activity rhythm on weekends was delayed by one hour compared to that on weekdays. There was a significant agglomeration of residents' activities, presenting a spatial co-location cluster pattern, but the proportion of balanced co-location cluster areas was small. The research demonstrated that location conditions, especially the micro-location condition (the distance to the nearest subway station), were the driving factors that affected the resident activity cluster patterns. The proposed framework integrates textual semantic analysis, statistical methods, and spatial techniques, broadens the application areas of social media data, especially text data, and provides a new paradigm for research on residents' activities and spatiotemporal behavior.
Introduction
The continuous advancements of globalization and informatization have profoundly affected people's daily lives and behavioral activities, causing tremendous changes in the traditional patterns of residents' activities. On the one hand, the emergence of network activities has had numerous effects, including the substitution, complementarity, and enhancement of residents' daily activities [1], thereby affecting the use of urban physical space. On the other hand, the rapid development of information and communication technologies (ICTs) has changed the temporal and spatial relationships of residents' daily activities so that some activities are no longer subject to specific temporal and spatial constraints, thereby allowing better flexibility and coordination [2]. In this context, research into residents' activities has received extensive attention in many disciplines, such as geography, urban planning, transportation, computers, and public health [3][4][5]. By exploring the differences in the distribution scales and types of various resident activities, the urban function and spatial structure can be better understood, the temporal and spatial laws of urban dynamics can be grasped, and the relationships between residents' activities and the objective environment in different temporal and spatial scenarios can be effectively revealed. This is of great significance for improving human health, guiding transportation and planning, and promoting the scientific understanding of human behavior [6][7][8][9].
With the rapid development of ICTs, while people are enjoying the convenience of ICTs, there has been an explosion of information on user activities and access records, either actively posted by users or passively recorded by devices and networks. It provides a wealth of convenient data to support research into human activities, and thus the exploration of new knowledge and methods of human activity patterns [10][11][12]. Some scholars have suggested that a new field is emerging that can utilize the capacity to collect and analyze data at a certain scale to reveal patterns of individual and group behavior [13], with sensible data mining algorithms making practical movement predictions that reveal trends and patterns that are difficult for humans to detect [14]. In this context, there has been tremendous progress in conducting resident activity research based on various types of big data, with an increasing variety of research results. However, the understanding of the interactions between residents' behavioral decisions and activity dynamics based on general big data (GPS data, mobile phone data, smart card data, inter-floating data, etc.) is limited due to the lack of information on the destinations that instigate population movements [15].
Fortunately, this limitation does not exist for text-based social media data based on user-initiated posts. Generally speaking, social media data contain detailed spatiotemporal, textual, image, social, and other multidimensional information about the user. It can be used to infer the user's activities and deeply study the characteristics of residents' activities and the influence mechanism of their choice of activities. However, among the existing studies on residents' activities via the use of social media data, most have primarily focused on using the spatial information in social media data, but less on the textual semantic information that contains rich activity content [16]. Specifically, the textual semantic information not only directly reflects the purpose and type of individual activities at a fine-grained level; furthermore, the quantity of data can also indicate the intensity of an individual's activity, and, combined with the spatial location, can efficiently reveal the behavioral activity characteristics of individual users [17]. Therefore, social media data (especially textual information) deserves more attention in the field of resident activity research. Meanwhile, it is necessary to organically combine spatiotemporal information with semantic information to improve the comprehensive utilization efficiency of social media data, and then to comprehensively and truly uncover the spatiotemporal characteristics of the users' activities and enhance the understanding of the dynamic characteristics of cities and residents.
In view of this, this study aims to introduce the current advanced natural language processing (NLP) technology into the field of resident activity research in order to efficiently extract the rich semantic information from the social media data. Then, the textual semantic information is combined with spatiotemporal information to improve the efficiency of social media data utilization in residential activity research, thus providing a high-quality data base for residential activity research. On this basis, the spatiotemporal characteristics and related patterns of residents' daily activities were explored, and the driving forces were then investigated to better examine and analyze the spatial structure of the city.
The remainder of this paper is organized as follows. In Section 2, existing research related to resident activities is presented from three perspectives. In Section 3, the study area and data collection method are presented. In Section 4, the process of classifying the residents' daily activities information is introduced, and the main methods used in this study are elaborated. In Section 5, the semantic characteristics, spatiotemporal patterns and attribution results of various residents' activities are specifically analyzed. Finally, the conclusions of this paper are drawn and future research directions are proposed in Section 6.
Study of Resident Activity under the Traditional Perspective
The study of residents' activities can be traced back to the time-geography theory proposed by Hagerstrand [18], which was later praised and applied by related scholars [19][20][21][22]. In traditional resident activity research, travel surveys, questionnaires, interviews, and other means are primarily used to construct resident activity-diary surveys, and to carry out various related studies involving residents' activities and travel behaviors [23][24][25]. These efforts demonstrate that the daily activity patterns of residents have great regularity and are closely related to land use and urban built environments [26][27][28]. However, the questionnaire and interview-based approaches to the collection of activity information are costly in terms of time and money. Meanwhile, the reliability of the data and findings is variable and at risk, given the limitations of questionnaire design, interview rules, spatiotemporal scale, and the subjective nature of the respondents [29][30][31].
Study of Resident Activity in the Era of Big Data
With the rapid development of ICTs, the information storm triggered by the era of big data is transforming our lives, work, and thinking, and is initiating a major transformation of the era [32]. In this context, scholars have conducted a series of studies on residents' activities in the era of big data with the help of various types of big data. Specifically, big data-based research has been mainly focused on the use of the user activity records collected from various data platforms such as GPS devices, mobile phones, smart cards, floating vehicles, social media, wearable devices, etc., to explain the movement patterns of individuals or groups of people, reveal the spatiotemporal patterns of various residents' activities (travel, work, leisure etc.) or specific groups and dynamic changes [7,[33][34][35][36][37]. For example, transit or travel smart card data are used to reveal the residents' daily activity patterns and laws [34,37] by extracting useful mobility information from the mobile phone location and call data to identify where residents live and work [38], investigating individual mobility patterns within cities [39]. Moreover, based on the extension of geographical information system (GIS) spatial models and analysis methods, and combined with data fusion, machine learning, and other means, the spatiotemporal patterns of human behavior can be extracted and the geospatial characteristics of human and socioeconomic elements can be inverted, which has become a hot research topic in recent years [40].
Study of Resident Activity Based on the Social Media Data
With the widespread adoption of mobile devices and location-based services, social media data have increasingly attracted the attention of scholars due to their large user base, rich spatiotemporal and semantic information, and low cost of access [12,17]. Social media data incorporating spatiotemporal and textual semantic multidimensional information has greatly enhanced the role of understanding human behavior and complex social dynamics in geographic space. Some scholars even argue that data generated based on internet communication and interaction may revolutionize our understanding of collective human behavior [41].
However, among the existing studies on residents' activities via the use of social media data, most have primarily focused on using the spatiotemporal information. These investigations include scalable and efficient spatiotemporal analyses via large-scale, location-based social media data [42,43], the modeling and prediction of user behavior and activity patterns [44,45], and the revelation of the functions, dynamics, and spatial structures of cities [46,47]. Specifically, the digital footprints collected from social media platforms are clustered through various spatiotemporal analysis methods and their variants to identify various types of residents' daily activities (e.g., living, working, entertainment, and eating) [48]. Alternatively, activity types are inferred based on the geographical location of each data record linked to the type of place in combination with other types of data, such as land-use data, points of interest (POI), and street-view imagery [49][50][51]. Nevertheless, most clustering methods consider only the temporal or spatial distribution characteristics of travel activity points while ignoring their geographical context. This can result in different types of activity data being clustered into the same cluster. The same problem exists in the use of place types to infer activity types. For example, a check-in at a residential building may label the location as "home"; however, the place may not be the user's home but rather a friend's home, and the user's behavior at this location should instead be labeled as "social" or "party". Similarly, a place marked as a place of entertainment may also be the user's workplace. In such cases, the results should be interpreted with caution [52].
In addition, as mentioned earlier, using textual information from social media data to conduct research on residents' activities is a very effective approach, but due to previous technical and algorithmic limitations, it is difficult to fully extract semantic information from text, and thereby the relevant literature is lacking. In the only relevant studies, activity information or activity topics were mainly identified and extracted via feature word extraction (e.g., Word2vec model) or some clustering methods (e.g., density-based spatial clustering of applications with noise (DBSCAN), Latent Dirichlet Allocation (LDA) model, etc.) [16,49,53,54]. However, these studies lack a comprehensive consideration of the semantic content, resulting in some bias in the authenticity of the obtained activity data. Especially for social media data with word limits like Twitter and Weibo, the sparse nature of short textual features makes it riskier to rely on feature words alone for semantic classification (e.g., "apple is ripe" and "Apple Inc." have two completely different meanings). With the significant breakthroughs in natural language processing in recent years [55], existing technologies have been able to support the efficient classification of large-scale text data. Therefore, it is necessary to introduce advanced natural language processing techniques into the study of residents' activities, by conducting comprehensive and in-depth mining of textual resources in social media data to obtain a high-quality dataset of residents' activities, which is important for improving the understanding of residents' activities and urban dynamics [48,56,57].
Study Area
Beijing, the capital of China, is also the political and cultural center of the country. As of the end of 2019, the city had a total area of approximately 16,410 km², including 16 districts, and a resident population of 21,536,000 [58]. To fit well with the other datasets used in this research, the study area was divided into more than 16,000 one km grids; thus, the one km grid was the basic research unit for this study.
Data
The social media data in 2019 were obtained from the Sina Weibo API using web crawler tools. In total, there were 11,500,105 pieces of Weibo data involving more than one million users covering Beijing City. The attributes included the user ID, text, time, latitude, and longitude. According to the Weibo User Trends Report in 2020 by the Weibo Data Center (http://data.weibo.com/datacenter/recommendapp, accessed on 30 September 2020), as of September 2020, the number of monthly active users on Weibo had increased to 511 million, with an average of 224 million daily active users. These data indicate the further strengthening of Sina Weibo's position as the leading social media platform in China. However, we are also aware of the problems of sample bias and representativeness in social media data. Specifically, social media platforms are used by a relatively young group, and the users on social media may vary by socio-economic attributes (age, gender, occupation, etc.) and individual behavior differences [59]. However, many studies have shown that social media data still play an important role in the extraction of human activities, emotions, and experiences associated with a place, given the advantage of rich contextual content and geographical location information [17,60]. On this basis, large amounts of social media data can be integrated in order to profile groups of users and their activity patterns, thereby providing insight into the dynamics of cities and people on a larger scale [16,61]. Therefore, it is reasonable and effective to select the massive Weibo data to identify the spatiotemporal patterns of residents' daily activities at the group scale.
The 2019 Beijing POI data were also sourced from the Gaode Map platform, and included 12 main categories (restaurants, shopping, accommodation, science, education, culture, etc.). The 2019 WorldPop dataset was also used as one of the base datasets for this study.
Methodology
This study constructs a framework for studying residents' activities by integrating natural language processing, statistical analysis, and spatial analysis (Figure 1), and innovatively introduces text multi-classification techniques, combined with machine learning methods, into the study of residents' daily activities in order to fully exploit the rich textual and spatiotemporal information in social media data. Specifically, first, the main types of residents' daily activities are identified based on time-geography and behavioral geography theories. Second, machine learning algorithms and BERT models are used to perform multi-class text classification on the collected large-scale social media datasets to identify the specific types of users' activities, and then a high-quality spatiotemporal dataset of residents' daily activities is formed by combining the posting locations. On this basis, the spatiotemporal patterns of residents' daily activities and related laws are explored from three perspectives: semantic, temporal, and spatial.
Identification of Activity Categories
The study of residents' daily activities has traditionally been an important part in the fields of time-geography and behavioral geography, involving a range of activities such as commuting, shopping, and leisure [62]. There exists a wealth of existing methods for the classification of daily activities, with the number of activity types ranging from four to hundreds [63,64]. However, residents' daily activities are habitual, stable and highly repetitive, overly broad or trivial classifications of activities are not conducive to research, and the randomness of related activities may make it difficult to extract valuable regular features. Moreover, some scholars have found that social media-based daily activities include those such as at-home activities, working, eating, shopping, learning, leisure, and entertainment, with an activity coverage rate of 94.5% [15,52]. However, given the broad scope of at-home activities, the spatial characteristics are not obvious, and the Weibo data themselves have a certain outdoor feature. Therefore, based on previous research and taking into account the behavioral characteristics of local residents, seven types of activities were selected as the main types of daily activities (Table 1), based on which the subsequent step of text classification was carried out.
Classification of Residents' Daily Activities Based on BERT
Since the research object is the daily activities of Beijing residents, the original datasets need to be screened to exclude the user data of non-local residents. Therefore, according to the specific filtering rules (the time and frequency of user posts), 7,293,190 pieces of Weibo data of local residents were ultimately obtained. At the same time, as this research only focused on the Weibo items about the residents' daily activities, the original Weibo datasets were filtered to select the related items using the Bidirectional Encoder Representations from Transformers (BERT). BERT is a language encoder released by Google in 2018 that translates input sentences or paragraphs into corresponding semantic features; it has performed remarkably well and has become an important recent advancement in NLP [55].
In this research, a text classification model was constructed based on BERT to perform multi-class classification on the crawled Weibo texts. Specifically, first, 70,000 items were randomly selected as the training samples. For each item, if it was related to one of the residents' activity types, it was manually labeled 1-7; otherwise, it was labeled 0 (Table 2) [64,65]. Second, the original BERT model was trained on the 70,000 labeled items with machine learning and the classification accuracy was verified; by repeatedly adjusting the corresponding parameters and the number of iterations across experiments, the trained text multi-classification model was obtained (the overall accuracy exceeded 87%); a sketch of this fine-tuning step is shown after Table 2. Third, based on the derived classifier, all the Weibo items were input to BERT and the items relating to the various residents' activities were classified. Then, 1,198,600 pieces of data on the daily activities of Beijing residents in 2019 were identified. Finally, a further 5000 randomly selected social media posts from each of the seven categories of activity data were manually validated and an average accuracy of 94.12% was achieved, thereby verifying the excellent classification effect of the model. The detailed data screening process is presented in Figure 2.
Table 2. Examples of labeled Weibo items.

Weibo Item | Label
It's so boring! | 0 (Irrelevant)
An extraordinarily enjoyable team building~ | 1 (Social)
This hot pot is really delicious | 2 (Eating)
Take a stroll around the Forbidden City | 3 (Entertainment)
Come out to shop! | 4 (Shopping)
There are many things to learn, trying to learn | 5 (Studying)
Five kilometers completed | 6 (Sports)
I'm still struggling in the office at this hour | 7 (Working)

Figure 2. The data filtering process.
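As an illustration of the classification step, the following is a minimal sketch of fine-tuning a BERT multi-class classifier with the Hugging Face transformers library; the checkpoint name "bert-base-chinese", the hyperparameters, and the example posts are assumptions, since the paper does not report its exact implementation:

```python
# A minimal sketch of BERT fine-tuning for 8-way classification (label 0 is
# "irrelevant"; labels 1-7 are the seven activity types in Table 1).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=8)

texts = ["这家火锅真好吃", "还在办公室加班"]    # example Weibo posts (assumed)
labels = torch.tensor([2, 7])                  # Eating, Working

# Tokenize and run one training step; a full run would loop over 70,000 items.
batch = tokenizer(texts, padding=True, truncation=True,
                  max_length=128, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
optimizer.zero_grad()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()

# Inference: assign a new post to one of the eight classes.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer(["出来逛街啦"], return_tensors="pt")).logits
print(int(logits.argmax(dim=-1)))              # e.g., 4 -> Shopping
```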
The Identification of Residential Activity Clusters
In order to reveal the clustering patterns of various residents' activities across geographic space and their distribution combinations, this study uses the activity density and type ratio methods, with reference to studies of the functional mix degree of spatial units [66,67], to identify the clustering patterns and activity combinations of residents' daily activities in Beijing.
First, the activity density method was employed to calculate the proportion of each type of resident activity in each grid relative to the total number of the corresponding type of resident activity in the study area, calculated as follows:

$$A_{ij} = \frac{P_{ij}}{\sum_{j=1}^{n} P_{ij}} \qquad (1)$$

where $A_{ij}$ is the number of type i activities in grid j as a proportion of the total number of type i activities in all spatial units in the study area, $P_{ij}$ indicates the number of type i activities in grid j, and n indicates the number of grids in the study area. Next, the relative proportions of the densities of different types of residential activity in each grid were calculated to reflect their type-ratio characteristics. These proportions were calculated as follows:

$$AC_{ij} = \frac{A_{ij}}{\sum_{i} A_{ij}} \qquad (2)$$

where $AC_{ij}$ is the ratio of type i residential activities in spatial unit j, and $A_{ij}$ has the same meaning as in Equation (1).
In the identification of residential activity clusters, if the proportion of one type of activity in the grid is ≥50%, the spatial unit is dominated by a single activity; if the proportions of all types of activity in the grid are below 50%, the unit is considered a co-location cluster space of multiple activities. In particular, in this study the co-location clusters were subdivided: activities accounting for ≥25% of the activities in the grid were considered dominant activity types within the spatial unit. If the proportions of all types are less than 25%, the frequency densities of the different activity types are relatively evenly distributed and there are no clearly dominant activities.
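A minimal sketch of these threshold rules, assuming per-grid activity counts as input; the subdivision of co-location clusters into types I-IV follows the same logic, distinguishing how many dominant types appear:

```python
# Classify one grid by the >=50% (single-dominant) and >=25% (dominant within
# a co-location cluster) rules described above.
def classify_grid(counts_by_type: dict) -> str:
    total = sum(counts_by_type.values())
    if total == 0:
        return "no activity"
    shares = {t: c / total for t, c in counts_by_type.items()}
    if max(shares.values()) >= 0.5:
        return "single-activity-dominant area"
    dominant = sorted(t for t, s in shares.items() if s >= 0.25)
    if not dominant:
        return "co-location cluster (balanced, no dominant type)"
    return f"co-location cluster dominated by {dominant}"

print(classify_grid({"eating": 30, "shopping": 20, "studying": 15,
                     "working": 10, "social": 9, "sports": 8,
                     "entertainment": 8}))
```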
Furthermore, a vibrant urban space must maintain sufficient diversity to meet the diversity of people's needs [68]. To calculate the mix of daily activities of residents within different types of activity clusters, this study draws on the concept of measuring the land-use mix, which is commonly used in urban research [69], to construct a characteristic indicator for the measurement of activity diversity that takes into account both the number and the types of activities. The formula is as follows:

$$M_{j} = -\frac{\sum_{i=1}^{k} q_{i}\ln q_{i}}{\ln k} \qquad (3)$$

where $M_{j}$ denotes the activity diversity index of grid j, $q_{i}$ denotes the ratio of the number of type i activities in grid j to the total number of activities in spatial unit j, and k denotes the number of activity types in spatial unit j. The activity diversity index $M_{j}$ has a value range from 0 to 1, and its size reflects the degree of mixing of different activities; a larger value indicates a more balanced distribution of various types of activities in the spatial unit and higher activity diversity, while a smaller value indicates a more homogeneous distribution of activity types and lower diversity.
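A small sketch of computing M_j, assuming the normalized Shannon entropy form reconstructed above (the exact formula did not survive extraction; the 0-1 range quoted in the text matches this form):

```python
import math

def diversity_index(counts):
    """counts: list of per-type activity counts within one grid."""
    counts = [c for c in counts if c > 0]
    k = len(counts)
    if k <= 1:
        return 0.0                      # a single activity type -> no diversity
    total = sum(counts)
    q = [c / total for c in counts]
    return -sum(p * math.log(p) for p in q) / math.log(k)

# A near-uniform grid scores close to 1 (balanced); a skewed one scores lower.
print(round(diversity_index([30, 20, 15, 10, 9, 8, 8]), 3))
print(round(diversity_index([90, 3, 3, 2, 1, 1]), 3))
```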
Analysis of Influencing Factors
The Geodetector was used to explore the causes of the spatial and temporal heterogeneity of the residents' daily activities. It consists of four components (risk detection, factor detection, ecological detection, and interaction detection) used to detect geospatial heterogeneity and reveal the driving forces behind it [70,71]. The method is well suited to categorical quantities, can handle both numerical and qualitative data, and is unique in its ability to investigate the interaction of two explanatory variables with a response variable. As the decision-making process behind residents' activities is the result of a combination of factors, the spatial differentiation mechanism and diversity of residents' daily activities cannot be separated from the systematic analysis of multiple influencing factors. According to urban diversity theory [72], the formation of diversity is closely related to factors such as population, land, and transportation. Based on this, the Geodetector was employed to analyze the influences of various factors, including the socioeconomic attributes, facility configuration, and location conditions of the grid, on the formation of diversity in residential activity clusters. Specifically, the dependent variable was considered to be the resident activity pattern in the grid, while the explanatory variables included the population density, land price, traffic accessibility, and others (see Table 3). In short, factor detection and interaction detection were used to reveal the influencing mechanisms of the clustering pattern and diversity of residents' daily activities.
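For illustration, the following is a minimal sketch of the Geodetector factor detector's q-statistic, assuming the standard formulation q = 1 - sum_h(N_h * var_h) / (N * var) and a numeric response variable (e.g., the activity diversity index); the toy data are assumptions:

```python
import numpy as np

def q_statistic(strata, y):
    """strata: categorical factor labels per grid; y: response per grid."""
    strata, y = np.asarray(strata), np.asarray(y, dtype=float)
    n, total_var = len(y), y.var()      # population variance over all grids
    within = sum(len(y[strata == h]) * y[strata == h].var()
                 for h in np.unique(strata))
    return 1.0 - within / (n * total_var)

# Toy example: the response is clearly stratified by the factor, so q is high.
strata = ["near", "near", "far", "far", "far", "near"]
y = [0.9, 0.8, 0.2, 0.25, 0.15, 0.85]
print(round(q_statistic(strata, y), 3))
```

A q close to 1 means the factor's strata explain almost all of the spatial variance of the response, which is how the explanatory powers reported below (e.g., 0.73 for the distance to the nearest subway station) can be read.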
Semantic Characteristics of Residents' Daily Activities
According to the results of text classification processing, 1,198,600 pieces of data for seven types of activities were identified, and the percentages of the numbers of various activities and the top 10 highest-frequency words are reported in Table 4. Entertainment, eating, and studying activities were found to account for more than 80% of the total. It indicates that residents are more inclined to share these activities on online communities as opposed to work and social activities, with entertainment and study spaces becoming the main physical spaces corresponding to virtual online spaces. Based on the classified activity data, the top 100 highest-frequency keywords for each type of activity were extracted by word frequency statistics and arranged in reverse order (i.e., for the top 100 words in each type of activity, the No. 1 word was swapped with the No. 100 word, the No. 2 word was swapped with the No. 99 word, etc.), and were displayed in word clouds (Figure 3). For example, social activities often imply positive emotions, eating activities reflect the dishes and taste preferences of the residents' daily diets and some important places and restaurants, and healthy eating is also highly recognized. Entertainment, shopping, and sports reflect the corresponding specific types of activities and main venues. Studying highlights an intense state and atmosphere of learning, in addition to recording the activities themselves, such as exams and assignments. Moreover, working reflects more complex emotions about work itself (e.g., effort, motivated, nervous, too hard, etc.).
Activity Dynamics on a Long Timescale
The Weibo data of residents' daily activities in Beijing in 2019 were counted separately by months and days. First, from the results of the month-to-month statistics (Figure 4a), the activity intensities in terms of months and the types of resident activities were found to have large variations. Specifically, eating and entertainment were found to have the highest intensities and to be closely related to the temporal distribution of holidays, with obvious temporal clustering characteristics (May, August−October). Studying was found to have a strong correlation with China's own education system. The start of the term (March and September) and the end of the term (June and December) were found to have significantly more Weibo posts related to studying than the other periods. The temporal distribution of sports was found to be more seasonally correlated, with the fewest data related to sports in the winter and the most in the summer. In contrast, social, shopping, and working activities were found to be less intense and very evenly distributed between months. Secondly, after residents' daily activities in 2019 were counted by days (Figure 4b), it was found that the intensity of activities was significantly higher on weekends than on weekdays. Particularly, the intensities of entertainment, eating, social, and shopping activities were found to have more pronounced increases. However, working was found to be significantly less intense on weekends than on weekdays. In addition, the intensities of activities on Mondays and Fridays were also found to be more prominent due to the influence of weekend activities.
Activity Dynamics on a Short Timescale: The Activity Rhythm on Weekends Is Delayed by One Hour as Compared to That on Weekdays
The intraday distribution characteristics of residents' daily activities on weekdays and weekends were further examined. To reduce the impact of weekend activities on weekday activities, the activity data for Monday and Friday were excluded (see Figure 5). It was found that, overall, there is a clear temporal pattern for the seven types of activities, i.e., the intensity of residential activity is lowest before dawn, it fluctuates while increasing during the day, and it peaks at night. However, as compared to that on weekdays, the intensity of activity on weekends was found to be higher and longer, and the activity rhythm was found to be delayed by one hour, i.e., the nighttime sleep period (the six hours in the day with the lowest activity intensity) was found to be between 2:00 and 7:00 on weekends, compared to 1:00 and 6:00 on weekdays. Moreover, the peak of lunchtime activity was found to be 13:00 on weekends, as compared to 12:00 on weekdays. The intensity of activity at night continued to increase until 22:00 hours on weekends as compared to 21:00 hours on weekdays. This result is very similar to the pattern of mood changes observed by Golder et al. (2011) for Twitter users, i.e., people are happier on weekends, but the morning peak of positive affect is delayed by two hours [73]. This indicates that late bedtimes and late starts are becoming the norm on weekends, due to the increase in discretionary time and activities, resulting in a higher intensity and delayed pace of activity on weekends. However, compared to other countries, the daily activities of Chinese residents are more strongly influenced by the traditional routine, and the relative delay in activities on weekends is shorter.
Distribution of Hotspot Areas for Residents' Daily Activities
Kernel density estimation (KDE) is a non-parametric estimation method for the analysis of the density of geographic elements in the surrounding area [74]. KDE was used to carve out hotspot areas for the daily activities of Beijing residents, and the results are presented in Figure 6. Overall, the spatial distribution reveals that the hotspots of the residents' daily activities are mainly concentrated in the central city within the Fifth Ring Road. Moreover, the spatial distribution of residents' daily activities within the Fifth Ring Road is characterized by significant differences between the north and south, with the hotspots of various activities mainly located in the northern areas of the Fifth Ring Road. This reveals that the spatial structure of Beijing's city is still dominated by the traditional northern city model from the perspective of resident activity. In addition, due to the characteristics of various activities and the differences in the distributions of different types of activity facilities, the spatial characteristics of various activities were also found to be highly variable, as shown in Table 5. In general, the hotspot distributions of various activities and activity facilities exhibit spatial co-location patterns, especially some of the comprehensive activity facility clusters, which often become mixed clustering areas for multiple types of activities. The nearest neighbor index (NNI) is an indicator that characterizes the proximity and mutual relationship between point-like geographic elements in a particular region [74]. The NNI was used to quantitatively measure the degree of agglomeration of various daily activities. The results indicate that the NNI for each type of activity is much less than 1 (Table 6), and there is significant spatial agglomeration for all types of activities. However, there is some variability in the degree of agglomeration of the various types of activity, with eating, entertainment, and studying being more spatially agglomerated, followed by shopping and sports, and social and working activities being relatively less agglomerated.
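A minimal sketch of the NNI computation, assuming the classic Clark-Evans form with expected distance d_exp = 0.5 / sqrt(n / A); the toy point pattern is an assumption:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_index(points, area):
    """points: (n, 2) planar coordinates; area: size of the study region."""
    pts = np.asarray(points, dtype=float)
    tree = cKDTree(pts)
    # k=2 because the first neighbor of each point is itself at distance 0.
    dists, _ = tree.query(pts, k=2)
    d_obs = dists[:, 1].mean()
    d_exp = 0.5 / np.sqrt(len(pts) / area)
    return d_obs / d_exp      # < 1 clustered, ~1 random, > 1 dispersed

rng = np.random.default_rng(0)
clustered = rng.normal(loc=500.0, scale=20.0, size=(200, 2))
print(round(nearest_neighbor_index(clustered, area=1000.0 * 1000.0), 3))
```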
Identifying Residential Activity Clusters
To understand the common agglomeration characteristics and activity combination distribution patterns of different types of activities, the activity density and type ratio methods were utilized to identify the resident activity clusters, which were divided into two major categories: single-activity-dominant areas and multiple-activity co-location clusters. Among them, the co-location clusters can be subdivided into four subcategories, namely co-location clusters I to IV, according to the distribution of activities in the grid. Ultimately, the daily activity clusters of Beijing residents are divided into five patterns. The specific definitions and characteristics of the activity patterns are reported in Table 7, and the spatial distributions of activity clusters are presented in Figure 7.

Table 7. The definitions of resident activity patterns and spatial distribution characteristics.

Residential Activity Cluster | Definition | Major Characteristics | Proportion | Main Distribution Areas
Single-activity-dominant areas | The proportion of one activity in the grid is more than 50% | Uneven distribution of activities within the grid, with only one dominant activity | 22.59% | Areas outside the Fifth Ring Road
Co-location cluster I | The proportion of all activities in the grid is less than 25% | Relatively even distribution of multiple activities with no dominant type | |

Overall, the activity areas in the central urban district within the Fifth Ring Road and the urban districts in the outer suburbs and counties were dominated by co-location clusters I and II. This reflects the relatively rich and balanced distribution of activities in the central city and the urban districts in the outer suburbs and counties, as places where various high-density socioeconomic activities take place. However, the internal spatial distributions of the different residential daily activity clusters were found to vary considerably, with the exception of co-location cluster I, for which the activity combination could not be broken down. The specific activity patterns within the four different types of activity clusters are exhibited in Figure 8.
In addition, the information entropy was used to calculate the diversity of residents' daily activities to explore the balance of the activity distributions within different clusters. Moreover, for comparative analysis, the same method was employed to calculate the POI diversity to characterize the balance of activity facilities within the activity clusters ( Figure 9). It was found that the mean values of activity diversity in various activity clusters exhibited the following decreasing pattern: co-location cluster I > co-location cluster II > co-location cluster III > co-location cluster IV > single-activity-dominant areas. Moreover, the distributions of POI and activity diversities were found to have a high degree of matching. It indicates that the balanced co-location clusters have high activity diversity, which corresponds to the high accessibility of activity facilities and a variety of facility types. The monolithic co-location clusters were found to have the second-highest activity diversity, and single-activity-dominant areas were found to have low activity diversity, which corresponded to the low accessibility of facilities and a relatively homogeneous composition of activity types.
Main Influencing Factors and Explanatory Power
Factor detection was used to measure the factor explanatory power of the various variables for the residential activity pattern. The results show that all of the explanatory variables passed the 0.001-level significance test, indicating that these explanatory variables are important factors influencing the co-location clustering of residential activities in Beijing. Figure 10 shows the explanatory power of the specific influencing factors. The order of the explanatory power of the factors is "X4 (distance to the nearest subway station) > X5 (distance to the city center) > X6 (urban planning positioning) > X2 (land price) > X1 (population density) > X3 (POI density)". The distance to the nearest subway station has the largest explanatory power, reaching 0.73, indicating that the residential activity pattern of Beijing is most strongly influenced by the accessibility of transport. The more convenient the transport conditions, the more significant the co-location clustering of the residents' daily activities. The distance to the city center is the next most important factor, indicating that macro-location conditions play an important role in the resident activity pattern. At the same time, the urban planning positioning of the area of the research unit also has an important influence on the formation of its activity clustering pattern. In addition, the factor explanatory power of land price and population density is also greater than 0.1, so these factors also have a certain influence on the activity pattern. However, the POI density in the research unit has a weaker influence on the type of activity cluster pattern, and its factor explanatory power is only 0.04, reflecting that the facility configuration in the research area is not highly correlated with resident activity.
Analysis of Factor Interactions
Interaction detection was used to analyze whether the interaction of two different influencing factors enhances or weakens their explanatory power for the dependent variable; it can effectively reveal the impact of the joint action of two types of explanatory variables on the resident activity pattern (Table 8). The results demonstrate that the explanatory power of any two influencing factors tends to increase after a two-by-two interaction, which indicates that the resident activity pattern is jointly constrained by the sub-factors of each dimension. Specifically, the type of factor interaction is "Enhance bi-", i.e., the explanatory power of the factors after interacting is significantly stronger than that of a single factor, but not higher than the sum of the explanatory powers of the two factors acting independently. Overall, the order of the top five interaction results was found to be as follows: X4 ∩ X5, X2 ∩ X4, X1 ∩ X4, X4 ∩ X6, X3 ∩ X4. These results indicate that the resident activity pattern is most significantly affected by the combination of a micro-location condition, represented by the distance to the nearest subway station, and a macro-location condition, represented by the distance to the city center. Moreover, although the influences of land price and population density are minor when they act independently, they remain important basic factors that cannot be ignored.
Discussion and Conclusions
User-initiated social media data, based on social network platforms, contain a wealth of information on resident behavior dynamics, which is of great significance for understanding the spatiotemporal patterns and dynamic laws of resident activities in the information age [60]. Nevertheless, existing related research remains limited in its mining of resident activity information from social media data: various spatiotemporal clustering methods and their variants are usually used to cluster digital footprints from social media platforms, and keyword extraction or topic-model clustering is performed to identify various resident daily activities. Most of these methods only consider the location information of resident activity while ignoring the geographic background, which leads to certain problems in the identification of activity types [48]. In this regard, some scholars have pointed out that social media big data contain not only spatial information (e.g., locations, place names, etc.) but, more importantly, rich contextual and semantic information. Via NLP technology, spatiotemporal information can be extracted from text, and the semantics of the places, resident activities, and emotional experiences behind the text can also be mined [17].
The present study was based on a comprehensive integrated method that combines NLP technology and spatiotemporal analysis to achieve the organic integration of textual and spatiotemporal information. The results revealed that the BERT-based text classification model achieved excellent results in identifying residents' daily activities, with an accuracy of more than 90%. This can effectively address the current problems of the low utilization of social media text data and the poor integration of spatiotemporal and semantic information. It also provides a solid data foundation for fully exploring the spatiotemporal patterns and laws of human behavior hidden behind social media data, as well as a new research framework for the study of residents' daily activities in the mobile information era. Furthermore, based on the perspective of residents' daily activities, this study treated residents' daily activities and their spatiotemporal information as a complete system and comprehensively explored the diversified and heterogeneous characteristics of resident activities. This addresses the existing problem of the segmentation of residents' activity types and provides a useful exploration of, and scientific guidance for, the comprehensive and systematic revelation of urban dynamics, urban rhythms, and urban spatial structures from the perspective of residents' daily activities.
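As a rough illustration of the classification step, the snippet below shows how a fine-tuned BERT checkpoint could be applied to microblog posts with the Hugging Face transformers pipeline; the checkpoint path, label set, and example posts are placeholders, not the authors' released model.

```python
from transformers import pipeline

# Hypothetical checkpoint: a BERT model fine-tuned on labeled microblog posts
# with activity categories (work, eating, entertainment, study, social, ...).
# Substitute a real fine-tuned model path before running.
classifier = pipeline("text-classification",
                      model="path/to/bert-activity-classifier")

posts = [
    "Late-night hotpot with friends near Sanlitun!",
    "Grinding through revisions at the library again.",
]
for post, pred in zip(posts, classifier(posts)):
    # Each prediction is a dict with the top label and its confidence score.
    print(f"{pred['label']:>14}  ({pred['score']:.2f})  {post}")
```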
The findings of this research can be summarized as follows. First, residents are more inclined to share their entertainment, eating, and studying activities in online communities than their work and social activities, with entertainment and study spaces becoming the main physical spaces corresponding to virtual online spaces. Second, the distribution differences and types of resident activities are closely related to the characteristics of the activities and holiday arrangements. Compared with weekdays, the intensity of activity on weekends was higher and more prolonged, and the activity rhythm was delayed by one hour. Third, there was significant spatial clustering of resident daily activities, with the main hotspot areas concentrated in the central city within the Fifth Ring Road and exhibiting differentiation between the north and south, with more activities in the north. The cluster patterns of resident daily activities can be divided into five modes, namely single-activity-dominant areas and multiple-activity co-location clusters (co-location clusters I-IV). There are certain differences between the spatial distributions and activity combination types of the various cluster patterns. In general, while the co-location cluster pattern has taken shape in Beijing, the proportion of balanced co-location cluster areas remains low. These cluster areas are mainly concentrated in the central urban district within the Sixth Ring Road and some urban districts in the outer suburbs and counties. Finally, the results indicate that location conditions, especially the micro-location condition (distance to the nearest subway station), are the main factors affecting the resident activity cluster pattern. However, land price and population density, despite their limited influence when acting independently, remain fundamental influencing factors of the resident activity cluster patterns that cannot be ignored.
However, user data from various social media platforms are often affected by spurious correlation problems, and their spatial and temporal dynamics may be partially linked to accidental events [75,76]. Therefore, the relevant data should be cleaned and filtered before being fed into the model for subsequent operations. Meanwhile, due to the biased nature of social media data in terms of user groups, the conclusions and patterns obtained from such data should be limited in scope (e.g., the conclusions in this paper mainly reflect the activity patterns of relatively young groups). To improve the quality and reliability of the conclusions, data from other crowd-sourcing platforms can be combined to corroborate them in a subsequent study [77]. In addition, the in-depth mining of social text data should continue to be strengthened, and multi-label technology should be employed to identify text information types and hidden content more efficiently. Moreover, spatiotemporal correlation data should be combined with information about other urban elements to effectively connect users' places of residence, work, and activity, construct the daily life chains of residents, and improve the overall perception of residents' daily life space and the dynamic understanding of the urban spatial structure. Such research will provide more effective technical and theoretical support, as well as a governance basis, with which to solve the practical problems of residents' daily activities and construct an efficient, convenient, and livable living pattern.
"Computer Science"
] |
Static and Dynamic Hand Gestures: A Review of Techniques of Virtual Reality Manipulation
This review explores the historical and current significance of gestures as a universal form of communication with a focus on hand gestures in virtual reality applications. It highlights the evolution of gesture detection systems from the 1990s, which used computer algorithms to find patterns in static images, to the present day, where advances in sensor technology, artificial intelligence, and computing power have enabled real-time gesture recognition. The paper emphasizes the role of hand gestures in virtual reality (VR), a field that creates immersive digital experiences through the blending of 3D modeling, sound effects, and sensing technology. This review presents state-of-the-art hardware and software techniques used in hand gesture detection, primarily for VR applications. It discusses the challenges in hand gesture detection, classifies gestures as static and dynamic, and grades their detection difficulty. This paper also reviews the haptic devices used in VR and their advantages and challenges. It provides an overview of the process used in hand gesture acquisition, from inputs and pre-processing to pose detection, for both static and dynamic gestures.
Introduction
Civilizations throughout history have shown that gestures are the easiest method of communication, surpassing even speech, mainly because they are easily understood and widely accepted, even in very different cultures. By definition, a gesture is a visual communication method (not including sound) that can be performed with the entire body or partially, e.g., the fingers, hands, face, etc. [1,2], with hand gestures being the most used. Static hand gestures can quickly communicate simple (such as numbers and letters) and complex ideas. Dynamic hand gestures are used to transmit more complex ideas, whereby deaf people can communicate complicated ideas extremely quickly [1,2].
Vision systems for hand and face gesture detection were developed at the beginning of the 1990s, in which computer algorithms were used to find gesture patterns in static images. Current advances in sensor technology and artificial intelligence, along with the exponential growth of computing power and GPUs, have led to the development of real-time gesture recognition for applications such as video games and virtual reality [1,2].
In virtual reality (VR), digital representations of imaginary places are created by deploying a complex mixture of 3D modeling, sound effects, and sensing technology to stimulate the user, making the immersion experience more credible and exciting [3][4][5]. Static and dynamic hand gestures could play an important role in the growing VR industry to increase the immersion and manipulation capabilities in digital scenarios and, in the case of haptic devices, to introduce a sense of touch [5][6][7].
In this paper, we present state-of-the-art hardware and software techniques for hand gesture detection, focusing on virtual reality applications. First, we introduce the general aspects that make hand gesture detection challenging, classify gestures as static and dynamic, and grade the detection difficulties. In Section 2, we define the working space for most virtual reality setups in visual hand detection techniques and the most commonly used hardware. In Section 4, we review the haptic devices used in VR applications as well as their advantages and disadvantages. In Section 5, we show a simplified process most systems use for static and dynamic hand gesture acquisition, from inputs, pre-processing, and algorithms to pose detection. Then, we analyze the most used and successful algorithms. Finally, we discuss the use of visual vs. haptic hand gesture detection, emphasizing future applications [8].
Characteristics of the Gestures
To start our analysis of gesture detection techniques, it is necessary to understand the hands' morphology and how it influences their detection. Figure 1 names the areas of the hand according to the viewing direction; in VR simulations, detection occurs from a first-person view, which is the user's mode of sight. Most hand detection techniques detect the palmar area of the hands, while VR systems mostly analyze the dorsal area [9].
Figure 1 shows the areas of the human hand. Static gestures are mostly used to issue a specific message or state. Dynamic gestures are commonly used to express actions or more complex ideas. In addition, the detection of dynamic and static gestures varies with their degrees of difficulty. Figure 2 presents the more common static gestures and their detection difficulties. We assigned gestures 1, 2, 3, and 8 a low detection complexity, since the techniques that detect the union of the fingers of the hand are not difficult. We assigned gestures 5, 6, and 7 a medium detection difficulty degree, since the detection patterns produce false positives due to the similarity of the hand positions; e.g., gesture 7 communicates the message that everything is fine. It is difficult because the other fingers are occluded, and another finger (such as the pointer finger) could be in this position instead (gesture 8). Finally, we assigned gestures 9, 10, 11, and 12 a high detection difficulty degree because of the techniques applied to obtain optimal results, the occlusion of the hands, and the camera's inability to detect depth [8][9][10].
Figure 3 shows the most common dynamic gestures and their difficulty degrees. These kinds of gestures mostly present a continuous action, such as a greeting (gesture 2) or the gripping of objects (such as gestures 4, 5, and 6). Unlike static gestures, dynamic gestures require a more robust algorithm that allows the pattern to be detected in real time, as well as filters to eliminate adjacent noise to avoid false positives or the generation of gestures not made by the user. Gestures 1, 2, and 3 have a low detection complexity, since the hand can be detected entirely and has no occlusion. The complexity of gestures 4, 5, and 6 rises to medium, since the hand hides the majority of the fingers. This type of gesture, defined as "grip-type grip and tubular-type grip", is considered fundamental for humans to use tools or daily utensils [11][12][13][14][15][16][17][18]. Gesture 7 in Figure 3 presents two terminations and is one of the most complex dynamic gestures to detect, since it shows occlusion of the hands, obstructing all the information from the camera.
Visual System
Users have a limited area of view in different virtual reality systems according to the detection devices. Therefore, the size of the objects in the simulation and the distance between the user and these virtual objects must be correct to avoid distortions or inconsistencies in the virtual environment [20][21][22][23][24][25][26][27][28][29]. As shown in Figure 4, commercial virtual reality systems can use a screen system or mobile phone to generate a 3D-simulated virtual world.
Figure 5a shows a 3D model of a person using a virtual reality lens. It presents the volume where the system can detect the user's gestures. Figure 5b shows the user's hands from an aerial perspective to observe the horizontal range. The opening range from the center to the edges is approximately 62° for each eye. The field of view can be as far as the simulation allows; however, the range of interaction with objects does not exceed 1 m from the user's perspective. Figure 5c shows a side view of the user to indicate the vertical range of vision. In this segment, the opening range above the horizon line is 50° and that below it is 70°. The maximum interaction range is 1 m, since the user tends to perform actions with flexed arms.
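The following sketch turns the viewing and interaction ranges described above (roughly 62° per eye horizontally, 50° above and 70° below the horizon line, and a 1 m reach) into a simple geometric test; the function and the head-centred coordinate convention are illustrative assumptions.

```python
import math

def in_interaction_volume(x, y, z,
                          h_half_deg=62, up_deg=50, down_deg=70,
                          max_range=1.0):
    """Rough check that a point (metres, head-centred: x right, y up,
    z forward) lies inside the visible interaction volume described above."""
    if z <= 0:
        return False                      # behind the user
    dist = math.sqrt(x * x + y * y + z * z)
    if dist > max_range:
        return False                      # beyond arm's reach (~1 m)
    h_angle = math.degrees(math.atan2(abs(x), z))
    v_angle = math.degrees(math.atan2(y, z))
    return h_angle <= h_half_deg and -down_deg <= v_angle <= up_deg

print(in_interaction_volume(0.2, -0.3, 0.6))   # reachable, in view -> True
print(in_interaction_volume(0.2, -0.3, 1.5))   # too far -> False
```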
This work emphasizes these commercial systems due to their increasing use for scientific and commercial developments, continuously improving their characteristics to offer users better experiences. Although some of these systems include gestures in virtual environments, they have limitations. Examples of gesture and interaction detection systems in virtual environments are shown in Table 1. They use common optical systems, such as Leap Motion, Kinect, and mobile phone cameras, to obtain images. Their main difference is in their operation ranges: Kinect reaches 2.5 m, while Leap Motion reaches only 0.40 m. However, Leap Motion offers an optimized gesture detection algorithm, including finger detection. Meanwhile, hybrid systems use vision systems and controllers with inertial sensors, such as Oculus Touch, HTC VIVE controllers, and the PlayStation Bundle. These systems have high accuracy; however, they recognize only the hands' positions and some interactions with the buttons and fail to detect complete finger gestures [22][23][24][25][26][27][28]. Note: An important feature is immersiveness, which seeks to replicate the real, physical world through a digitized experience. It has no defined range; "+" indicates less immersive and "++" more immersive.
Haptic System
Haptics refers to the ability to touch and manipulate objects based on tactile sensations, which provides awareness of stimuli on the body's surface [30][31][32][33]. These features make haptic systems ideal for the control and manipulation of virtual reality environments. Haptics can also be classified by whether they provide force feedback, tactile feedback, or proprioceptive feedback [34]. Each type of feedback provides different information about haptic stimuli, so the function and correct choice of haptics are key to their proper application. Furthermore, the interfaces can be cataloged based on their portability or support as desktop, fixed, or portable interfaces. The latter include exoskeletons and gloves, which coat the hands to emulate their movements and are the most used and developed interfaces in the scientific and commercial fields.
Table 2 shows the most representative haptic devices according to their technique, the hardware used, or the commercial model. It describes the methods for tracking the degrees of freedom, the range of operation, and the degree of immersion a user feels while using these devices. A cyber glove comprises a system of small vibrators that generate sensations and emulate textures via different frequencies. However, it does not include force feedback and, thus, cannot identify contours. PHANTOM consists of a robotic arm that transmits force feedback via an opposing resistance to movement with DC motors, but the fingers cannot move independently [35]. Rice University's project HANDS OMNI uses micro-air chambers to block the finger joints and, thus, movement. However, in addition to being invasive, the infrastructure needed to generate the air pressure makes the system expensive and difficult to use [36]. Dexmo was developed by Dexta Robotics and comprises a haptic glove-type exoskeleton that covers the hand and wrist, focusing on force feedback. The device provides a sense of grip that is similar to reality due to its mechanical structure; however, it is available only by pre-order, as it is still under development [37]. Visual systems cannot detect some dynamic gestures due to occlusion. In these cases, haptic devices could be a good option [38]. In addition, visual systems cannot offer force feedback, which is a fast and primitive method of communication needed in the real world. The skin is the largest human organ and is full of sensors [39]. One can live without sight, but living without the sense of touch is extremely difficult, even for walking or holding objects [40][41][42]. Therefore, haptic systems for manipulating virtual environments could be essential for a realistic immersion, and more efforts should be made to develop systems that can detect more complex dynamic gestures (including position and force) [43].
Gesture Detection Process for Virtual Interfaces
The process of detecting, capturing, and processing a gesture can be defined as follows. Figure 6 shows the procedure according to the method of obtaining information, using either visual (a camera, Kinect, or a smartphone) or mechanical (data gloves or Exos) systems. However, some systems use both visual and mechanical methods to increase their accuracy. Visual hardware first captures an image with a camera, which is usually infrared to obtain more accurate information [44]. At this stage, the images usually present background or ambient noise, which adds information to the images and complicates gesture detection with pre-established patterns [45,46]. Therefore, noise-eliminating filters are applied. Most algorithms are applied under controlled light conditions, since this is a fundamental factor in the techniques' accuracy. After the noise is eliminated from the images, the method or algorithm for gesture detection is applied. Once the gesture location with the coordinates or positions is obtained, a 3D model of the fingers and hand is created to generate the gesture in a virtual system [34,35].
The capture methods used by data gloves or Exos increase their accuracy by being directly mounted on the hand [47]. These methods use accelerometers, gyroscopes, and resistive sensors to detect finger movements and hand positions; however, the electronic systems generate noise during movements and require many sensors to include all the fingers [48,49]. Therefore, an electronic filtering stage that eliminates the noise generated by the hardware is required. Once the information is obtained, it is compared with expected models or databases to implement the positioning algorithm for the hand parts. This method requires a lower processing capacity, since the algorithms tend to be simpler than vision algorithms [34,35,50].
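As a minimal illustration of such an electronic filtering stage, the sketch below applies an exponential moving average, one common low-pass choice, to a hypothetical stream of glove sensor readings; the systems cited may well use different filters.

```python
def ema_filter(samples, alpha=0.2):
    """Exponential moving average: a simple low-pass filter that damps the
    electronic noise of glove-mounted accelerometers or flex sensors."""
    out, smoothed = [], samples[0]
    for s in samples:
        smoothed = alpha * s + (1 - alpha) * smoothed
        out.append(smoothed)
    return out

# Hypothetical raw flex-sensor readings for one finger joint (degrees).
raw = [41, 44, 39, 72, 40, 43, 38, 42]   # the 72 is a noise spike
print([round(v, 1) for v in ema_filter(raw)])
```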
Gesture Detection Techniques
The problems with the different techniques include the amount of light when conducting an experiment, which is crucial to obtaining a good result, and the amount of information present in the environment or background, which usually increases the noise during gesture detection. Several works on gesture detection were published in the 1990s. The techniques were based on several classifiers for hand gesture recognition (HGR), including the k-nearest neighbors (KNN) algorithm, support vector machines (SVMs), neural networks (NNs), and finite-state machines (FSMs), in addition to hidden Markov models and neural networks for calibration [51][52][53][54][55][56]. Tables 3 and 4 show the detection techniques for static and dynamic gestures, respectively. Each table shows the techniques' accuracies, main characteristics, and possible improvements according to the literature. The techniques were selected for being highly efficient and the most used by researchers. They use similar learning algorithms, such as KNN, artificial neural networks, SVMs, and CNNs, which can adapt to the tonal and morphological variations in the hands of different users. The statistical algorithms presented, such as hidden Markov models (HMMs), the multi-layer perceptron, and the Euclidean distance, optimize the amount of processing by evaluating the positions of the different hand parts and eliminate the errors produced by impossible positions. Tables 3 and 4 also include the probabilistic semantic network [11][12][13][14][18], the RGB filter and binary mask [13][14][15][16], and the distance transform [15][16][17][19]: techniques that are optimized for use in virtual reality systems and allow detection using smartphones. Despite being static gesture detection techniques, these algorithms can be optimized for detecting dynamic gestures, allowing users to wear virtual reality helmets to visualize their hands in simulations and increase the degree of immersion. Performing these techniques with smartphones allows the experiments to be easily replicated and improved. Some researchers have combined a mobile phone camera and gloves with sensors to increase accuracy, generating applications for sign language and motion limiters for job training or medical rehabilitation [15][16][17][18][20][21][22][23][24][25][34]. Additionally, Table 5 details the techniques specifically used for dynamic gesture detection, highlighting their accuracy, main features, and required hardware, thereby providing further insight into the advancements in this area.
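To make the classifier family named above concrete, the following is a minimal sketch of an SVM-based static gesture classifier; the hand-geometry features and labels are randomly generated stand-ins for real calibration data, so the reported accuracy is meaningless beyond demonstrating the workflow.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature vectors: 5 fingertip-to-palm distances plus
# 4 inter-finger angles per sample, with integer gesture labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 9))
y = rng.integers(0, 4, size=300)          # 4 static gesture classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```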
Tests were performed on the devices in Table 6. Using Kinect (Redmond, WA, USA) was ruled out due to Microsoft's statement about terminating its production. The use of conventional cameras presented good results but was limited by needing to be connected to a PC, making the manipulation of objects difficult due to the limited area. Figure 7 shows two images of a hand obtained with a GoPro Hero camera (San Mateo, CA, USA) in a panoramic format using a 120° lens aperture. A binary filter was applied to the images to obtain the area used by the hand. This camera had a 4K-type resolution, which generated a 6 s delay in the flow of information to the PC for the real-time application of the algorithm, so it was discarded. Finally, a mobile phone camera can also be used due to the technological advancements in these devices: embedded in all smartphones, it offers high resolution, low cost, and a high transfer speed, although it is limited by the mobile phone's processor (Table 6). We explore this option in the next section.
Gesture Capture Using a Smartphone
Virtual reality simulations can be implemented using cardboard virtual reality lenses, performing all the processing with a mobile phone. Figure 7 shows that the hardware used and the characteristics of the technique's application must be established in the image capture, pre-image processing, image processing, virtual model generation, and model-checking processes. The hardware used was a Samsung S6 cell phone, and the technical specifications are shown in Table 7.

Remarkably, solutions based on machine learning and artificial intelligence have revolutionized gesture detection techniques in virtual reality (VR) environments. These technologies enable precise and rapid recognition of both static and dynamic gestures, significantly enhancing human-computer interaction [59,67]. For instance, Shantakumar proposed a method based on angular velocity that achieves efficient real-time gesture recognition without the need for extensive data preprocessing [68]. This approach, supported by machine learning algorithms, has demonstrated high accuracy in gesture detection, making it ideal for high-frequency interactive applications such as video games and interactive tools [69]. Moreover, the implementation of machine learning in gesture detection in VR environments has allowed previous limitations related to self-occlusion and complex finger movements to be overcome [70,71]. By utilizing motion-tracking sensor-based systems, precise capturing of three-dimensional hand and finger movements is achieved, thus avoiding common issues in vision-based systems [68]. This enhanced recognition capability, supported by artificial intelligence algorithms, has paved the way for more natural and fluid interaction in VR environments, providing users with an immersive and engaging experience [72].
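The general idea of angular-velocity-based recognition can be sketched as flagging frames whose angular speed exceeds a threshold, separating deliberate motion from resting jitter. The toy illustration below is not Shantakumar's published method; the sampling rate, threshold, and synthetic orientation stream are all assumptions.

```python
import numpy as np

def segment_by_angular_speed(orientations, dt, threshold=1.5):
    """Flag frames whose angular speed (rad/s) exceeds a threshold: a crude
    way to separate deliberate dynamic gestures from resting hand jitter.
    `orientations` is an (N, 3) array of roll/pitch/yaw angles in radians."""
    omega = np.linalg.norm(np.diff(orientations, axis=0), axis=1) / dt
    return omega > threshold

# Hypothetical 60 Hz orientation stream: still hand, quick wave, still again.
t = np.linspace(0, 1, 60)
stream = np.stack([np.zeros_like(t),
                   np.where((t > 0.4) & (t < 0.6), np.sin(40 * t), 0.0),
                   np.zeros_like(t)], axis=1)
active = segment_by_angular_speed(stream, dt=1 / 60)
print(f"{active.sum()} of {len(active)} frames flagged as gesture motion")
```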
Image Capture
An open-source system that allowed access to the cell phone camera to capture and process a video was used. In the first test, a hand-shaped pattern with a black glove was taken as a reference to make the negative. The code was based on shape detection and programmed for Android in C. However, it generated considerable errors when detecting other black areas, so the use of a glove was ruled out. It was established that a start calibration pattern was needed to detect the contrasts in the user's hand using the detection system. Figure 8 shows the figures obtained with a GoPro Hero3 camera, with a binary filter applied.
Figure 9 shows the image capture process, wherein a few hands were taken as examples to determine the hand's total area without defining the fingers. Figure 10 shows the input images using the smartphone's rear camera. The relevant techniques and filters were applied via this pre-processing. The location of the hands could be detected in the total capture space of the smartphone camera used, so the first algorithm optimization was performed by eliminating the areas that did not require analysis.
Application of Filters
Once the working area was obtained, RGB or color filters were applied, as they only needed to be applied to the area where the hand was located. The RGB characteristics and value extraction process are shown in Figures 11 and 12. The descriptors extracted while capturing the RGB values of the capture area where the hand was located allowed us to obtain the nuances of the user's hand and the RGB values of the background and other elements, such as clothing. Once the skin's RGB value was captured, the parts of the image in which the skin color was found could be detected to delimit it from other values. Figure 12 shows the application of the binary filter, in which only the skin's RGB value is assigned a positive value or a value of one, while everything else is assigned a negative value or a value of zero to obtain a negative contour.
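A minimal NumPy sketch of the binary filter just described might look as follows; the tolerance value and the calibrated skin RGB triple are illustrative, and real frames would come from the phone camera rather than a synthetic array.

```python
import numpy as np

def binary_skin_mask(img, skin_rgb, tol=40):
    """Binary filter: pixels within `tol` of the calibrated skin RGB value
    become 1 (hand), everything else 0 (background).
    `img` is an (H, W, 3) uint8 array; `skin_rgb` comes from calibration."""
    diff = np.abs(img.astype(int) - np.asarray(skin_rgb, int))
    return (diff.max(axis=2) <= tol).astype(np.uint8)

# Hypothetical 4x4 test frame: two "skin" pixels on a dark background.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1, 1] = frame[2, 2] = (205, 160, 130)      # calibrated skin tone
print(binary_skin_mask(frame, skin_rgb=(200, 155, 125)))
```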
This process resulted in a high gesture detection accuracy under optimal conditions with a regular light level, neither saturated nor absent. Figure 13 shows the results of the filter's application under different light levels, revealing a problem when adequate light was absent. The solution was to generate a new descriptor to perform a calibration based on the hand's skin tone.
New descriptor: The chromatic coordinates were obtained by dividing each channel by the total intensity:

$$(r, g, b) = \left(\frac{R}{R+G+B}, \frac{G}{R+G+B}, \frac{B}{R+G+B}\right)$$

A change in the image intensity, which is a scalar product, was defined as follows:

$$(R', G', B') = \alpha\,(R, G, B)$$

The intensity was canceled, and the new descriptor was invariant to the intensity:

$$r' = \frac{\alpha R}{\alpha (R+G+B)} = r$$

The new descriptor was applied to obtain the chromatic coordinates, which were independent of the amount of light in the experiment. As shown in Figure 14, the new descriptor was applied to the images with light variation, which yielded images with chromatic coordinates and eliminated the problem of light incidence.
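The invariance derived above can be checked numerically; in the sketch below, scaling a pixel's intensity by any factor leaves its chromatic coordinates unchanged.

```python
import numpy as np

def chromatic_coords(rgb):
    """Chromatic coordinates: divide each channel by the total intensity."""
    rgb = np.asarray(rgb, float)
    return rgb / rgb.sum(axis=-1, keepdims=True)

pixel = np.array([180.0, 120.0, 60.0])
for alpha in (0.5, 1.0, 2.0):                 # scalar changes in intensity
    print(alpha, chromatic_coords(alpha * pixel))
# All three lines print the same (r, g, b): the descriptor is invariant to
# the amount of light, as the derivation above shows.
```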
Results
The techniques in this paper were studied according to the detection of static or dynamic gestures, the types of applications, and the mechanical (gloves or armbands) or visual (cameras or infrared sensors) systems used. Mechanical systems usually have greater precision but require being in contact with the user, reducing their comfort and portability. Few systems have become commercial; for example, Dexmo and ManusVR can be purchased online, while the other methods are under development in laboratories, making the reported results difficult to replicate [1,2,5,7,9].
Visual techniques use a camera system, such as Kinect [10][11][12][13][14][15][16][17][19][20][21][23][34]. Although Kinect has been widely used thanks to the publication of its codes and functionalities, Microsoft replaced this device in 2017 with Azure Kinect, which was in turn discontinued in 2023. Nevertheless, the technique can be applied to other types of cameras. Kinect was built around infrared sensors; therefore, several researchers have used a similar system called Leap Motion, although its range of coverage is lower [73].
Table 3 shows the RGB color filter and binary filter techniques [22][23][24][25][26][30][31][74] (although they could also be included in Table 4, since they require little processing capacity). These easily applied techniques can detect dynamic gestures via mobile devices and are used in most of the techniques shown in Tables 3 and 4 to reduce environmental noise. Their limitation lies in the hand's morphology: if the camera does not detect the correct form, the filters will treat the hand's shape as noise, thus providing no information for image processing. Therefore, the RGB color filter and the binary filter are combined with other techniques that allow the hand to be detected when blocked by objects or superimposed hands. Beyond the systems mentioned above, GloveOne is a further commercially available option, while the remaining methods are laboratory developments whose reported results are very difficult to replicate [1,2,5,7,9,26,34,35].
The challenge of detection lies in the color detection calibration at the beginning of the image capture and filter's application. Figure 15 shows the results of a mobile application where image capture, chromatic descriptions, and the binary filter were implemented. First, the user positioned his/her hand on the device, allowing the RGB value to be captured. (The application of a neural network allowing the self-calibration of the hand's RGB values is proposed for this method in future works.) The chromatic detection model was then applied, whereby the light values were eliminated to obtain only the values of the hand with the chromatic descriptions. The binary filter was applied with the obtained values, and the saturation values were added to simplify detection. However, increasing this range required a greater processing capacity.
Figure 16 shows the mobile application that allows the hand position to be detected in motion after applying all the necessary processes.
Virtual Interface Results
The virtual interface was developed using the Unity 3D program, as it supports most 360° virtual vision lenses. This project used the Oculus Rift V3 system. Figure 17 shows the graphical interface of the software.
As the image shows, the software allows the programmer to have a development view and a view window with dual focuses, which is the view in each lens of the Oculus Rift viewfinder (Irvine, CA, USA). The image undergoes some distortion due to the lenses of the virtual reality viewer. As shown in Figure 18, the user can visualize their hands via the smartphone camera. In the environment selected for the application, the user needed to touch buttons and doors to advance through a corridor, where he/she interacted with 3D devices that enabled him/her to identify that his/her fingers were interacting with a 3D object.
(A) Detection of the hand position without a filter. (B) Detection of the hand position with noise.
The physical characteristics of objects can be replicated in the virtual environment of the Unity software. The virtual hand was designed to represent gestures based on the morphology of a real hand. The 3D model of the hand presents divisions with phalanges to show the movement of each finger of the two hands. In addition, a virtual space can be generated in which the user interacts with objects in the capture range of the mobile phone's camera.
Discussion
Gesture recognition can be seen as the way in which computers interpret human body language, being considered a natural human-computer interface. In a gesture recognition system, a gesture model appropriate to the context and application is initially defined, which in turn allows defining the interactions between specific applications and the proposed architecture [61]. However, due to the number of possible movements that can be executed by the human body, it is important to determine the types of gestures that are analyzed and recognized. Taxonomies that organize and delimit the types of gestures to be analyzed, executed, and employed have already been proposed [62,63].
Human-computer interaction seeks to make the use of computers easier, more intuitive, and more comfortable. In general, it studies the design and development of new hardware and software interfaces that enable and improve the user's interaction with the computer [61,62]. In this context, Microsoft's Kinect has become a widely used device in this area, as it provides researchers and developers with a large amount of spatial information about objects in a real scene. This device has enabled the development of multiple systems for educational and entertainment video games, physical rehabilitation, robotic and computational control, and augmented reality [59,[62][63][64][65][66][67]. Performing interactions through a device of this type may require various tasks, such as finding and identifying a user, recognizing the parts of his/her body, and recognizing the gestures (the indications) that he/she makes, among others. Of particular interest for this research is the task of detecting the hand and recognizing gestures made with the fingers, in particular the gesture of touching [59,[65][66][67][68].
In the state of the art, we can find several works focused on both hand detection and gesture recognition, owing to the naturalness and intuitiveness this modality offers, besides being a striking mechanism that motivates people to use it. However, proper gesture detection often relies on specialized hardware, which can be difficult to access due to costs and infrastructure. In this work, we propose a gesture detection strategy using webcams (non-specialized hardware that is easily accessible in a standard computer), where image preprocessing reduces noise and classifiers such as support vector machines detect the gesture made by the user [59,[64][65][66][67].
In recent years, hand gesture recognition has become a useful tool in the process of human-computer interaction (HCI). HCI seeks to transfer the naturalness of the interaction that occurs between people to the interaction between humans and computers, an interaction that has been conditioned and limited by the use of mechanical interfaces and devices such as the mouse and keyboard [46,47]. Several studies presenting different architectures based on gesture recognition show different ways of developing recognition systems and integrating them with specialized and non-specialized hardware [50][51][52][53][54]. Gestural interaction applied to technology is increasingly part of our daily life through mobile devices [58,73,74].
In this area, the best-known interaction is currently performed with gestures on touch surfaces. The evolution of touch technology has meant that we now have multi-touch screens, which allow the recognition of different points of contact through pressure and gestural interaction. There are a multitude of touch interaction gestures; the best known are those based on a single touch, a double touch, or a continuous touch to make a selection, joining two fingers to scroll, or joining two fingers and extending them to perform a zoom effect [68][69][70][71].
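As a simple worked example of such two-finger gestures, the sketch below classifies a touch pair by how the inter-finger distance changes between the start and end of the gesture; the tolerance and pixel coordinates are illustrative.

```python
import math

def classify_two_finger_gesture(p0, p1, q0, q1, move_tol=10.0):
    """Very rough two-touch classifier: compare how the distance between
    two contact points changes between the start (p0, p1) and end (q0, q1)
    of the gesture. Coordinates are in pixels."""
    d_start = math.dist(p0, p1)
    d_end = math.dist(q0, q1)
    if d_end - d_start > move_tol:
        return "zoom in (spread)"
    if d_start - d_end > move_tol:
        return "zoom out (pinch)"
    return "scroll / pan"

print(classify_two_finger_gesture((100, 100), (140, 100),
                                  (60, 100), (180, 100)))   # zoom in
```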
During the development of this work, it was established that an adequate localization of the region of interest is fundamental in a gesture recognition system, as the accuracy in the detection of the area of interest directly influences the characteristics obtained by the extraction process, and consequently, the results provided by the classification method.
Conclusions
The analyzed research presents results for specific hand positions, whereas virtual reality users visualize their hands from the opposite viewpoint. Therefore, the results should be presented based on the gestures generated by virtual reality users and show the degree of efficiency in achieving both static and dynamic gestures. The described techniques present a degree of efficiency for specific light and background conditions. However, algorithms must be designed to withstand general conditions in order to be implemented in applications with varying conditions, and they must be highly efficient to achieve an acceptable level of user immersion. Regarding the techniques' complexities, most were shown to use color and binary filters to eliminate noise. The most efficient techniques are based on statistical models, using computational intelligence algorithms to calibrate the capture ranges due to the variations in hand morphologies and colors. The techniques must be optimized to detect gestures dynamically with higher efficiency using mobile devices, due to the demands of virtual reality applications, where greater accuracy in locating the hands and fingers is required.
Additionally, the combination of hand gestures with haptic devices can significantly enhance the virtual reality experience. Haptic devices provide tactile feedback, allowing users to feel virtual interactions more realistically. This improves the accuracy of gesture detection and increases user immersion by providing tangible responses to virtual actions.
For example, using haptic gloves such as the Dexmo or ManusVR in conjunction with gesture recognition techniques can achieve more precise finger movement detection and better interaction with virtual objects. These devices can be calibrated to work in tandem with RGB and binary filters, adjusting detection parameters in real time based on the haptic feedback received.
The integration of these systems addresses variations in hand morphologies and lighting conditions and provides a robust platform for complex VR applications, such as precise object manipulation and the simulation of physical interactions in a virtual environment. This combination of technologies is crucial for the development of advanced virtual reality applications that require high precision and an immersive, realistic user experience.
Figure 1. The dorsal and palmar hand areas.
Figure 2. Static gestures in different forms or messages, including the detection complexity indicators for visual detection systems: (1) Dorsal front with separated fingers, or the number five. (2) Number four. (3) Number three. (4) Number two, or an expression of love and peace. (5) Dorsal front with closed fingers. (6) Open hands held together in a vertical-transverse way, or in prayer. (7) Closed hand with lifted thumb. (8) Number one. (9) Open hand in a vertical-transverse shape. (10) Clenched fist. (11) Index finger pointing or touching an object. (12) Dorsal front with inclination, one hand hiding the other, with separated fingers [19].
Figure 3. Dynamic gestures in different forms or messages, including the detection complexity indicator for visual detection systems: (1) Closed hand with the index finger tracing a circular shape. (2) Dorsal face with separated fingers for greeting. (3) Dorsal front with separated fingers moving forward to touch an object. (4) Thumb and index finger separated for a vertical grip. (5) Thumb and index finger separated for a horizontal grip. (6) Closed hand holding a horizontal tubular shape. (7A) Hands separated to envelop one another in an open face with separated fingers. (7B) Hands separated to grip with both.
Figure 4. Commercial head-mounted displays for virtual reality: (a) Oculus Rift, (b) HTC VIVE, and (c) Samsung Gear; this headset uses a mobile phone to reproduce the virtual reality and is similar to Google Cardboard [29].
Figure 5. (a) Vision volume in virtual reality systems with hand gestures. (b) The range of horizontal vision; the optimal working range is 1 m in commercial virtual reality systems. (c) The range of vertical vision.
Figure 6. Diagram of the gesture detection process using different types of hardware (visual or non-visual).
Figure 7. Glove pattern detection using a cell phone camera. The use of this specific glove is required, reducing the possibility of using other devices. (A) The original image taken by the cell phone camera, showing the glove used to create a difference in the color pattern. (B) The image after applying a binary filter for monochromatic color change detection, where only the color difference is detected but the shape is not correctly identified.
Figure 8. Images obtained from a GoPro Hero3 camera with a binary filter applied. (A) An image obtained by applying a binary filter to a hand wearing a black glove on a white background, for gesture detection. (B) The same image after a rotation, with a neutral grip at 90° to the horizon.
Figure 9 shows the image capture process, in which a few hands were taken as examples to determine the hand's total area without defining the fingers.
Figure 10. Object detection diagram of the total occupied area without finger definition.
Figure 11. RGB descriptor extraction process for the classification of areas such as clothing, skin, and the background.
Figure 13. Application of the binary filter under three light levels.
Figure 14. Results obtained by applying a new descriptor before applying the binary filter.
Figure 15. Mobile application for detecting the hand's RGB value and applying chromatic qualifiers and a binary filter. (A) The color calibration and amount of information, (B) the color camera detection, (C) the detection range set at 30%, and (D) the detection range set at 85%.
Figure 16. Mobile application to detect hand position. (A) Detection of the hand position without a filter. (B) Detection of the hand position with noise.
Figure 18. Hand modeling for interaction with the virtual application.
Table 1. Virtual reality systems and hand-tracking devices.
Table 2. Comparison of commercial and non-commercial haptic interfaces for virtual reality.
Table 5. Techniques used for dynamic gesture detection.
Four types of hardware, shown in Table 6, were used.
Table 6. Hardware used in research development.
"Computer Science",
"Engineering"
] |
Assessing learning processes rather than outcomes: using critical incidents to explore student learning abroad
There is an increasing emphasis on assessing student learning outcomes from study abroad experiences, but this assessment often focuses on a limited range of outcomes and assessment methods. We argue for shifting to assessing student learning processes in study abroad and present the critical incident technique as one approach to achieve this goal. We demonstrate this approach in interviews with 79 students across a range of global engineering programs, through which we identified 173 incidents which were analyzed to identify common themes. This analysis revealed that students described a wide range of experiences and outcomes from their time abroad. Students’ experiences were messy and complex, making them challenging to understand through typical assessment approaches. Our findings emphasize the importance of using a range of assessment approaches and suggest that exploring students’ learning processes in addition to learning outcomes could provide new insights to inform the design of study abroad programs.
[…] Thus, to develop meaningful study abroad programs, support students effectively through these experiences, and understand the learning that takes place, we need to expand our assessment practices in study abroad research and practice.
In this paper, we argue for focusing on student learning processes in study abroad (in addition to outcomes), and present the critical incident technique as one approach to achieve this goal. Using this method, we explored the following questions: (a) What experiences do students highlight as most significant to them during their time abroad? and (b) How do students make meaning of these experiences? We used "significant" as a broad term to allow students to choose what types of experiences and learning they thought were important. This approach looked beyond traditionally emphasized elements of global programs that program designers may have included on a formal itinerary or class plan and considered the variety of experiences students might have during their time abroad. We also avoided focusing on a pre-set, limited number of possible learning outcomes and captured the breadth of what can be learned while abroad as noted by student participants.
Our sample focused on engineering students, a population whose global experiences have not been explored extensively. Engineering students (along with other STEM and professional disciplines) represent a unique study population because their subjects of study may not connect as obviously with local culture (compared to, for example, language, music, or history). Nevertheless, engineering educators realize the importance of developing global competence for the increasingly globalized workforce (Jesiek et al., 2015). Identifying significant cultural experiences for engineering students abroad and the process they follow in interpreting these experiences provides useful insights to inform the design of global engineering programs.
Literature review
In this section, we present traditional approaches for assessing student learning abroad along with critiques of these approaches. We suggest that the complexity of students' experiences abroad can be better understood with a focus on students' learning processes.
Assessing learning outcomes in study abroad
The increasing emphasis on assessment and learning outcomes in higher education has been felt by global education professionals, whose offices need to be able to defend continued investment in their programs (Comp & Merritt, 2010). Students' reflective reports that studying abroad "changed their life" are insufficient to make these arguments, so researchers have sought to understand more specific outcomes, such as intercultural competence (in various forms). The Intercultural Development Inventory (IDI) based on Bennett's (1986) Developmental Model of Intercultural Sensitivity became the centerpiece of many research and assessment projects in global education (Hammer et al., 2003). For example, in the edited volume Student Learning Abroad, a majority of the programs and studies highlighted use the IDI as the primary form of assessment (e.g., Engle & Engle, 2012; Lou & Bosley, 2012). Salisbury (2015) attributes this focus on the IDI and similar instruments to institutional interest in assessing student competence development over the course of their time at university, with the additional benefit that it allows easier comparison across research studies and programs. Although many studies and programs incorporate other assessment methods beyond the IDI, the goal of measuring intercultural competence remains central to assessment approaches in international education.
Several weaknesses have been identified in these typical assessment approaches. First, although many studies have claimed students learn from study abroad experiences, research in international education is plagued with methodological concerns including single-program studies, non-representative samples, and self-selection bias (Ogden, 2015; Twombly et al., 2012). Making causal claims about the impacts of a specific experience is challenging in this context. This challenge is exacerbated by the complex, messy experience of study abroad, which can have more variation in events and activities than traditional classrooms (Deardorff, 2015a), as well as variation in student participants, which is rarely accounted for in traditional pre/post studies (Niehaus & Nyunt, 2020). Several authors have argued that international education research and assessment needs to stop viewing study abroad experiences in a vacuum and take into account the inputs (student characteristics), experience (time abroad), and larger environment (college curriculum) to understand how study abroad can contribute to students' learning at university (Deardorff, 2015b; Niehaus & Nyunt, 2020; Salisbury, 2012, 2015). Even the originators of the IDI have suggested that "developmental interviews" are essential for interpreting what might lead to changes in IDI scores over time (Hammer, 2012).
Beyond methodological concerns, however, there is a larger question of whether learning outcomes assessment makes sense in international education. Wong (2015) argues that we limit understanding by focusing on a narrow set of outcomes and instruments, going as far as suggesting that intercultural competence development as conceptualized by educators and researchers may not be possible in education abroad (Wong, 2018). Streitwieser and Light (2017) maintain that the traditional models of intercultural competence paint a picture of international experiences "adjusting" students in a linear fashion until they achieve the desired competencies. They argue that this framing is not a realistic depiction of the "messiness" of encountering a new culture and instead suggest an alternative focus on students' conceptions of their experiences abroad. A similar argument focuses on the need to account for the psychological experience and emotion associated with studying abroad, which can be significant for students even if assessed learning outcomes show little change (Ward & Kennedy, 1993; Whalen, 1996; Zull, 2012). As argued by Alred and Byram (2002), an education abroad experience can act as a "reference point" that can continue influencing an individual over time (p. 351), so assessing student outcomes immediately following the experience may limit understanding of its potential impact (Wong, 2015; Zull, 2012). Together, these critiques suggest an alternative approach to assessment that focuses on the process of learning abroad rather than the immediate outcomes. Building on Deardorff's (2015b) recommendations for a new paradigm in assessment, our paper presents one approach for assessing learning processes.
Complexity in assessing student experiences abroad
Despite the overall focus on learning outcomes assessment in global education research and assessment, some researchers have taken a more nuanced approach and acknowledged the complexity in students' experiences abroad. For example, several studies have explored students' meaning making in global programs. One early example is Kiely's (2004, 2005) study of service learning programs, which identifies six forms of transformation that students discussed after their time abroad. Kiely (2004) builds on Mezirow's (1997, 2000) transformation theory, which suggests that disorienting dilemmas can lead to a process of perspective transformation. Jones et al. (2012) expand Kiely's model to apply to short-term immersion experiences and identify several types of experiences that are significant to transformative learning. Another study followed up with students one year after participation in a global program and found a divide between students who were continuing to be influenced by the study abroad experience and students who were not (Rowan-Kenyon & Niehaus, 2011). The influences of the experience abroad on different students were not identical nor aligned with a particular outcome, but students still reflected on the experience in meaningful ways. The best test of the success of the global program, the authors argue, is the transformative perspective change that students might experience, which may not be measurable using traditional methods (Mezirow, 1997, 2000). This conclusion is echoed in the work of Papatsiba (2005, 2006), who identified that the outcomes students demonstrated related to their adoption of either distant or relational proximity in describing their interactions with a new culture. In these studies, a different idea emerges: experiences that create dissonance or are disorienting during the time abroad are often meaningful (Jones et al., 2012; Kiely, 2005; Rowan-Kenyon & Niehaus, 2011). This idea is one of the few concrete suggestions about the types of experiences that should be included in global programs and is supported elsewhere in the education abroad literature (Che et al., 2009). Although many studies have used student journals or reflective interviews to understand outcomes of global programs, fewer studies have focused on the experiences that students discuss or the process by which students make meaning of these experiences. Some studies have explored how student responses to cultural experiences shift over their time abroad, following their process of moving from "cultural bumps" to "personal triumphs" (Covert, 2014; Jackson, 2005, p. 179; Tian & Lowe, 2014). Others have asked students to list significant experiences on surveys and found that students listed different types of experiences, although often related to interacting with the local culture or taking field trips (Strange & Gibson, 2017; Vandermaas-Peeler et al., 2018). However, few of these studies have suggested that such approaches could be used to assess study abroad experiences beyond the context of a research study, and they often focus on assessing intercultural competence development as the primary outcome of interest.
Learning processes as an alternative approach to assessment
Building on these examples of research that have acknowledged and explored the complexity of students' experiences abroad, we argue that formal assessment of study abroad should focus more on learning processes rather than on learning outcomes. Assessment of learning outcomes is hampered because students have different backgrounds, spending time in another culture is complex and messy, and the impact of such an experience may not be immediately obvious and may change over time. Assessing learning processes could explore topics such as the following:
• Experiences: Are students experiencing dissonance and/or experiences that challenge them at an appropriate level?
• Support: Do students have sufficient support to process these experiences?
• Response: Do students demonstrate thoughtful reflection? Are they processing their dissonance? What thinking processes do they demonstrate?
Although several prior studies explored these topics, they typically used open-ended data collection approaches, such as interviews or student reflections. These methods allow for rich insights but may result in an overwhelming amount of unstructured data for the average global education professional to process. We explored an assessment approach that provided more in-depth data than typically used self-report instruments while simultaneously being more structured than open-ended interview questions.
In this paper, we introduce the critical incident technique (CIT) as an assessment approach that can provide insights into students' significant experiences and allow a program leader, teacher, evaluator, or researcher to assess the learning process students follow in describing and interpreting a specific situation. CIT involves asking participants to describe an event of their choice in narrative form, including what happened, their response, and any outcomes associated with that incident (Douglas et al., 2009), and responds to critiques in the global education literature that it is hard for students to explain what was impactful about their study abroad experiences (Wong, 2015). Rather than asking students to start by thinking about abstract concepts such as "what they learned" or "how they changed," CIT asks for concrete experiences to serve as central anchors of the conversation. These experiences can be used to help students think about what they learned (Walther et al., 2011), and through this discussion, students' meaning making and learning processes become transparent. Critical incidents have frequently been used as an instructional technique in international education (Engelking, 2018; La Brack & Bathurst, 2012) but rarely analyzed for purposes of assessment. Through our findings, we aim to demonstrate how critical incidents can provide insights into students' experiences abroad and the learning processes they followed to make meaning of these experiences.
Methods
We used the critical incident technique (CIT) in interviews with students from different types of global engineering programs to explore the experiences they identified as significant while abroad and the meaning they made from these experiences. We asked students to: "Talk about two specific experiences that were significant to you during your time in [location]." Following prior CIT studies (Bott & Tourish, 2016; Hess et al., 2017; Walther et al., 2011), we prepared follow-up questions to encourage students to provide more detail as necessary.
Participants
We recruited participants from multiple types of global engineering programs including short-term study tours (24 participants), short-term class or projects abroad (9 participants), research/internships abroad (35 participants), and semester abroad (10 participants). This breadth of program types helped us understand a breadth of student experiences (called "maximum variation" sampling by Creswell [1998]). CIT studies in social science contexts have applied the concept of saturation to determine when a sufficient number of incidents has been collected (Bott & Tourish, 2016). Prior education-focused CIT studies found that 20-80 participants are sufficient depending on the context and research questions (Bott & Tourish, 2016; Hess et al., 2017; Nguyenvoges, 2015; Walther et al., 2011). We interviewed 79 students and asked each for two critical incidents, following a structure from prior studies (Bott & Tourish, 2016; Nguyenvoges, 2015), which yielded 173 incidents across interviews, reaching saturation (some students provided more than two). Table 1 describes the participant sample.
Data analysis
Several CIT studies have used multiple rounds of increasingly abstract coding, allowing for gradual interpretation, which can enhance the reliability of the interpretation (Bott & Tourish, 2016; Walther et al., 2011). Based on these examples, we used three rounds of coding in our analysis of critical incidents: identification, topic, and concept coding (Saldaña, 2013), described in Table 2.
As in most qualitative research, this coding process was iterative, and the three rounds of coding overlapped and fed into each other (Saldaña, 2013). Our intent was to focus on student identified experiences and meaning making rather than introducing theoretical constructs during the analysis, similar to prior interpretive studies using the CIT approach (Bott & Tourish, 2016). As a result, many incidents had multiple codes in Round 2 to capture different aspects of the incidents, which were rarely identical and often had several pieces that combined to make the incident significant. Incidents could therefore fit into multiple themes in Round 3 based on the codes associated with them. During Round 3 coding, we sought input from other researchers by asking them to look at the Round 2 codes and identify themes. This process enhanced research quality and accounted for the familiarity of the researchers with some of the programs under study.
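To illustrate the mechanics of this mapping (an incident carrying several Round 2 codes can fan out to several Round 3 themes), the sketch below uses a hypothetical codebook fragment loosely inspired by the findings; it is not the authors' actual codebook:

```python
from collections import Counter

# Hypothetical fragment of the codebook: Round 2 topic codes -> Round 3 themes.
# An incident can carry several codes and therefore map to several themes.
CODE_TO_THEMES = {
    "talked_with_locals": {"connecting with people"},
    "language_barrier": {"navigating a foreign country"},
    "local_festival": {"experiencing a foreign culture"},
    "traveled_alone": {"being on your own", "personal growth or awareness"},
}

def tally_themes(incidents):
    """Count how many incidents touch each theme.

    `incidents` is a list of code lists, one per critical incident.
    """
    counts = Counter()
    for codes in incidents:
        themes = set()
        for code in codes:
            themes |= CODE_TO_THEMES.get(code, set())
        counts.update(themes)  # count each theme once per incident
    return counts

print(tally_themes([["talked_with_locals", "language_barrier"], ["traveled_alone"]]))
```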
Positionality statement
We approached this project through an interpretive lens with the intent of centering the experiences and meaning making processes of the students we interviewed. We designed the study with this goal in mind and have endeavored to present the results so as to emphasize students' interpretations of their experiences rather than our own. Both authors are from the USA but have lived and traveled abroad in personal and professional roles. We were closely associated with some of the educational programs from which we recruited participants, which may have resulted in our own ideas about what experiences were meaningful in those contexts. To reduce the influence of our personal experiences and professional roles on the findings, additional researchers reviewed the results and contributed their insights into the final coding process.
Limitations
Although we included participants from a variety of program types, a majority of participants attended one university. This limitation is partially addressed by incorporating research abroad participants from other institutions. A limitation with the CIT approach is that it asks participants to describe a situation in detail, which may be challenging after time has passed. One way that we sought to overcome this challenge was to send the CIT question in advance of the interview, which helped participants provide thoughtful answers and refer to photos and journals to assist their recall of events (Bott & Tourish, 2016). Furthermore, Bott and Tourish (2016) argue that when using CIT from an interpretive perspective, complete reporting accuracy is less important than the meaning that participants assign to incidents. Lastly, there were differences in the amount of time elapsed between participants' experiences abroad and when we interviewed them. These differences are important because students continue processing experiences abroad after they return in relation to continued experiences in their educational and professional paths. Thus, incidents that are significant to a student immediately after their return may not be the same as what would stand out a few months or years later. Although we did not observe notable differences in incidents based on this variable, it would be important to consider in using CIT as an assessment method.
Results
We identified thirty types of experiences in Coding Round 2 that mapped onto eight themes in Coding Round 3 (summarized in Table 3). Our results highlight the strength of using the CIT approach in assessing study abroad: we were able to gain a broad perspective on the types of experiences that students described and their processes for making meaning of these experiences. In the sections below, we provide examples from the main themes and describe how students discussed these experiences during their interviews. We allowed the student-generated learning processes to guide the analysis rather than a prior framework or instructor-driven itinerary. Two of the themes occurred less frequently than the others: research and iconic experiences. The research theme identifies incidents in which students provided a story that was entirely focused on research and in no way related to the cultural aspects of their experience. Although these incidents provide insight into student learning, they do not connect to this paper's focus. Even less frequent was the iconic experience theme, which identifies incidents in which students found an event significant because they were at a famous location or simply because "they were there" at a specific place. This theme comprises only one of the Round 2 codes, which did not align with any of the themes identified in Round 3 because its significance did not relate to learning, culture, or personal growth.
Connecting with people
The most frequent theme was connecting with people, which describes incidents in which interacting with others was important to their significance. In a majority of cases, participants talked about interacting with local people, including students, professors, shop owners, and taxi drivers. Over half of the participants had at least one incident within this theme. Types of connections ranged in duration and depth, depending on the time available in a program. Students on shorter programs tended to report on specific conversations that were meaningful, whereas students on longer programs talked about developing relationships with local people over time. For example, a Research program student said: I was doing sports, so I just started doing sports where they were doing sports. I could get exercise and then eventually met some people, and they invited me to a barbecue and then I met a whole bunch of other people. So, I think the more times anyone puts themselves in more communities, the more they'll grow. You make friends, you learn new things. It was really fun to have, you know, conversations with people from a different country where, you know, they have an outside perspective of our country.
This quote highlights the learning that participants often attributed to the experiences of connecting with locals. Many students emphasized learning about the local culture while simultaneously developing new perspectives on the USA. Some students also described gaining a new perspective on how much one can have in common with someone despite cultural differences: It seemed like my world vision definitely grew larger. […] It didn't seem like [country] was so far away. And it made me realize that other countries aren't so far away, and the people aren't so far away.
[…] that's an area that I really grew in.
Overall, experiences in which students were able to connect with people, especially local people, were identified as significant by many of the participants in this study.
Personal growth or awareness
A majority of participants highlighted incidents where they experienced personal growth or awareness. This theme describes situations where internal change was the primary event that the participant found significant, rather than an external activity. Students described experiences in which they felt uncomfortable, had assumptions overturned, chose to take advantage of an opportunity, or learned something new about themselves. One Study Tour participant described the following: I remember just being like, I can't believe she doesn't speak English. I was like what kind of airport is this that they can't even communicate with people? And then I kind of stepped back and I was like, well why would she, right? We're in a country surrounded by countries where English is not the predominantly spoken language. And then I was literally like [Name], you're in a different country.
[…] Why are you expecting it to be America? Why are you expecting your presence here to change their thousands of years of history and culture and identity just to accommodate you visiting their country? […] I just feel like the entire time I was there I just felt like my privilege and expectations and selfishness were constantly just being checked.
Other students experienced personal growth by learning about themselves, whether gaining confidence in their abilities to navigate while abroad or developing personal opinions and preferences. One student, for example, described how she came to dislike traveling with other students who always wanted to see the next big thing. She ultimately realized that she preferred to pursue experiences where she could become embedded in the culture rather than experience "iconic" locations. Although topics and contexts where students experienced personal growth and awareness varied across incidents, all of these participants emphasized the personal change that occurred as a significant aspect of the story.
Experiencing a foreign culture
The experiencing a foreign culture theme describes incidents where participants emphasized being immersed, embedded, or "truly" experiencing a part of the local culture. These experiences ranged from participating in a local festival or traditional activity to interacting with the local government or healthcare system. Many participants talked about getting away from the "touristy" parts of the country and felt they were experiencing the local way of life. In some cases, participants found themselves running into a cultural difference when they became more embedded in a culture. In interpreting these experiences, participants discussed not only learning about the culture of the country but also developing more comfort with being in a foreign environment and interest in future travel. Students also discussed developing empathy for international students and other visitors to the USA, including one Semester abroad student who said: I appreciated what the international students have to go through 'cause I think it's very similar to what they have to go through. So, it makes me appreciate a lot about them. And people who immigrate from other countries and come to the U.S. for work or for school. I think that I respect them a lot more and I understand the struggles that they have to go through, just 'cause for a lot of people it's like they knew their country and they move here forever and they have to just adapt forever.
Incidents where participants found themselves experiencing a foreign culture were significant memories both because of what they learned from the experience and because these experiences often fulfilled the expectations and goals participants had for their time abroad.
Navigating a foreign country
The navigating a foreign country theme describes experiences where participants were dealing with the logistics of traveling in a foreign country (compared to experiencing the culture, as described in the previous theme). These experiences included speaking a foreign language, communicating across a language barrier, managing travel logistics, and dealing with unexpected situations. Participants who shared incidents related to languages often discussed moving from initial discomfort with language to becoming more comfortable communicating across the language barrier, either within a specific conversation or over the course of the entire experience. Participants who managed their own travel logistics often had stories about problems, and these situations became significant incidents for them. One participant had several of these issues at once: It was kind of late at night and it was dark and my taxi driver spoke no English. It was the first time that happened. So I gave him the address of the Airbnb that I had printed out and he took me there and it was … there were drug dealers on the sidewalk, there were people on the stoop, there were no lights. It was in the middle of this very uncomfortable place. I was trying to communicate with him that […] this has to be wrong. […] We couldn't communicate with each other, so it was really frustrating, I'm getting emotional. We were driving around, we see this family of three, two parents and I think their son was 10 and they spoke [language] and English. I remember feeling this overwhelming thankfulness.
In contrast to the previous theme, where participants emphasized learning about the local culture, participants made meaning of the incidents in the navigating a foreign country theme by discussing what they learned about themselves and about how to travel. Self-confidence, flexibility, autonomy, independence, and responsibility were all topics that students discussed in relation to these types of incidents. One Study Tour participant connected an experience of being lost in a large city with a group of students to engineering project work this way: Some of the girls who were with us were blaming the guys for leaving us. And I didn't agree […] I was like, 'Well, we made that decision to leave. And we got back.' I don't want the guys leaving us to be what I remember about that. I want to be like 'I did this.' It's not their fault we left. They went one way, we went another way. I don't want to have me blaming guys because that gives them the power of they're taking care of me. And I was like, 'No, I'm taking care of myself.' I got back and I definitely feel like that helps in engineering too because there's so many guys. I don't want a guy to lead my project. It really helped me to be like, 'No, no. I can do it.' Many students concluded that they were proud of themselves for overcoming a difficult or intimidating situation on their own in a foreign country.
Gaining knowledge or awareness
The incidents in the gaining knowledge or awareness theme represent experiences in which participants learned new information or became aware of differences across cultures. Participants talked about learning on a range of topics including local history, local current issues, or how engineering relates to culture. They also became aware of how different cultures approach social issues, received outside perspectives on the USA, and observed poverty in a closer setting than they had previously. Learning about local history or current events often caused participants to realize that their perspectives on the world had been influenced by the way events are portrayed in US education and news sources. One Research program participant noted: One experience I had, which I thought was pretty significant was going to the Hiroshima Peace Memorial. That was a really, really powerful experience because going to U.S. schools, all I learned about the atomic bomb was the rationale behind it and the strategic decision behind it. And so it was really eye-opening to go to the museum and see the full aftermath that it caused and just walking through there with … surrounded by Japanese people, many of whom were crying as they walked through, was extremely powerful. It made me question the rightness or the wrongness of it, based on how much long-term effect it's had.
This theme captures the few cases (outside the research theme) in which participants made connections between their cultural experience abroad and their interest in engineering. Connections to engineering tended to happen in cases where visits with universities or engineering companies were built into the program, but one participant who did a summer internship with a non-profit shared the following: I was mainly surveying because this was the initial site analysis and I actually got to use surveying that I had learned from civil engineering […] But then using it in [country] was probably the most fulfilling experience. I got to explain to almost the entire community, who came out to see what the heck we were doing with a total station instrument and a prism rod with a laser. And I became very good at explaining all of that in Spanish, which is exciting. […] It was really exciting to have been able to use that and have every experience kind of build off each other. And then finally this summer I could connect engineering, Spanish, service, and faith for me, and that was just an incredible experience.
Whereas the previous themes emphasized external experiences or personal growth, the gaining knowledge or awareness theme described cases in which participants interpreted their experiences based on their development of new perspectives in a more cognitive sense.
Being on your own
This theme highlights that global programs often offer students one of their first chances to be separated from structured guidance and support and to be responsible for themselves. Several students discussed how going to college provided some freedom, but there was still significant support that made it feel easier. When abroad and alone in a foreign environment (whether for an afternoon or a semester), participants described a moment of awareness that their actions and experiences were now dependent on their own decisions. These incidents differ from the earlier theme navigating in a foreign culture because that theme focuses on the external negotiations with the environment, whereas this theme emphasizes internal experiences of being isolated and needing to step into a role of more responsibility. One Semester abroad participant noted: Traveling by yourself is a really interesting experience because you have to rely on your courage to talk to people. It's very different because […] normally, you're just a sheep following some sort of shepherd or whatever, whether it's like … it doesn't have to be traveling, but a lot of times you just go with the flow. But when you travel to a foreign country by yourself, it can be a really eye-opening experience ... it's difficult.
Participants who were on longer programs discussed that the feeling of being on their own led them to realize that they needed to find a community in their new environment. In several cases, these students had been on shorter programs first and described this aspect of longer programs as a key difference between the two. One Semester abroad student who had previously engaged in a Study Tour said: I really liked being in [country] because it was more like normal life. I went there and I had this mindset that I'm going to put down roots because this is where I'm going to live. This is my home for the next five months. I need to find a community. I need to find normal hobbies that I want to do. I need to find a church and I need to find a Bible study and I need to find my personal groove for what I want my life to be here.
[…] I'd never had this experience of being completely removed from that and placed somewhere where I didn't know anyone.
Although less frequent than the earlier themes, the being on your own theme describes an experience that can be powerful for students. Participants who discussed this theme in their incidents pointed to learning about themselves in ways similar to the navigating a foreign country incidents, but with more connections to their future adult life.
Concluding discussion
Assessment and research of global programs often focus on a limited number of survey instruments and pre-defined learning outcomes to analyze student learning abroad (Streitwieser & Light, 2017; Wong, 2015). Our study addressed critiques of this approach by exploring student experiences in global engineering programs using critical incident-based interviews. Through interviews with 79 students, we identified 173 critical incidents which we grouped into six main themes: (1) connecting with people, (2) personal growth or awareness, (3) experiencing a foreign culture, (4) navigating a foreign country, (5) gaining knowledge or awareness, and (6) being on your own. A key take-away is that few of the incidents described by participants fell neatly into any one of these themes; rather, they spanned the themes and interacted with them in different ways. As argued by Streitwieser and Light (2017), students' experiences abroad are "messy," which is rarely captured in the typical IDI or GPI studies of global programs but which became immediately clear when we asked students to tell stories about their experiences. This approach also provided insights into the processes by which students responded to and made meaning of their experiences abroad.
The experiences and learning outcomes we identified using the CIT approach align with findings in previous research of study abroad programs. For example, the experiences students described in our CIT interviews can be connected with elements of the Student Conceptions of International Education (SCIE) typology (Streitwieser & Light, 2017). The SCIE feature "Being in the Other Culture" connects to gaining knowledge or awareness, "Relating to the Other Culture" connects to experiencing a foreign culture, and "Changing in the Other Culture" connects to personal growth and awareness. However, our study goes beyond the SCIE model because we did not focus only on students' conceptions of their host cultures but rather on their experiences of being abroad holistically. The process through which students made meaning of these incidents encompassed not only being/relating to/changing in the host culture, but also being/relating to/changing within themselves. This aspect of learning in study abroad programs has been highlighted in previous studies on student identity development (e.g., Dolby, 2004; Miller-Perrin & Thompson, 2010) and growth in self-confidence or tolerance while abroad (e.g., Black & Duhon, 2006; Dwyer, 2004). These studies have typically relied on either survey instruments or student reflective writing to understand this type of development, where the former provides no insight into student learning processes and the latter can provide too much detail with little structure.
The CIT approach gives insights into student learning processes and highlights a wide range of learning outcomes while also collecting a manageable amount of similarly structured data that can be analyzed in a reasonable amount of time. In our experience, the CIT approach helped students talk about their experiences in a meaningful way by asking them to tell a specific story rather than asking open-ended conceptual questions about their experiences. Although some students still struggled to cite specific examples, most students told at least one story, and some told several. We recommend use of CIT or related approaches (e.g., photo-elicitation) to help students move beyond vague statements about their experiences and communicate in a structured, concise way about their experiences and learning abroad. We chose to analyze the critical incidents we collected through an iterative coding process to gain an in-depth understanding of the type of data we had captured using this method. Based on this experience, we believe the data analysis process could be streamlined to make this method more practical for use in program assessment and evaluation. For example, rubrics could be developed that focus on the learning processes students demonstrate in describing and making meaning of their critical incidents. Furthermore, although our study uses interviews to collect critical incidents, a similar approach could be used in written reflections or even surveys (e.g., Douglas et al., 2009). We plan to explore these possibilities through future work with the goal of developing a more holistic and developmental approach to assessing study abroad programs.
"Education",
"Engineering"
] |
Update on the status of the ITER ECE diagnostic design
Considerable progress has been made on the design of the ITER electron cyclotron emission (ECE) diagnostic over the past two years. Radial and oblique views are still included in the design in order to measure distortions in the electron momentum distribution, but the oblique view has been redirected to reduce stray millimeter radiation from the electron cyclotron heating system. A major challenge has been designing the 1000 K calibration sources and remotely activated mirrors located in the ECE diagnostic shield module (DSM) in the equatorial port plug #09. These critical systems are being modeled and prototypes are being developed. Providing adequate neutron shielding in the DSM while allowing sufficient space for optical components is also a significant challenge. Four 45-meter long low-loss transmission lines transport the 70-1000 GHz ECE from the DSM to the ECE instrumentation room. Prototype transmission lines are being tested, as are the polarization splitter modules that separate O-mode and X-mode polarized ECE. A highly integrated prototype 200-300 GHz radiometer is being tested on the DIII-D tokamak in the USA. Design activities also include integration of ECE signals into the ITER plasma control system and determining the hardware and software architecture needed to control and calibrate the ECE instruments.
Introduction
Considerable progress has been made on the design of the ITER electron cyclotron emission (ECE) diagnostic since the status of the design was presented at the EC-18 workshop in Nara, Japan in April 2014 [1]. The ECE diagnostic will provide important information on the time evolution of the electron temperature profile (T_e(R)), magneto-hydrodynamic (MHD) fluctuation spectra, nonthermal electron behavior, and the ECE radiated power loss. The design is being carried out through a close collaboration between teams in the USA Domestic Agency (US-DA), the India Domestic Agency (IN-DA), and the ITER Organization (IO) in France. The design is currently midway through the detailed preliminary design phase, which will culminate in a preliminary design review in 2017. A radial and an oblique view [2] are still included in the design, but since 2014 the oblique view has been redirected to reduce collection of stray millimeter radiation from the electron cyclotron heating (ECH) system. A major challenge has been designing the front-end ECE diagnostic shield module (DSM) in equatorial port plug #09 and the components located therein. These components include two 1000 K calibration sources and remotely activated mirrors that switch the ECE instruments from viewing the plasma to viewing the in-situ calibration sources in the DSM. Because these components must operate reliably over the 20-30 year lifetime of ITER, they are being modeled and prototypes are being developed to guide the design process. In particular, providing adequate neutron shielding in the DSM while allowing sufficient space for the calibration sources and optical components is a major challenge. Another design challenge is developing the four 45-meter-long low-loss transmission lines that transport the 70-1000 GHz ECE from the DSM to the ECE instrumentation room in the ITER Diagnostics Hall. Prototype transmission lines are being tested at the IN-DA, as is the polarization splitter module that will separate the O-mode and X-mode polarized ECE at the front of the transmission lines. The ECE diagnostic is required to provide real-time signals to help guide the deposition location for ECH to suppress neoclassical tearing modes (NTMs) and other MHD activity, and for T_e(R) control. As part of the preliminary design activities, integration of the ECE signals into the plasma control system (PCS) is being addressed. This work includes determining the hardware and software architecture needed to control and calibrate the ECE instruments, and to store and analyze the ECE data. This paper updates the status of these design and prototyping activities. Section 2 covers the design of front-end components, section 3 covers the design of components between the DSM and the ECE instrumentation room, and section 4 describes the instrumentation in the ECE room and provides an overview of the software needs and information flow between the ECE instrumentation and the ITER plant, including the PCS.
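For context on how ECE yields a radial temperature profile: in a tokamak the toroidal field falls off roughly as 1/R, so each emission frequency maps to a major radius. The sketch below works through that arithmetic using nominal ITER parameters (a field of about 5.3 T at a major radius of 6.2 m); the simple 1/R field model is a textbook approximation, not the project's design analysis:

```python
import numpy as np

E = 1.602176634e-19      # electron charge [C]
M_E = 9.1093837015e-31   # electron mass [kg]
B0, R0 = 5.3, 6.2        # nominal ITER field [T] at major radius [m]

def fce(R):
    """Fundamental electron cyclotron frequency [GHz] at major radius R [m],
    assuming a purely 1/R toroidal field B(R) = B0 * R0 / R."""
    B = B0 * R0 / R
    return E * B / (2 * np.pi * M_E) / 1e9

for R in (5.5, 6.2, 7.0):
    print(f"R = {R:.1f} m: f_ce = {fce(R):6.1f} GHz, 2nd harmonic = {2*fce(R):6.1f} GHz")
```

At R0 this gives f_ce of roughly 148 GHz (second harmonic near 297 GHz), which indicates why receivers spanning the 70-1000 GHz range are needed to cover the profile and its harmonics.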
Front-end components
The DSM design has changed significantly over the past two years in order to address issues related to neutron shielding and structural integrity. There have also been changes in the optical design and in how the mirrors that switch between the plasma views and the calibration sources are controlled. Figure 1 shows a side view of the ECE DSM. The DSM is now comprised of upper and lower shielding modules that include internal spaces for optical components. Previously the DSM had three shielding sections split vertically, but this complicated the connections for water cooling. It would be even better to have just one shielding module, but this cannot be manufactured using conventional machining. It is conceivable that in the future such a single shielding module could be created with a 3D-printing additive manufacturing technique. The diagnostic first wall (DFW) module is bolted to the two shielding modules at the front of the DSM. The present concept for supporting the calibration sources is to bolt them to plates installed on the outside of the DSM (Fig. 2). These plates extend to the back of the DSM and include a channel machined inside to carry cables.
The width of the DSM was reduced in 2014, requiring the oblique viewing angle to be reduced from 13° to 11.5° from radial (Fig. 3). The oblique view was also redirected to the opposite toroidal direction to reduce the possibility of stray millimeter wave radiation entering from the ECH system.
During the past two years there has been significant design and prototyping activity in the USA on the hot calibration sources that will be located in the DSM. A prototype design (Fig. 4) is being developed that minimizes the use of brittle materials to mitigate high shock loads, avoids direct water cooling to reduce failure points, and uses indirect cooling from the DSM wall. The design uses a Nichrome heating element embedded in an Inconel heating block for improved mechanical support and protection. Two heating configurations are being explored: indirect heating of the silicon carbide (SiC) emitter through radiation, which avoids direct mechanical contact between the metallic surface and the back surface of the ceramic emitter, and direct contact heating, where the heater is in contact with the emitter. While indirect heating is the preferred option because of its mechanical benefits, early in-vacuum testing of a commercial MeiVac [3] embedded Inconel heater found that the surface emissivity not only varies significantly with temperature but also changes due to the formation and [...] Detailed thermal analysis (Fig. 5) and experimental evaluation of prototype hot sources with direct and indirect heating of the SiC emitter are currently being conducted. So far, good agreement has been found between the thermal analysis and experiment.
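To see why the heater's surface emissivity matters for the indirect (radiative) heating option, a back-of-the-envelope parallel-plate gray-body exchange calculation is instructive; the temperatures and emissivities below are illustrative placeholders, not values from the ITER design work:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def radiative_flux(t_heater, t_emitter, eps_heater=0.7, eps_emitter=0.9):
    """Net radiative flux [W/m^2] between two parallel gray plates.

    Classic parallel-plate exchange formula; the emissivities here are
    illustrative placeholders, not measured values for the ITER design.
    """
    return (SIGMA * (t_heater**4 - t_emitter**4)
            / (1.0 / eps_heater + 1.0 / eps_emitter - 1.0))

# e.g. a 1200 K heater facing a 1000 K SiC emitter
print(f"{radiative_flux(1200.0, 1000.0):.0f} W/m^2")
```

Because the net flux depends directly on the heater emissivity, the drift observed in the early in-vacuum tests translates straight into a shift of the emitter temperature, which is one motivation for the detailed thermal analysis described above.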
The design presented at EC-18 had wire rope actuators controlling the remotely activated mirrors that switch between viewing the calibration sources and the plasma. The wire rope concept required four penetrations through the port plug back plate, which increased neutron streaming and also required sophisticated remote handling. One concept being considered now is to use piezoelectric actuators mounted on the mirrors. These actuators can be made magnetic-field and ultra-high-vacuum compatible, and require no mechanical feedthroughs. The piezoelectric actuators can be installed on the remotely actuated mirrors in such a way as to allow the mirrors to act as shutters blocking the calibration sources when they are retracted for plasma ECE measurements; this was not possible with the wire rope design. Furthermore, commercial piezoelectric actuators are available that have two piezoelectric drives on the same axle. This arrangement provides redundancy if one of the piezoelectric drives were to fail. Available commercial piezoelectric actuators have a pointing accuracy of < 0.1° and can operate at up to 200 °C. Prototype testing of a piezoelectric actuator will be conducted during 2016. If the prototype testing shows the concept is viable, the piezoelectric actuator will be subjected to a neutron fluence corresponding to the lifetime exposure it would receive in the ECE DSM. Another concept being considered is a single-rod actuator for each remotely activated mirror. The mirror would be held in the plasma viewing position by a spring or counterweight, and the rod would push on the mirror to switch the mirror to view the calibration source.
Components between the front-end and the instrumentation room
The vacuum window assemblies at the back of the DSM will be provided by ITER. Figure 6 shows the present concept for the interface between the vacuum window assembly and the polarization splitters. A fire-retardant cloth sleeve that is slightly over-pressured with either dry air or nitrogen provides a compliant coupling between the vacuum window assembly and the polarization splitters. Controlled leakage of the over-pressured gas can ensure evacuation of water vapor from the transmission line. A field stop, probably made from SiC, will be located between the double vacuum window assembly [4] and the compliant coupling to filter unwanted high-order modes.
A prototype polarization splitter box is being assembled by the IN-DA. Each polarization splitter box contains two Gaussian telescopes constructed from three ellipsoidal mirrors (Fig. 7). The calculated cross-polarization loss is 31.1 dB at the 800 mm focal length mirror and 30.4 dB at the two 524 mm focal length mirrors. The calculated higher-order mode loss is 34.1 dB at the 800 mm focal length mirror and 33.4 dB at the two 524 mm focal length mirrors. Prototype smooth-walled circular waveguide transmission line sections 10-15 m in length, with an internal diameter (ID) of 72 mm, have also been fabricated for testing the attenuation. The transmission lines will be evacuated to mitigate absorption by water vapor.
ECE instrumentation room
Two high-throughput reciprocating Martin-Puplett Fourier transform spectrometers with a spectral range of 70-1000 GHz, a frequency resolution ≤ 5 GHz, and a scanning repetition rate ≤ 20 ms will be provided by the IN-DA. The IN-DA has ordered a prototype Michelson interferometer from Blue Sky Spectroscopy [5]. The instrument will operate in vacuum to avoid water vapor absorption. The design of the optical layout and optical components has been completed, and a dual-channel detector system has been tested. The testing of the scanning engine and laser metrology system is underway. During 2017 the prototype Michelson interferometer will be used with a 500 °C blackbody calibration source to test prototypes of the transmission line waveguide components, including the prototype polarization splitter. The IN-DA will also provide the "Low Frequency" 122-230 GHz heterodyne radiometer system. A hybrid splitter, consisting of a quasi-optical diplexer followed by two waveguide diplexers, will distribute the 122-230 GHz radiation into four bands. The quasi-optical splitter unit will use a Gaussian beam telescope and a frequency-selective dichroic beam splitter. The four receivers will have a total of 58 channels: a 16-channel 122-138 GHz receiver with 0.5 GHz bandwidth per channel, a 14-channel 141-168 GHz receiver, a 15-channel 172-200 GHz receiver, and a 13-channel 205-230 GHz receiver. All three higher-frequency receivers will have 1 GHz bandwidth per channel. The IN-DA is also developing a high-temperature calibration source [6] that will be used in the ECE instrumentation room to allow more frequent calibration of the ECE instruments than will be possible using the calibration sources in the DSM.
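The channel arithmetic of this band plan is easy to check with a short script. The sketch below is purely illustrative: the band edges, channel counts, and channel bandwidths are taken from the text above, but nothing is implied about how channels are centered within each band.

```python
# Sketch: tally the 122-230 GHz heterodyne receiver band plan described above.
# Band edges, channel counts, and per-channel bandwidths come from the text;
# variable names and layout are illustrative only.
bands = [
    # (f_low GHz, f_high GHz, channels, bandwidth per channel GHz)
    (122, 138, 16, 0.5),
    (141, 168, 14, 1.0),
    (172, 200, 15, 1.0),
    (205, 230, 13, 1.0),
]

total_channels = sum(n for _, _, n, _ in bands)
print(f"total channels: {total_channels}")  # expected: 58

for f_lo, f_hi, n, bw in bands:
    span = f_hi - f_lo
    covered = n * bw
    print(f"{f_lo}-{f_hi} GHz: {n} ch x {bw} GHz = {covered:.1f} GHz "
          f"instrumented out of a {span} GHz span")
```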
The US-DA is responsible for providing the "High Frequency" radiometer system. Originally this system was to cover 244-355 GHz, but to better match the ITER plasma operating scenarios the frequency coverage was changed to 220-340 GHz and the number of standard-resolution (1 GHz bandwidth) channels was increased from 48 to 60. There will also be 16 high-resolution (200-250 MHz bandwidth) channels that use YIG filters to tune within a 2-18 GHz frequency range. The mixer bank will have three 2-40 GHz filters. There will be two temperature-controlled 30 dB intermediate frequency (IF) amplifiers per bank, with adjustable attenuators covering 0-30 dB in 1 dB steps. Pad attenuators in front of each detector will be used to fine-tune sensitivity. The video amplifiers will have a voltage gain of 100-400 and a bandwidth of 500 kHz. An important overall goal of the design for the IF filter bank is to achieve stable response and good sensitivity for each channel and easy access for adjustments and changes. A prototype 200-300 GHz radiometer (Fig. 8), which employs highly integrated millimeter-wave technology developed by Virginia Diodes [7], is being tested on the DIII-D tokamak at General Atomics in the USA [8,9]. Third and fourth harmonic plasma ECE measurements from the prototype radiometer are being compared to data from a Michelson interferometer.
Because there are only two Michelson interferometers in the baseline ECE instrumentation, switches will be needed in the ECE room to route the inputs of the two Michelsons between the radial O-mode, radial X-mode, oblique O-mode, and oblique X-mode transmission lines, in order to check that the ECE spectrum is thermal and to measure the total ECE power. An ad hoc working group with members from the IN-DA, US-DA, and IO was set up to identify the simplest way to achieve this flexibility and agree on the layout of instrumentation in the ECE room.
Another important aspect of the preliminary design phase is to decide on the software needed to control and calibrate the ECE instrumentation, and to record and analyze the ECE data. Unlike current fusion experiments, ITER employs a concept referred to as the "Plant Operating Zone" (POZ). Because ITER is a nuclear facility, the POZ requires strict safeguards. Figure 9 shows a simplified schematic of the ECE systems and data networks within and outside the POZ. There are two ECE Plant Systems that send and receive data inside the POZ via a relatively slow Plant Operations Network (PON). Signals to the PCS for NTM and Te(R) control will be sent over a faster Synchronous Databus Network (SDN). Another ad hoc working group with members from the IN-DA, US-DA, and IO has been set up to determine what real-time signals need to be provided for NTM and Te(R) profile control.
For Te(R) control, 32 channels of Te(R) data, with channels separated by a/30 (where a is the plasma minor radius), at a digital sampling rate of 100 Hz and with a latency of 2.5 ms should be sufficient, but for NTM control the data requirements are more challenging. An n = 2 NTM may have frequencies up to 10 kHz, requiring digital data sampling rates up to 30 kHz. However, the rate of change of NTM parameters would be much slower (~1 ms). Radial coverage of the Te(R) data for NTM control will need to be closely spaced, with a channel spacing of a/80 over the region between q = 1 and q = 2. Options for providing Te data to the PCS are under consideration by the ECE and ITER PCS teams. The ECE diagnostic system could supply a set of Te(t) signals at a rate greater than the NTM frequency and/or use high-level data processing in the vicinity of the ECE diagnostics to supply NTM parameters at a lower rate of ~1 kHz. These and other details will be determined as the control approaches are decided and the ECE instrumentation and control system designs progress.
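To give a rough feel for these requirements, the following sketch converts the channel-spacing and sampling figures quoted above into channel counts and aggregate sample rates. The minor radius, the radial extent of the q = 1 to q = 2 region, and the sample word size are placeholder assumptions, not ITER numbers.

```python
# Sketch: back-of-envelope data rates for the Te(R) and NTM control signals
# described above. ASSUMPTIONS (not from the text): minor radius a = 2.0 m,
# the q=1 to q=2 region spans ~0.5*a, and each sample is a 4-byte word.
a = 2.0                       # plasma minor radius [m] (assumption)

# Te(R) profile control: 32 channels spaced a/30, sampled at 100 Hz.
te_channels = 32
te_rate_hz = 100
te_samples_per_s = te_channels * te_rate_hz

# NTM control: channel spacing a/80 over the q=1..q=2 region, with digital
# sampling up to 30 kHz to resolve an n=2 mode at up to 10 kHz.
q_region = 0.5 * a            # radial extent between q=1 and q=2 (assumption)
ntm_channels = round(q_region / (a / 80))
ntm_rate_hz = 30_000
ntm_samples_per_s = ntm_channels * ntm_rate_hz

bytes_per_sample = 4          # assumption
print(f"Te(R) control: {te_samples_per_s} samples/s "
      f"(~{te_samples_per_s * bytes_per_sample / 1e3:.1f} kB/s)")
print(f"NTM control:  {ntm_channels} channels, {ntm_samples_per_s} samples/s "
      f"(~{ntm_samples_per_s * bytes_per_sample / 1e6:.2f} MB/s)")
```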
Summary
Through close collaboration, design teams in India, the US, and at the ITER site in France have made significant progress during the past two years designing the ITER ECE diagnostic system, despite the significant technical challenges presented by the ITER environment. In order to guide the design process, prototypes of key system components, including a 200-300 GHz radiometer, a 70-1000 GHz Michelson interferometer, a 1000 K hot source, a piezoelectric actuator for the shutter/mirrors in the DSM, and broadband quasi-optical transmission line components, are being fabricated and tested. Prototype testing will help guide the preliminary design process that will culminate in a Preliminary Design Review in 2017.
Figure 1. Side view of front-end optics in the ECE DSM showing calibration sources, retractable mirrors that switch in the calibration sources to the radial and oblique views, and calibration source support brackets (blue).
Figure 3. Top view of ECE DSM showing calibration source locations.
Figure 2. Calibration source support plate concept. (a) Rear view of ECE DSM, (b) hot source mounting plate, and (c) bottom view of ECE DSM.
Figure 6. Concept for the interface between the vacuum window assembly and the polarization splitters.
Figure 5. Thermal analysis of a hot source using the direct-contact heating option and 1.2 kW of heating power. (a) A vertical section through the calibration source, (b) the heater surface, and (c) the SiC emitter surface. The color scales indicate the predicted temperatures of source components.
Figure 4. Calibration source showing the direct-contact heating option.
Figure 7. Schematic diagram of the polarization splitter.
Figure 9. Schematic diagram of information flow for major ITER ECE system elements. Data to and from the ECE Plant Systems will be via the Plant Operations Network (PON, blue), the Synchronous Databus Network (SDN, green), and the Data Archive Network (DAN, purple).
"Physics"
] |
Transcriptome-wide m6A methylation profiling of Wuhua yellow-feathered chicken ovary revealed regulatory pathways underlying sexual maturation and low egg-laying performance
RNA N6-methyladenosine (m6A) can play an important role in the regulation of various biological processes. Chicken ovary development is closely related to egg-laying performance, a process primarily controlled by complex gene regulation. In this study, transcriptome-wide m6A methylation of Wuhua yellow-feathered chicken ovaries before and after sexual maturation was profiled to identify the potential molecular mechanisms underlying chicken ovary development. The results indicated that the m6A levels of mRNAs were altered dramatically during sexual maturity. A total of 1,476 differential m6A peaks were found between these two stages, with 662 significantly upregulated methylation peaks and 814 downregulated methylation peaks after sexual maturation. A positive correlation was observed between the m6A peaks and gene expression levels, indicating that m6A may play an important role in the regulation of chicken ovary development. Functional enrichment analysis indicated that apoptosis-related pathways could be the key molecular regulatory pathways underlying the poor reproductive performance of the Wuhua yellow-feathered chicken. Overall, the various pathways and corresponding candidate genes identified here could be useful to facilitate molecular design breeding for improving egg production performance in Chinese local chicken breeds, and they might also contribute to the genetic resource protection of valuable avian species.
Introduction
The Wuhua yellow-feathered chicken (Supplementary Figure S1A) is a local breed exclusive to Wuhua County in Meizhou City, Guangdong Province. It is known for its excellent meat quality and strong disease resistance (Weng et al., 2020). It is a small native chicken breed with relatively low growth and reproductive performance.
Currently, it is primarily used as a meat-type breed, and significant efforts have been made to improve its meat production performance. Its egg-laying performance, however, has rarely been studied. As a dwarf chicken breed, it possesses an innate advantage as an egg-type strain, much like the "Nongda No. 3" grain-saving laying hen (Ning et al., 2013). Its small body size helps to conserve grain, thus alleviating competition for grain between humans and livestock. It has been established that egg-laying performance is not only an economic trait but can also serve as a very important reproductive characteristic. However, low reproductive performance has significantly limited both the industrial development and the genetic resource protection of the Wuhua yellow-feathered chicken. Therefore, there is an urgent need to improve the egg-laying performance of this breed.
Egg-laying performance is strongly associated with chicken follicle development, a complex process controlled by the gonadal axis of the reproductive endocrine system (Johnson, 2012). Before attaining sexual maturity, the chicken ovary contains numerous quiescent primordial follicles. At the onset of laying, however, follicles at different development stages coexist in the sexually mature ovary. Interestingly, a previous study indicated that a sexually mature hen ovary contains approximately 12,000 oocytes, but only a few hundred of these are selected to reach maturity and ovulate (Onagbesan et al., 2009). During the egg-laying period, a single small yellow follicle (SYF) is chosen from the cohort of SYFs to develop into a hierarchical follicle every single day, a complicated process termed follicle selection (Johnson and Woods, 2009). The selected follicles then grow fast and can eventually ovulate in a few days. Thus, successful follicle selection is the foundation of egg production and reproductive performance. The development of the chicken ovarian follicle is mainly regulated by multiple genes related to various biological processes such as steroid hormone biosynthesis and granulosa cell proliferation and differentiation (Woods and Johnson, 2005). In addition, a number of important epigenetic events, such as DNA methylation and RNA modifications, have been reported to be involved in the regulation of chicken follicle development (Zhu et al., 2015; Fan et al., 2019).
m6A is the most abundant type of post-transcriptional RNA modification found in eukaryotic species (Dunin-Horkawicz, 2006). m6A is defined as methylation at the N6 position of adenosine in mRNAs and some non-coding RNAs. It is primarily controlled by three different classes of regulators, namely methyltransferases, demethylases, and recognition factors, also known as the "writers", "erasers", and "readers" of m6A. As an important epigenetic modification, it has been found to be associated with various physiological processes, including the modulation of various reproductive traits in mammals. For instance, a comprehensive review article elegantly summarized the role of m6A in female reproductive biology and pathophysiology in recent years (Huang and Chen, 2023). In that review, the authors discussed all aspects of the physiological function of m6A in the female reproductive system and related diseases. Interestingly, m6A was identified as a key epigenetic modification involved in the sexual maturation of male yak (Wang et al., 2021) and in the development of follicles in female yak (Guo et al., 2022). In chicken, there is so far only one study, performed using a commercial chicken breed, investigating the possible role of m6A in follicle development (Fan et al., 2019). The function of m6A in the development of the chicken ovary, especially in Chinese local chicken breeds, remains largely unclear.
Thus, to better understand the dynamic changes and the functional relevance of m6A in the chicken ovary during sexual maturation, we profiled the transcriptome-wide m6A methylation in the Wuhua yellow-feathered chicken ovary. The results of this study may provide important insights for understanding the molecular mechanisms underlying sexual maturation in Chinese indigenous chicken breeds. The findings may also be important for the genetic improvement of egg-laying performance.
Ethics statement
All animal experiments were approved by the Animal Ethics Committee of Guangdong Meizhou Vocational and Technical College (GDMZVTC-2022-001). All animal procedures were performed in strict accordance with the guidelines proposed by the Ministry of Agriculture and Rural Affairs of the People's Republic of China.
Animals and sample collection
Six healthy Wuhua yellow-feathered chickens used for sample collection originated from the Wuhua yellow-feathered chicken reservation farm, Wuhua County, Meizhou, Guangdong Province, the People's Republic of China. Among these, three hens were 14 weeks old (before the laying period, named group BLP) and three were 35 weeks old (during the laying peak period, named group LPP). The whole ovary was collected from each animal within 10 min after euthanasia (Supplementary Figure S1B). The yolk in the follicles was carefully removed with phosphate-buffered saline. After collection, the tissue samples were stored in liquid nitrogen for further RNA isolation.
RNA isolation and construction of library for MeRIP-Seq and RNA-seq
Total RNA was isolated from each ovary sample using TRIzol reagent (Invitrogen, Carlsbad, CA, United States) according to the manufacturer's instructions. Both the concentration and the purity of the RNA were measured using a NanoDrop 2000 (NanoDrop, Wilmington, DE, United States).
The mRNA was first enriched from the total RNA using Dynabeads Oligo (dT) and then fragmented. The fragmented RNAs were divided into two parts, one for isolation of m6A-enriched mRNA and the other used for RNA-seq as the input background. The former part was incubated with m6A-Dynabeads at room temperature for 1 h to allow binding to the beads. The eluted m6A-containing fragments (IP) and the untreated input control fragments were then each concentrated to generate the final cDNA libraries. The libraries were qualified and absolutely quantified using an Agilent Bioanalyzer 2100 (Agilent Technologies, CA, United States). The prepared libraries were then sequenced on an Illumina HiSeq X Ten.
Analysis of sequencing data
The raw reads, including data from MeRIP-seq and RNA-seq, were processed using Trimmomatic (v0.39) (Bolger et al., 2014) to obtain clean reads by removing reads containing adapters or poly-N stretches, as well as reads of relatively low quality (Q < 15, read length < 50 bp). The clean reads were then aligned to the chicken reference genome GRCg7b (https://www.ncbi.nlm.nih.gov/datasets/genome/GCF_016699485.2/) using HISAT2 (v2.1.0) (Kim et al., 2015) with default parameters. Finally, only uniquely mapped reads with high mapping quality were retained for further analysis.
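These steps amount to a conventional two-stage trim-and-align pipeline. The sketch below shows one way such a pipeline could be scripted; the file names, adapter file, and thread counts are placeholders, and beyond the quality (Q < 15) and length (≥ 50 bp) cut-offs stated above, the exact Trimmomatic and HISAT2 options used by the authors are not reported, so the remaining settings are illustrative assumptions.

```python
# Sketch of the QC + alignment pipeline described above (Trimmomatic -> HISAT2).
# Sample names, adapter path, and thread counts are placeholders; only the
# quality (Q < 15) and length (>= 50 bp) filters come from the text.
import subprocess

sample = "BLP_rep1"        # hypothetical sample name
index = "GRCg7b_index"     # HISAT2 index built from the GRCg7b assembly

# 1) Trim adapters and low-quality ends (paired-end mode). Depending on the
#    install, the entry point may instead be `java -jar trimmomatic.jar`.
subprocess.run([
    "trimmomatic", "PE", "-threads", "8",
    f"{sample}_1.fastq.gz", f"{sample}_2.fastq.gz",
    f"{sample}_1P.fastq.gz", f"{sample}_1U.fastq.gz",
    f"{sample}_2P.fastq.gz", f"{sample}_2U.fastq.gz",
    "ILLUMINACLIP:adapters.fa:2:30:10",   # adapter file is a placeholder
    "SLIDINGWINDOW:4:15",                 # drop windows falling below Q15
    "MINLEN:50",                          # discard reads shorter than 50 bp
], check=True)

# 2) Align the surviving read pairs to the chicken reference genome.
subprocess.run([
    "hisat2", "-p", "8", "-x", index,
    "-1", f"{sample}_1P.fastq.gz", "-2", f"{sample}_2P.fastq.gz",
    "-S", f"{sample}.sam",
], check=True)
```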
MeRIP-seq
To assess the quality of the methylated RNA immunoprecipitation sequencing data, the R package Guitar (Cui et al., 2016) was adopted.
After all the quality control procedures mentioned above were performed, MeTDiff (v1.1.0) (Cui et al., 2018) was used to call m6A peaks in each group with the input data as the control (p-value ≤ 0.05, fold change ≥ 1.5). When calling m6A peaks, all parameters were kept at their defaults except the option FRAGMENT_LENGTH = 200. To investigate possible differences in m6A modification between group BLP and group LPP, MeTDiff was used again (FRAGMENT_LENGTH = 200). The annotation of the identified m6A peaks in each group, and of the differential m6A peaks between groups, was carried out with ChIPseeker (v1.12.1) (Yu et al., 2015).
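ChIPseeker performs this annotation against a full transcript database, but the underlying idea, mapping a peak's position into 5′UTR, CDS, near-stop-codon, or 3′UTR bins, can be illustrated with a toy classifier. All coordinates below are invented; this is a minimal sketch of the concept, not the ChIPseeker algorithm.

```python
# Toy illustration of what peak-to-transcript annotation amounts to: bin an
# m6A peak midpoint into 5'UTR / CDS / stop-codon region / 3'UTR. All
# coordinates are invented, plus-strand, in transcript nucleotides.
def annotate_peak(peak_mid, cds_start, cds_end, tx_end, stop_window=400):
    """Classify a peak midpoint relative to a transcript's annotated regions."""
    if peak_mid < cds_start:
        return "5'UTR"
    if abs(peak_mid - cds_end) <= stop_window:
        return "stop codon region"
    if peak_mid <= cds_end:
        return "CDS"
    if peak_mid <= tx_end:
        return "3'UTR"
    return "outside transcript"

# Hypothetical transcript: 5'UTR 0-200, CDS 200-2200, 3'UTR 2200-3000.
for mid in (100, 1200, 2100, 2600):
    print(mid, "->", annotate_peak(mid, 200, 2200, 3000))
```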
The sequence motifs of m6A sites were detected using DREME (v5.5.2) (Bailey, 2011). The functions of the m6A-modified genes were investigated using GO and KEGG analyses. GO analysis was performed using the tools offered by the Gene Ontology Consortium (http://geneontology.org/). KEGG analysis was conducted using the tools offered by the Kyoto Encyclopedia of Genes and Genomes (https://www.genome.jp/kegg/).
RNA-seq (input data)
The input data were used to determine gene expression levels and served as the background in m6A peak calling. After quality control, HTSeq (v0.9.1) (Anders et al., 2015) was employed to count the reads located in each protein-coding gene. The read counts were then normalized as Fragments Per Kilobase of Transcript per Million Fragments Mapped (FPKM) (Roberts et al., 2011) using Cufflinks (v2.2.1) (Trapnell et al., 2012). Differentially expressed genes were identified using DESeq2 (v1.18.0) (Anders et al., 2012) with the criteria of a fold change in FPKM > 2.0 or < 0.5 and a p-value ≤ 0.05. The functions of the differentially expressed genes were investigated using GO and KEGG analyses.
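The FPKM normalization and the DEG cut-offs quoted above are simple enough to state explicitly. The sketch below computes FPKM from raw counts and applies the paper's thresholds; the toy count table, gene lengths, and p-values are invented, and this stands in for, rather than reproduces, the Cufflinks/DESeq2 machinery.

```python
# Sketch: FPKM = (fragments * 1e9) / (total mapped fragments * gene length bp),
# followed by the paper's DEG filter (FPKM fold change > 2 or < 0.5, p <= 0.05).
# Counts, lengths, and p-values are invented; the real analysis used
# Cufflinks for FPKM and DESeq2 for significance testing.
def fpkm(count, gene_len_bp, total_mapped):
    return count * 1e9 / (total_mapped * gene_len_bp)

genes = {
    # name: (count_BLP, count_LPP, length_bp, p_value) -- all hypothetical
    "CYP2W1": (120, 900, 1500, 0.001),
    "GENE_X": (400, 450, 2000, 0.400),
}
total_blp, total_lpp = 48_000_000, 52_000_000  # mapped fragments per library

for name, (c1, c2, length, p) in genes.items():
    f1 = fpkm(c1, length, total_blp)
    f2 = fpkm(c2, length, total_lpp)
    fc = f2 / f1
    is_deg = (fc > 2.0 or fc < 0.5) and p <= 0.05
    print(f"{name}: FPKM {f1:.2f} -> {f2:.2f}, FC {fc:.2f}, DEG = {is_deg}")
```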
Combination analysis of MeRIP-seq and RNA-seq
To reveal the potential functions of dynamic m6A modification in regulating mRNA function during the development of the Wuhua chicken ovary, we examined the correlation between gene expression levels and the abundance of m6A peaks based on the calculated fold changes. m6A peaks with a log2 fold change > 0.5 or < −0.5 and a p-value < 0.01, together with corresponding genes with a log2 fold change > 0.5 or < −0.5 and a p-value < 0.01, were considered significant in the combined analysis.
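Given per-gene log2 fold changes and p-values for both the m6A peak and the transcript, the four quadrants reported later in the results (hyper-up, hypo-down, hyper-down, hypo-up) follow directly from these thresholds. A minimal sketch, using invented example values:

```python
# Sketch: classify genes into the four m6A/expression quadrants using the
# thresholds above (|log2 FC| > 0.5 and p < 0.01 on both layers).
def quadrant(m6a_lfc, m6a_p, expr_lfc, expr_p, lfc_cut=0.5, p_cut=0.01):
    if max(m6a_p, expr_p) >= p_cut or min(abs(m6a_lfc), abs(expr_lfc)) <= lfc_cut:
        return None  # not significant on both layers
    methyl = "hyper" if m6a_lfc > 0 else "hypo"
    expr = "up" if expr_lfc > 0 else "down"
    return f"{methyl}-{expr}"

# Invented example values:
print(quadrant( 1.2, 1e-4,  0.9, 1e-3))  # -> hyper-up
print(quadrant(-0.8, 1e-3, -1.1, 1e-4))  # -> hypo-down
print(quadrant( 0.3, 1e-3,  0.9, 1e-4))  # -> None (m6A change too small)
```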
Morphological characteristics of chicken ovary
To decipher the transcriptome-wide m6A methylation profiles during the development of the Wuhua yellow-feathered chicken ovary, we collected ovaries at two contrasting stages (BLP vs. LPP, Figure 1) in triplicate for MeRIP-seq. The pictures of the ovaries sampled at the two stages show distinct differences in both shape and size. In general, ovaries before sexual maturity are small, and the follicles present on them are in the primordial follicle stage. After sexual maturity, however, the ovary is several times larger than before maturation. Follicles of different sizes, at various development stages, were observed within a single ovary; these can be roughly divided into two groups, namely pre-hierarchical follicles and hierarchical follicles (F1-F5).
General features of m6A methylation in chicken ovary before and after sexual maturation
The number of raw reads obtained from the MeRIP-seq of each sample ranged from 52.17 M to 62.99 M. After quality control, we obtained 51.99 M-55.83 M clean reads per sample. For RNA-seq, the number of raw reads varied from 48.19 M to 76.88 M per sample, and the number of clean reads was 47.97 M-54.93 M per sample. Detailed information on the sequencing data is listed in Supplementary Table S1. The mapping statistics of the clean reads are reported in Supplementary Table S2 and Supplementary Figure S2A; only the unique high-quality reads were used for the subsequent analyses.
We identified 24,830 and 25,293 m6A peaks in the chicken ovaries of the BLP and LPP groups, respectively. The statistics of the identified peaks are displayed in the additional files (Supplementary Table S3; Supplementary Figures S2B-D).
To further investigate the distribution of m6A peaks across transcripts, we annotated the identified peaks. The results demonstrated that the peaks were markedly enriched in the exon region (CDS) and at the stop codon in both the BLP and LPP groups (Figures 2A, B).
To determine the motifs present in the m6A peaks, we scanned each peak; the top five sequence motifs in BLP and LPP are listed in Figure 2C. We observed that both MCGTR (M = A or C; R = A or G) and GGARRA (R = A or G) were significantly enriched at m6A peak sites in the chicken ovary.
Differential m6A methylation analysis
The m6A results of the biological replicates within each group exhibited high concordance based on Pearson correlation coefficients (Figure 3A), suggesting that the samples used in the present study were of good quality and suitable for analysis. To further detect the dynamic changes in m6A methylation between the two distinct physiological phases, we assessed the differentially methylated m6A peaks (DMPs). Differential peak analysis revealed 1,476 DMPs between the two groups (Supplementary Table S4), which could be potentially associated with the development of the chicken ovary. As listed in Table 1, the total length of the DMPs was 3,583,532 bp, with an average length of 2,427.87 bp, representing about 0.34 percent of the chicken genome. Among the DMPs identified, 662 methylation peaks were significantly upregulated and 814 were downregulated in the LPP group compared to BLP (Figure 3B). The numbers of genes corresponding to the up- and downregulated DMPs were 580 and 763, respectively.
To better understand the possible functional consequences of m6A methylation, we performed functional enrichment analysis on the genes containing DMPs. The significantly enriched GO terms are listed in Supplementary Table S5 and Figure 3C. The significant terms in the biological process category were related to reproduction, reproductive process, rhythmic process, growth, multicellular organismal process, etc. The KEGG analysis revealed that the genes associated with DMPs were mainly enriched in pathways such as retinol metabolism, the p53 signaling pathway, apoptosis, necroptosis, the TGF-beta signaling pathway, cytokine-cytokine receptor interaction, and the Toll-like receptor signaling pathway (Supplementary Table S6; Figure 3D).
Differentially expressed genes (DEGs) analysis
A principal component analysis (PCA) based on the RNA-seq data (Figure 4A) displayed high concordance within each group and a clear separation between the two groups, indicating that the subsequent analyses should be reliable. Through analysis of the input data, we found that a total of 4,354 genes were differentially expressed between the two groups. In comparison to the BLP group, there were 2,282 upregulated DEGs and 2,073 downregulated DEGs in the LPP group (Supplementary Table S7; Figure 4B).
GO analysis demonstrated that the DEGs were mainly related to reproductive process, reproduction, developmental process, metabolic process, and growth, all of which are vitally important for follicle development (Figure 4C). Interestingly, the significantly enriched KEGG pathways (Figure 4D) included follicle-development-related pathways, such as the mTOR signaling pathway, TGF-beta signaling pathway, Wnt signaling pathway, MAPK signaling pathway, and VEGF signaling pathway, as well as cell growth and death related pathways such as apoptosis and the p53 signaling pathway.
Correlation analysis of m6A methylation and DEGs
To determine the potential functions of dynamic m6A modification in regulating mRNA function during the development of the Wuhua chicken ovary, we examined the possible correlation between gene expression levels and the abundance of m6A peaks based on the calculated fold changes. The correlation analysis revealed a positive correlation between global RNA methylation and gene expression levels (Figure 5). Overall, we found that 713 mRNAs were significant in both m6A level and gene expression level (Supplementary Table S8). Among them, 299 mRNAs were of the "hyper-up" type, meaning that both the m6A level and the gene expression level were upregulated compared to the BLP group.
FIGURE 1
The ovary of Wuhua yellow-feathered chicken before (A) and after (B) sexual maturation.
The number of "hypo-down" type mRNAs, for which both levels were downregulated, was 273. The numbers of "hyper-down" and "hypo-up" mRNAs were 37 and 104, respectively.
Discussion
Egg-laying performance is an important trait in the poultry industry; it is not only an economic trait affecting egg production in the laying-hen industry but also has a reproductive value that can restrict the development of the meat chicken industry. Ovary development is an important biological process that primarily determines laying performance in hens. However, the molecular regulatory mechanisms underlying this developmental process remain largely unknown, especially for Chinese local chicken breeds, which usually exhibit relatively poor egg-laying performance. Moreover, previous studies on chicken ovary development and follicle selection have suggested that the underlying molecular mechanisms are relatively complex, and that epigenetic modifications such as m6A methylation play a critical role in regulating this physiological process (Fan et al., 2019). In this study, we investigated the dynamic changes in transcriptome-wide m6A methylation of the Wuhua yellow-feathered chicken ovary before and after sexual maturation. To the best of our knowledge, this is the first study to decipher the possible role of m6A modification in the ovary development of a Chinese indigenous chicken breed. Our data indicated that the m6A levels of mRNAs changed significantly, suggesting that m6A might play an important role in the process of ovary development. Further analysis revealed that apoptosis-related pathways could be the key molecular regulatory cascades underlying the poor reproductive performance of the Wuhua yellow-feathered chicken in comparison to commercial laying breeds such as the Hy-Line Brown chicken. However, the specific methylases and regulatory mechanisms are still unclear, and further studies are required.
In the present study, we profiled the transcriptome-wide m6A distribution in Wuhua yellow-feathered chicken ovary tissues at different stages of development. The results demonstrated that the transcripts in the ovary were extensively methylated, indicating that m6A may be involved in ovary development. Similar to other studies (Fan et al., 2019; Wang et al., 2021; Guo et al., 2022; Li et al., 2022), we found that m6A peak sites were markedly enriched in the CDS and 3′UTR, especially in the vicinity of the stop codons. Accumulating evidence suggests that the position of a methylation site on a transcript may affect the fate of that transcript (Dominissini et al., 2012). It has been hypothesized that methylation on internal exons might control the splicing of the transcript, and that methylation near the stop codon could influence its translation. Taken together, we speculate that m6A could play a major role in regulating the development of the Wuhua chicken ovary. To further investigate the potential role of m6A in this process, we combined the gene expression and m6A levels to analyze the potential relationship between them. Interestingly, a positive relationship was observed, indicating that m6A might regulate mRNA levels and thus influence the process of follicle development.
Chicken ovary development consists of many complex biological processes, including follicle selection, follicular somatic cell transformations, and angiogenesis (Matzuk, 2000; Fortune et al., 2001; Hillier, 2001). Many studies focusing on the molecular mechanisms of follicle selection have been conducted, and some potential pathways controlling follicle selection have been found. In the present study, we found that the mTOR, TGF-beta, Wnt, MAPK, and VEGF signaling pathways were enriched among the differentially expressed genes, indicating that these pathways may be important in the development of the ovary. Consistent with our study, the TGF-beta signaling, Wnt signaling, and steroid hormone biosynthesis pathways have been reported to be significantly enriched in follicle selection in hybrids of Huiyang Bearded and White Leghorn chickens (Nie et al., 2022). After follicle selection, the selected follicle develops rapidly and forms a mature yolk. The growth in size is accomplished by a concurrent increase in vasculature and blood flow, which enables the follicle to accumulate large amounts of nutrients to form the lipid-rich mature yolk (Johnson, 2015; Nie et al., 2022). Interestingly, the VEGF signaling pathway, together with the Wnt signaling pathway, both significantly enriched in the present study, has been identified as playing an important role in angiogenesis in the ovary during sexual maturation (Johnson, 2015).
As a type of RNA modification, m6A is a common mode of post-transcriptional regulation of gene expression. It can affect the fate of an RNA, for example by affecting its stability and thus determining whether the RNA is degraded, and thereby affect the corresponding biological processes (Dominissini et al., 2012). Interestingly, a positive relationship between m6A methylation and gene expression was observed in this study, indicating that m6A can regulate ovary development by affecting key genes associated with this process. Retinoids, consisting of retinol and its derivatives, play a vital role in the development and maintenance of the normal physiological functions of the ovary (Liu et al., 2018b). A number of previous studies have revealed that retinoic acid (RA), a derivative of retinol, can substantially promote GC differentiation, oocyte maturation, and ovulation in female mammals (Ikeda et al., 2005; Tahaei et al., 2011; Kawai et al., 2016). The retinol level in follicular fluid varies according to the development stage, with the highest concentration found in dominant follicles relative to small follicles (Brown et al., 2003). Interestingly, the retinol metabolism pathway was found to be enriched among the DMPs in the present study. The corresponding gene identified within this pathway was CYP2W1. Moreover, compared to the BLP group, the expression level of CYP2W1 was significantly upregulated in the LPP group. However, there were two m6A peaks on the mRNA of this gene, and the changes in their m6A modification levels were opposite, with one upregulated and the other downregulated. This fine-tuning of the m6A modification at these two sites could be important for the post-transcriptional regulation of CYP2W1 mRNA, which may regulate retinol metabolism in the ovary and promote ovary development. Thus, we infer that m6A might regulate ovary development through tight control of CYP2W1 expression in the retinol metabolism pathway.
Granulosa cells (GCs), one type of follicular somatic cell, are critical in follicle development. GC proliferation is a basic process required for normal follicular development (Wang et al., 2017). Moreover, impaired GC proliferation or GC apoptosis can lead to the selective atresia of certain ovarian follicles. For instance, a previous review (Johnson, 2003) indicated that the transition from a prehierarchical follicle to the preovulatory stage of development is associated with dramatically increased resistance to apoptosis and increased cell proliferation in cultured hen GCs. Preovulatory follicle viability is largely attributed to the acquired resistance of the GC layer to apoptosis. A recent study in pig ovarian somatic GCs further demonstrated that the p53 signaling pathway can inhibit the GC cell cycle and thereby result in follicle atresia (Li et al., 2021). Follicle atresia can occur at any stage during development, and it has been identified as a main reason for reduced egg production in chicken (He et al., 2022). Interestingly, the functional analyses of both DMPs and DEGs demonstrated that pathways related to cell apoptosis, such as "apoptosis" and the "p53 signaling pathway", were enriched. CASP18, a member of the caspase family encoding caspase-18, was present in the "apoptosis" pathway. Caspase family members play vital roles in the induction, transduction, and amplification of intracellular apoptotic signals (Fan et al., 2005). In addition, compared to the BLP group, both the m6A level and the expression level of CASP18 were significantly upregulated in the LPP group, indicating that cell apoptosis in the ovary was highly activated in the LPP group. Notably, in a similar study conducted with a commercial chicken breed, the apoptosis-related pathways were not significantly enriched (Fan et al., 2019). The high level of m6A methylation of this gene might stabilize the mRNA and increase its level, which would lead to the apoptosis of GCs and further promote follicle atresia in the LPP group. Therefore, we speculate that follicle atresia caused by GC apoptosis in the LPP group could be the cellular mechanism underlying the low egg production of the Wuhua yellow-feathered chicken in comparison to commercial hen breeds. As a typical Chinese local chicken breed, the Wuhua yellow-feathered chicken exhibits higher broodiness than commercial laying breeds, which have been under systematic and intensive selection for egg-laying performance. Broodiness can effectively cause shrinking of the fallopian tubes and ovaries, thus inhibiting follicle development, promoting the appearance of atretic follicles, and thereby reducing the granulosa layer and theca cells in the follicles (Liu et al., 2018a). Overall, our results indicate that the precise regulation of follicle development and follicle atresia in the ovary of the Wuhua yellow-feathered chicken is controlled by m6A methylation, and that the various apoptosis-related pathways are the potential molecular mechanisms underlying the low egg production performance and the development of broodiness.
In this study, we generated a dynamic m6A transcriptional map of the Wuhua yellow-feathered chicken ovary before and after sexual maturation. Although some interesting and valuable results were found, the limitations of the current study are clear. First, both the sample size at each stage and the number of development stages were small. Only two separate development stages were considered, which does not accurately depict the entire development process of the chicken ovary and leaves out a wealth of information on the molecular regulatory mechanisms underlying this biological process. In addition, this study is purely omics-based and lacks multiple levels of validation. In the future, we aim to collect ovary samples at multiple development stages to analyze the dynamic changes in both the m6A methylation levels and the expression levels of the candidate genes found in the current study. Moreover, to clarify the potential impact of methylation on the sexual maturation of the Wuhua yellow-feathered chicken, histological tests and gene function validation in cell models are also required.
Conclusion
We profiled the transcriptome-wide m6A methylation in the Wuhua yellow-feathered chicken ovary before and after sexual maturation. The precise expressional regulation by m6A of various genes related to follicle development and follicle atresia during maturation may result in the poor reproductive performance of the Wuhua yellow-feathered chicken. The findings provide a solid foundation for further investigation of the molecular mechanisms of ovary development and egg-laying performance in Chinese indigenous chicken breeds. The pathways and corresponding candidate genes reported in this study could be useful for molecular design breeding to improve egg production performance in Chinese local chicken breeds, and they might also be beneficial for the genetic resource protection of this valuable avian species.
FIGURE 2 Overview of the m6A methylation profile in the Wuhua yellow-feathered chicken ovary. (A) Distribution of m6A peaks along transcripts. (B) Proportion of m6A peaks falling along transcripts. (C) The top motifs enriched across m6A peaks identified from BLP and LPP.
FIGURE 3 Analysis of differential m6A methylation in the Wuhua yellow-feathered chicken ovary. (A) Heatmap of the sample correlation matrix based on sequencing data. (B) Volcano plot of differentially methylated peaks (DMPs). (C) GO analysis for genes with DMPs. (D) KEGG pathway analysis of genes with DMPs.
FIGURE 4 Analysis of differentially expressed genes (DEGs) in the Wuhua yellow-feathered chicken ovary. (A) PCA of the samples based on sequencing data. (B) Volcano plot of DEGs. (C) GO analysis for DEGs. (D) KEGG pathway analysis of DEGs.
FIGURE 5 Conjoint analysis of m6A-seq and RNA-seq data (gray dots indicate genes with no significant differences; colored dots indicate genes with significant differences).
TABLE 1
Statistics of the differential m6A peaks identified between BLP and LPP.
"Biology"
] |
Personality perception based on LinkedIn profiles
Purpose – Job-related social networking websites (e.g. LinkedIn) are often used in the recruitment process because the profiles contain valuable information such as education level and work experience. The purpose of this paper is to investigate whether people can accurately infer a profile owner's self-rated personality traits based on the profile on a job-related social networking site. Design/methodology/approach – In two studies, raters inferred personality traits (the Big Five and self-presentation) from LinkedIn profiles (total n = 275). The authors related those inferences to self-rated personality by the profile owner to test if the inferences were accurate. Findings – Using information gained from a LinkedIn profile allowed for better inferences of extraversion and self-presentation of the profile owner (r's of 0.24-0.29). Practical implications – When using a LinkedIn profile to estimate trait extraversion or self-presentation, one becomes 1.5 times as likely to actually select the person with higher trait extraversion compared to the person with lower trait extraversion. Originality/value – Although prior research tested whether profiles of social networking sites (such as Facebook) can be used to accurately infer self-rated personality, this was not yet tested for job-related social networking sites (such as LinkedIn). The results indicate that profiles at job-related social networks, in spite of containing only relatively standardized information, "leak" information about the owner's personality.
The fit between an employee and an organization is important for the employee's job satisfaction and turnover intentions (Kristof, 1996;O'Reilly et al., 1991). Because of this, a company is not only looking for someone with the right qualifications, but also for someone whose personality fits the job and organization. Personality assessment, both online and offline, has therefore become an important tool in personnel selection (Barrick and Mount, 1991;Dineen et al., 2002;Salgado, 1998). It is perhaps not surprising that many companies use personality tests in the screening of job candidates. Heller (2005), for example, estimates that 30 percent of American companies use personality assessments. However, extensively testing all applicants can be expensive. In cases when personality is a key criterion for selection but no resources are available to test all applicants, a pre-selection that allows one to only test the most promising candidates would increase efficiency. Companies already use application letters and résumés to infer key aspects of the applicant (including personality) to make better choices in the pre-selection phase (Brown and Campion, 1994). Furthermore, research confirms that this works; application letters and résumés contain valid cues to infer certain personality traits (Burns et al., 2014;Cole et al., 2003a, b).
Similar to inferring personality from an application letter or résumé, one can quite accurately infer someone's personality based on profiles at social networking sites such as Facebook (Back et al., 2010; Tskhay and Rule, 2014), or even predict job performance from such profiles (Kluemper and Rosen, 2009; Kluemper et al., 2012; cf. Van Iddekinge et al., 2013). Profile viewers' estimates of the personality traits of profile owners correlate between 0.22 and 0.41 with the actual personality (a combination of self- and other-ratings on a given trait) of the profile owners across the traits of extraversion, agreeableness, openness to experience, and conscientiousness (Back et al., 2010). For example, cues in the profile picture (clothing style, a rebellious pose, etc.) or the number of groups one is a member of can help to predict personality (Stopfer et al., 2014; Gosling et al., 2011). Such online-based personality predictions are more closely related to actual personality than to the ideal personality of the profile owner (a self-rating of how the profile owner would ideally want to score on a given trait; Back et al., 2010). This suggests that profiles provide cues that allow others to estimate the actual personality of a profile owner, rather than how (s)he wants to appear.
In short, personality inferences based on social network profiles (such as Facebook) have been found to be possible. However, job-related social network profiles (such as LinkedIn) differ from social network profiles in a number of ways that makes it necessary to test whether personality can also be inferred from those profiles.
Why study personality perception based on job-related social networking sites?
An obvious reason why studying personality inferences based on job-related social networking sites is important, is that these sites are very popular with recruiters, more so than typical social networking sites such as Facebook (Nikolaou, 2014; Roulin and Bangerter, 2013). LinkedIn is for example used by 92 percent of recruiters (Jobvite, 2012). It is perhaps no surprise that profiles on job-related social networks are used, as they obviously contain relevant information such as work experience. Indeed, Roulin and Bangerter (2013) found that both recruiters and applicants think that profiles at job-related social networking sites are good indicators of person-job fit. Furthermore, the information on job-related social networking sites (such as profiles on LinkedIn) has been found to be more honest than paper résumés (Guillory and Hancock, 2012). Guillory and Hancock indicate that it seems that the openness of the internet forces people to be accurate and not inflate one's résumé.
Another reason to investigate job-related social networks instead of general social networks is that very few general social network profiles are open to the general public. For example, 80 percent of Americans indicated that their Facebook profiles are set to private and can only be seen by their friends (Madden, 2012). Even if one could predict job performance based on someone's Facebook profile (Kluemper and Rosen, 2009; Kluemper et al., 2012), recruiters cannot use Facebook for pre-screening candidates if only 20 percent of profiles are accessible. In contrast, people see their profile at a job-related social networking site as an online résumé that they are willing to share with others (including recruiters) for job-related purposes (Roulin, 2014). This makes it possible to use job-related social networks as a tool for general pre-screening purposes.
It is clear that recruiters regularly use job-related social networking sites such as LinkedIn in their screening of candidates. It is also clear that people infer personality traits based on profiles of social networking sites, and that this information subsequently influences evaluations of whether someone is suited for a job (Bohnert and Ross, 2010). However, we do not know how accurate personality inferences from profiles at job-related social networks are. Although the work of Nikolaou (2014) demonstrates that HR professionals prefer using job-related social networking sites over the more social ones, there are at least three factors that threaten their potential in screening for personality traits.
First, people are likely to post information more deliberately on job-related social networking sites like LinkedIn than they do on social networking sites like Facebook. On social networking sites, someone might for example post information about using excessive amounts of alcohol with friends, which potential employers could interpret in a negative light in the selection process (Roulin, 2014). For job-related social networking sites, it seems likely that profile owners are aware that colleagues, customers, or potential employers will view their profile, which is why they more carefully consider what they put online (Roulin and Levashina, 2016). Self-presentation concerns might be particularly salient for job-related networking sites, which could restrict the range of possible expressions people make, thereby making it more difficult to predict personality (Back et al., 2010).
Second, social networking sites are typically more dynamic than job-related social networking sites. Theories on personality indicate that personality traits leave behavioral residue (Gosling et al., 2002), as individuals who score high on a certain personality trait are more likely to engage in activities indicative of those personality traits. Furthermore, even if the profile owner does not share certain activities, friends might do so (Stoughton et al., 2013). These traces of past behavior are more likely in the more dynamic profiles (that include interactions with others) at social networks, than at the typically more static profiles at job-related social networks that function as an online résumé.
Third, the more static nature of job-related social networking sites also reflects a difference in how much information is typically available. A profile on a social networking site (such as Facebook) can, in theory, be endless as posts can be added at will and this history of posts remains available. For LinkedIn, the amount of text is limited to the categories provided by the profile. Indeed, Tskhay and Rule (2014) conclude in their meta-analysis on inferring personality from social networking sites that more text makes personality inference more accurate (especially for a trait like extraversion, see John and Srivastava, 1999). Because more information allows for more accurate assessment of personality (Funder, 1995), information on general social networking sites might be more predictive of actual personality than information on job-related sites.
To summarize, although job-related social networks are often used in the selection process, little empirical research actually exists on them (McFarland and Ployhart, 2015). One reason for this set of studies is therefore to bridge the gap between research on personality inferences from online presence (that typically investigates social networks such as Facebook) and the recruitment practice (that typically uses job-related social networks such as LinkedIn). Furthermore, there are some reasons to expect less accuracy when inferring personality from profiles at job-related social networks than from social networks, so testing this is important.
The current studies
In two studies, we gathered samples of LinkedIn profile owners who filled out a personality measure and consented to have raters infer their personality from their profile. Personality impressions were based on the Big Five (Costa and McCrae, 1992), which are: (1) conscientiousness: people who score high on this trait are well-organized and goal-directed; (2) emotional stability: people who score high on this trait are even-tempered, calm, and not easily stressed out; (3) extraversion: people who score high on this trait are sociable, enthusiastic, and emotionally expressive; (4) openness to experience: people who score high on this trait are open to new experiences, creative, and unconventional; and (5) agreeableness: people who score high on this trait are sympathetic and warm persons, who prefer to avoid confrontation.
We chose these traits because they are generally considered the core dimensions of personality (Costa and McCrae, 1992), are typically used in other research on personality impressions based on social networking sites (Tskhay and Rule, 2014), and are important predictors of various aspects of employee performance (Barrick and Mount, 2005). In Study 2, we extended our analysis to include trait self-presentation, which reflects the eagerness and self-confidence with which one presents oneself (Van der Linden et al., 2011).
Study 1
Method
LinkedIn profile owners were recruited via online posts and asked to participate voluntarily in a study on personality perception. In return for participating, they received a summary of their scores on the personality traits that we measured. The respondents were separated into a student sample (62 current full-time students, 35 females, M age = 23) and a working sample. Profile owners rated their own personality on the Ten-Item Personality Inventory (TIPI; Gosling et al., 2003; for the Dutch translation, see Denissen et al., 2008). This is a short, ten-item measure that sacrifices reliability (i.e. two items covering a single underlying dimension) for relative scale breadth (i.e. items with different content that potentially cover at least two facets of the underlying dimension). Reliability of these measures was low (r: conscientiousness = 0.49, emotional stability = 0.49, extraversion = 0.52, openness to experience = 0.35, agreeableness = 0.14). This is similar to the values reported in other studies (e.g. Denissen et al., 2008; Gosling et al., 2003). Gosling et al. (2003) explain that the TIPI was created as a short measure with moderate construct breadth, which has the unavoidable consequence of lower reliability. More specifically, given that traits like extraversion are quite broad, if a researcher can only use two questions to measure this trait, the overlap between the two questions should not be too substantial. With too much overlap, it would not be possible to capture the entire construct. For example, trait extraversion could have been measured very reliably with items like "I talk a lot" and "I am talkative" (creating a high Cronbach's α), but this would not capture the breadth of the construct extraversion, as it also includes traits like enthusiasm. This is why, for scales with only a few items, αs can actually be misleading when evaluating their usefulness (Kline, 2000; Wood and Hampson, 2005). Most importantly, studies showed that the TIPI has content validity: the Big Five traits as measured by the TIPI predict outcomes that they theoretically should, and the test-retest reliability is good (Denissen et al., 2008; Gosling et al., 2003). Table I contains the means and standard deviations of self-rated personality on these traits.
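The reliabilities above are inter-item correlations of two-item scales, and the Spearman-Brown prophecy formula (not discussed in the original text, but standard psychometrics) shows how such an r maps onto the reliability of a composite. A minimal sketch, using the reported inter-item r for extraversion as input:

```python
# Sketch: two-item scale reliability via the Spearman-Brown prophecy formula,
# reliability_k = k * r / (1 + (k - 1) * r), where r is the single-item
# reliability estimate and k the lengthening factor. Illustrative only.
def spearman_brown(r, k):
    """Predicted reliability when a scale is lengthened by factor k."""
    return k * r / (1 + (k - 1) * r)

r_extraversion = 0.52                      # inter-item r reported above
print(spearman_brown(r_extraversion, 2))   # 2-item composite reliability ~0.68
```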
Ten psychology students (six females, four males) rated the profiles in our laboratory in return for course credit. Five students rated all profiles from the student sample; the other five rated those from the working sample. Raters saw a profile on one half of the computer screen and indicated their estimate of each Big Five personality trait via a survey program on the other half of the screen. The order of the profiles was randomized for each rater. Traits were scored on a scale from 0 (extremely low score on that trait) to 100 (extremely high score on that trait), with 50 indicating an average score on that trait. A slider scale was used that always started at the midpoint of the scale. The exact description of each trait given to raters was based on the description of the traits in the TIPI. Raters were not financially incentivized in this study, but they knew that they would learn whether they had succeeded in correctly predicting the personality traits (they received feedback on whether their impressions correlated with the self-rated personality of the profile owners, and how their correlations compared to those of other raters). The raters' mean ratings (including standard deviations) for each trait are presented in Table I.

Table I also provides the interrater reliability of the five raters for each personality trait per sample. The ICCs (intraclass correlation coefficients, tested with a two-way random model with absolute agreement for average measures; see McGraw and Wong, 1996) were all satisfactory to good (for reference values, see Landis and Koch, 1977); the computation is sketched below. For each profile, raters' estimates of a trait were combined into an average. Table II contains the correlations between the raters' average personality estimates and the self-rated personality of the profile owners. We found that the raters' estimates of extraversion were significantly related to self-rated extraversion by the profile owner in both samples (rs of 0.29). In the working sample, but not the student sample, the raters' perceived openness and agreeableness correlated with self-rated personality by the profile owner. As we only found this relationship for openness and agreeableness in one of our two samples, a main goal of Study 2 was to replicate our study to test whether these traits could reliably be inferred from a profile in a new sample.

Table II. Correlations between raters' inferences of traits and self-rated personality by profile owners in Studies 1 and 2 (accuracy). Note: the limited profile refers to the same profile as the full profile, but with name and picture removed. **p < 0.01; ***p < 0.001.
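For readers who want the ICC computation spelled out, the sketch below computes McGraw and Wong's ICC(A,k) (two-way random effects, absolute agreement, average measures) from the ANOVA mean squares of a targets-by-raters matrix. The toy ratings are invented; this is an illustration of the formula, not the study's data.

```python
# Sketch: ICC for a two-way random-effects model, absolute agreement,
# average measures (McGraw & Wong's ICC(A,k)), computed from the ANOVA mean
# squares of an n-targets x k-raters matrix. Ratings below are invented.
import numpy as np

def icc_a_k(x):
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # targets
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    resid = (x - x.mean(axis=1, keepdims=True)
               - x.mean(axis=0, keepdims=True) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

ratings = np.array([   # 4 profiles rated by 3 raters (toy data, 0-100 scale)
    [60, 55, 58],
    [40, 45, 42],
    [75, 70, 72],
    [50, 52, 49],
], dtype=float)
print(f"ICC(A,k) = {icc_a_k(ratings):.2f}")
```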
Study 2
In real-life situations in which a recruiter wants to prescreen candidates, candidates typically have a relatively similar background when they are applying for the same job. The main reason is that people with similar personality traits find similar jobs or organizations interesting (Holland, 1997). Furthermore, personality predicts which education people choose (Humburg, 2017), and having followed certain educational paths makes certain career paths more likely. This homogeneity in the personality traits of people applying to a job or organization may make it more difficult to accurately infer personality. Study 2 therefore used a sample from within one organization in order to replicate our initial study in a more homogeneous sample. This choice should make it more difficult to replicate the results of Study 1.
To be able to reach this more homogeneous sample we had to use a different Big Five questionnaire (the G5-R; Van der Linden et al., 2011), as this was the questionnaire typically used by the company we recruited our participants from. In addition to the Big Five traits used in Study 1, the G5-R also includes a measure of trait self-presentation. This trait captures the tendency to be dominant, energetic, achievement-oriented, and self-confident, and is defined as the eagerness and ambition with which one tries to present oneself. Both ambition (Huang et al., 2013) and having a proactive personality (Crant, 1995) relate positively to performance at work, so trait self-presentation is likely to also positively affect job performance and could be a valuable trait to predict when pre-screening candidates.
Two other changes were made as well. First, we incentivized raters to be as accurate as possible. Second, we also had one set of raters infer personality based on the profiles from which we had removed the name and picture of the profile owner. This allowed us to test whether personality inferences accurately predicted self-rated personality without information on gender and outward appearance.
Method
Employees of a large Dutch human resources development company (involved in consultancy, assessments, and training) were asked to participate in a study on personality perceptions (97 of the approximately 250 employees participated; 46 males/51 females; age was indicated in categories, with 23 percent being 35 or younger, 35 percent being 36-45, 32 percent being 46-55, and 10 percent being 56+). If they consented, they filled out the Big Five on the abbreviated 36-item G5-R (Van der Linden et al., 2011). The reliability was satisfactory for all traits (α: conscientiousness = 0.70, emotional stability = 0.79, extraversion = 0.78, openness to experience = 0.72, agreeableness = 0.69, self-presentation = 0.84). Descriptive statistics are presented in Table III. In total, 20 psychology students (11 females, nine males; M age = 20.60, SD = 2.21) were recruited to rate the profiles in return for course credits. Ten students rated all profiles in full, the other ten rated all profiles without the picture and name of the profile owner (and thus effectively also without gender information). The raters received printed color versions of the profiles, in a different order for each rater, and rated each profile on each trait on a scale from −4 to +4 with the extremes of each trait on the endpoints. For example, for extraversion, −4 had the label "very introverted" and +4 was labeled "very extraverted."
The slider scale (which recorded responses to one decimal) always started at the midpoint of the scale. The description of the traits was again based on the personality measure itself, using the description associated with the scale. This time, we also handed out a €50 bonus to each of the two best-performing raters, to provide an extra incentive to work on the task with full attention. Furthermore, raters knew we would tell them how well they had done compared to the other raters. The raters' mean ratings (including standard deviations) for each trait are presented in Table III. Table III also provides the interrater reliability (ICC) of the raters for each personality trait, presented separately for the raters of the full profiles and the raters who rated the profiles without the picture and name. The consistency amongst raters was satisfactory to good, in some cases even excellent (for reference values, see Landis and Koch, 1977). Table II contains the correlations between the raters' averaged personality estimates of a trait and the profile owners' self-rated personality.
Results and discussion
For extraversion, we again found that the average score of the raters correlated with the self-rated extraversion of the profile owner; no such correlation emerged for the other Big Five traits. Even though all profile owners were working for the same company and might thus have been more similar to each other (at least in their type of work), we still replicated the finding from Study 1 that self-rated extraversion can be inferred from someone's LinkedIn profile. The other Big Five traits could not be reliably predicted by raters in Study 2. We therefore think it is unlikely that these traits can be accurately inferred from LinkedIn profiles.
In this study, we also included trait self-presentation, which reflects the eagerness and ambition to present oneself to others. The results indicate that for self-presentation there was also a correlation between profile owners' self-reports and raters' inferences. This suggests that people can pick up this trait somewhat accurately based on a LinkedIn profile (the effect size being similar to that of trait extraversion).
Finally, as can be seen in Table III, the accuracy of raters did not seem to depend on whether or not a picture and name were present in the profile. Even raters who rated the profile without this information (and thus had no information about gender or outward appearance), were similarly accurate in their predictions for self-rated extraversion and self-presentation.
To conclude, our main finding is that the raters' perception of extraversion correlated with self-rated extraversion (in both Studies 1 and 2). Whether this correlation is strong enough to use it in pre-screening candidates is an important question. To get better insight into the feasibility of using inferred extraversion for pre-screening candidates (if one wanted to do so), we tested whether raters can accurately identify the more extraverted individual from each possible pair of profiles. With 97 profiles, there are 4,656 possible comparisons between two profiles. From this set, we selected the pairs in which the profile owners differed in their self-rated extraversion (4,244 pairs). The profile rated as higher in extraversion by raters [2] was also the more extraverted person (based on self-ratings of the profile owner) 60.2 percent of the time. This implies that the odds of selecting the person with higher self-rated extraversion from a pair increase to 1.51, compared to a baseline of random guessing. This seems like a sizeable effect that might help in pre-screening candidates if one has a large number of candidates and only limited resources to find extraverted candidates. When looking for introverted or extraverted candidates, having a set of raters look at LinkedIn profiles and estimate scores on trait extraversion might help in pre-screening (and the same holds for trait self-presentation).
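A minimal sketch (our own illustration, not the study's materials) of this pairwise analysis, with ties in the raters' scores counted as half correct as described in note 2:

    from itertools import combinations

    def pairwise_hit_rate(self_rated, rater_avg):
        hits = total = 0.0
        for i, j in combinations(range(len(self_rated)), 2):
            if self_rated[i] == self_rated[j]:
                continue                      # skip pairs with equal self-ratings
            total += 1
            if rater_avg[i] == rater_avg[j]:
                hits += 0.5                   # tie in raters' scores: half correct
            elif (rater_avg[i] > rater_avg[j]) == (self_rated[i] > self_rated[j]):
                hits += 1
        return hits / total                   # e.g. 0.602 gives odds 0.602/0.398 = 1.51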
General discussion
The basic question that started this research was whether personality traits can be predicted based on someone's profile on a job-related social networking website (i.e. LinkedIn). Earlier research on social network profiles (such as Facebook; Back et al., 2010) found that traits can be predicted, but it was not clear if this was also possible with job-related social networks. We found that LinkedIn profiles can be used to predict self-rated extraversion (Studies 1 and 2) and self-presentation (Study 2) of profile owners to some degree. Agreeableness and openness to experience were successfully predicted in the Study 1 working sample, but this finding did not replicate in the Study 1 student sample or in Study 2. In general, the Big Five traits besides extraversion could not accurately be predicted from job-related social networking sites.
Practical usefulness
Earlier research found that extraversion is related to the job performance of managers and sales executives (Barrick and Mount, 1991), to affective organizational commitment (having an emotional attachment to the company you work for; Erdheim et al., 2006), and to well-being (Ozer and Benet-Martinez, 2006). Self-presentation is also likely to be of importance for organizations, as facets of trait self-presentation such as ambition (Huang et al., 2013) and having a proactive attitude (Crant, 1995) are important predictors of job performance. As impressions of extraversion and self-presentation based on job-related social networking sites appeared to be (somewhat) accurate, profiles on these sites might therefore be used to pre-screen job applicants. The chance of selecting the person higher in extraversion from a pair of candidates is 60.2 percent when using a LinkedIn profile, which means that one is 1.5 times as likely to select the person with the higher (self-rated) trait extraversion as the person with the lower trait extraversion.
Note that our findings indicate that our raters could not reliably infer the other Big Five traits (agreeableness, conscientiousness, emotional stability, and openness to experience) based on a LinkedIn profile. This is also an important insight, as many recruiters use profiles at job-related social networking sites to screen for desired personality traits (Roulin and Bangerter, 2013), which might not be effective for many traits.
Limitations and future research
It may be possible to predict the Big Five traits beyond extraversion based on LinkedIn profiles, despite the relatively low accuracy found in the current study. In our studies we tested whether raters could infer personality from a LinkedIn profile, not how they did so.
Although we did not ask raters what type of cues they used, we conducted an exploratory analysis of the predictive value of profile cues from Study 2. We coded some possible cues on the LinkedIn profiles and found some relationships between profile cues (e.g. aspects of the picture or group membership) and self-rated personality (see the appendix that can be found via the link provided in footnote 1). For example, the exploratory analysis suggests that more conscientious people were more likely to include a picture and had fewer connections to other profiles. However, raters did not seem to pick up on this (based on the lack of a correlation of these cues with raters' perceptions of conscientiousness). At the same time, raters seemed to have assumed that those who wore formal clothes were more conscientious, which was in fact not true. Future research could specifically test which cues "leak" information about personality, and whether raters can be trained to use more predictive cues to assess a broader range of personality traits based on job-related social networking profiles.
In our study, the personality of the profile owner was assessed through self-ratings. We realize that self-rated personality is only one way to assess actual personality, with ratings made by others and behavioral observations being other possibilities (McCrae and Costa, 1987). Self-ratings are not always fully accurate, but they overlap considerably with other possible measures. Future research could test whether the inferred personality of profile owners also relates to, for example, co-workers' perceptions of the personality of the profile owner.
Another possible limitation is that the raters were untrained psychology students who had followed a course on personality psychology, but had no experience in recruitment and personnel selection. It would be interesting to see whether experienced HR staff or recruitment specialists would be more accurate in their inferences. These experts who administer personality assessments have had the opportunity to learn: when they meet a candidate, they have an initial impression of the candidate's personality, and the outcomes of a personality assessment allow them to learn how accurate their initial impression was. Given that feedback on one's past performance allows improvement (Balcazar et al., 1985), these experts might become better over time at estimating personality of a candidate. However, whether this also holds for inferences based on social media profiles remains an open question.
Conclusion
Earlier research found that people can form accurate personality impressions based on social network profiles, but it was unclear whether this finding extended to profiles on job-related social networking sites (e.g. LinkedIn). This is important because job-related social networks are primarily used in the recruitment process. They contain more relevant information, they are more accessible to recruiters, and using them is seen as more ethical. Using job-related social network profiles for pre-screening might therefore circumvent some problems associated with more purely social networking sites (e.g. Facebook, see Davison et al., 2011;Brown and Vaughn, 2011). Profiles on job-related social networks are, however, created more deliberately and include very little interaction with other people. Still, our research finds that the traits extraversion and self-presentation can be inferred from profiles at job-related social networks: inferences based on profiles at LinkedIn correlated with self-rated scores on those traits. This implies that information about important personality traits (extraversion and self-presentation) leaks through the deliberately and carefully created profiles on job-related social networks.
Notes
1. An online appendix with exploratory analyses, study materials, and an anonymous version of the data of Study 1 can be found at the Open Science Framework, http://doi.org/10.17605/OSF.IO/6CV75
2. For 0.4 percent of cases, raters had predicted the exact same extraversion score for both members of a pair of profile owners. In those cases, we counted half of them as correct inferences (assuming that, if forced to choose, raters would guess correctly at chance level for those 0.4 percent of cases).
"Psychology",
"Computer Science"
] |
Circular chromatic number of signed graphs
A signed graph is a pair $(G, \sigma)$, where $G$ is a graph and $\sigma: E(G) \to \{+, -\}$ is a signature which assigns to each edge of $G$ a sign. Various notions of coloring of signed graphs have been studied. In this paper, we extend circular coloring of graphs to signed graphs. Given a signed graph $(G, \sigma)$, a circular $r$-coloring of $(G, \sigma)$ is an assignment $\psi$ of points of a circle of circumference $r$ to the vertices of $G$ such that for every edge $e=uv$ of $G$, if $\sigma(e)=+$, then $\psi(u)$ and $\psi(v)$ have distance at least $1$, and if $\sigma(e)=-$, then $\psi(v)$ and the antipodal of $\psi(u)$ have distance at least $1$. The circular chromatic number $\chi_c(G, \sigma)$ of a signed graph $(G, \sigma)$ is the infimum of those $r$ for which $(G, \sigma)$ admits a circular $r$-coloring. For a graph $G$, we define the signed circular chromatic number of $G$ to be $\max\{\chi_c(G, \sigma): \sigma \text{ is a signature of $G$}\}$. We study basic properties of circular coloring of signed graphs and develop tools for calculating $\chi_c(G, \sigma)$. We explore the relation between the circular chromatic number and the signed circular chromatic number of graphs, and present bounds for the signed circular chromatic number of some families of graphs. In particular, we determine the supremum of the signed circular chromatic number of $k$-chromatic graphs of large girth, of simple bipartite planar graphs, $d$-degenerate graphs, simple outerplanar graphs and series-parallel graphs. We construct a signed planar simple graph whose circular chromatic number is $4+\frac{2}{3}$. This is based on, and improves, a signed graph built by Kardos and Narboni as a counterexample to a conjecture of M\'{a}\v{c}ajov\'{a}, Raspaud, and \v{S}koviera.
Introduction
Assume r ≥ 1 is a real number. We denote by C_r the circle of circumference r, obtained from the interval [0, r] by identifying 0 and r. Points in C_r are real numbers from [0, r). For two points x, y on C_r, the distance between x and y on C_r, denoted by d_(mod r)(x, y), is the length of the shorter arc of C_r connecting x and y. Given two real numbers a and b, the interval [a, b] on C_r is a closed interval of C_r in the clockwise orientation of the circle whose first point is a (mod r) and whose end point is b (mod r). A circular r-coloring may equivalently be described by assigning to each vertex v an interval ϕ(v) of length 1 on C_r: for a positive edge uv, the intervals ϕ(u) and ϕ(v) do not intersect, and for a negative edge uv, the interval ϕ(u) and the antipodal of the interval ϕ(v) do not intersect.
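Since this circle distance is used throughout the paper, the following small sketch (ours; the function names are assumptions, not the paper's notation) fixes the convention in code.

    def d_mod_r(x: float, y: float, r: float) -> float:
        """Length of the shorter arc of C_r between points x and y."""
        a = abs(x - y) % r
        return min(a, r - a)

    def antipodal(x: float, r: float) -> float:
        """The point of C_r diametrically opposite to x."""
        return (x + r / 2) % r

    # Example: on C_3, points 0.5 and 2.7 are at distance min(2.2, 0.8) = 0.8.
    assert abs(d_mod_r(0.5, 2.7, 3.0) - 0.8) < 1e-9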
Observe that if (G, σ) has no edge, then χ_c(G, σ) = 1, and if (G, σ) has an edge, either positive or negative, then (G, σ) is not circular r-colorable for r < 2. As graphs with no edges are not interesting, in the remainder of the paper we always assume that r ≥ 2.
It follows from the definition that for any graph G, χ_c(G, +) = χ_c(G). So the circular chromatic number of a signed graph is indeed a generalization of the circular chromatic number of a graph. The circular chromatic number of a graph is a refinement of its chromatic number: for any positive integer k, a graph G is circular k-colorable if and only if G is k-colorable. The same is also true for the chromatic number of signed graphs defined based on the notion of 0-free coloring defined by Zaslavsky [22].

Proposition. A signed graph (G, σ) admits a 0-free 2k-coloring if and only if it admits a circular 2k-coloring.

Proof. Assume f : V(G) → {±1, ±2, . . . , ±k} is any mapping. Let g(v) = f(v) − 1 if f(v) > 0, and g(v) = |f(v)| − 1 + k if f(v) < 0. It is straightforward to verify that g is a circular 2k-coloring of (G, σ) if and only if f is a 0-free 2k-coloring of (G, σ).
The number of colors used in a 0-free coloring is always even. There have been several attempts to introduce an analogous coloring which uses an odd number of colors. The term 0-free indeed distinguishes this coloring from a similar coloring where 0 is added to the set of colors and the set of vertices colored with 0 induces an independent set. To be precise, a (2k + 1)-coloring of a signed graph uses colors {0, ±1, . . . , ±k}, and the constraint is still the same: for any edge e = uv of G, f(u) ≠ σ(e)f(v). In a (2k + 1)-coloring of a signed graph, the color 0 is different from the other colors. The antipodal of 0 is 0 itself. The set of vertices of color 0 is an independent set of G, while for every other color i, vertices colored by i may be joined by negative edges. In some sense, circular coloring of signed graphs provides a more natural generalization of 0-free coloring to colorings of signed graphs with an odd number of colors, where the colors are symmetric.
In this paper, we shall study basic properties of circular coloring of signed graphs. We shall explore the relation between the circular chromatic number and the signed circular chromatic number of graphs, and prove that for any graph G, χ_c(G) ≤ χ^s_c(G) ≤ 2χ_c(G). We prove that the upper bound is tight even when restricted to graphs of arbitrarily large girth or to bipartite planar graphs. Furthermore, we construct a signed planar simple graph whose circular chromatic number is 4 + 2/3. Máčajová, Raspaud, and Škoviera [13] conjectured that every signed planar simple graph is 4-colorable. By Proposition 2.10, this is equivalent to saying that χ^s_c(G) ≤ 4 for every planar graph G. Kardos and Narboni [9] refuted this conjecture by constructing a non-4-colorable signed planar graph. Our construction improves on the example of Kardos and Narboni. Thus we show that the supremum of the signed circular chromatic number of planar graphs is between 4 + 2/3 and 6. The exact value remains an open problem.
Equivalent definitions
There are several equivalent definitions of the circular chromatic number of graphs. Some of these definitions extend naturally to signed graphs.
Note that for s, t ∈ [0, r), d_(mod r)(s, t) = min{|s − t|, r − |s − t|}. So a circular r-coloring of a signed graph (G, σ) can equivalently be defined as a mapping f : V(G) → [0, r) such that for any positive edge uv, 1 ≤ |f(u) − f(v)| ≤ r − 1, and for any negative edge uv, either |f(u) − f(v)| ≤ r/2 − 1 or |f(u) − f(v)| ≥ r/2 + 1. This formulation is sometimes more convenient.
If r is a rational number, then in a circular r-coloring of a signed graph (G, σ), it suffices to use a finite set of colors from the interval [0, r). We may assume that r = p/q, where p is even and, subject to this condition, p/q is in its simplest form. For i ∈ {0, 1, . . . , p − 1}, let I_i be the half-open interval [i/q, (i+1)/q) of [0, r). Then the union of the I_i for i = 0, . . . , p − 1 is a partition of [0, r). Assume f : V(G) → [0, r) is a circular r-coloring of a signed graph (G, σ). For each vertex v of G, let g(v) = i/q if and only if f(v) ∈ I_i. If e = uv is a positive edge, then 1 ≤ |f(u) − f(v)| ≤ p/q − 1. This implies that 1 − 1/q < |g(u) − g(v)| < p/q − 1 + 1/q. Since q|g(u) − g(v)| is an integer, we conclude that 1 ≤ |g(u) − g(v)| ≤ p/q − 1. If e = uv is a negative edge, then either |g(u) − g(v)| < p/(2q) − 1 + 1/q or |g(u) − g(v)| > p/(2q) + 1 − 1/q. Since p is even, p/2 is an integer. As q|g(u) − g(v)| is an integer, we conclude that either |g(u) − g(v)| ≤ p/(2q) − 1 or |g(u) − g(v)| ≥ p/(2q) + 1. It is crucial that p be an even integer, for otherwise p/2 is not an integer and we cannot draw this conclusion. Indeed, if p is odd, then the set {0, 1/q, . . . , (p−1)/q} is not closed under taking antipodal points.
The above observation leads to the following equivalent definition of the circular chromatic number of signed graphs. For i, j ∈ {0, 1, . . . , p − 1}, the modulo-p distance between i and j is d_(mod p)(i, j) = min{|i − j|, p − |i − j|}. Given an even integer p, the antipodal color of x ∈ {0, 1, . . . , p − 1} is x̄ = x + p/2 (mod p).
Definition 2.2. Assume p is an even integer and q ≤ p/2 is a positive integer. A (p, q)-coloring of a signed graph (G, σ) is a mapping f : V(G) → {0, 1, . . . , p − 1} such that for any positive edge uv, d_(mod p)(f(u), f(v)) ≥ q, and for any negative edge uv, d_(mod p)(f(u), f̄(v)) ≥ q. The circular chromatic number of (G, σ) is χ_c(G, σ) = min{p/q : p is an even integer and (G, σ) has a (p, q)-coloring}.
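As a quick illustration of Definition 2.2, here is a hedged sketch of a (p, q)-coloring checker (our own code; the encoding of signed graphs as (u, v, sign) triples is our assumption). The example verifies an (8, 3)-coloring of the negative 4-cycle C_{-4}, matching χ_c(C_{-4}) = 4k/(2k−1) = 8/3 for k = 2.

    def d_mod_p(i: int, j: int, p: int) -> int:
        return min((i - j) % p, (j - i) % p)

    def is_pq_coloring(edges, f, p: int, q: int) -> bool:
        assert p % 2 == 0 and 2 * q <= p
        for u, v, sign in edges:
            if sign == +1:                                  # positive edge: colors far apart
                if d_mod_p(f[u], f[v], p) < q:
                    return False
            else:                                           # negative edge: far from antipodal
                if d_mod_p(f[u], (f[v] + p // 2) % p, p) < q:
                    return False
        return True

    # C_{-4}: three positive edges and one negative edge.
    edges = [(0, 1, +1), (1, 2, +1), (2, 3, +1), (3, 0, -1)]
    print(is_pq_coloring(edges, {0: 0, 1: 3, 2: 6, 3: 1}, p=8, q=3))  # True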
A homomorphism of a graph G to a graph H is a mapping f : V(G) → V(H) such that for every edge uv of G, f(u)f(v) is an edge of H. It is well-known and easy to see that a graph G is k-colorable if and only if G admits a homomorphism to K_k, the complete graph on k vertices. Similarly, the circular chromatic number of graphs can also be defined through graph homomorphisms. For integers p ≥ 2q > 0, the circular clique K_{p;q} has vertex set [p] = {0, 1, . . . , p − 1} and edge set {ij : q ≤ |i − j| ≤ p − q}. Then a circular p/q-coloring of a graph G is equivalent to a homomorphism of G to K_{p;q}. The circular chromatic number of signed graphs can also be defined through homomorphisms.
Definition 2.3. An edge-sign preserving homomorphism of a signed graph (G, σ) to a signed graph (H, π) is a mapping f : V(G) → V(H) such that for every positive (respectively, negative) edge uv of (G, σ), f(u)f(v) is a positive (respectively, negative) edge of (H, π). We write (G, σ) → (H, π) (edge-sign preserving) if such a homomorphism exists.

For integers p ≥ 2q > 0 such that p is even, the signed circular clique K^s_{p;q} has vertex set [p] = {0, 1, . . . , p − 1}, in which ij is a positive edge if and only if q ≤ |i − j| ≤ p − q, and ij is a negative edge if and only if either |i − j| ≤ p/2 − q or |i − j| ≥ p/2 + q. If q = 1, then K^s_{p;1} is also written as K^s_p. Note that in K^s_{p;q}, each vertex i is incident to a negative loop. When p/q ≥ 4, there are parallel edges of different signs. Furthermore, the subgraph induced by all the positive edges of K^s_{p;q} is the circular clique K_{p;q}, which is known to have circular chromatic number p/q; thus χ_c(K^s_{p;q}) = p/q. The following lemma gives another equivalent definition of the circular chromatic number of a signed graph.

Lemma 2.4. Assume (G, σ) is a signed graph, p is a positive even integer, q is a positive integer and p ≥ 2q. Then (G, σ) has a (p, q)-coloring if and only if (G, σ) admits an edge-sign preserving homomorphism to K^s_{p;q}. Hence the circular chromatic number of (G, σ) is χ_c(G, σ) = min{p/q : p is an even integer and (G, σ) admits an edge-sign preserving homomorphism to K^s_{p;q}}.

As the homomorphism relation is transitive, we have the following lemma.
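The finite target K^s_{p;q} is easy to materialize as explicit edge sets; the sketch below (our own code, not the paper's) does so, and running it for K^s_{6;2} exhibits the negative loop at every vertex.

    def signed_circular_clique(p: int, q: int):
        """Positive and negative edge sets of K^s_{p;q} (pairs (i, j) with i <= j)."""
        assert p % 2 == 0 and 2 * q <= p
        positive, negative = set(), set()
        for i in range(p):
            for j in range(i, p):
                d = j - i
                if q <= d <= p - q:
                    positive.add((i, j))
                if d <= p // 2 - q or d >= p // 2 + q:
                    negative.add((i, j))   # d = 0 gives the negative loop at i
        return positive, negative

    pos, neg = signed_circular_clique(6, 2)   # K^s_{6;2}
    print(len(pos), len(neg))                 # 9 positive edges, 12 negative (6 are loops)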
For a real number r ≥ 2, we can also define K^s_r to be the infinite signed graph with vertex set [0, r), in which xy is a positive edge if 1 ≤ |x − y| ≤ r − 1, and xy is a negative edge if either |x − y| ≤ r/2 − 1 or |x − y| ≥ r/2 + 1. It follows from the definition that a signed graph (G, σ) is circular r-colorable if and only if (G, σ) admits an edge-sign preserving homomorphism to K^s_r. If r = p/q is rational and p is an even integer, then it follows from the definition that K^s_{p;q} is a subgraph of K^s_r. On the other hand, it follows from Lemma 2.4 that K^s_r admits an edge-sign preserving homomorphism to K^s_{p;q}.

Assume (G, σ) is a signed graph. A switching at a vertex v is to switch the signs of the edges incident to v. A switching at a set A ⊆ V(G) is to switch at each vertex of A; this is equivalent to switching the signs of all edges in the edge-cut E(A, V(G) \ A). A signed graph (G, σ) is a switching of (G, σ′) if it is obtained from (G, σ′) by a sequence of switchings. We say (G, σ) is switching equivalent to (G, σ′) if (G, σ) is a switching of (G, σ′). It is easily observed that, given a graph G, "switching equivalent" is an equivalence relation on the set of all signatures on G.
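A one-line sketch (ours, reusing the (u, v, sign) encoding from above) of switching at a set A, i.e., flipping the signs of exactly the edges of the cut E(A, V \ A):

    def switch(edges, A):
        """Switch at the vertex set A: negate signs of edges with one end in A."""
        return [(u, v, -s if (u in A) != (v in A) else s) for u, v, s in edges]

    # Switching at both endpoints of an edge leaves its sign unchanged:
    print(switch([(0, 1, -1), (1, 2, +1)], {0, 1}))   # [(0, 1, -1), (1, 2, -1)]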
It was observed in [23] that if (G, σ) admits a 0-free 2k-coloring then every switching equivalent signed graph (G, σ ′ ) admits such a coloring: If c is a 0-free 2k-coloring of (G, σ), then after a switching at a vertex v one may change the color of v from c(v) to −c(v) to preserve the property of being a 0-free 2k-coloring. The same argument applies to circular r-coloring.
Assume (G, σ) is a signed graph and c is a (p, q)-coloring of (G, σ) (where p is even and, subject to this condition, p/q is in its simplest form). Let A = {v : c(v) ≥ p/2} and let (G, σ′) be obtained from (G, σ) by switching at A. It follows from the proof of Proposition 2.7 that there is a (p, q)-coloring c′ of (G, σ′) whose colors all lie in {0, 1, . . . , p/2 − 1}. Thus, in particular, Lemma 2.5 and Lemma 2.6 can be restated with a switching homomorphism in place of an edge-sign preserving homomorphism.
Note that in the graph K̂^s_p, every pair of distinct vertices is joined by a positive edge and a negative edge, and moreover, each vertex is incident to a negative loop. Thus we have the following result.

Proposition 2.10. A signed graph (G, σ) is (2k, 1)-colorable (equivalently, 0-free 2k-colorable) if and only if there is a set A of vertices such that after switching at A, the result is a signed graph whose positive edges induce a k-colorable graph.
In the study of circular coloring of signed graphs, switching-equivalent signed graphs are viewed as the same signed graph. The problem of which signed graphs are switching equivalent was first studied by Zaslavsky [23]. We define the sign of a cycle (respectively, a closed walk) in (G, σ) to be the product of the signs of the edges of the cycle (respectively, the closed walk). One may observe that a switching does not change the sign of any cycle of (G, σ). A result of Zaslavsky, fundamental in the study of signed graphs, shows that the switching equivalence class to which (G, σ) belongs is determined by the signs of all cycles of (G, σ). Thus we have the following proposition (see [18] for more details).

Proposition 2.12. A signed graph (G, σ) admits a switching homomorphism to a signed graph (H, π) if and only if there is a homomorphism f from G to H such that for every closed walk W of (G, σ), W and f(W) have the same sign.
The following lemma follows from Theorem 2.11.

Lemma 2.13. A signed graph (G, σ) admits a switching homomorphism to (H, π) if and only if there is a mapping of the vertices and edges of (G, σ) to the vertices and edges of (H, π) which preserves adjacencies, incidences, and signs of closed walks.
For a non-zero integer ℓ, we denote by C_ℓ the cycle of length |ℓ| whose sign agrees with the sign of ℓ.
So, for example, C_{−4} is a negative cycle of length 4. Observe that the signed graph K̂^s_{4k;2k−1} is obtained from C_{−2k} by adding a negative loop at each vertex. Note that adding negative loops to a signed graph or deleting them does not affect its circular chromatic number. So we may ignore negative loops in (G, σ). However, as a target of a switching homomorphism, negative loops are important, because we can map two vertices connected by a negative edge to a same vertex v, provided v is incident to a negative loop.
Some basic properties
Assume (G, σ) is a signed graph and φ : V(G) → [0, r) is a circular r-coloring of (G, σ). The partial orientation D = D_φ(G, σ) of G with respect to the circular r-coloring φ is defined as follows: (u, v) is an arc of D if and only if one of the following holds:
• uv is a positive edge and (φ(v) − φ(u)) (mod r) = 1;
• uv is a negative edge and (φ(v) − φ(u) + r/2) (mod r) = 1, that is, φ(v) is at clockwise distance 1 from the antipodal of φ(u).
Definition 3.1. Assume (G, σ) is a signed graph and φ is a circular r-coloring of (G, σ). Arcs in D_φ(G, σ) are called tight arcs of (G, σ) with respect to φ. A directed path (respectively, a directed cycle) in D_φ(G, σ) is called a tight path (respectively, a tight cycle) with respect to φ.

Lemma 3.2. Let (G, σ) be a signed graph and let φ be a circular r-coloring of (G, σ). If D_φ(G, σ) is acyclic, then there exists an r_0 < r such that (G, σ) admits a circular r_0-coloring.
Proof. By shifting colors along the acyclic orientation, we may assume that D_φ(G, σ) has no arc at all. As D_φ(G, σ) has no arc, it follows from the definition that there exists ǫ > 0 such that for any positive edge uv, 1 + ǫ ≤ |φ(u) − φ(v)| ≤ r − 1 − ǫ, and for any negative edge uv, d_(mod r)(φ(v), φ(u) + r/2) ≥ 1 + ǫ. Let r_0 = r/(1 + ǫ) and let ψ(v) = φ(v)/(1 + ǫ). Then ψ is a circular r_0-coloring of (G, σ).

Lemma 3.4. A signed graph (G, σ) satisfies χ_c(G, σ) < r if and only if there is a circular r-coloring φ of (G, σ) such that D_φ(G, σ) is acyclic.

Proof. One direction is proved in Corollary 3.3. It remains to show that if χ_c(G, σ) < r, then there is a circular r-coloring φ of (G, σ) such that D_φ(G, σ) is acyclic.
Assume χ c (G, σ) = r ′ < r. Let ψ : V (G) → [0, r) be a circular r ′ -coloring of (G, σ). Let φ(v) = r r ′ ψ(v). Then it is easy to verify that φ is a circular r-coloring of (G, σ) and D φ (G, σ) contains no arc (and hence is acyclic). Proposition 3.5. Any signed graph (G, σ) which is not a forest has a cycle with s positive edges and t negative edges such that χ c (G, σ) = 2(s+t) 2a+t for some non-negative integer a.
Proof. Assume χ_c(G, σ) = r and ψ : V(G) → [0, r) is a circular r-coloring of (G, σ). By Lemma 3.4, D_ψ(G, σ) contains a directed cycle B. Assume B consists of s positive edges and t negative edges. We view the colors as the points of a circle C_r of circumference r, which is obtained from the interval [0, r] by identifying 0 and r. Assume B = v_1 v_2 . . . v_{s+t} v_1. If v_i v_{i+1} is a positive edge, then traversing from the color of v_i one unit along the clockwise direction of C_r, we arrive at the color of v_{i+1}. If v_i v_{i+1} is a negative edge, then from the color of v_i, by first traversing r/2 units along the anti-clockwise direction of C_r and then traversing one unit along the clockwise direction, we arrive at the color of v_{i+1}. Therefore, the directed cycle B represents a total traverse along the circle C_r of distance s − (r/2 − 1)t, at the end of which one must come back to the starting color. So s − (r/2 − 1)t = ar for some integer a. Hence r = 2(s+t)/(2a+t). Since s + t ≤ |V(G)| and r ≥ 2, given the number of vertices of G, there is a finite number of candidates for the circular chromatic number of (G, σ). Thus we have the following corollary.
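As a worked instance of this formula (our own illustration): the negative even cycle C_{−2k} has s = 2k − 1 positive edges and t = 1 negative edge, so the candidate values are

    r = 2(s+t)/(2a+t) = 4k/(2a+1),  a = 0, 1, 2, . . .

The largest value of a keeping r ≥ 2 is a = k − 1, which gives r = 4k/(2k−1); this matches the value χ_c(C_{−2k}) = 4k/(2k−1) quoted in the Questions and Remarks section.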
Corollary 3.6. Assume (G, σ) is a signed graph on n vertices. Then χ_c(G, σ) = p/q for some p ≤ 2n. In particular, the infimum in the definition of χ_c(G, σ) can be replaced by minimum.
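The finiteness behind Corollary 3.6 is easy to make concrete; the following sketch (ours, not part of the paper) enumerates all candidate values 2(s+t)/(2a+t) for a signed graph on n vertices.

    from fractions import Fraction

    def candidate_values(n: int):
        """All values 2(s+t)/(2a+t) >= 2 with a >= 0 and 2 <= s+t <= n."""
        cands = set()
        for total in range(2, n + 1):        # total = s + t (cycle length)
            for t in range(total + 1):       # t = number of negative edges
                for a in range(total + 1):   # 2a + t <= total keeps the value >= 2
                    if 2 * a + t > 0 and Fraction(2 * total, 2 * a + t) >= 2:
                        cands.add(Fraction(2 * total, 2 * a + t))
        return sorted(cands)

    print(candidate_values(4))   # [2, 8/3, 3, 4, 6, 8]; note all numerators are <= 2n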
It also follows from Corollary 3.6 that there is an algorithm that determines the circular chromatic number of a finite signed graph. Of course, determining the circular chromatic number of a signed graph is at least as hard as determining the chromatic number of a graph; hence, the problem is NP-hard and, unless P=NP, there is no feasible algorithm for it. Nevertheless, it is easy to determine whether a signed graph (G, σ) has circular chromatic number 2.
Moreover, every tight cycle with respect to f is also a tight cycle with respect to g; for each edge, the corresponding arc on the circle has length r/2 along the clockwise direction.
Recall that the core of a graph G is a smallest subgraph H of G to which G admits a homomorphism.
If (G, σ) is a signed graph and H is a subgraph of G, then we denote by (H, σ) the signed subgraph of (G, σ), where σ in (H, σ) is understood as the restriction of σ to E(H). We define the sp-core of a signed graph (G, σ) to be a smallest signed subgraph (H, σ) such that (G, σ) admits an edge-sign preserving homomorphism to (H, σ). The switching core of a signed graph (G, σ) is a smallest signed subgraph (H, σ) such that (G, σ) admits a switching homomorphism to (H, σ). That the sp-core and the switching core of a finite signed graph are unique up to isomorphism, and hence well defined, is shown in [17].
It follows from the definition that the switching core of (G, σ) is isomorphic to a signed subgraph of the sp-core of (G, σ).
Lemma 3.8. Assume r = p/q is rational, p is an even integer and, with respect to this condition, p/q is in its simplest form. Then K̂^s_{p;q} is the unique switching core of K^s_r.
Proof. Since K̂^s_{p;q} is a subgraph of K^s_r and K^s_r admits a switching homomorphism to K̂^s_{p;q}, it suffices to show that K̂^s_{p;q} is a switching core, i.e., that it admits no switching homomorphism to any of its proper signed subgraphs.
Assume to the contrary that there is a switching homomorphism of K̂^s_{p;q} to a proper signed subgraph, say (H, σ). As (H, σ) admits a switching homomorphism to K̂^s_{p;q} and K̂^s_{p;q} admits a switching homomorphism to (H, σ), we have χ_c(H, σ) = χ_c(K̂^s_{p;q}) = p/q. Let φ be a circular p/q-coloring of (H, σ). By Corollary 3.3, there is a tight cycle C with respect to φ. Assume the length of C is l. Since p/q is in its simplest form, besides a possible common factor of 2, we consider two cases: if q is odd, then p | 2l, which implies that l ≥ p/2; if q is even, then p/2 must be an odd number, and again l ≥ p/2. One can check that such a tight cycle cannot exist in (H, σ), which is a proper subgraph of K̂^s_{p;q}, a contradiction.
Lemma 3.9. Assume r = p/q is rational, p is an even integer and, with respect to this condition, p/q is in its simplest form. Then K^s_{p;q} is the unique sp-core of K^s_r.
Proof. As K^s_r admits an edge-sign preserving homomorphism to K^s_{p;q}, it is enough to prove that K^s_{p;q} is an sp-core. Suppose to the contrary that the sp-core (H, σ) of K^s_{p;q} is a proper subgraph, and let ϕ be an edge-sign preserving homomorphism of K^s_{p;q} to (H, σ). Since any edge-sign preserving homomorphism is, in particular, a switching homomorphism, by Lemma 3.8 K̂^s_{p;q} is a subgraph of (H, σ). Observe that for each vertex u of K̂^s_{p;q} there are two corresponding vertices u_1 and u_2 of K^s_{p;q} such that a switching at u_1 gives u_2. Furthermore, neither u_1 nor u_2 can be identified with any other vertex v of K̂^s_{p;q}, as otherwise we would have an edge-sign preserving homomorphism of K̂^s_{p;q} to a proper subgraph of itself by mapping u to v. It is a contradiction.
Circular chromatic number vs. signed circular chromatic number
The following observation follows from the definitions, applied to a graph G, an arbitrary signature σ, and the signed graph (G′, τ) defined previously. As adding or deleting negative loops does not affect the circular chromatic number, the signed graph (G, σ) obtained from K_{p;q} by replacing each edge with a pair of positive and negative edges has circular chromatic number 2p/q. So Corollary 4.3 is tight. However, this signed graph has girth 2, i.e., it has parallel edges. The following result shows that the bound in Corollary 4.3 is also tight for graphs of large girth.

Theorem 4.4. For any integers k, g ≥ 2 and any ǫ > 0, there is a graph G of girth at least g satisfying χ(G) = k and χ^s_c(G) > 2k − ǫ.

The proof of Theorem 4.4 uses the concept of augmented trees introduced in [1]. A complete k-ary tree is a rooted tree in which each non-leaf vertex has k children and all the leaves are of the same level (the level of a vertex v is its distance to the root). For a leaf v of T, let P_v be the unique path in T from the root to v. Vertices in P_v − {v} are ancestors of v. A q-augmented k-ary tree is obtained from a complete k-ary tree by adding, for each leaf v, q edges connecting v to q of its ancestors.
These q edges are called the augmenting edges from v. For positive integers k, q, g, a (k, q, g)-graph is a q-augmented k-ary tree which is bipartite and has girth at least g. The following result was proved in [1].
Assume T is a complete k-ary tree. A standard labeling of the edges of T is a labeling φ of the edges of T such that for each non-leaf vertex v and each i ∈ {1, 2, . . . , k}, there is exactly one edge from v to one of its children labeled by i.

Proof of Theorem 4.4. Assume k, g ≥ 2 are integers. We shall prove that for any integer p, there is a graph G for which the following hold:
1. G has girth at least g and chromatic number at most k.
2. There is a signature σ such that (G, σ) admits no circular (2kp, p + 1)-coloring; hence χ^s_c(G) > 2kp/(p + 1).
Let H be a (2kp, k, 2kg)-graph with underlying tree T. Let φ be a standard 2kp-labeling of the edges of T. For v ∈ V(T), denote by ℓ(v) the level of v, i.e., the distance from v to the root vertex in T. Let θ(v) be the sum of the labels of the edges on the path from the root to v; the additions are carried out modulo 2kp.
Let L be the set of leaves of T. For each v ∈ L, we define one signed edge e_v on V(T), joining v to one of its ancestors. Let (G, σ) be the signed graph with vertex set V(T) and edge set {e_v : v ∈ L}, with the signs of the edges as just defined. We shall show that (G, σ) has the desired properties.
First observe that θ is a proper k-coloring of G. So G has chromatic number at most k.
Next we show that G has girth at least g. For each edge e_v of G, let B_v be the corresponding path in H connecting the two endpoints of e_v. Then B_v has length at most 2k. If C is a cycle in G, then replacing each edge e_v of C by the path B_v, we obtain a closed walk in H. As H has girth at least 2kg, we conclude that C has length at least g, and hence G has girth at least g.
This is contrary to the assumption that f is a circular (2kp, p + 1)-coloring of (G, σ).
Remark: The graph constructed above is shown to have chromatic number at most k. However, since 2kp/(p+1) < χ_c(G, σ) ≤ 2χ(G), we conclude that χ(G) = k when p + 1 ≥ 2k. It is not known whether there is a finite k-chromatic graph of girth at least g and with χ^s_c(G) = 2k. It is also unknown whether for every rational p/q, every integer g, and any ǫ > 0, there is a graph G of girth at least g with χ_c(G) ≤ p/q and χ^s_c(G) > 2p/q − ǫ. The following result about the circular chromatic number of critical graphs of large girth was proved in [28].
Theorem 4.6. For any integer k ≥ 3 and ǫ > 0, there is an integer g such that any k-critical graph of girth at least g has circular chromatic number at most k − 1 + ǫ.
As a consequence of Theorem 4.6 and Corollary 4.3, we know that for any integer k ≥ 3 and ǫ > 0, there is an integer g such that any k-critical graph G of girth at least g has signed circular chromatic number at most 2k − 2 + ǫ. However, this bound is not tight. The following proposition follows from Proposition 2.10.
Signed indicator
In the study of coloring and homomorphism of graphs, using gadgets to construct new graphs from old ones is a fruitful tool. In this section, we explore the same idea for signed graph coloring. A signed indicator is a triple I = (Γ, u, v), where Γ is a signed graph and u, v are two distinct vertices of Γ. For a graph G, we denote by G(I) the signed graph obtained from G by replacing each edge e = xy of G with a copy of Γ, identifying x with u and y with v. There is a subtle issue in this definition: an edge e = xy is an unordered pair, so we can write it as e = yx as well; however, by identifying y with u and identifying x with v, the resulting signed graph may differ from the one defined above. To avoid such confusion, it is safer to first orient the edges of G and then replace each directed edge e with I. For our usage in this paper, the difference does not affect our discussion, so we simply say that we replace the edge e with I. For a real number r ≥ 2, let Z(I, r) = {d_(mod r)(ψ(u), ψ(v)) : ψ is a circular r-coloring of Γ}; that is, Z(I, r) is the set of possible distances (in C_r) between the two colors assigned to u and v in a circular r-coloring of Γ. Observe that for I = (Γ, u, v), Z(I, r) ≠ ∅ if and only if χ_c(Γ) ≤ r.
Let the sign of a path P in (G, σ) be the product of the signs of the edges of P .
Example 5.6. If Γ is a positive 2-path connecting u and v, and I = (Γ, u, v), then for any ǫ, 0 < ǫ < 1, and r = 4 − 2ǫ, Z(I, r) = [0, r − 2] = [0, 2 − 2ǫ]. If Γ′ is a negative 2-path connecting u and v, and I′ = (Γ′, u, v), then for any ǫ, 0 < ǫ < 1, and r = 4 − 2ǫ, Z(I′, r) = [ǫ, r/2]. If Γ′′ consists of a negative 2-path and a positive 2-path connecting u and v, and I′′ = (Γ′′, u, v), then for any ǫ, 0 < ǫ < 1, and r = 4 − 2ǫ, Z(I′′, r) = [ǫ, r/2 − ǫ].

Lemma 5.7. Assume I = (Γ, u, v) is a signed indicator, r ≥ 2 is a real number, and Z(I, r) = [t, r/2 − t] for some 0 < t < r/4. Then for any graph G, χ_c(G(I)) ≤ r if and only if χ_c(G) ≤ r/(2t).

Proof. Let r′ = r/(2t). If χ_c(G) ≤ r′ and f is a circular r′-coloring of G, then g : V(G) → [0, r) defined as g(x) = tf(x) satisfies, for any edge e = xy of G, t ≤ d_(mod r)(g(x), g(y)) ≤ r/2 − t. So d_(mod r)(g(x), g(y)) ∈ Z(I, r), and the mapping g can be extended to a circular r-coloring of the copy of Γ that was used to replace e. So g can be extended to a circular r-coloring of G(I).
Conversely, assume χ_c(G(I)) ≤ r, and let g be a circular r-coloring of G(I). For any edge e = xy of G, d_(mod r)(g(x), g(y)) ∈ Z(I, r) = [t, r/2 − t]. By vertex switching, we may assume that g(x) ∈ [0, r/2) for every vertex x of G, so that t ≤ |g(x) − g(y)| ≤ r/2 − t. Let f(x) = g(x)/t. Then for any edge e = xy of G, 1 ≤ |f(x) − f(y)| ≤ r′ − 1, so f is a circular r′-coloring of G.

A similar proof implies the following:

Corollary. Let I′′ be the indicator of Example 5.6. Then for any graph G and any r′ ≥ 2, χ_c(G(I′′)) ≤ 4 − 4/(r′ + 1) if and only if χ_c(G) ≤ r′.

Proof. Let ǫ = 2/(r′ + 1) and r = 4 − 2ǫ. By Example 5.6, Z(I′′, r) = [ǫ, r/2 − ǫ]. Note that r′ = r/(2ǫ). The conclusion follows from Lemma 5.7.
We note that G(I′′) here is the same as S(G) defined in [17]. In [17], it is shown that, using the S(G) construction and graph homomorphisms, the chromatic number of graphs is captured by switching homomorphisms of signed bipartite graphs. The corollary above shows that, furthermore, χ_c(S(G)) also determines χ_c(G).
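As a worked instance (our own arithmetic, for illustration): taking r′ = 3 gives ǫ = 2/(3 + 1) = 1/2 and r = 4 − 2ǫ = 3, and indeed r/(2ǫ) = 3. So a graph G satisfies χ_c(G) ≤ 3 if and only if the signed bipartite graph S(G) = G(I′′) satisfies χ_c(S(G)) ≤ 3.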
Proof. We prove the lemma by induction on i. For i = 1, this is trivial and was observed in Example 5.6. Assume i ≥ 2 and that the lemma holds for all i′ < i.
Proof. Let 1/(2ǫ) < i < 1/ǫ. Let Γ′_i be obtained from the disjoint union of Γ_{2i−1} and Γ_{2i} by identifying u_{2i−1} in Γ_{2i−1} and u_{2i} in Γ_{2i} into a single vertex u′_i, and identifying v_{2i−1} in Γ_{2i−1} and v_{2i} in Γ_{2i} into a single vertex v′_i. It follows from the construction that Γ′_i is a signed bipartite planar simple graph.
So Γ′_i is not circular r-colorable.
Circular chromatic number of signed graph classes
We have shown that χ^s_c(G) ≤ 2χ_c(G), and this bound is tight even for graphs G of large girth. However, when restricted to some natural families of graphs, the upper bound can be improved.
Given a class C of signed graphs, we define χ_c(C) = sup{χ_c(G, σ) : (G, σ) ∈ C}. In light of Corollary 4.2 and the fact that negative loops do not affect the circular chromatic number, we restrict to signed graphs with no negative digons and no loops, i.e., whose underlying graphs are simple.
We denote by
• SD_d the class of signed d-degenerate simple graphs,
• SSP the class of signed series-parallel simple graphs,
• O the class of signed outerplanar simple graphs,
• SBP the class of signed bipartite planar simple graphs,
• SP the class of signed planar simple graphs.

Proposition 6.1. For every integer d ≥ 2, χ_c(SD_d) = 2⌊d/2⌋ + 2.

Proof. First we show that every (G, σ) ∈ SD_d admits a circular (2⌊d/2⌋ + 2)-coloring; equivalently, (G, σ) admits an edge-sign preserving homomorphism to K^s_{2⌊d/2⌋+2}, whose vertices are labelled 0, 1, . . . , 2⌊d/2⌋ + 1 in cyclic order. Recall that in K^s_{2⌊d/2⌋+2}, between any pair of vertices x_i, x_j there are both positive and negative edges, unless i = j or i = j + ⌊d/2⌋ + 1: when i = j, there is a negative loop but no positive loop; when i = j + ⌊d/2⌋ + 1, x_ix_j is a positive edge but not a negative edge. Thus, given a vertex u of (G, σ) and a partial mapping φ of (G, σ) to K^s_{2⌊d/2⌋+2}, if at most d neighbors of u are already colored, then φ can be extended to u. This can now be applied along an ordering of the vertices of G witnessing that G is d-degenerate, as sketched below.
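A minimal sketch (ours, not the paper's) of this greedy extension; it assumes the vertices are given in an order in which each has at most d earlier neighbours, which is exactly what d-degeneracy provides.

    def greedy_signed_coloring(order, edges, d):
        """Color a signed d-degenerate graph into K^s_p with p = 2*(d//2) + 2."""
        p = 2 * (d // 2) + 2
        adj = {v: [] for v in order}
        for u, v, sign in edges:
            adj[u].append((v, sign))
            adj[v].append((u, sign))
        color = {}
        for v in order:
            forbidden = set()
            for u, sign in adj[v]:
                if u in color:
                    # a positive neighbour forbids its own color, a negative
                    # neighbour forbids the antipodal of its color
                    forbidden.add(color[u] if sign > 0 else (color[u] + p // 2) % p)
            color[v] = min(set(range(p)) - forbidden)   # at most d of p colors forbidden
        return color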
To prove that the upper bound is tight, we consider three cases. For d = 2, the signed graphs built in Corollary 5.12 are all 2-degenerate, and the claim of that corollary is that the limit of their circular chromatic numbers is 4. For odd integers d, the bound is tight by considering the signed complete graphs (K_{d+1}, +). For even integers d ≥ 4, we now construct a d-degenerate graph G together with a signature σ such that χ_c(G, σ) = d + 2.
Define a signed graph Ω_d as follows. Take (K_d, +) with vertices labelled x_1, x_2, . . . , x_d. For each pair i, j ∈ [d] (i ≠ j), we add a vertex y_{i,j} and join it to x_i, x_j with negative edges, and to all the other x_k's with positive edges. Since each y_{i,j} is of degree d, and after removing all of them we are left with a K_d, we have Ω_d ∈ SD_d. We claim that χ_c(Ω_d) = d + 2.
Suppose to the contrary that ϕ is a circular r-coloring of Ω_d for some r < d + 2. Without loss of generality, we may assume that ϕ(x_1), ϕ(x_2), . . . , ϕ(x_d) are cyclically ordered on C_r in the clockwise orientation.
We will now show that there is no possible choice for ϕ(y_{1,1+d/2}). A point between ϕ(x_i) and ϕ(x_{i+1}) for i ∈ {1, . . . , d − 1} is at distance less than 1 from one of the two and hence cannot be the color of y_{1,1+d/2}; the remaining arcs of C_r are excluded by a similar argument, using that y_{1,1+d/2}x_1 is a negative edge, which yields a contradiction. It follows from Proposition 6.1 that χ_c(G, σ) ≤ 2⌊Δ(G)/2⌋ + 2. It was proved in [17] that every simple signed K_4-minor-free graph (G, σ) admits a switching homomorphism to the signed Paley graph SPal_5. We shall prove the following result.

Theorem. χ_c(O) = 10/3.

Proof. It suffices to show that χ_c(F, σ) = 10/3 for the signed outerplanar simple graph (F, σ) of Figure 5. Since (F, σ) contains a positive triangle as a subgraph, its circular chromatic number is at least 3. By the tight-cycle formula, the only possible values are 3 and 10/3. It remains to show that this graph does not admit a circular 3-coloring, that is to say, (F, σ) does not admit a switching homomorphism to K̂^s_{6;2}. Note that K̂^s_{6;2} is equivalent to a positive triangle with each vertex incident to a negative loop. If φ is a switching homomorphism of (F, σ) to K̂^s_{6;2}, then at least one negative edge of the negative triangle xyz is mapped to a negative loop, because in K̂^s_{6;2} every negative closed walk contains a negative loop. Whichever edge of xyz is mapped to a negative loop, its two end vertices are identified, and the resulting signed graph has a negative cycle of length 2. But K̂^s_{6;2} contains no negative closed walk of length 2, a contradiction. Hence χ_c(F, σ) = 10/3.
In Section 5, we have seen that χ_c(SBP) = 4. However, we do not know whether there is a signed bipartite planar simple graph attaining the bound 4. Further improvements based on the length of a shortest negative cycle are given in the forthcoming work [16].
Next we consider the circular chromatic number of signed planar simple graphs. Since planar simple graphs are 5-degenerate, by Proposition 6.1 we have χ_c(SP) ≤ 6. It was conjectured in [13] that every planar simple graph admits a 0-free 4-coloring. If the conjecture were true, it would imply the best possible bound of 4 for the circular chromatic number of signed planar simple graphs. However, this conjecture was disproved in [9] using a dual notion. A direct proof of a counterexample is given in [15]. Extending this construction, we build a signed planar simple graph whose circular chromatic number is 4 + 2/3.
We shall construct a signed planar simple graph Ω with χ_c(Ω) = 4 + 2/3. The construction is broken down into the construction of certain gadgets. Similar to the gadget of [9], we start with a mini-gadget depicted in Figure 6 and state its circular coloring property in Lemma 6.5. For a circular r-coloring φ and vertices x, y, z, let I_{φ;x,y,z} denote a shortest interval of C_r containing the colors φ(x), φ(y), φ(z), and let ℓ_{φ;x,y,z} be its length. Note that the minimality of the length implies that the two end points of the interval are colors of two of the three vertices.

Lemma 6.5. Assume φ is a circular (4 + α)-coloring of the signed graph (T, π) of Figure 6 with 0 ≤ α < 2. Then 1 − α/2 ≤ ℓ_{φ;x,y,z} ≤ 1 + α/2; moreover, every value in this range is attained by some circular (4 + α)-coloring.
Proof. Let r = 4 + α and let φ be a circular r-coloring of (T, π). Without loss of generality, we may assume that φ(x), φ(y) and φ(z) occur on C_r in the clockwise order, and that [φ(z), φ(x)] is longest among the three intervals determined by these colors. As φ(y) is contained in [φ(x), φ(z)], and as y is adjacent to both z and x with a negative edge, we conclude that [φ(z), φ(x)] is of length at least 2. On the other hand, since z and x are adjacent with a negative edge, one of the two intervals [φ(x), φ(z)], [φ(z), φ(x)] is of length at most r/2 − 1 = 1 + α/2. As α < 2, the only option is that [φ(x), φ(z)] is of length at most 1 + α/2. For the other direction, assume ℓ_{φ;x,y,z} < 1 − α/2, say I_{φ;x,y,z} = [0, β] for some β < 1 − α/2. Each of a, b, c is joined by a positive edge and a negative edge to vertices among x, y, z. This implies that φ(a), φ(b), φ(c) ∈ [1, 1 + β + α/2] ∪ [3 + α/2, 3 + α + β]. As each of the intervals [1, 1 + β + α/2] and [3 + α/2, 3 + α + β] has length strictly smaller than 1, two of the vertices a, b, c are colored by colors at distance less than 1 in C_r. But abc is a triangle with three positive edges, a contradiction.
For the "moreover" part, without loss of generality, we assume that t 3 = 0, It is straightforward to verify that φ is a circular r-coloring of (T, π).
By taking α = 2/3 − ǫ and a switching at the vertex z, we have the following formulation of the lemma, which we will use frequently.

Corollary 6.6. Let (T, π′) be the signed graph obtained from (T, π) by a switching at the vertex z, and let φ be a circular (14/3 − ǫ)-coloring of (T, π′), where 0 < ǫ < 2/3. Then ℓ_{φ;x,y,z} ∈ [2/3 + ǫ/2, 4/3 − ǫ/2].

We define Ŵ to be the signed graph obtained from the signed Wenger graph of Figure 7 by completing each of the four negative facial triangles to a switching of the mini-gadget of Figure 6. Next we show that Ŵ has a property similar to signed indicators; more precisely:

Lemma 6.7. Let 4 ≤ r < 14/3. For any circular r-coloring φ of Ŵ, ℓ_{φ;u,v} ≥ 4/9.
The proof of Lemma 6.7 is long, and we leave it to the next section. Let Γ be obtained from Ŵ by adding a negative edge uv, and let I = (Γ, u, v). It follows from Lemma 6.7 that for 4 ≤ r < 14/3, every element of Z(I, r) is at least 4/9.

Theorem 6.8. For Ω = K_4(I), we have χ_c(Ω) = 14/3.
Proof. First we show that Ω admits a circular 14/3-coloring. For r = 14/3, there is a circular r-coloring φ of the copy of Γ replacing each edge with, in particular, φ(x_5) = 4 and φ(z) = φ(t) = 1. We observe that each of the four negative triangles satisfies the conditions of Lemma 6.5, and that the coloring of its vertices can be extended to the inner part of the mini-gadget.
As x_2 is joined to u and w by positive edges, we obtain the corresponding restrictions on φ(x_2). For a depiction of these cases, see Figure 8.
[III] Assume to the contrary (by (1)) that φ(z) lies in the interval starting at 1 + η considered above; then the distance between φ(z) and φ(x_3) is less than 1, contradicting the fact that x_3z is a positive edge.
[IV] Assume to the contrary (by (1)) that φ(z) lies in the interval starting at 10/3 considered above; then again the distance between φ(z) and φ(x_3) is less than 1, contradicting the fact that x_3z is a positive edge.
This completes the proof of Claim 7.2. ✸

To complete the proof of Lemma 7.1, we partition the interval (5/3 − ǫ, 3 + η) into three parts and consider three cases, depending on which part δ belongs to.
As x_2z is a negative edge, and the distance between the intervals [10/3 − ǫ/2, 11/3 + η − ǫ] and [1, δ − 2] is strictly larger than allowed, we obtain a contradiction with the fact that x_3z is a positive edge.
This completes the proof that φ(w) ∉ (5/3 − ǫ, 3 + η). We observe that vertex x_1 played no role in this proof; in other words, the conclusion holds for the signed subgraph induced on G \ x_1. In this subgraph, a switching at U = {w, x_2, x_3, x_4, x_5} results in an isomorphic copy where x_4 and x_5 play the roles of x_2 and x_3. Thus the same conclusion holds for the mapping φ′ obtained from φ by this switching. If η > 1 − 3ǫ/2, then by Lemma 7.1 we have no choice for φ(w). Thus we assume in the rest of the proof that η ≤ 1 − 3ǫ/2. The two cases will be considered separately.
We will update the ranges of the φ(x_i)'s as depicted in Figure 10. In this figure, the range of each φ(x_i) is shown as an interval partitioned into two parts; the full interval represents the restriction obtained so far. As zx_3 is a positive edge, the points 1 + η, φ(z), φ(x_3), 3 − 3ǫ/2 occur on C_r in this cyclic order, and this restricts φ(x_3) accordingly. By considering the positive edges x_5t and then x_4x_5, similar arguments restrict φ(x_5) and φ(x_4). Considering the positive edge x_1x_2 and the range of φ(x_2) given above, a similar argument restricts φ(x_1). Now consider the negative triangle ux_1x_5: its colors would be contrary to Corollary 6.6. Also, φ(u) = 0 cannot be an end point of the interval I_{φ;x_1,x_5,u}, as 0 is at distance less than 2/3 + ǫ/2 from each of the four end points of the intervals that are the ranges of φ(x_1) and φ(x_5).
Since wx_1 and wx_5 are positive edges, the range of φ(w) is restricted accordingly (see Figure 11). The proof is similar to the previous case. The positive edge zx_3 and the negative edge tx_4 further restrict the ranges of φ(x_3) and φ(x_4). Then the new ranges of φ(x_3) and φ(x_4), together with the positive edges x_3x_2 and x_4x_5, further restrict the ranges of φ(x_2) and φ(x_5). As the computations are very similar to the previous case, we only state the conclusion of this argument. Next we consider the negative triangle vx_3x_4; a similar analysis as in the previous case applies. We will update the ranges of φ(x_2), . . . , φ(x_5) as depicted in Figure 12.
Otherwise we reach a contradiction with the fact that x_2z is a negative edge, so the range of φ(x_2) is restricted as claimed. If φ(x_5) ∈ (1/3 − ǫ/2, 4/3 − ǫ/2], then d_(mod r)(φ(x_5), φ(t)) < 1, contrary to the fact that x_5t is a positive edge; therefore φ(x_5) avoids this interval. Considering the positive edge x_1x_2 and the range of φ(x_2) given above, we obtain the corresponding restriction on φ(x_1). Next we consider the negative triangle ux_1x_5 (see Figure 15); since x_5w and x_1w are both positive edges, the range of φ(w) is restricted accordingly. The positive edge zx_3 and the negative edge tx_4 further restrict the ranges of φ(x_3) and φ(x_4), respectively. Then the new ranges of φ(x_3) and φ(x_4), through the positive edges x_3x_2 and x_4x_5, further restrict the ranges of φ(x_2) and φ(x_5). By computations similar to the previous cases, we update these ranges. Next we consider the negative triangle vx_3x_4; as x_4w and x_3w are both positive edges, the range of φ(w) is further restricted. We will update the ranges of the φ(x_i)'s as depicted in Figure 16.
Recall that the intervals I_w, I_2, I_3, I_4, I_5 are each of length less than 1, and that, except for I_3 and I_4, no two of them intersect. As ℓ(I_4) < 1 and x_3x_4 is a positive edge, φ(x_3) ∉ I_4. This again yields a contradiction, because the 5-cycle wx_2x_3x_4x_5 has all its edges positive.
This completes the proof of Lemma 6.7.
Questions and Remarks
A notion of a circular coloring of signed graphs was introduced in [8]. It differs from the definition in this paper essentially because the concept of "antipodal" points is defined differently. Both definitions use points on a circle as colors (the discrete version in [8] uses Z_k as colors, and we can view elements of Z_k as points uniformly distributed on a circle). In [8], a fixed diameter of the circle is chosen, and the antipodal of a point is obtained by flipping the circle along the chosen diameter. Thus, for such a coloring, the colors are not symmetric. In particular, for each of the two end points of the chosen diameter, its antipodal is itself. In some sense, the definition in [8] more faithfully extends the coloring of signed graphs that allows 0 (as opposed to 0-free coloring) introduced by Zaslavsky, where 0 is a special color whose antipodal is 0 itself. We consider the speciality of a certain color to be an undesirable feature: a circular object should be invariant under rotation. In this sense, the circular coloring of signed graphs in this paper more faithfully extends the circular coloring of graphs.
The circular coloring of graphs has been studied extensively in the literature. Many of the results and problems on circular coloring of graphs would be interesting in the framework of signed graphs. We list some specific problems below and believe that there are many more interesting problems.
Jaeger-Zhang conjecture and extensions
For a positive integer k, we have χ_c(C_{−2k}) = 4k/(2k−1). On the other hand, while for a negative odd cycle C_{−(2k+1)} we have χ_c(C_{−(2k+1)}) = 2, for the positive odd cycle C_{+(2k+1)} we have χ_c(C_{+(2k+1)}) = χ_c(C_{2k+1}) = (2k+1)/k. These two facts can be stated uniformly with the following definition. Given ij ∈ Z_2 × Z_2, we say a closed walk W of a signed graph (G, σ) is of type ij if the number of negative edges of W (counting multiplicity) is congruent to i (mod 2), and the total number of edges (counting multiplicity) is congruent to j (mod 2). For ij ∈ Z_2 × Z_2, we define g_{ij}(G, σ) to be the length of a shortest closed walk of type ij in (G, σ), setting it to be ∞ if there is no such walk (see [18] for the corresponding no-homomorphism lemma and the relation to coloring and homomorphism). It is a well-known fact that a homomorphism of a graph onto an odd cycle gives an upper bound on its circular chromatic number. The following theorem, whose proof we leave to the reader, is an extension of this fact.
Theorem 8.1. Given a positive integer $l$ and a signed graph $(G, \sigma)$ satisfying suitable lower bounds on the parameters $g_{ij}(G, \sigma)$, the circular chromatic number of $(G, \sigma)$ is bounded above accordingly. The question of mapping planar graphs of odd girth large enough to $C_{2k+1}$ was raised by C.Q. Zhang.
Hadwiger conjecture and extensions
One of the most intriguing conjectures in graph theory is the Hadwiger conjecture, which aims to extend the four-color theorem. It claims that any graph without a $K_{k+1}$-minor is $k$-colorable. The case $k \le 3$ of this conjecture is rather easy, but the case $k = 4$ contains the four-color theorem. As the case $k+1$ would imply the case $k$, the difficulty of the conjecture increases with $k$. Catlin [2] introduced a stronger version of the case $k = 3$, which we restate below using the terminology of signed graphs and the notion of circular coloring that we have introduced here. A signed graph $(H, \pi)$ is said to be a minor of $(G, \sigma)$ if it is obtained from $(G, \sigma)$ by a series of the following operations: 1. deleting vertices or edges, 2. contracting a positive edge, 3. switching. This conjecture, which is stronger than the Hadwiger conjecture, is known as the Odd-Hadwiger conjecture and, using the development in this work, can be restated as follows. To generalize this, one may ask: Observe that $\hat{K}^s_{2k}$ is the signed graph whose vertices are $1, 2, \ldots, k$, where each pair of distinct vertices is joined by both a negative edge and a positive edge, and each vertex carries a negative loop. It follows from the structure of these signed graphs that, in an edge-sign preserving mapping of a signed graph $(G, \sigma)$ to $\hat{K}^s_{2k}$, negative edges introduce no restriction, while vertices joined by a positive edge cannot be mapped to the same vertex. In other words, any such mapping is a proper $k$-coloring of the subgraph $G^+_\sigma$ induced by the set of positive edges of $(G, \sigma)$. Recall that a switching homomorphism of $(G, \sigma)$ to $\hat{K}^s_{2k}$ is to find a signature $\sigma'$ equivalent to $\sigma$ and an edge-sign preserving homomorphism of $(G, \sigma')$ to $\hat{K}^s_{2k}$. Therefore, based on the definition sketched below, we have the next theorem. Theorem 8.6. Given a signed graph $(G, \sigma)$, we have $2\chi^+(G, \sigma) - 2 < \chi_c(G, \sigma) \le 2\chi^+(G, \sigma)$.
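The definition entering Theorem 8.6 can be stated as follows; this is our reconstruction from the surrounding discussion of switching homomorphisms to $\hat{K}^s_{2k}$, not a verbatim quotation:
$$\chi^+(G, \sigma) \;=\; \min\Big\{\, k \;:\; (G, \sigma) \text{ admits a switching homomorphism to } \hat{K}^s_{2k} \Big\} \;=\; \min_{\sigma' \sim \sigma} \chi\big(G^+_{\sigma'}\big)\,,$$
that is, the least number of colors needed to properly color the positive subgraph, minimized over signatures equivalent to $\sigma$.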
Let $f(k)$ be the answer to Problem 8.5. By Theorem 8.6, one observes that if Conjecture 8.4 holds, then $f(k) \le 2k$. Similarly, considering the result of [7], we have $f(k) = O(k\sqrt{\log k})$.
Signed planar graphs
Let $D$ be the signed graph on two vertices $u$ and $v$ which are joined by two edges: one positive, the other negative. This graph is normally referred to as the digon. We have mentioned that $\chi_c(D) = 4$; moreover, given $r \ge 4$, if $\varphi$ is a circular $r$-coloring of $D$ with $\varphi(u) = 0$, then simply by the definition we have $\varphi(v) \in [1, \frac{r}{2} - 1] \cup [\frac{r}{2} + 1, r - 1]$. Thus, by Lemma 5.7, when $D$ is viewed as an indicator, we have $\chi_c(G(D)) = 2\chi_c(G)$, where $G$ is a graph (not signed); this is a restatement of Corollary 4.2. In particular, we have $\chi_c(K_4(D)) = 8$. Noting that this is a signed planar multigraph and that, by the four-color theorem, every signed planar multigraph without a loop admits an edge-sign preserving homomorphism to it, we obtain $\chi_c(\mathcal{SPM}) = 8$, where $\mathcal{SPM}$ denotes the class of signed planar multigraphs. Furthermore, we recall that a signed graph with a positive loop admits no circular coloring and that adding a negative loop to a vertex of a signed graph does not affect its circular chromatic number.
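For completeness, here is a short derivation of the stated range (a sketch, using the circular metric $d_r$ on a circle of circumference $r$ and taking the antipode of a color $x$ to be $x + \frac{r}{2}$):
$$d_r\big(0, \varphi(v)\big) \ge 1 \iff \varphi(v) \in [1, r-1], \qquad d_r\Big(0, \varphi(v) + \tfrac{r}{2}\Big) \ge 1 \iff \varphi(v) \in \big[0, \tfrac{r}{2}-1\big] \cup \big[\tfrac{r}{2}+1, r\big)\,,$$
and intersecting the two conditions gives $\varphi(v) \in [1, \frac{r}{2}-1] \cup [\frac{r}{2}+1, r-1]$. At $r = 4$ this set is $\{1, 3\}$, consistent with $\chi_c(D) = 4$.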
For the class of signed planar simple graphs, the upper bound of 6 follows from the fact that these graphs are 5-degenerate. With our definition of circular chromatic number and the development in this work, one may restate a conjecture of [13] as: "the circular chromatic number of the class of signed planar simple graphs is 4". However, this conjecture was recently disproved in [9]. The first counterexample provided in [9] is essentially the subgraph $K_3(I)$ of the signed graph of Theorem 6.8 (they become the same signed graph after a switching). The work of [9] is based on the dual interpretation of the circular four-coloring of signed planar graphs. The examples built there are based on non-hamiltonian cubic bridgeless planar graphs. The underlying graph of the signed graph of Figure 7 is the dual of the Tutte fragment used to build the first example of a non-hamiltonian cubic bridgeless planar graph, and it is referred to as the Wenger graph in some of the literature. This graph itself is used as a building block in a number of coloring results. Noting that a connection between a list coloring problem and circular 4-coloring of (signed planar simple) graphs was established by the third author [30], we refer to [10] for a recent use of this gadget in refuting a similar conjecture.
We note, furthermore, that since Theorem 6.8 gives the exact value of the circular chromatic number of $K_4(I)$, one does not expect to improve the lower bound using this particular gadget.
It remains an open problem to determine the exact value of the circular chromatic number of the class of signed planar simple graphs, or to improve the bounds of $\frac{14}{3}$ and 6 from either direction.
Girth and planarity
Some of the questions mentioned above can be generalized in the following way: given an integer $l$ and a class $\mathcal{C}$ of signed graphs, such as signed planar graphs or signed $K_4$-minor-free graphs, what is the circular chromatic number of the signed graphs in $\mathcal{C}$ whose underlying graphs have girth $l$?
As an example, a result of [3] implies that every signed planar graph of girth at least 10 admits a switching homomorphism to the signed graph $(K_4, e)$, the signed graph on $K_4$ with exactly one negative edge. As this signed graph has circular chromatic number 3, we conclude the following.

Theorem 8.7. For the class $\mathcal{SP}_{g \ge 10}$ of signed planar graphs of girth at least 10, we have $\chi_c(\mathcal{SP}_{g \ge 10}) \le 3$.
We do not know if this bound is tight.
In a more refined version of the question, one may be given three values $l_{01}$, $l_{10}$ and $l_{11}$ and be asked for the best bound on the circular chromatic number of signed graphs in $\mathcal{C}$ which satisfy $g_{ij}(G, \sigma) \ge l_{ij}$.
Spectrum
In the previous question one may also ask for the full possible range of the circular chromatic number of a given family of signed graphs. For example, it is known [6] that a rational number $r$ is the circular chromatic number of a non-trivial $K_4$-minor-free graph if and only if $r \in [2, \frac{8}{3}] \cup \{3\}$. As for signed $K_4$-minor-free simple graphs we extended the upper bound to $\frac{10}{3}$, it remains an open question whether each rational number between $\frac{8}{3}$ and $\frac{10}{3}$ is the circular chromatic number of a $K_4$-minor-free signed simple graph. The spectrum of the circular chromatic number of series-parallel graphs of given girth and the circular chromatic number of planar graphs were studied in [14,19,20,26,27]. Similar questions are interesting for signed planar graphs and other families of signed graphs.
"Mathematics"
] |
Logarithmic corrections for near-extremal black holes
We present the computation of logarithmic corrections to near-extremal black hole entropy from one-loop Euclidean gravity path integral around the near-horizon geometry. We extract these corrections employing a suitably modified heat kernel method, where the near-extremal near-horizon geometry is treated as a perturbation around the extremal near-horizon geometry. Using this method we compute the logarithmic corrections to non-rotating solutions in four dimensional Einstein-Maxwell and $\mathcal{N} = 2,4,8$ supergravity theories. We also discuss the limit that suitably recovers the extremal black hole results.
Introduction
The universal structure of black hole entropy is a powerful property of quantum gravitational theories. In the semiclassical regime, the entropy has a universal form, proportional to the area of the horizon [1,2]. The leading-order quantum correction to this area law depends on the logarithm of the horizon size [3-7]. These logarithmic corrections are semi-universal in nature, in the sense that they depend only on the infrared data of the theory. The Euclidean gravity formalism [8,9] has proven successful in the computation of black hole entropy: the area law can be reproduced from a saddle-point approximation of the Euclidean path integral, whereas the one-loop contributions to the path integral capture the logarithmic corrections. A complete microscopic description of black hole entropy, which requires the UV completion of gravity theories, is however not yet understood. Nevertheless, any sensible UV-complete theory of gravity should correctly reproduce the area and logarithmic terms in black hole entropy.
Generic black hole solutions can be categorized as extremal or non-extremal, depending on whether their temperature is zero or non-zero. In the near-horizon region of an extremal black hole, an infinitely long AdS$_2$ factor emerges. This results in an enhancement of symmetries, which in turn govern the dynamics of such black holes. This feature does not hold for non-extremal black holes, where the full geometry is required to understand their dynamics. In generic theories, the Bekenstein-Hawking entropy of extremal black holes can be easily computed using Sen's entropy function formalism [10-12]. Beyond the semiclassical regime, the idea has also been generalized to a quantum entropy function, which can be used to compute the logarithmic corrections for extremal black holes [13-17]. These formulations depend on the emergent near-horizon AdS$_2$ factor and its symmetries. Sen and collaborators also developed a technique to extract the logarithmic contributions to the entropy of generic (non-)extremal black holes using the heat kernel of the one-loop action of a gravity theory. For extremal black holes, again the near-horizon geometry was used [18-21], whereas for non-extremal black holes the results depend on the full geometry [22]. String theory has provided a microscopic realization of the entropy for a class of supersymmetric extremal black holes [11,14,27-34]. For extremal black holes appearing in various string theories [15,18,19], the matching of logarithmic corrections from the gravitational and microscopic perspectives has been achieved. In the recent works [35,36], the extremal black hole results are also correctly reproduced from the limit of a finite-temperature geometry computation.
Black holes with very small temperatures are called near-extremal. Such black holes still exhibit some of the characteristic features of extremal black holes; thus they are good candidates for generalizing the progress made for extremal black holes to small but non-zero temperatures. Recent studies have shown that near-extremal black holes have quite distinct properties in their own right. Notably, it has been shown in [37-39] that the dynamics of such black holes can be obtained from an effective 1D Schwarzian description (see [40-43] for generalizations to various systems). [39] shows that the low-temperature dynamics is governed by one-loop corrections proportional to the logarithm of temperature, which are different from the usual logarithmic corrections. Recent works [44,45] trace the origin of these terms back to a 4D Euclidean gravity computation, in a language similar to that of the usual logarithmic contributions. It was shown that these terms appear from the one-loop quantization on the near-horizon near-extremal background, which is a small deviation of the extremal near-horizon geometry. The quantization procedure depends on the computation of the eigenvalues of the kinetic operator of small fluctuations around the classical background. The idea of [45] is to use first-order perturbation theory to compute these eigenvalues and extract the logarithm-of-temperature terms.
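For orientation, the Schwarzian one-loop result underlying these statements takes the standard form (quoted up to an overall normalization convention, with $C$ the Schwarzian coupling; this is a well-known result, not something derived in this paper):
$$Z_{\text{Sch}}(\beta) \;\propto\; \Big(\frac{C}{\beta}\Big)^{3/2} e^{\frac{2\pi^2 C}{\beta}} \quad\Longrightarrow\quad \log Z_{\text{Sch}} \;\supset\; \frac{3}{2}\,\log T \,,$$
which is precisely the logarithm-of-temperature correction referred to above.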
The goal of this work is to analyze the logarithmic corrections for near-extremal black holes, carefully taking care of the differences with (non-)extremal solutions. Our approach is to employ first-order perturbation theory [45] to suitably modify the usual heat kernel method for this purpose. The usual approach for (non-)extremal black holes depends on a scaling property of parameters, where all the length scales are of the same order and thus scale uniformly. However, for near-extremal black holes the issue is subtle, as the inverse temperature and the charges bring in large independent length scales.
We first separate the logarithmic contributions coming from these two large parameters. We then use this method to compute the logarithmic corrections to near-extremal entropy in $\mathcal{N} = 2, 4, 8$ supergravity theories. We also recover the extremal logarithmic corrections for both supersymmetric and non-supersymmetric theories by appropriately taking the extremal limit of our results. Our modified heat kernel method correctly reproduces the results for Einstein-Maxwell theory found earlier in [39,44,45]. We also find agreement with the existing results of [44] for $\mathcal{N} = 2, 4, 8$ supergravity theories for a particular near-extremal solution. However, to regulate a certain kind of divergence in the zero-temperature limit of the near-extremal result, a sum over different saddle points was considered in [44]. In our computation of the logarithmic correction for a non-rotating near-extremal black hole, such a regularization is not required. We elaborate on this issue further in the discussion section.
The paper is organized as follows: in section 2, we discuss the structure of the near-horizon geometry of a near-extremal black hole. In section 3, we first review the heat kernel method for computing logarithmic corrections for extremal black holes and then discuss how to modify the method for near-extremal black holes; this prescription is one of the main results of this paper. We also discuss how to take an appropriate extremal limit of the near-extremal computation. In section 4, we compute the logarithmic corrections to the entropy of a near-extremal black hole in $\mathcal{N} = 2$ supergravity theory. Section 5 contains the corresponding results for $\mathcal{N} = 4, 8$ supergravity theories. Finally, we summarize and discuss our results in section 6.
Near-extremal background
In this section, we discuss the properties of a near-extremal black hole solution near the horizon. We begin by reviewing the classical geometry [39,45] in Einstein-Maxwell theory and then extend the idea to generic theories. Let us consider the 4D Einstein-Maxwell theory with the standard Euclidean action, where we have set the Newton constant to $1/16\pi$ so that the action has dimensions of length squared. A spherically symmetric black hole solution in this theory is described by the Reissner-Nordström geometry, parametrized by the mass $M$ and charge $Q$. For $M > Q$, we have a non-extremal black hole with finite temperature. At extremality, i.e. for $M = Q$, the solution has horizon radius $Q$ and zero temperature. We are interested in a near-extremal black hole, which has the same charge but mass slightly greater than the extremal mass. We parametrize the solution by the charge $Q$ and temperature $T$. Since it is a near-extremal black hole, we work in the regime $QT \ll 1$, which signifies that we are very close to extremality. The reason for fixing the charge is that we ultimately want to compute the entropy in a microcanonical ensemble. The quantum entropy function formalism [10-12] for extremal black holes already produces the microcanonical entropy from the gravitational side; therefore, keeping the charges fixed while introducing a temperature preserves this feature for the charges. In the final expression, an inverse Laplace transform from inverse temperature to energy directly gives us the microcanonical entropy of the near-extremal black hole.
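For reference, the standard form of this solution is as follows (a sketch in geometric units consistent with the normalization above; conventions may differ by factors absorbed into $M$ and $Q$):
$$S = -\int d^4x \sqrt{g}\,\big(R - F_{\mu\nu}F^{\mu\nu}\big)\,, \qquad ds^2 = f(r)\, d\tau^2 + \frac{dr^2}{f(r)} + r^2 d\Omega_2^2\,, \qquad f(r) = 1 - \frac{2M}{r} + \frac{Q^2}{r^2}\,,$$
with horizons at $r_\pm = M \pm \sqrt{M^2 - Q^2}$ and temperature $T = \frac{r_+ - r_-}{4\pi r_+^2}$. At extremality $M = Q$, the horizons coincide at $r = Q$ and $T = 0$.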
As we move close to the horizon of an extremal black hole, an AdS$_2$ factor emerges, resulting in a symmetry enhancement. Since the near-horizon throat is very long, these symmetries govern the dynamics of (near-)extremal black holes. The near-horizon geometry [44,45] is of the form $g_{AB} = g^0_{AB} + T\, g^{(c)}_{AB}$, where $\{g^0, A^0\}$ denotes the extremal AdS$_2\times$S$^2$ geometry, and the small temperature causes deviations from it. The near-horizon coordinates range over $0 < \eta < \eta_0$ and $0 < \theta < 2\pi$, with the horizon located at $\eta = 0$. We denote the coordinates on AdS$_2$ by $x^\mu$ and the coordinates on S$^2$ by $x^i$; $\varepsilon_{\mu\nu}$ is the Levi-Civita tensor on AdS$_2$, with non-zero component $\varepsilon_{\eta\theta} = Q^2 \sinh\eta$. The near-horizon geometry is glued to the asymptotic geometry at the boundary located at $\eta = \eta_0$, very far from the horizon. Due to the small temperature, the throat is long enough to capture the properties of the black hole. The quantum corrections are large in the near-horizon region but heavily suppressed in the asymptotic region; this motivates us to quantize the system on the near-horizon background, as extensively studied in [39,44,45].
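A minimal sketch of the extremal data $\{g^0, A^0\}$ in these coordinates, assuming the Euclidean conventions of [44,45] (our reading of the normalization, not a verbatim quotation):
$$ds_0^2 = Q^2\big(d\eta^2 + \sinh^2\!\eta\; d\theta^2\big) + Q^2\, d\Omega_2^2\,, \qquad F^0_{\mu\nu} \propto \frac{1}{Q}\,\varepsilon_{\mu\nu}\,,$$
so that both the AdS$_2$ and S$^2$ radii equal $Q$ and the field strength is proportional to the AdS$_2$ volume form, with the flux fixed by the charge. The correction $T g^{(c)}_{AB}$ deforms this geometry at linear order in $T$ while preserving the flux.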
In generic theories, the extremal black hole solution can be parametrized by the charges corresponding to the various gauge fields. Similar to the Reissner-Nordström scenario, we keep those charges fixed and introduce a temperature parameter to describe a near-extremal solution. This again causes a first-order temperature deviation from the extremal near-horizon geometry, in such a way that the flux corresponding to the different gauge field strengths remains the same as for the extremal black hole. Hence, schematically we write the near-horizon field content as
$$\bar\Psi(Q_i, T) = \bar\Psi_0(Q_i) + T\, \bar\Psi^{(c)}(Q_i)\,. \tag{2.6}$$
Here, $\Psi$ denotes all the fields in the theory and the $Q_i$ denote the charges that parametrize the extremal solution. The bar signifies that we are considering a classical background. $\bar\Psi_0$ denotes the field content of the extremal near-horizon geometry, whereas $\bar\Psi^{(c)}$ denotes the corrections in the presence of a small temperature; the charge dependence of $\bar\Psi^{(c)}$ can be derived from that of $\bar\Psi_0$. The charges are of the same order, i.e. $Q_i \sim a$, where $a$ characterizes the horizon size of the extremal black hole. The working regime is set by $aT \ll 1$.
Heat kernel prescription for logarithmic corrections
Let us consider a theory described by a Euclidean action $S$. We would like to evaluate the corresponding Euclidean path integral to one-loop order. For this, we turn on small fluctuations $\Psi$ around a classical background $\bar\Psi$ and expand the action to quadratic order in the fluctuations. The zeroth-order term gives the saddle-point contribution, and the first-order term vanishes since the background satisfies the equations of motion. Due to the quadratic term, the one-loop path integral can be expressed as a Gaussian integral over bosonic and fermionic fluctuations, schematically $Z_{\text{1-loop}} = \int \mathcal{D}\Psi \, e^{-\int d^4x\sqrt{g}\, \Psi_i \Delta^{ij} \Psi_j}$. Here, the operator $\Delta$ depends on the theory in consideration and can be obtained on any arbitrary background. The $i, j$ indices capture any spacetime or internal indices of the fields. For bosonic variables, the operator is a two-derivative operator, whereas for fermionic variables it is linear in derivatives. The one-loop path integral can in principle be obtained from the determinant of the operator $\Delta$ on a particular classical background. The determinant may be ill-defined due to the presence of zero modes of the kinetic operator, but these are dealt with separately. The non-zero mode contributions can be expressed as
$$\log Z_{\text{nz}} = -\frac{1}{2}\sum_n{}' \log \kappa_n + \frac{1}{2}\cdot\frac{1}{2}\sum_n{}' \log \kappa^f_n \,. \tag{3.2}$$
The prime indicates that we are summing over non-zero modes only. We have written the bosonic and fermionic (denoted by a superscript $f$) contributions separately, since the bosonic Gaussian integral gives $(\det \Delta)^{-1/2}$ whereas the fermionic integral gives $(\det \Delta)^{1/2}$. Also, $\{\kappa^f_n\}$ denote the eigenvalues of the squared fermionic operator, so that the eigenvalues $\kappa_n$ and $\kappa^f_n$ have the same dimensions; this is reflected by the additional factor of $1/2$ in front of the fermionic contribution.
Without directly computing the individual eigenvalues, the heat kernel corresponding to the kinetic operator can be used to evaluate (3.2). We work with the bosonic and fermionic parts separately: first we discuss the bosonic contributions and later draw the analogy for the fermionic case. The heat kernel for the kinetic operator of bosonic fluctuations is defined as
$$K^{ij}(x, x'; s) = \sum_n e^{-\kappa_n s}\, f^i_n(x)\, f^j_n(x') \,. \tag{3.3}$$
Here, $\kappa_n$ denotes the eigenvalue of the kinetic operator $\Delta$ corresponding to the eigenfunction $f^i_n(x)$, which we take to be real. For complex eigenfunctions, we should take the complex conjugate of one of the eigenfunctions in the sum. The eigenfunctions are orthonormal with respect to the inner product
$$\int d^4x \sqrt{g}\; G_{ij}\, f^i_n(x)\, f^j_m(x) = \delta_{nm} \,, \tag{3.4}$$
where $G_{ij}$ is the metric induced on the field space. The heat kernel satisfies a 'heat equation' of the form $(\partial_s + \Delta) K(x, x'; s) = 0$, where we have suppressed the internal indices.
For fermionic fluctuations, we consider the heat kernel corresponding to the squared kinetic operator and further define the heat kernel with an additional factor of $-1/2$, such that
$$K^{ij}_f(x, x'; s) = -\frac{1}{2}\sum_n e^{-\kappa^f_n s}\, f^i_n(x)\, f^j_n(x') \,. \tag{3.5}$$
To evaluate the logarithm of the partition function (3.2) in terms of the heat kernel, the following identity is used:
$$\log A = -\int_\epsilon^\infty \frac{ds}{s}\, e^{-As} + (\text{$A$-independent terms}) \,, \tag{3.6}$$
where $\epsilon \to 0$ is a small cutoff introduced to regulate the integral, which in our case is provided by the UV cutoff of the theory. Using the above identity, the bosonic non-zero mode sector of (3.2) can be expressed in terms of the trace of the heat kernel as
$$\log Z^{(b)}_{\text{nz}} = \frac{1}{2}\int_\epsilon^\infty \frac{ds}{s}\int d^4x \sqrt{g}\; K'(x, x; s) \,,$$
where we use the notation $K(x, x'; s) \equiv G_{ij} K^{ij}(x, x'; s)$, and again the prime denotes the non-zero mode contribution only. Since for fermionic fluctuations the additional factor of $-1/2$ is absorbed into the definition of the heat kernel, the final form in terms of the trace remains the same; in particular, the fermionic non-zero mode contribution is given by
$$\log Z^{(f)}_{\text{nz}} = \frac{1}{2}\int_\epsilon^\infty \frac{ds}{s}\int d^4x \sqrt{g}\; K'_f(x, x; s) \,.$$
In general, it is very difficult to compute the full one-loop contribution in an arbitrary theory. In [15,18-22], it was shown how to extract the logarithmic corrections to (non-)extremal black hole entropy from the heat kernel prescription. It was argued that the logarithmic corrections appear from the small-$s$ region of the integration. Although the techniques for extremal and non-extremal black holes are essentially the same in spirit, there are some important differences between the two cases. One of them is that for the extremal black hole the computation is done on the near-horizon background, whereas for the non-extremal black hole the full geometry must be used. The analysis for a near-extremal black hole will be a close cousin of the analysis for an extremal black hole, due to the presence of a large near-horizon throat. Also, the temperature and charge parameters introduce different scales in the solution, in contrast with a generic non-extremal black hole where all parameters scale uniformly. Hence, we now briefly review the method for extremal black holes, following the works of [18,22].
We first consider the case of $\mathcal{N} = 2$ supergravity theory, whose bosonic sector is Einstein-Maxwell theory. Here the horizon size is equal to the charge, and the extremal black hole can be parametrized by the charge alone. The analysis of the subsequent part of this section can be easily generalized to arbitrary theories with various charges.
Since the charges scale uniformly with the extremal horizon size, we can just work with one of the charges or the horizon size.
Brief review of the approach for extremal black holes
In this section we briefly review the prescription for extremal black holes. Let us consider a spherically symmetric extremal black hole solution parametrized by its electric charge $Q$ only. Its near-horizon geometry is AdS$_2\times$S$^2$ with both radii equal to $Q$, and the extremal solution scales uniformly with $Q$. As mentioned earlier, the action has dimensions of length squared, so the quadratically fluctuated action must satisfy the same scaling property. Due to the form of the extremal metric, in 4D we have $\sqrt{g^0} \sim Q^4$; thus the two-derivative kinetic operator must scale as $Q^{-2}$, since it is constructed out of the extremal background. First, we consider the non-zero mode contribution to the partition function. From the scaling properties discussed above, the non-zero eigenvalues of the kinetic operator must be of the form $\kappa_n = \hat\kappa_n / Q^2$, where the hat on various quantities denotes their dimensionless variants, with the $Q$-dependence stripped off. The dimensions of the eigenfunctions for the different fields can be read off from the orthonormality condition (3.4). The Schwinger parameter $s$ has dimensions of length squared, so we perform the change of variable $s = Q^2 \hat s$. Due to the homogeneity of the extremal geometry, the coordinate dependence drops out of the trace of the heat kernel. As argued in [21], the logarithmic corrections appear from the small-$s$ regime, where the heat kernel trace over a background can be expanded in terms of Seeley-DeWitt coefficients; the coefficient of $\hat s^{m-2}$ in this expansion is a $2m$-derivative local diffeomorphism and gauge invariant term constructed out of the background fields. For an extremal background, these coefficients are just constants [22]. The $\hat s$-independent part of the trace, in particular the integral of the coefficient $K_0$, determines the coefficient of the $\log Q$ term in $\log Z_{\text{nz}}$, as illustrated below. Here, the $\hat s \to 0$ limit of the trace determines the number of zero modes,
$$N_{\text{zm}} = \sum_{n \,\in\, \text{zm}} \int d^4x \sqrt{g}\; G_{ij}\, f^i_n(x)\, f^j_n(x) \,. \tag{3.15}$$
Now we consider the contributions coming from the zero modes of the extremal black hole that arise in the near-horizon geometry. Zero modes are associated with the symmetries that are spontaneously broken by the black hole solution. Due to the presence of the AdS$_2$ boundary, the global symmetries of the solution get enhanced to the infinite-dimensional algebra of asymptotic symmetries, and we have an infinite number of zero modes associated with the spontaneous breaking of these asymptotic symmetries. These fluctuations are generated by large gauge transformations and diffeomorphisms, which are non-normalizable.
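The way the constant term produces the logarithm can be isolated in one line (a schematic illustration of the mechanism, not a new result):
$$\frac{1}{2}\int_{\epsilon/Q^2}^{O(1)} \frac{d\hat s}{\hat s}\; K_0 \;=\; \frac{K_0}{2}\,\log\frac{Q^2}{\epsilon} \;=\; K_0 \log Q + \text{const}\,,$$
so after the rescaling $s = Q^2 \hat s$, the $\hat s$-independent coefficient of the trace directly multiplies $\log Q$, while the $Q$-independent pieces are absorbed into the constant.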
There is no Gaussian suppression for these modes in the path integral; hence the integral is typically divergent. However, there can be non-trivial logarithmic corrections coming from the path integral measure of these modes, and these contributions are fixed by the normalization conditions imposed on them.
To understand these contributions, we make a change of variables from the field fluctuations to the large gauge transformation parameters that generate them. These large gauge parameters do not depend on the charge, hence their integration does not give rise to any charge dependence. The non-triviality appears when the change of variables from the fluctuations to the gauge parameters is performed: the Jacobian of this change of variables introduces a particular charge dependence, namely a factor of $Q^{\beta_r}$ in the measure for each zero mode. The number $\beta_r$ depends on the particular type of mode in consideration and can be fixed by analyzing the field fluctuations and the corresponding gauge parameters. We discuss this feature in later sections, when we consider extremal black hole solutions in particular theories. Hence, the zero mode contribution to the logarithm of the partition function is
$$\log Z_{\text{zm}} = \sum_r \beta_r\, N^r_{\text{zm}} \log Q \,,$$
where we have included a factor of $Q^{\beta_r}$ for each type of zero mode, labeled by $r$, and the sum runs over the different types of zero modes. $N^r_{\text{zm}}$ denotes the number of such zero modes, given by (3.15) with the sum running over the zero mode eigenfunctions of that particular category. $\eta = \eta_0$ denotes the boundary of the near-horizon AdS$_2$ throat, which is very large. The boundary cutoff-dependent pieces in the expression of $\log Z$ can be interpreted as an infinite shift in the ground state energy; the cutoff-independent part gives the actual contribution to the entropy.
Fermionic sector
A similar analysis can be performed for the fermionic fluctuations. Since we consider the eigenvalues of the squared kinetic operator, the dimensional dependencies remain the same as for the bosonic operator, both being two-derivative operators. The form of the non-zero mode contribution remains exactly the same, as recorded in (3.17). Now we consider the zero mode contribution. The number of zero modes in terms of the trace of the heat kernel is given in (3.18). The fermionic zero modes are spin-3/2 fluctuations, associated with the breaking of the supergroup of asymptotic symmetries, and they are generated by non-normalizable spin-1/2 parameters. Since there is no Gaussian suppression, these integrals typically go to zero.
To extract the charge dependence, we again perform a change of variables from the fields to the gauge parameters, which introduces a factor of $Q^{-\beta_f/2}$ per Majorana zero mode in the measure. Hence, the fermionic zero mode contribution to the logarithm of the partition function is
$$\log Z^{(f)}_{\text{zm}} = -\sum_f \frac{\beta_f}{2}\, N^f_{\text{zm}} \log Q \,.$$
The total $\log Q$ contribution to the extremal black hole partition function is then
$$\log Z^{\text{ext}} \;\supset\; c_{\text{ext}} \log Q \,, \tag{3.22}$$
where $c_{\text{ext}}$ collects the non-zero mode coefficient together with the zero mode measure contributions. We will compare this coefficient with the corresponding coefficient for the near-extremal black hole.
Approach for near-extremal black holes
In this section, we consider the one-loop path integral around a spherically symmetric near-extremal black hole solution parametrized by temperature $T$ and charge $Q$, with $QT \ll 1$. As discussed earlier, this solution also has a finite but very large near-horizon throat, whose geometry can be described as a linear-order temperature correction to the extremal AdS$_2\times$S$^2$ geometry. We quantize the system on the near-horizon background following the first-order perturbation theory approach of [45]. However, we resort to a heat kernel method that does not depend on explicit eigenvalue computations, in contrast with [45].
The essence of the first-order perturbation theory technique is that it recasts the quantization problem around the near-extremal background into finding the spectrum (or, in this case, the heat kernel) of a modified operator on the AdS$_2\times$S$^2$ geometry itself. Since the near-extremal background can be described as a linear-order temperature deviation from the extremal one, the quadratic action can be rewritten with all the temperature-dependent corrections clubbed into an operator $\Delta^{(c)}$, and our interest lies in finding the heat kernel of the modified operator $\Delta \equiv \Delta_0 + T \Delta^{(c)}$. In particular, the eigenvalue equation of this operator is
$$\big(\Delta_0 + T\,\Delta^{(c)}\big)\, f_n = \kappa_n\, f_n \,,$$
and using first-order perturbation theory we have, schematically,
$$\kappa_n = \kappa^{(0)}_n + T\,\big\langle f^{(0)}_n,\, \Delta^{(c)} f^{(0)}_n \big\rangle + O(T^2) \,.$$
The heat kernel is again defined as in (3.3) for bosonic fluctuations and as in (3.5) for fermionic fluctuations, using these eigenvalues and eigenfunctions.
Our goal is to extract the logarithmic corrections in $Q$ and $T$, since they bring different scales into the solution. For this, we again consider the non-zero and zero mode contributions separately. Furthermore, we have two kinds of non-zero modes, namely those with non-zero and those with only slightly non-zero eigenvalues. The proper non-zero modes are the ones that are also non-zero on the extremal background, and the proper zero modes are the ones that remain zero modes on the near-extremal background. The contributions are thus schematically divided into three parts,
$$\log Z = \log Z_{\text{pnz}} + \log Z_{\text{snz}} + \log Z_{\text{pzm}} \,.$$
The treatment of the proper non-zero modes proceeds in the same way as for the non-zero modes of the extremal black hole, since these eigenvalues still scale like $1/Q^2$. As discussed in [45], these modes give rise to $\log Q$ terms only, and the temperature dependence in $\log Z$ is polynomially suppressed; the coefficient of the logarithmic contribution also remains the same as in the non-zero part of the extremal computation. The same is true for the proper zero modes, which give rise only to $\log Q$ corrections through the path integral measure. Hence, the coefficients of the $\log Q$ terms coming from the proper non-zero and proper zero modes remain equal to their counterparts in the extremal case.
Bosonic slightly non-zero sector
We consider the contribution of the slightly non-zero modes separately, since these modes give rise to both $\log Q$ and $\log T$ contributions. Although the proper non-zero and proper zero mode computations are quite similar for bosonic and fermionic fluctuations, the fermionic slightly non-zero modes differ from the bosonic ones in important ways; hence, we first treat the bosonic sector and deal with the fermionic sector later. The slightly non-zero modes were zero modes on the extremal background but get lifted in the presence of temperature. Evidently, the eigenvalues of these modes are $O(T)$, i.e. there is no $O(1)$ piece. The relevant contribution to the partition function is
$$\log Z^{(b)}_{\text{snz}} = -\frac{1}{2}\,\overline{\sum_n}\, \log \kappa_{\bar n} \,,$$
where the notation $\bar n$ indicates that we are summing over the zero modes of the extremal solution that get promoted to slightly non-zero modes. From dimensional analysis, we find that
$$\kappa_{\bar n} = \frac{T}{Q}\, \hat\kappa^{(c)}_{\bar n} \,, \tag{3.30}$$
where the hat denotes a dimensionless object. Similar to the earlier case, we make the corresponding change of variable in the Schwinger parameter and recognize that the logarithmic corrections again arise from the small-$\hat s$ integration of the $\hat s$-independent part of the trace. We conclude that the coefficient of this correction is given by the number $N_{\text{snz}}$ of zero modes of the extremal solution that get promoted to non-zero modes when we switch on a small temperature; in terms of the trace of the heat kernel, $N_{\text{snz}}$ is given by (3.15) with the sum running over the slightly non-zero modes, the bar again denoting such a sum. Since we treat the quantization problem as finding the spectrum of a modified operator on the extremal background, the boundary cutoff-dependent piece can again be dropped, and the logarithmic corrections from the cutoff-independent part are as made explicit below.
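Unpacking the scaling just stated makes the split between the two logarithms explicit (a one-line consequence of the form (3.30); the regulated version proceeds through the heat kernel integral as above):
$$-\frac{1}{2}\,\overline{\sum_n} \log\Big(\frac{T}{Q}\,\hat\kappa^{(c)}_{\bar n}\Big) \;=\; \frac{N_{\text{snz}}}{2}\,\big(\log Q - \log T\big) \;-\; \frac{1}{2}\,\overline{\sum_n} \log \hat\kappa^{(c)}_{\bar n} \,,$$
so each bosonic slightly non-zero mode contributes $\frac{1}{2}\log Q - \frac{1}{2}\log T$, with the remainder free of large logarithms.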
Fermionic slightly non-zero sector
Now we consider the fermionic slightly non-zero modes. As mentioned earlier, the treatment of the proper non-zero and proper zero modes remains the same. Although these modes have non-zero eigenvalues, we cannot treat them like the bosonic slightly non-zero modes by passing to the squared kinetic operator; instead, we need to work with the linear-derivative kinetic operator itself. To understand the issue, consider the generic form of the eigenvalues of the fermionic kinetic operator, $\kappa^f_n = A + BT$. For slightly non-zero modes, the $O(1)$ piece vanishes, i.e. $A = 0$. The eigenvalues of the squared operator then have no $O(T)$ piece, since that piece is proportional to $A$ itself, so the squared kinetic operator has eigenvalues of $O(T^2)$. Although we could work with these squared eigenvalues and later take their square root, it is more convenient to work with the eigenvalues of the fermionic kinetic operator itself for these slightly non-zero modes. Their contribution to the partition function is
$$\log Z^{(f)}_{\text{snz}} = \frac{1}{2}\,\overline{\sum_n}\, \log \kappa^f_{\bar n} = -\frac{1}{2}\,\overline{\sum_n} \int_{\epsilon'}^{\infty} \frac{ds}{s}\, e^{-\kappa^f_{\bar n} s} \,,$$
where we have again brought in an auxiliary variable to express the sum as in (3.6); however, this variable has dimensions different from the one used in the bosonic case, and $\epsilon'$ is again a small cutoff related to the UV cutoff of the theory. Here $\kappa^f_{\bar n} = T\,\hat\kappa^{(c)}_{\bar n}$ denotes the shift in the eigenvalues of the fermionic kinetic operator (and not its square, to be emphatic) corresponding to the slightly non-zero modes. These corrections $\hat\kappa^{(c)}_{\bar n}$ are dimensionless, since the eigenvalues of the fermionic kinetic operator scale as $1/Q$ by virtue of it being a one-derivative operator. To extract the logarithmic contributions, we perform the rescaling $s \to \hat s = T s$, so that the lower limit of the integral becomes $T\epsilon'$ and we obtain
$$\log Z^{(f)}_{\text{snz}} \;\supset\; \frac{N^f_{\text{snz}}}{2}\, \log T \,,$$
where $N^f_{\text{snz}}$ denotes the number of fermionic slightly non-zero modes, defined through (3.18) by summing over the appropriate modes. One can easily see that if we had used the eigenvalues of the squared operator, the lower limit of the $s$-integral would have been $T^2\epsilon$, where $\epsilon$ is the UV cutoff, because the eigenvalues of the squared operator scale like $1/Q^2$. From this comparison we find that $\epsilon' \sim \sqrt{\epsilon}$; thus we do not get any $\log Q$ dependence from this sector. Now we write down the complete logarithmic contributions in charge and temperature. The only distinction between the extremal and near-extremal results appears in the slightly non-zero mode sector. On the extremal background, these modes would have contributed like ordinary zero modes, where the logarithmic contribution comes from the path integral measure. To make a clean comparison, we add and subtract these contributions of the strictly extremal background. The logarithmic contribution to the logarithm of the partition function can then be written schematically as
$$\log Z\big|_{\log} = c_{\text{ext}} \log Q + \sum_r N_r \Big[ \frac{1}{2}\big(\log Q - \log T\big) - \beta_r \log Q \Big] + \sum_f N^f_{\text{snz}} \Big[ \frac{1}{2}\log T + \frac{\beta_f}{2} \log Q \Big] \,. \tag{3.38}$$
We have explicitly brought out the difference between the coefficients of the $\log Q$ terms for the extremal and the near-extremal black hole to draw a proper analogy. Here $r$ and $f$ label the different types of bosonic and fermionic slightly non-zero modes respectively, with $N_r$ and $N^f_{\text{snz}}$ their numbers. As discussed earlier, $\beta_r$ and $\beta_f$ are the numbers determining the logarithmic contribution of these modes at zero temperature, i.e. when they are zero modes. Equation (3.38) is one of the main results of this paper and can be applied to compute the near-extremal logarithmic corrections in arbitrary theories of gravity, an analysis which was so far missing in the literature.
Regime of validity
We now analyze the temperature regime in which the formula (3.38) holds. Since we are considering a near-extremal black hole, we have already imposed the upper bound $QT \ll 1$ on the temperature. In this section, we show that there is also a lower bound on the temperature down to which the final near-extremal partition function is valid.
As discussed earlier, the non-zero and zero mode contributions for a near-extremal black hole can be evaluated similarly to those of an extremal black hole; the difference appears due to the presence of slightly non-zero modes. We want to understand the lowest temperature down to which these modes can be considered 'slightly' non-zero, below which the currently available methods start treating them as zero modes. Intuitively, this lowest temperature signifies the point where the Gaussian integral flattens enough that it can no longer be treated as a suppressed integral. For a simple Gaussian integral, this regime can be thought of as the limit where the width of the Gaussian becomes much larger than the domain of integration. To understand this in the heat kernel formalism, note that in the identity (3.6) the parameter $A$ should be much larger than the scale set by the cutoff $\epsilon$.
If $A$ becomes comparable to the cutoff scale, we can expand the integrand in a series in $\epsilon$; in the $\epsilon \to 0$ limit, the integral is then typically divergent, depends on the cutoff only, and the logarithmic dependence on the parameter $A$ is lost. This is equivalent to the flattening of the Gaussian. Similar arguments hold for the partition function computation, where $A$ is replaced by the appropriate black hole parameters. This behavior sets the lower bound on the temperature through (3.30), such that it does not become comparable to the scale of the UV cutoff of the theory. Hence, our result (3.38) is valid in a temperature window of the schematic form
$$y \;\ll\; QT \;\ll\; 1 \,, \tag{3.39}$$
where $y$ is a certain power of the dimensionless combination $\epsilon/Q^2$. For different kinds of field fluctuations, the explicit power-law dependence may change, so the lower limit should be taken as the largest of all these bounds; the temperature must be well above it. Below this temperature bound, we should again start treating these modes as zero modes in the heat kernel prescription. This indicates that the current machinery is not fine-grained enough to identify a slightly non-zero mode in this regime; in such a low-temperature range, the extremal black hole result (3.22) tends to hold.
Extremal limit
Let us consider a near-extremal black hole in the temperature regime where we can distinguish between zero and slightly non-zero modes. We have shown that a near-extremal and an extremal black hole with the same charges may have different $\log Q$ coefficients, which is a physically consistent observation. Now we show that a systematic extremal limit of the near-extremal computation is possible and that, under this limit, we recover the $\log Q$ coefficient predicted by the computation on the extremal geometry. Let us consider a simple Gaussian integral $I_\alpha$ to illustrate the limit. We want to match the values of $I_\alpha$ under independent $b \to 0$ limits on both sides of the equation; in particular, we would like to compare the $\alpha$-dependence of the integral. Naively, the right-hand side (i.e. the result after performing the Gaussian integral) is independent of $\alpha$, whereas if we set $b = 0$ on the left-hand side, the integral seems to pick up a factor of $\alpha$; thus there seems to be an apparent discrepancy. Here we show that it is possible to extract this factor of $\alpha$ even after performing the integral. For this, we need to regulate the integral by evaluating it on a finite interval, with the divergences coming from the length of this interval; the evaluation proceeds through the Gaussian error function. If we take the $b \to 0$ limit on the left side (i.e. in the integrand), we pick up the factor of $\alpha$ directly, while the $b \to 0$ limit on the right side (i.e. after performing the integral) amounts to studying the behavior of the error function under $b \to 0$ and $L \to \infty$ limits. In a strict $b \to 0$ limit, both sides agree on the $\alpha$-dependence, as shown below. A similar analysis can be performed for fermionic Gaussian integrals, where the results are not divergent; instead, the result goes to zero when the integrand is 1, as is typical for Grassmann variables. We discuss the scaling dependence in the next few lines, skipping an explicit derivation. In the presence of a Gaussian suppression, an arbitrary rescaling of the integration variable leaves the result unchanged; but if we take the Gaussian exponent to zero, the scaling factor in the measure can no longer be canceled against the Gaussian.
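For concreteness, here is one normalization realizing the integral $I_\alpha$ discussed above (our choice of normalization, made to exhibit the scaling):
$$I_\alpha \;=\; \alpha \int_{-L/2}^{L/2} dx\; e^{-b\,\alpha^2 x^2} \;=\; \sqrt{\frac{\pi}{b}}\;\mathrm{erf}\!\Big(\frac{\sqrt{b}\,\alpha L}{2}\Big) \,.$$
Taking $b \to 0$ inside the integrand gives $I_\alpha = \alpha L$, while taking $b \to 0$ after the integration and using $\mathrm{erf}(z) \approx \frac{2z}{\sqrt{\pi}}$ for small $z$ gives the same $\alpha L$: the factor of $\alpha$ survives only when the Gaussian suppression is removed, whereas at fixed $b > 0$ and $L \to \infty$ the result $\sqrt{\pi/b}$ is $\alpha$-independent.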
These features of Gaussian integrals are crucial in understanding the zero-temperature limit of the integral over the slightly non-zero modes. As discussed earlier, the zero modes of an extremal black hole are generated by large gauge transformation parameters that are independent of the black hole scale. To understand the $\log Q$ contribution coming from these zero modes, a change of variables is performed from the fields to the gauge parameters [22]. The gauge parameters being $Q$-independent, the measure acquires a factor of $Q^{\beta_r}$ from the Jacobian of the change of variables; the number $\beta_r$ depends on the type of zero mode in consideration. This factor contributes non-trivially because there is no Gaussian suppression for zero modes on the extremal background. However, as discussed above, if there is a Gaussian suppression, the result of the integral is independent of the scale factor.
To understand this, we draw an analogy with the integral $I_\alpha$, where $Q^{\beta_r}$ plays the role of $\alpha$ in our case. Hence, for the slightly non-zero mode integral, this normalization does not affect the Gaussian result. Since we want to take an extremal limit, we first make the change of variables to the gauge parameters so that the integration limits become $Q$-independent. Then, for each slightly non-zero mode integral, we get a factor of $Q^{\beta_r}$ in the measure and a factor of $Q^{2\beta_r}$ in the Gaussian exponent. This does not change the result of the Gaussian integral when $T \neq 0$. But when we take the $T \to 0$ limit, our analysis shows that we get a factor of $Q^{\beta_r}$ from each mode, resulting in a coefficient $\sum_r \beta_r N^r_{\text{zm}}$ for the $\log Q$ contribution. This precisely agrees with the computations on an extremal black hole, where these modes are treated as zero modes. The argument for fermionic integrals goes through similarly.
This simple analysis indicates that if we consider the result of the Gaussian integral over the slightly non-zero modes in the presence of temperature and take the zero-temperature limit of the result, it should boil down to the extremal result. The caveat is that we can take such a limit only insofar as we have not used any technique that necessarily requires the temperature of our system to be non-zero. For instance, even when the temperature is non-zero, the contributions coming from the infinite number of slightly non-zero modes typically give divergent factors. To regulate this divergence, a zeta function regularization was performed in [45], which finally gave a non-zero $\log T$ contribution. A similar regularization is used even in the one-loop partition function of the Schwarzian theory [52], which in turn underlies the conclusions of [39,44]. This regularization explicitly uses that $T \neq 0$, however small it is; hence, it is not correct to take an extremal limit after this regularization. In the heat kernel approach, we cannot take a zero-temperature limit after we have extracted the scale dependence of the slightly non-zero eigenvalues in (3.30), which changes the lower limit of the $s$-integral; after this rescaling, a strict $T = 0$ limit is not possible. The limits of applicability of our analysis were discussed in the previous section.
The main takeaway of this analysis is that it is possible to recover the extremal $\log Q$ correction from the near-extremal result in an arbitrary theory, as long as we do not explicitly assume $T \neq 0$ in performing some kind of regularization. Hence, the near-extremal results are perfectly consistent with the extremal black hole results, which are indeed the complete logarithmic corrections at extremality. This agreement is based on an appropriate extremal limit; after the regularization procedures, it is not possible to go smoothly to extremality. Our work provides a systematic computation of logarithmic corrections to near-extremal entropy, which may differ from the corresponding extremal results.
Pure N = 2 supergravity

In this section, we compute the logarithmic corrections for a near-extremal black hole in pure $\mathcal{N} = 2$ supergravity theory in four dimensions, using the formula (3.38). Our task is to identify the slightly non-zero modes by analyzing the structure of the near-extremal background, without ever explicitly computing the eigenvalues. The bosonic sector of this theory is Einstein-Maxwell theory; hence, the non-rotating black hole solution is described by the Reissner-Nordström geometry. We first work out the bosonic sector contributions.
Bosonic sector: Einstein-Maxwell theory
We need to understand which of the bosonic zero modes of the extremal black hole get lifted in the presence of temperature. As mentioned earlier, these zero modes are associated with the spontaneous breaking of certain asymptotic symmetries near the AdS$_2$ boundary. To be precise, the $SL(2, \mathbb{R})$ global isometry enhances to the asymptotic symmetries of AdS$_2$, which are large diffeomorphisms forming the Virasoro symmetry algebra. The $U(1)$ and $SO(3)$ symmetries enhance to large gauge transformations near the boundary. These asymptotic symmetries are spontaneously broken to the global part, which is the isometry group of the extremal solution. Hence, we have an infinite number of zero modes associated with this breaking. These modes are the tensor modes, the $l = 0$ vector modes, and the $l = 1$ vector modes [15,18,22]. We now discuss their fate in the presence of a small temperature correction to the background.
Tensor modes on near-extremal background
As discussed above, the tensor modes are associated with large diffeomorphisms on AdS$_2$. These fluctuations are asymptotically AdS$_2$, following a particular falloff behavior near the boundary located at large radial distance $\eta = \eta_0$. As a result, the corresponding Ricci scalars are well approximated by the negative constant value of the extremal AdS$_2$ geometry; in fact, deviations from this value are strongly suppressed in the radial coordinate, falling off as $e^{-\eta}$. The large diffeomorphisms preserve these boundary behaviors, which is why they are asymptotic symmetries.
The near-extremal background is not asymptotically AdS$_2$, as can be seen from the expression for the Ricci scalar: near the boundary, the linear-order temperature correction grows. Hence, the asymptotic symmetries are lost in the presence of temperature. As a result, there are no zero modes in this sector, and we conclude that the tensor modes become slightly non-zero modes on the near-extremal background. This can be thought of as a consequence of the way the near-extremal solution was constructed, by introducing additional mass above the extremal mass.
U (1) vector modes on near-extremal background
These fluctuations are generated by large gauge transformations, which are symmetries of the extremal background. Since these modes are pure gauge, their field strengths vanish. Therefore, the flux in the presence of these fluctuations is equal to the flux of the extremal background, i.e. proportional to the electric charge of the extremal black hole. While constructing the near-extremal solution, we kept the electric charge fixed.
From the near-horizon perspective, we find that the flux of the near-extremal solution is still equal to the flux of the extremal solution. Note that if we had considered the near-horizon geometry up to deviations of order $T^2$, the flux would receive corrections only at $O(T^3)$, since the charge is kept fixed in the full solution.
Therefore, we find that there is an infinite number of field configurations having the same charge as the extremal and near-extremal backgrounds. This implies that the large $U(1)$ gauge transformations are still symmetries of the background; hence, the associated modes remain zero modes.
SO(3) vector modes on near-extremal background
To understand the fate of the cross-component fluctuations $h_{\mu i}$, we first put them into the canonical Kaluza-Klein ansatz described in [39]. Here $\mu$ refers to the AdS$_2$ indices and $i$ labels the S$^2$ indices. The fluctuations are built from the vectors $\xi^m_i$ on S$^2$ that generate the $SO(3)$ algebra for $m = 0, \pm 1$, and $\Phi$ is the large gauge transformation parameter, carrying a discrete label that we suppress here. On a spherically symmetric background, these fluctuations can be expressed in a form whose coefficient functions can be computed from knowledge of the background. The reason for considering this ansatz is that the fields $V = V^m \xi^m \equiv v^m_\mu dx^\mu\, \xi^i_m \partial_i$ can then be realized as $SO(3)$-valued gauge fields living on a 2D geometry specified by the metric $\tilde g_{\mu\nu} = g_{\mu\nu} - X(\eta)\, \partial_\mu \Phi\, \partial_\nu \Phi$.
The field strength $H = dV - V \wedge V$ corresponding to the $SO(3)$ gauge field again vanishes. Hence, in the presence of these fluctuations, the value of the flux does not change from that of the spherically symmetric background. The question then boils down to whether the modified metric $\tilde g$ can be well approximated by the actual background $g$ from an asymptotic point of view. Analyzing the Ricci scalar, one can show that when the background $g$ is extremal, the modified 2D metric is still asymptotically AdS$_2$; hence, these fluctuations correspond to certain asymptotic symmetries. But when the background is near-extremal, the asymptotic AdS$_2$ structure is lost, as shown earlier.
Therefore, this analysis shows that the fluctuations generated by large $SO(3)$ transformations are no longer symmetries in the presence of temperature, and there are no associated zero modes. The $l = 1$ vector modes are then expected to be slightly non-zero modes on the near-extremal background.
We have thus shown that, of the extremal bosonic zero modes, only the tensor and $l = 1$ vector modes, i.e. all the metric zero modes, get promoted to slightly non-zero modes on the near-extremal background. The $U(1)$ vector modes remain zero modes of the black hole. This is in agreement with the direct eigenvalue computations of [45]. The result can be traced back to the construction of the near-extremal solution by adding a small mass above extremality while holding the electric charge fixed. Therefore, whenever we keep the charges corresponding to the gauge fields fixed, the corresponding zero modes do not get lifted even in the presence of temperature.
Coefficient of logarithmic terms in Einstein-Maxwell sector
Since the metric zero modes are lifted, we compute the corresponding trace of the heat kernel, which in turn gives the number of such modes. As shown in [22], for these modes $\beta_m = 2$, where we denote the zero mode label $r$ by $m$ to indicate metric zero modes. Using the resulting trace formula together with (3.38), we find the logarithmic corrections coming from the bosonic sector; this dependence exactly matches the explicit eigenvalue computations of [45].
Full logarithmic corrections
The extremal black hole solution is half-BPS, i.e. it preserves half of the supersymmetry. The preserved supersymmetry gets an infinite extension near the boundary, which results in an infinite number of Goldstone modes. Since the near-extremal black hole has mass above the BPS limit, it does not preserve the supersymmetry present in the extremal background. This lifts the degeneracy of the zero modes; hence, we include their contribution in the slightly non-zero sector. For these modes, $\beta_f = 3$, and the trace of the heat kernel gives the number of such modes in (4.8). The fermionic contribution to the logarithmic corrections follows from (3.38) and is recorded in (4.9). Combining the contributions (4.7) and (4.9), we obtain the full logarithmic corrections for a near-extremal black hole in pure $\mathcal{N} = 2$ supergravity theory in (4.10), where $c_{\text{ext}} = c_0 + c^f_0$ is the coefficient of the logarithmic term for the extremal black hole.
Within the regime of validity of the near-extremal partition function, the density of states can be computed from the inverse Laplace transform of the partition function,
$$\rho(E) = \frac{1}{2\pi i}\oint d\beta\; e^{S_0 + \frac{S_1}{\beta} + \beta E}\; Z(\beta) \,,$$
where we have included the saddle-point contribution to the partition function: $S_0 = 16\pi^2 Q^2$ denotes the extremal part and $S_1 = 32\pi^3 Q^3$ signifies the near-extremal correction.
The saddle-point contribution can be computed using the standard Gibbons-Hawking-York prescription, as described in [45]. $E$ denotes the energy above extremality, and $Z(\beta)$ denotes the contribution to the one-loop partition function coming from the logarithmic corrections. Substituting $Z(\beta) = Q^{c_{\text{ext}}-3}\beta$, we obtain the expression for the density of states, whose small-$E$ expansion shows that the logarithmic correction to the entropy is given by the leading term, whereas the subleading terms give polynomial contributions. For the $\mathcal{N} = 2$ theory, the half-BPS extremal solution has coefficient $c_{\text{ext}} = \frac{23}{12}$ for the logarithmic contribution [22]. Thus, for the near-extremal solution we are considering, we have $Z(\beta) \sim Q^{-\frac{13}{12}}\beta$; corresponding to this, we obtain the density of states as follows.
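The inverse Laplace transform can be carried out in closed form; with $Z(\beta) = Q^{c_{\text{ext}}-3}\beta$, a contour-integral evaluation gives (a sketch, suppressing the choice of contour and overall normalization):
$$\rho(E) \;=\; \frac{1}{2\pi i}\oint d\beta\; e^{S_0 + \frac{S_1}{\beta} + \beta E}\; Q^{c_{\text{ext}}-3}\,\beta \;=\; e^{S_0}\, Q^{c_{\text{ext}}-3}\; \frac{S_1}{E}\; I_2\big(2\sqrt{S_1 E}\big) \,,$$
using the standard Bessel representation $\frac{1}{2\pi i}\oint dt\, t^{-\nu-1} e^{t + z^2/(4t)} = (z/2)^{-\nu} I_\nu(z)$. The small-$E$ behavior follows from $I_2(z) \simeq z^2/8$, while the large-argument growth $I_2(z) \sim e^z/\sqrt{2\pi z}$ reproduces the expected $e^{2\sqrt{S_1 E}}$ enhancement of the entropy above extremality.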
This result is valid in a regime of energy above extremality that follows from (3.39). Our result (4.10) agrees with the findings of [44] for a particular saddle point.
N = 4, 8 Supergravity
In this section, we discuss the logarithmic corrections to the entropy of non-rotating, near-extremal black holes in $\mathcal{N} = 4, 8$ supergravity theories. We consider the near-extremal solutions to be small-temperature deviations of the $\frac{1}{4}$-BPS and $\frac{1}{8}$-BPS solutions. As discussed earlier, the extremal solutions are parametrized by multiple charges, unlike the $\mathcal{N} = 2$ case. All these charges scale uniformly with the horizon size $a$. Thus, we consider a near-extremal solution with temperature such that $aT \ll 1$. In this temperature regime, the near-horizon geometry can be expressed as in (2.6).
Hence, the analysis of section 3 can be carried out similarly, replacing the charge parameter $Q$ by the extremal horizon size $a$. As discussed, the coefficient of the $\log a$ contribution from the fully non-zero and fully zero modes remains the same as in the corresponding extremal results presented in [15,18]. To find the complete logarithmic contribution, we focus on the slightly non-zero mode sector.
We begin with the zero modes of the extremal solutions in these theories. The $\mathcal{N} = 4$ theory consists of gravity and matter multiplets. The matter multiplet comprises $U(1)$ gauge fields, scalars, and spin-1/2 fields; thus, zero modes only appear from the gauge fields [15]. The gravity multiplet contains the metric and Rarita-Schwinger fields, which coincide with the $\mathcal{N} = 2$ field content, so the zero mode structure of this sector is identical; in addition, there are some $U(1)$ gauge fields that give rise to zero modes [18]. The $\mathcal{N} = 8$ theory contains additional gauge fields and scalars compared to the $\mathcal{N} = 4$ field content [18]. Therefore, the considerations of [15,18] show that the metric and fermionic zero mode spectra of the $\mathcal{N} = 4, 8$ theories are identical to those of the $\mathcal{N} = 2$ theory [22]; the only difference appears due to additional gauge field zero modes. Since we keep the charges corresponding to the gauge fields fixed to their extremal values, these gauge field zero modes remain degenerate even in the presence of a non-zero temperature. Thus, although the extremal logarithmic corrections differ in these theories, the slightly non-zero mode contribution remains the same. The density of states, and hence the entropy correction, again has the form of (4.13), with a coefficient $c_{\text{ext}} + 3$ for the logarithmic correction. Here, $c_{\text{ext}}$ is to be replaced by the appropriate logarithmic correction coefficient for the corresponding extremal solutions in the $\mathcal{N} = 4, 8$ theories [15,18].
Discussions
In this paper, we have constructed a methodology to suitably modify the heat kernel prescription to compute the logarithmic corrections to near-extremal black hole entropy. Since the usual methods for (non-)extremal black holes cannot be directly applied in the near-extremal regime, our work provides an important framework to study the logarithmic corrections for these black holes. Our computation is a close cousin of that for an extremal black hole, and there are interesting differences from non-extremal black hole computations even though we have turned on a non-zero temperature. For instance, the non-extremal computations are performed with the full geometry, whereas the near-extremal computations are performed with the near-horizon geometry. Our analysis recasts the quantization problem into finding the eigenvalues of a modified operator on the extremal background itself using first-order perturbation theory. The only subtlety is that the throat is no longer effectively infinite but large. Further, due to the decoupling of the near-horizon and asymptotic regions, we have a clear notion of near-horizon modes, and taking their contribution directly gives the entropy of the near-extremal black hole. For non-extremal black holes, such isolation of near-horizon modes is not possible, and one has to appropriately subtract thermal gas contributions to obtain the black hole entropy [20].
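In symbols, the first-order statement invoked here is the textbook eigenvalue shift (notation ours: Λ_0 is the kinetic operator on the extremal background, Λ_T its near-extremal deformation, and φ_n^{(0)} the extremal eigenmodes):

$$\lambda_n(T)\;\simeq\;\lambda_n^{(0)}+\big\langle \phi_n^{(0)}\big|\,\Lambda_T-\Lambda_0\,\big|\phi_n^{(0)}\big\rangle+\mathcal{O}(T^2),$$

so an extremal zero mode (λ_n^{(0)} = 0) whose diagonal matrix element is of order T becomes a "slightly non-zero" mode, while modes protected by symmetry keep λ_n = 0.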
Unlike (non-)extremal black holes, the parameters of a near-extremal black hole do not scale uniformly, which is a unique feature of near-extremal black holes. Using our method, it is possible to suitably separate the logarithmic contributions coming from the inverse temperature and the charges. We show that such a separation depends on appropriately extracting the scale dependence of the contribution to the heat kernel coming from slightly non-zero modes. These modes are zero modes of the extremal black hole that become slightly non-degenerate on the near-extremal background. Thus, the logarithmic contributions to the near-extremal entropy depend on the appropriate identification of extremal zero modes and on understanding their fate as we turn on a small temperature. The extremal zero modes are associated with the spontaneous breaking of asymptotic symmetries and are localized near the near-horizon AdS₂ boundary; the decoupled asymptotic region does not give rise to additional zero modes. The form of the near-extremal perturbations of the extremal geometry governs the uplift, or anomalous breaking, of these asymptotic symmetries. The near-extremal solution is obtained by introducing a mass/temperature parameter above extremality, keeping the charges fixed. This results in an uplift of the metric zero modes, whereas the large gauge symmetries corresponding to the charges should still remain zero modes. An important point is that the logarithmic contributions computed at the level of first-order perturbation theory are robust. From the discussion of section 3.2, we note that the modes which are slightly non-zero at first order in temperature only give rise to polynomial-in-temperature corrections at higher orders of perturbation theory. Naively, it might seem that the modes which remain zero at first order might get lifted at a higher order and bring in additional logarithmic contributions. However, this is ruled out by the symmetries: since the charge(s) of the extremal and near-extremal black holes are completely fixed, the corresponding large gauge symmetries always remain preserved. Thus, our results correctly capture the logarithmic corrections in the near-extremal regime.
Another important aspect is that our computations are valid in a regime where the temperature parameter not only has an upper cutoff in terms of the charges, but also a lower limit set by the UV cutoff of the theory. The lower cutoff appears to be a limitation of the current techniques. The method can be applied to compute the logarithmic correction to the entropy of a near-extremal solution in an arbitrary theory of gravity.
We also show that it is possible to correctly obtain the extremal black hole result as a limit of the near-extremal black hole computation. The limit should be taken systematically at the level of the Gaussian integrals of slightly non-zero fluctuations, as long as the near-extremal results are not regularized. Such non-zero temperature regularization is typically performed in (super-)Schwarzian theories. In the heat kernel formalism, the lower limit of the Schwinger parameter integration is rescaled by a non-zero temperature, and this limit cannot be set strictly to zero. Such regularizations or rescalings explicitly impose a non-zero temperature condition. Thus, it is not appropriate to take a zero temperature limit of the final result. This shows that the logarithmic corrections computed from the extremal near-horizon geometry are robust results in an arbitrary theory, and there is no apparent contradiction between the extremal and near-extremal logarithmic corrections as long as an ill-defined zero temperature limit of the near-extremal result is not taken.
Our current analysis can be naturally extended to higher-dimensional spherically symmetric near-extremal black holes as well. The scaling property of the eigenvalues of the kinetic operator remains the same; thus the generic expression (3.38) for the partition function should continue to hold. Depending on the theory in consideration, the coefficient of the logarithmic correction will change with the count of slightly non-zero modes. Since the extremal black hole solution always has an AdS₂ factor, the large diffeomorphism symmetries will always get lifted in the presence of temperature. The AdS₂ factor is present even for rotating black holes, so this argument again goes through. These black holes do not have rotational symmetries at extremality, meaning the only slightly non-zero modes coming from the metric will be the large diffeomorphisms of AdS₂. Implementing the logic of section 4.1, we can conclude that the log T correction to the logarithm of the partition function for 4D Kerr and Kerr-Newman black holes will have a coefficient 3/2. This is in agreement with the results found in the recent works [53,54].
Using the modified heat-kernel method, we compute the logarithmic corrections to the entropy of near-extremal, non-rotating black hole solutions in 4D Einstein-Maxwell theory and in N = 2, 4, 8 supergravity theories. We find perfect agreement between our result and the existing one in the literature for the Einstein-Maxwell theory. For the supergravity theories, the logarithmic contributions agree with those of [39,44] when they consider only a particular black hole saddle. However, as per the results of [39,44], which are derived from an effective theory prescription of these black holes, the partition function of each saddle diverges at T = 0, and they propose to regulate it by taking contributions from an infinite number of saddle points, which modifies the net logarithmic contributions to the entropy. We essentially differ from them in this interpretation. As elaborated in section 3.4, a zero temperature limit cannot be taken from the regularized near-extremal result; thus there is no apparent divergence in the result, and a regularization is not required. However, it would be interesting to compare the computations of [39,44] with direct 4D Euclidean action computations along the lines of [45] for the same systems, and to understand the implications, if any, of the sum over saddle points of the effective theory in the 4D picture for the black hole solutions under consideration.
Thus the log Q correction to the entropy of a near-extremal, non-rotating black hole in N = 2 supergravity is given by the coefficient c_ext + 3 = 23/12 + 3 = 59/12.
| 12,793.2 | 2023-11-16T00:00:00.000 | [ "Physics" ] |
Electromagnetic cloaking devices for TE and TM polarizations
In this paper, we present the design of an electromagnetic cloaking device working for both transverse electric (TE) and transverse magnetic (TM) polarizations. The theoretical approach to cloaking used here is inspired by the one presented by Alù and Engheta (2005 Phys. Rev. E 72 016623) for TM polarization. The case of TE polarization is first considered and, then, an actual inclusion-based cloak for TE polarization is also designed. In such a case, the cloak is made of a mu-near-zero (MNZ) metamaterial, as the dual counterpart of the epsilon-near-zero (ENZ) material that can be used for purely dielectric objects. The operation and the robustness of the cloaking device for the TE polarization are thoroughly investigated through a complete set of full-wave numerical simulations. Finally, the design and an application of a cloak operating for both TE and TM polarizations, employing both magnetic inclusions and the parallel plate medium already used by Silveirinha et al (Phys. Rev. E 75 036603), are presented.
Introduction
Cloaking is one of the most attractive applications of metamaterials. The unusual interaction between the electromagnetic field and the micro/nano-scale inclusions constituting the artificial materials produces new interesting scattering phenomena, which can be useful to reduce the observability of a given object.
Different approaches to cloaking have been developed by different research groups worldwide [1]-[12]. Even if a proper comparison between all these approaches has not been carried out so far, we may say that each of them, though based on different physics, clearly shows advantages, but also undoubted limitations.
For instance, the theoretical approach based on coordinate transformation [5]-[10] is very elegant from the mathematical and physical points of view, works quite well even for large objects, and is independent of the object being cloaked, but it may run into problems at the fabrication stage, due to the employment of reduced parameters and to the inhomogeneity of the cloak material. Nevertheless, some experiments have already been conducted with a certain degree of success [10]. As for any metamaterial design, losses play an important role, as these experiments clearly reveal. Apart from these difficulties, there are some problems even from the fundamental point of view, especially if we are interested in cloaking devices working at more than a single frequency. In the coordinate transformation approach, in fact, the paths of the electromagnetic field circumventing the object are covered with a phase velocity greater than the speed of light. However, when a pulsed electromagnetic field impinges on the object covered with the cloak, since the group velocity does not exceed the speed of light, the resulting cloak cannot have the desired functionality over a broad range of frequencies, even if we are able to realize broadband metamaterials. On the other hand, as is well known from microwave circuit theory, losses are inherently related to any impedance transformation, even in the case of a continuous transformation. For this reason, cloaks based on this approach must be quite large in order to have a very smooth variation of the parameters and reduce the losses as much as possible.
The approach based on plasmonic materials proposed in [1] is also characterized by advantages and disadvantages. One limitation is that this approach is, to some extent, object-dependent. Though some new results have been presented recently, showing that the shape of the object can be changed a little, while keeping the operation of the cloak [3], the object cannot be changed substantially. Another potential drawback of this approach resides in the dimensions of the objects that can be cloaked. Even if the approach as such works for electrically small objects, recently some results have been presented showing how it is possible to increase the object dimensions [4]. As to the losses, this approach has the important advantage of employing homogeneous materials (i.e. there is no need to synthesize an electric/magnetic profile) possibly having a real part of the permittivity close to zero (i.e. the so-called epsilon-near-zero (ENZ) metamaterials) [2]. Looking at the dispersion of the material, in fact, since the operation frequency is close to the plasma frequency and far away from the resonance of the inclusions, losses can be assumed to be rather low. In addition, the dispersion curve close to the plasma frequency has a slow slope and, thus, this approach is characterized by relatively good performance in terms of bandwidth. Finally, from the fabrication point of view, this approach leads to easier practical designs as compared to the one based on coordinate transformation. In this paper, we consider this second approach, which exploits ENZ metamaterials and works for transverse magnetic (TM) polarization [2], for the case of transverse electric (TE) polarization, by using mu-near-zero (MNZ) metamaterials. At first, we recall the design principles of a cloak for TE polarization made of an ideal homogeneous material. Then, we present the design of the same cloak implemented through real-life magnetic inclusions at microwaves and, finally, we propose the design of an actual cloaking device working for both polarizations (TE and TM) by employing both magnetic inclusions for TE polarization and the parallel-plate medium, as proposed for TM polarization in [2].
Electromagnetic formulation of the problem for the TE polarization
The geometry under consideration is reported in figure 1. We have an infinite cylindrical object characterized by a radius a and a given permittivity ε and permeability μ. We cover the object with a cylindrical shell made of a material having permittivity ε_c ε_0 and permeability μ_c μ_0. The radius of the shell is b. We consider a time-harmonic variation of the kind exp[jωt] and assume that a TE-polarized plane wave impinges on the structure. The magnetic field of the impinging wave is directed along the axis of the cylinder.
The idea is to reduce the observability of the object, which means reducing the reflection, the scattering, and the absorption. A useful figure of merit for the reduced observability of the object can, thus, be the reduction of the total scattering cross section (SCS) of the structure with respect to that of the bare object. Since the total cross section of an object is given by the sum of the absorption cross section and the SCS, and the forward scattering amplitude is related to the total cross section of the scatterer through the optical theorem [13], it is straightforward to design the cloak by considering just the reduction of the SCS of the covered object.
As already presented in [1,2] for the case of TM polarization, the approach is based on the analytical derivation of the SCS of the covered cylinder of figure 1. In the case of TE polarization, this quantity can be written as a sum over cylindrical harmonics weighted by the scattering coefficients c_n^(TE) of integer order n (a standard form is sketched below). In order to reduce the observability of the cylindrical object in figure 1, this expression of the SCS has to be minimized. When the object is relatively large compared to the wavelength, the minimization is usually performed numerically. On the other hand, following the procedure outlined in [1,2] for the case of TM polarization, under the assumption that the object is electrically small, the conditions to minimize the SCS of the infinitely long cylinder can be written in closed analytical form [1]. These conditions relate the ratio between the radius of the object and the radius of the cylindrical shell to the constitutive parameters of both the object and the cloak. The results presented so far for the infinitely long cylindrical structure of figure 1 are also valid in the case of a finite-length cylinder: it is easy to show, in fact, that the SCS of the finite-length cylinder of figure 2 is proportional, in the far-field region, to that of the unbounded cylinder of figure 1. (Figure 2: sketch of a finite-length cylinder and definition of the geometrical parameters.)
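For reference, in the usual normalization for normal incidence on an infinite cylinder (e.g. the Bohren-Huffman convention; the paper's own normalization may differ by a constant factor), the total SCS per unit length takes the form

$$C_s^{\mathrm{TE}}=\frac{4}{k_0}\left[\big|c_0^{(\mathrm{TE})}\big|^2+2\sum_{n=1}^{\infty}\big|c_n^{(\mathrm{TE})}\big|^2\right],$$

where k_0 is the free-space wavenumber; minimizing the dominant low-order coefficients is what the quasi-static cloaking conditions accomplish.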
TE polarization cylindrical cloak made of an ideal MNZ metamaterial: analytical results
Let us consider now the following example. The object to cloak is a cylinder with radius a = 10 mm and constitutive parameters ε = 2ε_0, μ = 2μ_0, ε_0 and μ_0 being the permittivity and permeability of free space, respectively. The design frequency is assumed to be f_0 = 3 GHz and the shell radius b = 1.8a. Following the procedure outlined in the previous section, the numerical minimization of the SCS can be obtained using a cloak made of an ideal homogeneous material with the following set of constitutive parameters: ε_c = 1, μ_c = 0.1. In order to present the results in a proper way, we use as a figure of merit for the cloak the ratio σ_TE between the SCS of the bare object and that of the object with the cloak put on it. In the following, we show the variation of the quantity σ_TE as a function of different parameters in order to outline the main features of the designed cloak, including the robustness to changes of the electrical and geometrical parameters, and the effect of dispersion and losses, which cannot be neglected in any metamaterial design.
In figure 3, we show the variation of σ_TE as a function of the relative permittivity of the cover material. When this permittivity becomes larger and larger, the cross section of the covered cylinder is no longer electrically small compared to the wavelength and, thus, the SCS increases, exceeding the value of that of the bare object. Another interesting aspect to point out from figure 3 is that a slight variation of the cover permittivity from the design value does not affect the cloak performance too much.
In figure 4, we present the variation of σ_TE as a function of the frequency. The cover material is described as an ideal isotropic and homogeneous material having relative permittivity equal to one and a permeability following the Lorentz model, so that at the design frequency f_0 = 3 GHz it exhibits the required value μ_c = 0.1. (Figure 4 caption: the cover material has a relative permittivity equal to one and a relative permeability described by a Lorentz model such that the relative permeability at 3 GHz is μ_c = 0.1.) In this case, we also have some losses, due to the imaginary part of the permeability in the Lorentz model. Therefore, the peak of σ_TE at the operating frequency is slightly lower with respect to the one in figure 3. It is worth noticing that, as expected, at the resonant frequency of the permeability (around 2 GHz) the SCS of the object with the cover is very much increased compared to that of the bare object.
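As an illustration, a Lorentz-type permeability resonating near 2 GHz can be tuned to hit μ_c = 0.1 at 3 GHz. The sketch below is ours, not the paper's model file: the filling factor F = 0.5 is chosen so that the lossless value at 3 GHz is exactly 0.1, and the damping γ is a placeholder.

```python
import numpy as np

# Lorentz permeability of the MNZ cover (exp(+j*w*t) convention,
# so a lossy response has mu = mu' - j*mu'' with mu'' > 0):
#   mu_r(f) = 1 - F*f**2 / (f**2 - f0**2 - 1j*gamma*f)
f0 = 2.0e9       # resonance frequency (Hz), as in the text
F = 0.5          # filling factor: 1 - 0.5*9/(9-4) = 0.1 at 3 GHz (assumption)
gamma = 2.0e7    # damping (Hz), hypothetical value

def mu_r(f):
    return 1.0 - F * f**2 / (f**2 - f0**2 - 1j * gamma * f)

for f in (2.5e9, 3.0e9, 3.5e9):
    print(f"{f/1e9:.1f} GHz: mu_r = {mu_r(f):.4f}")
# The 3.0 GHz line prints ~0.1 minus a small imaginary part,
# matching the design value used in the analytical example.
```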
In figure 5, we show the variation of σ_TE with the losses. The relative permeability of the cover material is defined as μ_c = μ_c′ − jμ_c″, and the plot shows the variation of σ_TE with μ_c″, while μ_c′ is kept unchanged and equal to the design value (μ_c′ = 0.1). We point out again that, since we are considering MNZ materials, the losses are expected to be rather low anyway. Finally, in figure 6 we show the variation of σ_TE as a function of the ratio α between the radius b of the cylindrical shell and the radius a of the cylindrical object. This plot has been obtained keeping all the setup parameters at their design values and varying only the radius of the external cover. The robustness of the cloaking layout proposed here is similar to that of the layout already proposed for TM polarization in [2]. Since the values of σ_TE are still significantly high for slight changes of α, we may conclude that, in principle, slight geometrical variations do not affect the behavior of the cloak at the operating frequency. (Figure 6 caption: the cover material has a relative permittivity equal to one and a relative permeability following the Lorentz model such that at the design frequency of 3 GHz it is μ_c = 0.1. The two graphs are on the same scale.)
TE polarization cylindrical cloak made of an ideal MNZ metamaterial: full-wave simulations
In this section, we verify the analytical results presented in the previous section through full-wave numerical simulations performed with a numerical code based on the finite integration technique [14]. The cover material again follows the Lorentz dispersion, while the cylindrical object, although characterized by the same cross section and constitutive parameters as in the previous section, is assumed this time to have a finite length L. The incident field is a TE plane wave, according to the definition given in figure 1. In figure 7, we show the maximum SCS versus frequency of both the bare cylinder (figure 7(a)) and the cylinder with the cloak put on it (figure 7(b)). Of course, in this case, the observability of the cylinder is not reduced as much as in the infinite case presented in the previous section, but, as previously anticipated, the correct design of the cross section of the cylindrical structure (object + cloak) reduces the observability of the object for any length of the cylinder. From the results in figure 7(b), in fact, a reduction of the SCS around the design frequency of 3 GHz is clearly evident for all of the cylinder lengths considered.

Obviously, when we increase the length of the cylinder, the results should approach those presented in the previous section. In figure 8, we present the variation of the reduced-observability figure of merit σ_TE with the length L of the cylinder.
TE polarization cylindrical cloak made of magnetic inclusions
In this section, we present a real-life implementation of the MNZ cover using magnetic inclusions. In order to obtain the desired value of the permeability for the cover material, we have used spiral resonators (SRs), following the design presented in [15,16]. The object considered in this example is again a cylinder with radius a = 10 mm and length L = 50 mm, made of a material with constitutive parameters ε = 2ε_0, μ = 2μ_0. In order to design the MNZ cover, we have considered four columns of SRs arranged as shown in figure 9(a). The single SR inclusion has two turns and an external length of l_SR = 3.5 mm, the separation between two adjacent turns is 0.6 mm, and the metallic strip has a width of 0.3 mm. The dimensions of the SRs have been designed not only to satisfy the permeability requirement at the cloaking frequency, but also to properly fit the cover thickness. Therefore, the SRs are radially placed exactly at the center of the ideal cylindrical shell.

We have excited the structure with the same plane wave as in the previous section, and the full-wave simulations return the results shown in figures 9(b) and 10. From figure 9(b), the reduction of the object observability around the desired frequency is clearly evident. In addition, as expected, at the resonant frequency of the SRs the covered structure scatters strongly and, thus, the observability of the object is indeed increased. Far from the resonance of the SRs and from the design frequency, the SCS of the covered object approaches that of the bare cylinder. The pattern of the SCS in linear units (figure 10(a)) clearly shows that the dominant scattering term for the bare cylinder is the dipolar one, due to its electrically small dimensions. In figure 10(b) it is clearly evident how the cloak works: the dipolar term is almost suppressed and the higher-order terms, which have a significantly lower amplitude, become the dominant ones.

For the sake of completeness, we report in figure 11 also the two-dimensional plots showing the magnetic field pattern at the cloak frequency in the cases of the bare and covered cylinders. In the case of the bare cylinder, the shadow effect is very evident, whereas in the covered case it is reduced, though the field is not perfectly uniform. This is mainly due to the fact that we have used only four columns of SRs here. As shown in the field map of figure 12, in fact, more uniform cloaking patterns can be obtained by increasing the number of SR columns. The reason for the choice of only four columns will become clearer in the next section, where we discuss how to implement a cloaking device that works for both polarizations.
As a countercheck of the effectiveness of the proposed design, we have extracted the permeability function of the four columns of SRs, using a standard method based on the reflection and transmission coefficients, when the same TE plane wave of the examples in figures 9-11 impinges on the structure. The results are shown in figure 13, where it is evident that at the cloak frequency the value of the effective relative permeability of the cover is close to zero. In addition, figure 14 shows that, when the size of the SRs is increased, the cloaking frequency shifts towards lower values, the lower frequencies being those at which the corresponding effective permeabilities exhibit values close to zero.
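A common choice for such a reflection/transmission-based retrieval is the Nicolson-Ross-Weir (NRW) procedure; whether the authors used exactly this variant is not stated, so the sketch below is generic. The S-parameter values and slab thickness are placeholders, and branch handling is deliberately naive.

```python
import numpy as np

C0 = 299792458.0  # speed of light (m/s)

def nrw(S11, S21, d, f):
    """Naive NRW retrieval of (mu_r, eps_r) from normal-incidence
    S-parameters of a homogeneous slab of thickness d (m) at f (Hz).
    Principal branches only; real retrievals need branch tracking."""
    k0 = 2 * np.pi * f / C0
    K = (S11**2 - S21**2 + 1) / (2 * S11)
    G = K - np.sqrt(K**2 - 1)          # interface reflection coefficient
    if abs(G) > 1:                     # keep the physical |G| <= 1 root
        G = K + np.sqrt(K**2 - 1)
    T = (S11 + S21 - G) / (1 - (S11 + S21) * G)   # propagation factor
    n = 1j * np.log(T) / (k0 * d)      # effective index, exp(+jwt) convention
    z = (1 + G) / (1 - G)              # normalized wave impedance
    return n * z, n / z                # mu_r, eps_r

# Placeholder S-parameters (NOT taken from the paper's simulations):
mu_r, eps_r = nrw(0.55 - 0.25j, 0.35 + 0.55j, d=7e-3, f=3.25e9)
print(f"mu_r = {mu_r:.3f}, eps_r = {eps_r:.3f}")
```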
Finally, in order to check the robustness of the cloak against geometrical variations, we show in figure 15 the cloaking cover response to a reduction of the radius a of the inner cylinder. The simulation has been performed keeping the cloak and all the setup parameters unchanged, while progressively reducing a.
TE and TM polarization cylindrical cloak
In order to obtain a cloaking device working for both TM and TE polarizations, we can merge the design proposed here for TE polarization with the one already proposed in [2] for TM polarization. The latter is based on the employment of a parallel-plate medium [17], which consists of metallic plates extended along the axis of the cylinder and radially placed all around the object. In order to match the behavior of the required ENZ material, it has been shown in [2] that additional gaps are needed. In order to host the magnetic inclusions working for TE polarization in the layout based on the parallel-plate medium and working for TM polarization, it is necessary to reduce the number of plates to leave room for the SRs. The TE polarization layout proposed in the previous section (see figure 9(a)) can be easily merged with a TM polarization cloak made of just four plates, as shown in figure 16(a). Of course, in this case the reduced number of both plates (four) and SR columns (four) does not lead to an optimal design for either polarization. Nevertheless, the interesting advantage of this setup is that the cloaking device in figure 16(a) now works for both polarizations, as shown in figure 16(b). For the sake of completeness, we report in figure 17 the maps of the field amplitudes at the cloaking frequency of 3.25 GHz for both polarizations, in the case of the bare cylinder and in the case of the cylinder surrounded by the cloak.
Conclusions
In this paper, we have presented the design of electromagnetic cylindrical cloaks for TE polarization. At first, the cloak has been considered as an unbounded cylindrical shell made of an isotropic and homogeneous ideal MNZ material. Then, the effectiveness of the design has been verified through full-wave simulations, considering also cylindrical objects of finite length. Furthermore, a proper implementation of the MNZ cloak through SR magnetic inclusions has been proposed and verified through a set of numerical simulations. Finally, a possible layout working for both TE and TM polarizations at microwave frequencies has been suggested, employing both the parallel-plate medium layout already presented in the literature and working for TM polarization, and the SR-based design for TE polarization proposed in this paper.
| 4,664.8 | 2008-11-01T00:00:00.000 | [ "Physics" ] |
Energetics and Electronic Structure of Triangular Hexagonal Boron Nitride Nanoflakes
We studied the energetics and electronic structures of hexagonal boron nitride (h-BN) nanoflakes with hydrogenated edges and triangular shapes with respect to the edge atom species. Our calculations clarified that the hydrogenated h-BN nanoflakes with a triangular shape prefer N edges rather than B edges irrespective of the flake size. The electronic structure of hydrogenated h-BN nanoflakes depends on the edge atom species and the flake size. The energy gap between the lowest unoccupied (LU) and the highest occupied (HO) states of the nanoflakes with N edges is narrower than that of the nanoflakes with B edges and the band gap of h-BN. The nanoflakes possess peculiar non-bonding states around their HO and LU states for the N and B edges, respectively, which cause spin polarization under hole or electron doping, depending on the edge atom species.
Mina Maruyama & Susumu Okada
Hexagonal boron nitride (h-BN) is known to be a prototypical layered material in which each layer is composed of B and N atoms alternately arranged in a hexagonal network similar to that of graphite [1-4]. Along the direction normal to the layer, in sharp contrast to graphite, the layers are weakly bound in an AA' arrangement, in which N atoms are situated just above/below B atoms in adjacent layers, and vice versa, owing to the interlayer Coulomb interaction between B and N atoms. Owing to this layered structure, an individual atomic layer of h-BN can be synthesized, as in the case of other layered materials such as graphite [5,6] and transition metal chalcogenide compounds [7,8]. The chemical difference between B and N atoms makes h-BN an insulator with a large energy gap of approximately 5 eV between the top of the valence band and the bottom of the conduction band, localized on N and B atoms, respectively [9-12]. Analogously to graphene, an h-BN sheet can form various derivatives with different shapes and dimensions by imposing appropriate boundary conditions. Nanoscale tubes have been synthesized and their physical properties investigated; these studies showed that the insulating electronic properties and band gap of such nanotubes are insensitive to their chirality and diameter [13-16], in sharp contrast to carbon nanotubes [17,18]. h-BN can also form polycyclic structures with nanometer sizes, as can carbon [19,20]. In addition to small molecules, such as borazine and small polycyclic borazine derivatives, h-BN with a triangular shape and a size of several nanometers has been synthesized on appropriate substrates by chemical vapor deposition (CVD) [21-23]. Scanning transmission electron microscopy experiments have clarified that the triangular nanoflakes possess zigzag edges of N atoms, even though the edge formation energy of hydrogenated h-BN nanoflakes is insensitive to the edge angle and edge atom species [24].
From a topological viewpoint, polycyclic materials consisting of hexagonal networks have attracted much attention for two decades owing to their electronic structures. Graphene nanoribbons with hydrogenated zigzag edges possess peculiar edge-localized states at the Fermi level in the one-dimensional Brillouin zone, because of the delicate balance of electron transfer among the atomic sites near the edges [25-29]. In addition to the edge structure, the shape of C nanoflakes causes further variation in their electronic structures [30]. The sp² C nanoflakes with triangular shapes and zigzag edges have non-bonding states at the Fermi level leading to high-spin ground states, in which the number of unpaired electrons corresponds to the number difference between the two sublattices: phenalenyl, consisting of three benzene rings (C13H9), possesses an S = 1/2 ground state, and triangulene, consisting of six benzene rings (C22H12), has an S = 1 triplet ground state [31-39]. In addition to the isolated molecules, a triangular sp² C domain embedded into h-BN also exhibits a similar spin-polarized ground state, which slightly depends on the border atom species [40]. Because of the structural similarity of h-BN to graphene, h-BN nanoflakes with triangular shapes may also possess similar non-bonding states near their occupied and unoccupied state edges, depending on the edge terminations [21]. However, the detailed electronic structure of such h-BN nanoflakes with triangular shapes is still unclear with respect to the edge terminations. In this work, we aim to investigate the energetics and electronic structure of triangular h-BN nanoflakes with respect to their sizes and edge terminations, to provide theoretical insight into the formation mechanisms of triangular h-BN with hydrogenated N edges in CVD experiments. Furthermore, we also explore the possibility of spin polarization of triangular h-BN nanoflakes associated with the saturated non-bonding states by injecting carriers under an external electric field. Our calculations showed that triangular h-BN nanoflakes with hydrogenated N edges are more stable than those with hydrogenated B edges for all the flake sizes studied here and for any B source. The energy gap between the highest occupied (HO) and the lowest unoccupied (LU) states of the flakes with hydrogenated N edges is narrower not only than that of the flakes with hydrogenated B edges but also than the band gap of two-dimensional (2D) h-BN, because the LU state possesses a nearly free electron state [41-44] distributed outside and alongside the edge atomic sites. The electron states around the HO state possess a non-bonding nature for the flakes with hydrogenated N edges, which causes spin-polarized ground states under hole doping by a gate electrode. Figures 1 and 2 show the optimized structures of triangular h-BN nanoflakes with hydrogenated N and hydrogenated B edges, respectively, consisting of 3-45 hexagonal BN rings. The flakes with both N and B edges were found to retain their triangular shape after structural optimization, irrespective of their sizes and edge terminations. Furthermore, each BN ring in the nanoflakes also kept its hexagonal shape, in both the inner and edge regions. The detailed bond lengths of the triangular nanoflakes with hydrogenated N and B edges are summarized in Table 1.
Bond alternation induced by the edges occurred in the nanoflakes, in that the BN bonds located at the three corners of the flakes were slightly shorter than the other BN bonds. The optimum bond length of the corner bonds was 0.143 nm for all edge lengths. In contrast, the BN bonds at the edges and in the inner region of the nanoflakes were 0.144-0.146 nm, close to the bond length of h-BN.
Results
The formation energy of the nanoflakes is evaluated, per atom, as E_f = [E_total − (N_N μ_N + N_B μ_B + N_H μ_H)]/(N_N + N_B + N_H), where E_total is the total energy of the triangular nanoflakes, N_N, N_B, and N_H are the numbers of N, B, and H atoms, respectively, and μ_N, μ_B, and μ_H are the chemical potentials of the N, B, and H atoms, respectively. The chemical potentials of B, N, and H were calculated from the total energies of ammonia borane (H₃BNH₃), the N₂ molecule, and the H₂ molecule, respectively. Note that the chemical potential of H is not a tunable parameter for investigating the formation energy of the nanoflakes, because all edge atomic sites must be perfectly terminated by H atoms. The formation energy of the nanoflakes with hydrogenated N edges was lower by approximately 0.4 eV/atom than that of the nanoflakes with hydrogenated B edges for all nanoflakes studied here. Thus, the preferential formation of the N edge of the triangular BN nanoflakes is ascribed to their energetic stability. Note that the formation energy of the nanoflakes with B edges was insensitive to their size: the energy retained a constant value of approximately 0.8 eV/atom, except for the smallest flake (B-I). For the N edges, the energy slightly increased with increasing flake size and appeared to saturate at 0.45 eV/atom for flakes with an edge length of 2.5 nm or longer. This indicates that size selectivity is absent in the formation of h-BN nanoflakes under hydrogen-rich conditions. It is worth investigating how the formation energy depends on the chemical potentials of the source molecules; figure 3(b) shows the formation energy of the largest triangular h-BN flakes studied here with hydrogenated N edges. Figure 4 shows the energy gap between the HO and LU states of the triangular h-BN nanoflakes as a function of their size. All nanoflakes had a large gap of approximately 4-5 eV between the HO and LU states. The gap was sensitive to the flake size and edge termination. The gap of the nanoflakes with N edges was narrower than that of the nanoflakes with B edges and the band gap of the 2D h-BN. Furthermore, the gap gradually decreased with increasing number of atoms and saturated at approximately 3.92 eV for the flakes of N-VII or larger. In contrast, the gap of the nanoflakes with B edges monotonically decreased and asymptotically approached the band gap value of 2D h-BN with increasing flake size. The anomalous gap profile of the triangular nanoflakes with hydrogenated N edges implies that these nanoflakes possess an unusual electronic structure around the HO and LU states, completely different from those of h-BN and of the nanoflakes with hydrogenated B edges. Figure 5 shows the electronic structure of the triangular h-BN nanoflakes with hydrogenated N and B edges. For the nanoflakes with N edges, the number of states near the HO state increased with increasing nanoflake size, and this number corresponded to the difference between the numbers of N and B atoms: a non-degenerate LU state emerged for the nanoflake N-I, while two, three, four, and five states bunch up at the valence band edge, including the HO state, for the nanoflakes N-II, N-III, N-IV, and N-V, respectively. For the nanoflakes with longer edges (N-VI, N-VII, and N-VIII), the number of bunching states was larger than the difference between the numbers of N and B atoms, because of the increase of the electron states near the gap. The LU state possessed a non-degenerate and isolated nature for all the nanoflakes with N edges.
In contrast to the nanoflakes with N edges, the electronic structure of the triangular nanoflakes with hydrogenated B edges exhibited the opposite behavior: the electron states at and near the LU state exhibited a bunching nature, while the states at and near the HO state were insensitive to the flake size. As with the valence states of the flakes with N edges, the states at and near the LU state bunched up, with a count corresponding to the difference between the numbers of B and N atoms. These facts imply that the bunched states at the valence and conduction state edges, for the nanoflakes with N or B edges respectively, are the analogs of the non-bonding states of triangular hydrocarbon molecules.
To clarify the physical mechanism of the anomalous band gap profile of the nanoflakes with N edges, as well as the bunched states near the band edges, we investigated the squared wavefunctions near the band edges of the smallest and the largest triangular nanoflakes with N edges (N-I and N-VIII in Fig. 6(a) and (b)) and B edges (B-I and B-VIII in Fig. 6(c) and (d)). For the smallest nanoflake with N edges, the HO state exhibited a non-bonding nature, with the wavefunction distributed on the edge N atoms. In contrast, the HO − 1 state exhibited a bonding nature extending throughout the flake. For the largest nanoflake with N edges, the HO and near-HO states also exhibited a non-bonding nature. The HO state was a doubly degenerate state distributed on N atoms throughout the flake with a non-bonding nature. The states gradually increased their localization on the edge N atoms with decreasing eigenvalue: the distribution of the non-bonding HO − 2 and HO − 3 states was slightly dislodged towards the edge atomic sites, and the deeper occupied states (HO − 6 and HO − 7) were perfectly localized at the edge N atoms, as is observed for the edge state of a graphene ribbon with zigzag edges. As for the LU state of the nanoflakes with N edges, the state exhibited a peculiar, nearly-free-electron-like nature, distributed outside and alongside the edge atomic sites (see the Conclusions). For the nanoflakes with B edges, on the other hand, the LU and the low-lying unoccupied states exhibited a non-bonding nature and were distributed on the B atoms: the LU state was completely localized at the edge B atomic sites, while the states gradually penetrated the inner B atomic sites with increasing eigenvalue. The state 1 eV above the LU state was distributed on all B atomic sites with a non-bonding nature. The HO state exhibited a different nature from that of the nanoflakes with N edges: the HO state of the nanoflakes with B edges was primarily localized on the N atomic sites with a bonding nature. Therefore, in terms of the electron states around the gap, the electronic states of the triangular nanoflakes with hydrogenated B edges exhibited characteristics similar to those of h-BN, except for the unoccupied non-bonding states, which is why the HO-LU gap of these nanoflakes asymptotically approaches the band gap of h-BN.
The non-bonding nature of the electron states near the valence band edge of the triangular h-BN nanoflakes with N edges implies that the flakes may possess spin-polarized ground states upon hole injection, as in the case of the half-filled states of polycyclic hydrocarbon molecules with a triangular shape. Table 2 summarizes the number of unpaired electrons Δρ (= ρ_α − ρ_β, where ρ_α and ρ_β are the electron densities of the α and β spins) of the triangular h-BN nanoflakes with hydrogenated N edges, N-I, N-II, N-III, and N-IV, under hole concentrations ranging from partial to full depletion of the occupied states with a non-bonding nature. The number of unpaired electrons is proportional to the number of holes injected into the non-bonding states up to half-filling: the ground states of the nanoflakes N-I, N-II, N-III, and N-IV were S = 1/2, 1, 3/2, and 2, respectively, under the hole concentration corresponding to half-filling of the non-bonding states near the valence band edge. Furthermore, the polarized spin gradually decreased with a further increase of the hole concentration and vanished when the electrons were fully removed from the non-bonding states. Figure 7 shows the isosurfaces of the spin density Δρ(r) of a triangular h-BN nanoflake (N-II) under hole concentrations of 1 h, 2 h, 3 h, and 4 h. For all hole concentrations, the nanoflakes possessed magnetic spin ordering that depended on the hole concentration, with the polarized electron spins ferromagnetically aligned.
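A compact way to summarize the trend just described (our paraphrase, not a formula from the paper) is that, for a flake hosting n occupied non-bonding states, the total spin as a function of the number of injected holes n_h behaves as

$$S(n_h)=\tfrac{1}{2}\,\min\!\left(n_h,\;2n-n_h\right),\qquad 0\le n_h\le 2n,$$

which for N-II (n = 2) reproduces S = 1/2, 1, 1/2, 0 at n_h = 1, 2, 3, 4: linear growth up to half-filling, then decay to zero when the non-bonding states are fully emptied.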
Conclusions
In this work, we investigated the geometric and electronic properties of triangular h-BN nanoflakes with hydrogenated N or B edges using density functional theory with the generalized gradient approximation, to provide theoretical insight into the preferential formation of triangular flakes with N edges in CVD experiments. Our calculations showed that triangular h-BN nanoflakes with hydrogenated N edges are more stable by approximately 0.5 eV per atom than those with hydrogenated B edges for all flake sizes studied here and for any B source. The preferential synthesis of the nanoflakes with N edges in the CVD experiment is ascribed to their energetic stability. The electronic structure of the triangular h-BN nanoflakes strongly depends on the edge termination. The energy gap between the highest occupied (HO) and the lowest unoccupied (LU) states of the flakes with hydrogenated N edges are narrower than not only those with hydrogenated B edges but also the band gap of the 2D h-BN sheet, because the LU state possesses a nearly free electron state distributed outside and alongside the edge atomic sites. In contrast, the HO-LU gap of the nanoflakes with B edges asymptotically approaches the band gap of the 2D h-BN sheet with an increase of their flake size. Detailed electronic structure analysis near the occupied and unoccupied state edges clarifies that the triangular nanoflakes possesses a number of non-bonding states corresponding to the number difference between B and N atoms. For the nanoflakes with N edges, the non-bonding state emerges in the HO and just below it with a fully occupied nature. In contrast, for the nanoflakes with B edges, the non-bonding state emerges in the LU state and above it with an unoccupied nature. According to the fully occupied non-bonding states, the triangular h-BN nanoflakes may exhibit spin polarization upon carrier injection into these states. To explore this possibility, we investigated the magnetic properties of the nanoflakes with hydrogenated N edges in terms of the hole doping in the FET structure using the ESM method. Under the hole injection, the nanoflakes exhibit spin polarized states as their ground state, where the number of unpaired electrons is proportional to the number of holes injected into non-bonding states up to a half-filling. The largest spin moment of the nanoflakes is S = n/2, where n is the number of nonbonding states, under the hole concentration corresponding to a half-filling of the non-bonding states. We also found a peculiar spin polarized state in the nanoflakes under a high hole concentration.
Methods
All calculations were based on density functional theory [45,46] as implemented in the program package Simulation Tool for Atom TEchnology (STATE) [47]. We used the generalized gradient approximation with the Perdew-Burke-Ernzerhof functional [48] to describe the exchange-correlation potential energy among interacting electrons. Ultrasoft pseudopotentials generated by the Vanderbilt scheme were adopted for the interaction between electrons and nuclei [49]. Valence wavefunctions and the deficit charge density were expanded in terms of plane-wave basis sets with cutoff energies of 25 and 225 Ry, respectively, which provided sufficient convergence in the total energy and electronic structure of the h-BN related nanostructures. Brillouin zone integration was performed using Γ-point sampling. The geometric structures of the monolayer h-BN nanoflakes were fully optimized until the force acting on each atom was less than 1.33 × 10⁻³ Hartree/a.u. To simulate an isolated triangular h-BN nanoflake, each nanoflake was separated from its adjacent periodic images by at least 0.49 nm in the lateral directions and 0.70 nm in the normal direction. We considered triangular h-BN nanoflakes (Figs 1 and 2) with edge lengths from 0.71 to 2.47 nm for hydrogenated N edges and from 0.67 to 2.43 nm for hydrogenated B edges (Table 1), corresponding to flakes containing 3-45 hexagonal rings. This enabled a quantitative analysis of the energetics and electronic structures of triangular h-BN nanoflakes with hydrogenated zigzag edges with respect to their size and edge atom species. The effective screening medium (ESM) method was adopted to investigate the electronic structure of the triangular h-BN nanoflakes under an external electric field [50]. To inject holes into the triangular flakes with hydrogenated N zigzag edges, we considered a field-effect-transistor structure, in which a planar counter metal electrode described by the ESM with infinite relative permittivity was separated by a 0.35 nm vacuum spacing from the nanoflakes. An open boundary condition was imposed at the opposite cell boundary, described by a relative permittivity of 1 with a vacuum spacing of 0.35 nm from the center of the flakes. During the electronic structure calculations under an electric field, the geometric structures of the triangular h-BN nanoflakes were fixed to their optimized structures without an electric field.
| 4,736.2 | 2018-11-09T00:00:00.000 | [ "Physics" ] |
Global Existence and Large Time Behavior for the 2-D Compressible Navier-Stokes Equations without Heat Conductivity
In this paper, we consider an initial value problem for the 2-D compressible Navier-Stokes equations without heat conductivity. We prove the global existence of a strong solution when the initial perturbation is small in H² and its L¹ norm is bounded. Moreover, we derive decay estimates for such a solution.
Introduction
The 2-D compressible Navier-Stokes equations for (x, t) ∈ ℝ² × ℝ₊, which govern the motion of gases, read (written here in the standard conservative form consistent with the notation below)

$$\rho_t+\operatorname{div}(\rho u)=0,\quad (\rho u)_t+\operatorname{div}(\rho u\otimes u)+\nabla P=\operatorname{div}\mathbb{T},\quad (\rho E)_t+\operatorname{div}(\rho Eu+Pu)=\operatorname{div}(u\,\mathbb{T})+k\Delta\theta,\qquad(1)$$

where ρ, u, P, θ stand for the density, velocity, pressure, and absolute temperature, respectively. E = (1/2)|u|² + e is the specific total energy with e the specific internal energy, 𝕋 = μ(∇u + ∇uᵀ) + λ(div u)I is the stress tensor, k is the coefficient of heat conduction, I is the identity matrix, and μ and λ are the coefficient of viscosity and the second coefficient of viscosity, respectively, satisfying the usual conditions μ > 0 and μ + λ ≥ 0. As one of the most important systems in fluid dynamics, there are many results on the well-posedness, blow-up phenomena, large time or asymptotic behavior, and optimal decay rates of solutions, based on different assumptions, in different cases and function spaces. Among them, for the case of a positive coefficient of heat conduction k > 0, Kazhikhov and Shelukhin studied the global existence in one dimension [1,2]. The global existence in the multidimensional case was established in [3-6]; more results on global existence for different kinds of solutions can be found in [3-10]. For the study of the large time behavior, asymptotic behavior, and optimal decay rates of solutions, one can refer to [4,11-15]. The references [15-17] and [10,18] restrict the systems to the cases k = 0 and k > 0, respectively. Danchin [8,9] proved the existence and uniqueness of strong solutions to the compressible Navier-Stokes equations in hybrid Besov spaces, and Tan and Wang [6] studied the global existence of strong small solutions in H^l, l ≥ 4. For the case k = 0, Tan and Wang [5] proved the global solvability in three-dimensional space for less regular solutions to the compressible Navier-Stokes equations in the H²-framework; however, they needed to assume that the L¹-norm of the initial perturbation is bounded, which is important in the proof of global existence. Later, ref. [3] removed this assumption by using some techniques related to the homogeneous Besov space and the hybrid Besov space.
Compared to the Cauchy problem, the equilibrium state of the pressure increases with time. Xin et al. [16,17,19] investigated the blow-up phenomenon of the compressible Navier-Stokes equations in inhomogeneous Sobolev spaces; they proved that smooth or strong solutions will blow up in finite time if the initial data have an isolated mass group, no matter how small the data are. On the other hand, we would like to mention some research on Serrin-type regularity (blow-up) criteria for the incompressible Navier-Stokes system. These criteria originate from [20,21]; later, many authors successfully extended these blow-up criteria to the compressible flow (for example, [22-29] and references therein).
In this paper, we consider this problem in ℝ² in the case k = 0 and assume that the gas is ideal and polytropic, i.e., P = Rρθ, e = c_v θ, P = A e^{S/c_v} ρ^γ, where R > 0 is the universal gas constant, A > 0 is a constant, γ > 1 is the adiabatic exponent, S is the entropy, and c_v = R/(γ − 1) is the specific heat at constant volume. Furthermore, we assume R = A = 1 without loss of generality. System (1), originally in the variables (ρ, u, E), can then be reformulated in the variables (P, u, S) as system (4), where Ψ[u] = (μ/2)|∇u + ∇uᵀ|² + λ(div u)² is the classical dissipation function. We are concerned with the initial value problem for system (4) with initial data (5) tending at spatial infinity to the constant state (p∞, 0, s∞), where p∞ > 0 and s∞ are given constants. For the global existence and the decay estimate in the two-dimensional case, we have the following theorem.
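As a quick consistency check of these constitutive relations (a worked step, not quoted from the paper): combining P = Rρθ with e = c_vθ and c_v = R/(γ − 1) gives

$$\theta=\frac{P}{R\rho},\qquad e=c_v\theta=\frac{P}{(\gamma-1)\rho},$$

so the internal energy is determined by P and ρ alone; this is what allows system (1) to be rewritten in terms of (P, u, S), with the entropy S driven only by the dissipation Ψ[u] when k = 0.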
Theorem 1. Suppose that the initial perturbation (P₀ − p∞, u₀, S₀ − s∞) is small in H²(ℝ²) and its L¹(ℝ²) norm is bounded. Then there exists a unique global solution (P, u, S) of the initial value problems (4) and (5) satisfying the energy estimates (6). Moreover, there is a constant C₀ such that for any t ≥ 0 the solution (P, u, S) has the decay properties (7).

Remark 2.
(1) The L^q decay estimates of (7) for 2 ≤ q ≤ 4 are optimal: they coincide with the L^q decay of the heat equation and are much slower than the decay rate in ℝ³. In [3,5] it is obtained that ‖u(t)‖_{L²(ℝ³)} ≤ C(1 + t)^{−3/4}, which ensures that ∫₀^∞ ‖u(t)‖²_{L²(ℝ³)} dt is bounded. But in ℝ² we only have ‖u(t)‖_{L²(ℝ²)} ≤ C(1 + t)^{−1/2}, and therefore ∫₀^∞ ‖u(t)‖²_{L²(ℝ²)} dt is unbounded (see the elementary computation below). This is the main difficulty in the proof of existence in ℝ² (see Section 3 below). (2) Due to k = 0, we cannot gain any diffusion for S, and the L^∞ decay estimate of (7) is slower than the decay of the heat equation.

1.1. Notation. In this paper, we use L^p, H^m to denote the L^p and Sobolev spaces on ℝ² with norms ‖·‖_{L^p} and ‖·‖_{H^m} = ‖·‖_m, respectively. We use C to denote constants depending only on physical coefficients and C₀ to denote constants depending additionally on the initial data. This paper is organized as follows: in Section 2, we reformulate problems (4) and (5), introduce two main propositions, and explain that we only need to prove Proposition 4. The rest of the paper is devoted to proving Proposition 4: the energy estimate part is in Section 3, and the decay rate part is in Section 4.
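The integrability gap invoked in item (1) of the remark is elementary calculus:

$$\int_0^\infty (1+t)^{-3/2}\,dt=\Big[-2(1+t)^{-1/2}\Big]_0^\infty=2<\infty,\qquad\int_0^\infty (1+t)^{-1}\,dt=\lim_{T\to\infty}\log(1+T)=\infty,$$

so the squared 3-D bound ‖u‖²_{L²} ≲ (1 + t)^{−3/2} is time-integrable, while the squared 2-D bound (1 + t)^{−1} just fails to be.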
Reformulated System
We reformulate system (4) by changing variables as (P, u, S) → (p + p∞, α₁v, s + s∞), under which the initial value problems (4) and (5) are recast as a perturbation system, labeled (9), with nonlinear terms f and g. For any T > 0, we define the solution space X(0, T) and the corresponding solution norm X(t) in the usual way. By a standard contraction mapping argument, we have the following proposition for the local existence (see [30]).
Proposition 3 (see [3]). Suppose that the initial data satisfy the assumptions above. Then there exists a positive constant T₀, depending on X(0), such that the Cauchy problem (9) has a unique solution (p, v, s) ∈ X(0, T₀). This, together with the proposition below, is sufficient to derive Theorem 1; the proof is based on the standard continuity argument.
Proposition 4. Under the assumptions of Theorem 1, the solution of (9) is unique and satisfies the energy estimates (15) and (16) and the decay properties (17)-(20). The rest of the paper is devoted to proving Proposition 4.
Energy Estimate
For later use, we introduce some useful analytic results.
Lemma 5. Let f ∈ H²(ℝ²); then f satisfies the two-dimensional Sobolev-type inequalities, stated here in their standard form: ‖f‖_{L⁴(ℝ²)} ≤ C‖f‖^{1/2}_{L²}‖∇f‖^{1/2}_{L²} and ‖f‖_{L^∞(ℝ²)} ≤ C‖f‖_{H²}.

Lemma 6 (lower-order energy estimate for (p, v)). Under the assumption of Proposition 4, there exists an arbitrarily small δ₁ > 0, independent of ε, such that (22) holds.

Proof. Multiplying (9)₁ and (9)₂ by p and v, respectively, integrating over ℝ², and adding the results, we obtain (23). The terms ⟨p, f⟩ and ⟨v, g⟩ can be estimated as follows. For the first term, Lemma 5 together with (14) and the Hölder inequality implies (24).
For the second term, arguing as in the proof of (24), by Lemma 5, (14), the Hölder inequality, and a bound derived from (3) and (14), we obtain (28). Hence, combining (23), (24), and (28) yields (29). Next, we estimate ‖∇p‖²_{L²}. Multiplying (9)₂ by ∇p and integrating over ℝ², and using similar calculations together with Young's inequality, we obtain (32). Finally, multiplying (32) by a small but fixed δ₁ and adding the result to (29), we derive (22). This completes the proof of the lemma.
Next, we turn to the higher-order energy estimate for (p, v).
Lemma 7 (higher-order energy estimate for (p, v)). Under the assumption of Lemma 6, there is a small but fixed δ₂ > 0, independent of ε, such that (33) holds.

Proof. Applying ∇ to (9)₁ and (9)₂, multiplying by ∇p and ∇v, respectively, integrating over ℝ², and using the same calculation technique as before, we get (34). Next, applying ∇² to (9)₁ and (9)₂ and multiplying by ∇²p and ∇²v, respectively, we can deduce (39); the terms on the right-hand side are estimated one by one. Now, applying ∇ to (9)₂ and multiplying by ∇²p, and transforming the term ⟨−∇v_t, ∇²p⟩ as in the estimate of ‖∇p‖²_{L²}, we derive (42). Multiplying (42) by a small but fixed δ₂ and combining the result with (34) and (39), we finally obtain (33). This completes the proof of the lemma.
The lemma below gives the energy estimate for the entropy s.
Proof of (15) and (16). The first step is to prove (15), the energy estimate for $(p, v)$. Defining a suitable energy functional and adding (22) and (33) together, then integrating the resulting inequality directly in time, we conclude that (15) is valid when $\varepsilon$ is small enough. The second step is to prove (16), the energy estimate for $s$. Summing up (22), (33), and (43) and applying Grönwall's inequality together with (15), we obtain (16).
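The Grönwall step above is standard; since the displayed inequality it is applied to is lost in extraction, we record only the generic form being invoked (a sketch, not the paper's exact functional):

```latex
% Differential form of Gronwall's inequality:
% if y'(t) <= a(t) y(t) + b(t) on [0, T], then
y(t) \;\le\; e^{\int_0^t a(\tau)\,d\tau}\, y(0)
      \;+\; \int_0^t e^{\int_\tau^t a(r)\,dr}\, b(\tau)\,d\tau .
```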
Decay Rates
In this section, we prove the decay rates in Proposition 4.
Proof of (17)-(20). We first rewrite (33) in terms of a suitable energy functional. Next, by adding $\|\nabla(p, v)\|_{L^2}^2$ to both sides of the resulting inequality, we deduce a damped differential inequality with some constant $\alpha > 0$. To deal with $\|\nabla(p, v)\|_{L^2}^2$, we rewrite the solution of system (9) through the semigroup representation, where $H(t) = (p(t), v(t))$ and $A$ is a matrix-valued differential operator, and the solution semigroup $e^{-tA}$ has the following property (see [31,32]).
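The structure of the argument (our paraphrase; the displayed inequalities are lost) is a damped energy inequality closed by the semigroup decay: once the energy $\mathcal{E}(t)$ satisfies $\frac{d}{dt}\mathcal{E}(t) + \alpha\,\mathcal{E}(t) \le C\,\|\nabla(p,v)\|_{L^2}^2$ and the semigroup term decays algebraically, integrating against $e^{\alpha t}$ converts this into algebraic decay of $\mathcal{E}$; schematically, with an illustrative rate $(1+t)^{-2}$:

```latex
\frac{d}{dt}\!\left(e^{\alpha t}\mathcal{E}(t)\right) \le C\,e^{\alpha t}(1+t)^{-2}
\;\;\Longrightarrow\;\;
\mathcal{E}(t) \le e^{-\alpha t}\,\mathcal{E}(0) + C'\,(1+t)^{-2},
```

using $\int_0^t e^{\alpha\tau}(1+\tau)^{-2}\,d\tau \le C(\alpha)\,e^{\alpha t}(1+t)^{-2}$.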
Using the estimates above and (9), we deduce the required decay bound; therefore, (20) is finally proven.
Conclusion
The motivation of this paper is to refine the previous works [3,5]. As mentioned above, to prove the global existence of the compressible Navier-Stokes equations, ref. [3] needs complicated techniques based on the homogeneous Besov space and the hybrid Besov space in order to remove a condition in [5]. In this paper, we achieve the same goal with a much simpler method: an a priori estimate on the entropy $S$, obtained by elementary analysis, which is the key step in proving global existence for the compressible Navier-Stokes equations.
The results are obtained in the $H^2$-framework; it would be possible to consider similar problems in function spaces of lower regularity.
Data Availability
All data generated or analysed during this study are included in this published article.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2,696.4 | 2023-05-04T00:00:00.000 | [
"Mathematics"
] |
Partially reduced graphene oxide based FRET on fiber-optic interferometer for biochemical detection
Fluorescent resonance energy transfer (FRET), with naturally exceptional selectivity, is a powerful technique widely used in chemical and biomedical analysis. However, it is still challenging for conventional FRET to perform as a high-sensitivity compact sensor. Here we propose a novel 'FRET on Fiber' concept, in which a partially reduced graphene oxide (prGO) film is deposited on a fiber-optic modal interferometer, acting as both the fluorescent quencher for the FRET and the sensitive cladding for optical phase measurement due to refractive index changes in biochemical detection. The analyte-induced fluorescence recovery, with good selectivity, and optical phase shift, with high sensitivity, are measured simultaneously. The functionalized prGO film coated on the fiber-optic interferometer shows high sensitivities for the detection of metal ions, dopamine and single-stranded DNA (ssDNA), with detection limits of 1.2 nM, 1.3 μM and 1 pM, respectively. Such a prGO-based 'FRET on fiber' configuration, bridging FRET and fiber-optic sensing technology, may serve as a platform for the realization of a series of integrated 'FRET on Fiber' sensors for on-line environmental, chemical, and biomedical detection, with excellent compactness, high sensitivity, good selectivity and fast response.
Microscope image of the GSMS in the buffer. The interferometric section packaged in the buffer is held straight to avoid bending; the scale bar is 100 μm. a, Fabrication process. A singlemode-multimode-singlemode structure (SMS) was fabricated using two sections of single-mode fiber (SMF-28e, Corning) and a section of multimode fiber (MMF, core diameter 105 µm, Corning), with a multimode cavity length of ~3.2 cm. Then, the silica cladding of the SMS was etched off with HF, keeping a 90 μm core.
Firstly, GO was fabricated as follows: graphite powder (2 g) and NaNO 3 (1 g) were mixed, then added into concentrated H 2 SO 4 (80 mL) in an ice bath. Under vigorous stirring, KMnO 4 (8 g) was added slowly to keep the temperature of the mixture below 20 °C. After removing the ice bath, the mixture was stirred at 35 °C in a water bath for 2 h.
Subsequently, 240 mL of H 2 O was slowly added to the pasty, brown mixture.
Adding water to concentrated H 2 SO 4 releases a large amount of heat; therefore, the water was added slowly so that the temperature of the mixture in the ice bath stayed below 50 °C. After adding 240 mL of H 2 O, 5 mL of 30% H 2 O 2 was added to the mixture, and the color of the diluted mixture changed to brilliant yellow. After continuous stirring for 2 h, the mixture was filtered and washed with 10% HCl aqueous solution (250 mL), DI water, and anhydrous ethanol to remove residual ions. Finally, the resulting solid was dried under vacuum. The dried GO was then dissolved in 40 mL DI water with sonication for 2 h to form a uniform dispersion. The etched SMS was then immersed in the GO dispersion on a substrate. The water of the GO dispersion evaporated naturally in air at room temperature over 24 h, leaving a thin GO film coated on the fiber. The fiber coated with the GO thin film was then immersed in 100 mL VC aqueous solution (30 g/L), which was heated in a water bath to 80 °C. After being reduced by the hot VC solution for 20 min, the fiber was washed with DI water several times and finally dried on a hotplate at 50 °C.
b, Using a 633 nm laser, it is evident that the light energy transfers from the core of the SMF to the surface of the etched MMF. During this process, multimode interference occurs. By launching the 633 nm laser from the left side and the right side of the SMS, respectively, the evanescent light transmitted out of the fiber is clearly visible, which makes the device sensitive to the local environment. This method also allows the length of the MMF section to be conveniently characterized. c, Picture of the GO dispersion (yellow) and the partially reduced GO (prGO) dispersion reduced by VC (black). Then, DA at different concentrations is added in, after which the fluorescence and spectra are measured. c, Using Sensor 3 samples to detect ssDNA. Firstly, the GSMS samples are immersed in 5% Na 2 CO 3 solution for 10 min. Then they are cleaned with ample distilled water to remove excess Na + . The samples are measured first by immersion in Rh6G. Then, Cd 2+ at different concentrations is added in, after which the fluorescence and spectra are measured, with production of H + . In e, similar to d, Rh6G binds to the prGO first; when DA is added in, binding competition occurs, and Rh6G-prGO turns into DA-prGO and free Rh6G, with fluorescence recovery. In f, the prGO is functionalized by Na + first, and Rh6G binds the functionalized prGO-Na + via an ionic bond, with fluorescence quenching. However, the binding of Rh6G to DNA is much stronger than to the prGO, so that the Rh6G is taken away from the prGO by the DNA, restoring the fluorescence. In Rh6G + DI water (0), in Rh6G + analytes (1), washed by water (2), in Rh6G + analytes again (3), washed by water again (4), in Rh6G + analytes again (5). | 1,251.6 | 2016-03-24T00:00:00.000 | [
"Physics"
] |
Influence of Using Metallic Na on the Interfacial and Transport Properties of Na-Ion Batteries
Na2Ti3O7 is a promising negative electrode for rechargeable Na-ion batteries; however, its good properties in terms of insertion voltage and specific capacity are hampered by the poor capacity retention reported in the past. The interfacial and ionic/electronic properties are key factors in understanding the electrochemical performance of Na2Ti3O7, so their study is of utmost importance. In addition, although rather unexplored, the use of metallic Na in half-cell studies is another important issue, since side-reactions are induced when metallic Na is in contact with the electrolyte. Hence, in this work the interfacial and transport properties of full Na-ion cells have been investigated and compared with half-cells upon electrochemical cycling by means of X-ray photoelectron spectroscopy (conventional XPS and Auger parameter analysis) and electrochemical impedance spectroscopy. The half-cell has been assembled with C-coated Na2Ti3O7 against metallic Na, whilst the full-cell uses C-coated Na2Ti3O7 as negative electrode and NaFePO4 as positive electrode, delivering 112 Wh/kg anode+cathode in the 2nd cycle. When comparing both types of cells, it has been found that the interfacial properties, the OCV (open circuit voltage) and the electrode-electrolyte interphase behavior are more stable in the full-cell than in the half-cell. The electronic transition from insulator to conductor previously observed in a half-cell for Na2Ti3O7 has also been detected in the full-cell impedance analysis.
Introduction
Rechargeable Na-ion batteries (NIBs) are becoming one of the most promising technologies for stationary applications, while Li-ion batteries (LIBs) are more focused on the consumer electronics market and electric vehicle industry [1,2].However, in NIBs, to find an optimum negative electrode is a challenge [3].Among negative electrode materials for rechargeable NIBs, the most studied are hard carbons (HC), phosphorus/carbon composites and Na alloys with Si, Ge, Sn and Sb elements [3][4][5][6].However, they exhibit safety hazards and/or high cost which reduces the interest to use them as negative electrode in NIBs.Metal oxides are another possible negative electrode and, amongst them, the Na 2 Ti 3 O 7 is one of the most promising ones because of its good specific capacity close to 200 mAh/g, non-toxicity, abundant resources, fast processing and low cost.Moreover, it is the oxide with the lowest Na + insertion/extraction potential at 0.3 V vs. Na + /Na [7,8].The main advantage of this low insertion voltage is the possibility of delivering both high energy densities and high power densities; the former when used as a negative electrode vs. a positive electrode with a high Na + insertion/extraction potential in a full-cell, the latter by precluding Na plating at high C-rates.However, its main drawback is the capacity retention [9][10][11] for which the best result was achieved by controlling the parameters of the solid-state synthesis method, achieving ~78% of capacity retention after 100 cycles [12].There are several factors proposed to contribute to this underperformance: (i) surface corrosion, owing to the formation of Na 2 CO 3 during the synthesis; (ii) instability of the Solid Electrolyte Interphase (SEI) upon electrochemical cycling; (iii) polyvinylidene fluoride (PVdF) degradation, due to the Na + -induced dehydrifluorination reaction of the PVdF, which will lead to hydrofluoric acid (HF) formation; and (iv) poor electronic conductivity of Na 2 Ti 3 O 7 which is an insulator [12][13][14].Besides, the instability of metallic Na in organic electrolytes has been recently reported and, since metallic Na is employed as counter and reference electrodes in the so-called half-cells, this could be another factor which can influence the capacity fading [15].Iermakova et al. 
showed that in a symmetric Na/Na cell, the measured charge during reduction was larger than during oxidation, which has been related with irreversible electrolyte decomposition and/or electrical contact loss between metallic Na and current collector.Moreover, electrochemical impedance spectroscopy (EIS) experiments concluded that the interfacial resistance associated to the SEI and charge transfer (R SEI +R CT ) increased upon time in the Na/Na cell, in contrast with the interfacial resistance stability displayed by equivalent Li/Li cells.This means that the SEI layer in the Na/Na cell was continuously growing up and/or had an unstable behavior.Additionally, when the same experiments were carried out in a HC/Na cell, a similar behavior was observed and the HC-electrolyte interface became more resistive upon time, while the same interface remains constant in a HC/Li cell [15].This suggests that metallic Na can influence the stability and composition of the SEI layer as well as the electrochemical properties of the Na-intercalation material under study.Indeed, a different behavior of the studied Na-insertion material between half-and full-cell configurations was found in several works [16].In fact, in the last few years, most of the published works were focused on full-cells, often using HC as negative electrode [17][18][19][20][21]. Nevertheless, there are also a few full-cell studies using Na 2 Ti 3 O 7 as negative electrode [22,23].The most relevant result was published by Xu et al., for which a C-coated Na 2 Ti 3 O 7 / P2-Na 0.80 Li 0.12 Ni 0.22 Mn 0.66 O 2 full-cell was assembled delivering a 105 mAh/g anode after 25 cycles and 100 Wh/kg anode+cathode with an average voltage of 3.1 V.
In the current work, in order to observe the influence of metallic Na on the formed SEI/SPI (Solid Permeable Interphase, which is formed in the positive electrode and also often called SEI) layers and on the ionic/electronic conductivity of C-coated Na 2 Ti 3 O 7 , the composition and stability of the SEI/SPI layers and transport properties of the C-coated Na 2 Ti 3 O 7 have been studied by means of X-ray photoelectron spectroscopy (XPS) and EIS and compared to previous results obtained in a half-cell [13].
Galvanostatic Experiments in a Full-Cell
The galvanostatic experiments in a C-coated Na 2 Ti 3 O 7 /NaFePO 4 full-cell (the former called NTO-C-FC hereinafter) were performed using a three-electrode configuration.Figure 1a shows the voltage profile of the NTO-C-FC (black curve), NaFePO 4 (green curve) and the curve of the full-cell (blue curve).In Figure 1b, the comparison between the discharge and charge capacity of the C-coated Na 2 Ti 3 O 7 electrodes in half-cells and full-cells is gathered.The irreversibility of the first cycle is larger in a full-cell (233.7 mAh/g anode ) than in a half-cell (100.1 mAh/g), which might be because a thicker SEI layer is formed; however, this will be later discussed on the basis of the XPS results.Nevertheless, in the following cycles, the negative electrode delivers similar capacities for the full and half-cell; showing a stable capacity around 100 mAh/g anode in the 15th cycle.The energy density of the full-cell has been calculated to be 112 Wh/kg anode+cathode in the 2nd cycle.Regarding the coulombic efficiency, the full-cell consistently reported slightly lower values than the half-cell.However, the full-cell coulombic efficiency increased gradually upon cycling until >90%, which can be related with the cell configuration: the pressure and contact between negative and positive electrode is more critical in three-electrode cell configuration than in two-electrode one.
Study of the SEI/SPI Layers by Conventional XPS Experiments
The SEI and SPI layer evolution upon electrochemical cycling of NTO-C-FC and NaFePO 4 electrodes, when they are assembled in a full-cell, has been investigated by XPS at different charge states (open circuit voltage (OCV) (green), 1 st Na + insertion (orange) and 1 st Na + extraction (pink)) as highlighted in Figure 1a.
Figure 2 shows the C 1s (a) and O 1s (b) photoemission lines of NTO-C-FC and NaFePO 4 electrodes. Both photoemission lines provide information about the electrolyte decomposition products and the stability of the SEI (NTO-C-FC)/SPI (NaFePO 4 ) layers upon electrochemical cycling. The different components of the C-based compounds in the C 1s spectra have been assigned on the basis of previous XPS studies of C-based materials [24]. The main component of the C 1s peak in the OCV electrode of NTO-C-FC appears at 284.4 eV and corresponds to a graphitic-like compound, in contrast with the C-coated Na 2 Ti 3 O 7 electrode at OCV when it is assembled in a half-cell (see Figure S1 in the Supplementary Information), for which the graphitic component is less intense due to the formation of a surface layer resulting from electrolyte decomposition. This behavior is in agreement with our previous results on the non-C-coated Na 2 Ti 3 O 7 electrode at OCV, also assembled in a half-cell (hereinafter called NTO-HC) [13]. Hence, this result means that in the NTO-C-FC electrodes, when metallic Na is avoided, the decomposition of the electrolyte at OCV is almost negligible on the surface of the Super C65 and/or C-coating; but when metallic Na is present, the electrolyte decomposition starts before any current is applied to the cell, which can negatively influence the electrochemical properties of Na 2 Ti 3 O 7 . After the 1st Na + insertion in the NTO-C-FC electrode, the intensity of the graphite peak at 284.4 eV is reduced due to the SEI layer formation on top of Super C65 and/or C-coating. Nevertheless, the graphite component does not completely vanish, as happens in the NTO-HC electrode [13]. Hence, there are two possibilities: (i) the thickness of the SEI layer formed in NTO-C-FC is lower (<5 nm) than in the NTO-HC electrode and/or (ii) the SEI layer does not completely cover the carbon-based species, namely Super C65 and the C-coating. Besides, upon Na + extraction, the graphitic-like signal slightly increases, suggesting the possible dissolution of some SEI species and/or cracking of the SEI layer.
Moreover, during Na + insertion, when the SEI layer is forming, peaks at ~286 eV, 288 eV and 290-291 eV appear, which correspond to poly(ethylene oxide) oligomer (PEO) from ethylene carbonate (EC) polymerization, sodium alkyl carbonates (NaCO 3 R, R = different long-chain alkyl groups), sodium carbonate (Na 2 CO 3 ) and/or -CF 2 group from PVdF (the last two compounds are overlapped), respectively (see Table 1) [17,25,26].During Na + extraction, the signal of PEO (~286 eV) decreases only marginally in NTO-C-FC when compared with the behavior observed in the NTO-HC electrode, which displays a large PEO decrease.This difference could be due to the fact that in the NTO-C-FC case, PEO is dissolving in the electrolyte at a much lower rate, therefore leading to a higher stability of the SEI layer when a full-cell is considered.On the other hand, the C 1s photoelectron spectra of NaFePO 4 electrodes (Figure 2a, right panel) proves that the SPI layer is also formed on the positive electrode, although NaFePO 4 operates inside the electrochemical stability window of the electrolyte [27].However, if compared with the SEI layer, the thickness of the SPI layer is almost negligible, at least on the top of Super C65 additive, since the intensity of the graphitic-like signal (284.4 eV) remains almost constant upon cycling as confirmed by the atomic concentration values of graphitic-like signal gathered in Table S1 of Supplementary Information.Moreover, analogously to the SEI layer of the NTO-C-FC electrode, the thin SPI layer is mainly composed by PEO (~286 eV) Na 2 CO 3 (290-291 eV) and NaCO 3 R (288 and 290-291 eV).During Na + insertion into NaFePO 4 , the concentration of graphitic-like compounds, as well as the signal corresponding to PEO, remain almost constant, demonstrating that the SPI suffers a small variation upon electrochemical cycling on the surface of Super C65 [28].The trend of Na 2 CO 3 concentration is difficult to be observed due to the overlapping with the -CF 2 signal from PVdF.After quantification of the different species, it seems that the NaCO 3 R is the compound which suffers more change upon cycling, displaying a concentration increase upon Na + extraction from NaFePO 4 .
Regarding the O 1s photoelectron spectra of NTO-C-FC electrode (Figure 2b, left panel), the peak at ~531 eV, which corresponds to Na 2 Ti 3 O 7 , is detected in all states of charge for NTO-C-FC electrodes with no changes in the atomic concentration upon electrochemical cycling.The photoemission peak deconvolution is illustrated in Figure S2 of Supplementary Information and the obtained concentration values are gathered in Table S2.Hence, taking into account the evolution of the Na 2 Ti 3 O 7 concentration in the O 1s spectra and also the behavior of the graphitic-like component in the C 1s spectra, it can be concluded that the SEI layer is predominantly formed on the surface of Super C65 and/or C-coating with a thickness lower than 2.5 nm, which was estimated from the inelastic mean free path (IMFP) of C 1s photoelectrons [29], and it is in agreement with the values calculated by Tanuma et al. [30].This phenomenon is already observed in other intercalation materials, for which the SEI layer is mainly formed on the surface of carbon and not on the active material [28].Additionally, such a thin SEI layer does not fully account for the high irreversible capacity observed in Figure 1a.Therefore, additional side-reactions in the cell are governing the observed irreversibility.
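The overlayer-thickness estimate quoted above follows the standard exponential attenuation model for photoelectrons. The sketch below is illustrative only; the IMFP value and intensity ratio are placeholder assumptions, not the authors' numbers:

```python
import math

def overlayer_thickness(i_ratio, imfp_nm, theta_deg=0.0):
    """Estimate an overlayer thickness from XPS substrate-signal attenuation.

    Uses the Beer-Lambert-type law I = I0 * exp(-d / (lambda * cos(theta))),
    where lambda is the inelastic mean free path (IMFP) of the photoelectrons
    and theta is the emission angle from the surface normal.
    """
    return -imfp_nm * math.cos(math.radians(theta_deg)) * math.log(i_ratio)

# Placeholder values: a C 1s IMFP of ~3 nm and a substrate signal attenuated to 45%.
print(f"Estimated thickness: {overlayer_thickness(0.45, 3.0):.2f} nm")
```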
The NTO-C-FC electrode at OCV shows a peak at ~532 eV from residual Na 2 CO 3 due to the interaction with moisture.That component increases during Na + insertion on C-coated Na 2 Ti 3 O 7 along with the formation of the SEI layer [17].Furthermore, some NaCO 3 R (~534 eV and ~532.5 eV) and PEO (~533 eV) species can be observed [17,25].However, it is difficult to unambiguously determine the PEO and alkyl-carbonate evolution due to the small differences in binding energy and overlapping of the peaks.Even so, the quantification of the species suggests that Na 2 CO 3 is formed upon cycling whilst the NaCO 3 R concentration remains almost constant and the PEO is dissolved in the electrolyte.
According to the O 1s photoemission line, the surface of NaFePO 4 electrode at OCV (Figure 2b, right panel) shows a peak at ~531 eV, which corresponds to PO 4 3-groups from the active material and a less intense contribution from Na 2 CO 3 [25,31].During the electrochemical cycling, the peak corresponding to Na 2 CO 3 increases, while the PO 4 3-component slightly decreases, being still visible, suggesting that the formed SPI is very thin (much lower than 2.5 nm; also estimated from the IMFP of C 1s photoelectrons) [30].F 1s spectra are shown in Figure 3a, where the experimental results prove that the main contribution from F-based species at OCV in the NTO-C-FC electrode (Figure 3a, left panel) corresponds to -CF 2 (688 eV) from PVdF, which confirms the stability of the electrolyte at this state of charge, in agreement with the conclusions extracted from the C 1s data [26].During Na + insertion, the PVdF component is reduced and a peak at ~685 eV is developed, which is assigned to NaF and its formation is due to the dehydrofluorination of PVdF when it reacts with Na + [32][33][34].However, the -CF 2 signal is still present, suggesting that the SEI layer is thin, in agreement with the previous results obtained from C 1s and O 1s peaks.Moreover, the intensity of the NaF component slightly increases during Na + extraction in Na 4 Ti 3 O 7 , probably due to the fact that more NaF is formed upon Na + extraction than upon insertion, which would agree with the behavior that is observed in other Na-and Li-based cathode materials [35,36].
Regarding the F 1s spectra of NaFePO 4 electrodes (Figure 3a, right panel), the main component is the -CF 2 signal (688 eV) at all states of charge, with an almost constant intensity, which agrees with a stable SPI layer.
Finally, Cl 2p spectra (Figure 3b) are where eventual NaClO 4 reduction reactions are detected.In contrast to what is observed for NTO-HC electrodes, in the case of NTO-C-FC electrodes, NaClO 4 does not exhibit any reduction reaction at OCV [13].Besides, during Na + insertion into Na 2 Ti 3 O 7 , NaCl is formed in the SPI layer, while during Na + extraction, NaCl is found in the SEI layer.This is proof of the NaCl migration from one electrode to the other upon electrochemical cycling.
Auger Parameter Determination
The Na Auger parameter has been determined for NTO-FC-C and NaFePO 4 electrodes, so it can be compared with the results already obtained for NTO-HC electrodes [13].The Auger parameter allows to have a complete picture of the SEI/SPI layers which cannot be obtained from conventional XPS analysis due to the overlapping of components and the partial surface charging of the electrodes under study [37].In this case, the Auger parameter is calculated for Na; therefore, it will only provide information about the composition of the outermost surface region.This limitation relies in the definition of the Auger parameter itself, which is the energy difference (eV) between the Na 1s photoemission line and the Na KL 23 L 23 Auger line.Owing to the high binding energy of the Na 1s photoemission line, which will result in low kinetic energy photoelectrons with a very short IMFP, only the first 1-5 nm of the surface will be proved [30].The exact positions of the Na 1s (binding energy) and Na KL 23 L 23 (kinetic energy) peaks have been determined using the CasaXPS software [38].
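As a small illustration of the definition just given, the sketch below computes an Auger parameter from a measured photoemission binding energy and Auger kinetic energy; the numerical inputs are placeholders of the right magnitude, not values from this study. (Conventions vary: the modified Auger parameter is usually quoted as the sum of the Na KL23L23 kinetic energy and the Na 1s binding energy, which makes it independent of surface charging.)

```python
def modified_auger_parameter(be_na1s_eV: float, ke_auger_eV: float) -> float:
    """Modified Auger parameter alpha' = KE(Na KL23L23) + BE(Na 1s).

    Static charging shifts the binding energy up and the kinetic energy down
    by the same amount, so the sum is free of charge referencing, which is
    why it is used to fingerprint Na compounds in SEI/SPI layers.
    """
    return ke_auger_eV + be_na1s_eV

# Placeholder numbers for a Na salt (not data from this paper):
print(modified_auger_parameter(be_na1s_eV=1071.5, ke_auger_eV=990.0))  # ~2061.5 eV
```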
The Auger parameters determined at OCV, and after the 1 st Na + insertion and extraction on NTO-C-FC and NaFePO 4 , are shown in Figure 4.This figure reveals that, at all states of charge, the main component of the outermost SEI and SPI layers is Na 2 CO 3 (~2061.3eV) [39,40], with some minor traces of NaCO 3 R, in agreement with analyzed data from O 1s photoelectron spectra (Figure 2b).Hence, the composition of the outermost region of SEI and SPI layers is found to be stable, which is in agreement with a slight variance of the spectra between the 1 st Na + insertion and extraction states and with the almost constant values of SEI resistance (R SEI ) obtained by EIS that will be discussed in the following section.
Study of the Ionic/Electronic Properties by EIS
The Nyquist plots of the NTO-C-FC electrode upon the first Na + insertion into Na 2 Ti 3 O 7 are shown in Figure 5, where the same behavior previously measured for the NTO-HC electrode can be observed [14]. The semicircle observed at frequencies below 25 Hz corresponds to the bulk electronic resistance (R elec ). This electronic resistance reveals a very large arc (R elec = 46921.2 Ohm/mg, black squares) when no Na + is inserted into the electrode. However, as Na + ions are inserted into Na 2+x Ti 3 O 7 (0 < x ≤ 2) and the voltage decreases, R elec is reduced (see Table 2), getting closer to the real axis and developing the straight line which corresponds to the diffusion of Na + into the crystal, as confirmed by the low radius of the arc (see Figure 5, dark blue hexagon). This behavior was linked to the electronic transition from insulator to conductor during the insertion of Na + in NTO-HC and, consistently, NTO-C-FC also exhibits the same transition [14]. Figure 5b is a zoom-out of the impedance data presented in Figure 5a, for which the charge-transfer (R CT ) and SEI layer (R SEI ) resistances appear overlapped above 25 Hz, leading to a slightly distorted feature, at least at low potential values. The impedance data shown in Figure 5 have been fitted with the Boukamp software [41] using the same equivalent circuit used to fit the EIS data collected from the NTO-HC electrode [14]. However, for a good fit of the impedance of the NTO-C-FC electrode, it was necessary to introduce an extra parallel combination of a resistance and a capacitor, labelled as (RC) in Boukamp's notation. This RC element is related to an additional resistance associated with Na + diffusion through the C-coating layer [42]. The obtained values are gathered in Table 2. The used equivalent circuit fits the impedance data in Figure 5 very well, and the fitted values show that (i) R elec abruptly decays during Na + insertion and/or as the voltage decreases, due to the transition towards an electronic conductor; (ii) R C-coating exhibits higher values at the beginning of the electrochemical cycling than at the end of the Na + insertion, since the SEI layer is forming on the surface of the C-coating and, hence, a high resistance is observed during this formation process (at lower voltage the SEI layer becomes more homogeneous and/or smooth, decreasing R C-coating , as confirmed by the α value, which goes from 0.88 to 0.96, α = 1 corresponding to an ideally smooth surface); (iii) R CT is more stable in NTO-C-FC than in NTO-HC electrodes [14], most probably because the C-coating protects the active material and favors the charge-transfer kinetics [42]; (iv) R SEI shows almost constant resistance values during the first insertion in NTO-C-FC, in contrast to what is observed for the NTO-HC electrode, suggesting that the SEI layer in a full-cell is more stable than when metallic Na is used, in agreement with the XPS results; and, finally, (v) R sol exhibits constant values. Besides, as occurs with the NTO-HC electrode, NTO-C-FC undergoes an electronic transition from insulator to conductor, as demonstrated by the relationship between R elec and R CT . At potentials above the insertion plateau (~0.3 V vs.
Na + /Na), the ratio R elec :R CT is larger than 1, whilst below the insertion plateau the R elec :R CT ratio is smaller than 1. This means that at the beginning of the intercalation process the kinetics are limited by the bulk electronic conductivity, showing insulating behavior, but once the amount of inserted Na + increases significantly, the material becomes an electronic conductor and the kinetics are limited by the interfacial charge-transfer step.
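To make the equivalent-circuit discussion concrete, the sketch below evaluates the impedance of a series circuit of the kind described (electrolyte resistance plus parallel R||CPE elements); the element values are arbitrary placeholders, not the fitted values of Table 2, and the exact circuit topology used by the authors is only paraphrased here:

```python
import numpy as np

def z_parallel_r_cpe(r, q, alpha, omega):
    """Impedance of a resistor in parallel with a constant-phase element.

    Z_CPE = 1 / (Q * (j*omega)**alpha); alpha = 1 recovers an ideal capacitor,
    matching the alpha values (0.88-0.96) discussed in the text.
    """
    z_cpe = 1.0 / (q * (1j * omega) ** alpha)
    return r * z_cpe / (r + z_cpe)

def cell_impedance(omega, r_sol, elements):
    """Series combination: electrolyte resistance plus R||CPE sub-circuits."""
    z = np.full_like(omega, r_sol, dtype=complex)
    for r, q, alpha in elements:
        z += z_parallel_r_cpe(r, q, alpha, omega)
    return z

# Placeholder parameters (ohm, F*s^(alpha-1), -): SEI, charge transfer, C-coating.
freqs = np.logspace(5, -2, 200)   # 100 kHz down to 10 mHz, as in the PITT scan
omega = 2 * np.pi * freqs
z = cell_impedance(omega, r_sol=5.0,
                   elements=[(20.0, 1e-5, 0.92), (80.0, 1e-4, 0.88), (15.0, 1e-6, 0.96)])
# A Nyquist plot would show Re(z) vs -Im(z); here we just print the extremes.
print(z[0], z[-1])
```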
Synthesis of C-Coated Na 2 Ti 3 O 7 and NaFePO 4
Na 2 Ti 3 O 7 was synthesized by the solid-state method, mixing TiO 2 anatase (Alfa Aesar) and NaOH (Fisher Chemical) in stoichiometric amounts and heating up to 750 °C for 20 h (sample NTO) [9]. The synthesis was carried out in air, but the atmosphere was changed to Ar while cooling. The sample was coated with carbon by mixing NTO with phthalocyanine (Acros Organic) in a 1:1 weight ratio and pyrolyzing at 700 °C for 5 h under Ar atmosphere (sample NTO-C-FC). More details can be found in ref. [12].
NaFePO 4 was synthesized by chemical delithiation of commercial C-coated LiFePO 4 , mixing with NO 2 BF 4 (Sigma Aldrich) in a molar ratio of 1:2.5 in CH 3 CN (Sigma Aldrich) obtaining FePO 4 .The chemical sodiation of FePO 4 was carried out by mixing with NaI (Sigma-Aldrich) in a molar ratio of 3:1 also in CH 3 CN [43].
Electrochemical Experiments
For the electrochemical study of the C-coated Na 2 Ti 3 O 7 /NaFePO 4 full-cell: NTO-C-FC electrodes were prepared by mixing 70% active material with 20% carbon Super C65 (Timcal) and 10% PVdF (Solef) in N-methyl-2-pyrrolidone (NMP, Sigma Aldrich). NaFePO 4 electrodes were prepared in the weight ratio of 80% active material, 10% carbon Super C65 (Timcal) and 10% PVdF (Solef), also dissolved in NMP. Both slurries were cast on battery-grade aluminium foil and the laminates were dried under vacuum overnight at 120 °C. The electrodes were punched taking into account a mass ratio of 1:4 (NTO-C-FC:NaFePO 4 ) and pressed at 5 tons before being assembled into a full-cell in an Ar-filled glove box. The galvanostatic performance was first tested in three-electrode Swagelok cells, using metallic Na as the reference electrode, in order to check that the mass ratio was correct. For the subsequent galvanostatic measurements, only two-electrode cells, without a reference electrode, were considered. The separator was glass fibre (Whatman GF B 55) and the electrolyte was 1 M NaClO 4 in EC:PC (propylene carbonate). The applied current density was 0.1C, based on the theoretical capacity of Na 2 Ti 3 O 7 (1C = 178 mA/g), in the voltage window of 0.2-3.9 V vs. Na + /Na.
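For reference, the applied current follows directly from the stated rate convention (1C = 178 mA/g of Na 2 Ti 3 O 7 ); the electrode mass below is a made-up example value:

```python
def applied_current_mA(c_rate: float, active_mass_mg: float,
                       capacity_mA_per_g: float = 178.0) -> float:
    """Current for a given C-rate, using the theoretical capacity of Na2Ti3O7."""
    return c_rate * capacity_mA_per_g * (active_mass_mg / 1000.0)

# Example: 0.1C with a hypothetical 3 mg loading of active material.
print(f"{applied_current_mA(0.1, 3.0):.3f} mA")  # 0.053 mA
```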
EIS was performed through potentiostatic intermittent titration technique (PITT) for which the impedance data were collected every 25 mV; a sinusoidal perturbation of 5 mV was applied in the frequency range of 100 kHz-5 mHz and 4 h of equilibrium conditions were employed at constant potential.The impedance dispersion data were fitted by Boukamp's Equivalent Circuit software [41].
All electrochemical measurements were carried out at room temperature in a Bilogic VMP3 Multi-Channel Potentiostat/Galvanostat.
XPS Experiments
The SEI and SPI layers of NTO-C-FC and NaFePO 4 , respectively, were studied by XPS with a Phoibos 150 spectrometer at different states of charge (OCV, 1 st Na + insertion and 1 st Na + extraction states) using a non-monochromatic Mg Kα (hν = 1253.6eV) X-ray source and analyzing the C 1s, F 1s, O 1s, Cl 2p and Na 1s photoemission lines as well as the NaKL 23 L 23 Auger peaks.The electrode preparation was carried out by stopping the full-cell at the required potential, rinsing the electrodes with PC and drying them in an Ar glove box before being inserted into the XPS vacuum chamber by an Ar-filled transfer system.The spectra were recorded with high resolution scans at low power (100 W, 20 eV pass energy, and 0.1 eV energy step).The calibration of the binding energy was performed taking into account as reference the graphitic signal at 284.4 eV and the corrections from the Auger parameter analysis [44,45].
Conclusions
The promising Na 2 Ti 3 O 7 negative electrode material delivers 112 Wh/kg anode+cathode in the 2nd cycle and 100 mAh/g anode after 15 cycles when it is assembled in a full-cell using NaFePO 4 as positive electrode. The SEI layer formed on the C-coated Na 2 Ti 3 O 7 , in the absence of metallic Na, is composed of PEO, Na 2 CO 3 and NaCO 3 R, with an overall thickness below 2.5 nm. The SEI is more stable upon electrochemical cycling when tested in a full-cell than in a half-cell. The main difference is observed at OCV, where the electrolyte decomposition reactions do not appear in the full-cell, in contrast to the half-cell, for which the SEI is already formed once the electrode is in contact with metallic Na. Hence, metallic Na acts as a catalyst for the decomposition reactions of the electrolyte. Regarding the SPI layer formed on NaFePO 4 , it can be concluded that it has the same composition as the SEI layer, being, however, more stable and thinner. The ionic/electronic properties were also studied, and the stable R SEI values demonstrate the stability of the SEI layer in the full-cell, in agreement with the XPS results and in contrast to what is observed for the half-cell. Moreover, EIS experiments also showed the electronic transition from insulator to conductor that the Na 2 Ti 3 O 7 negative electrode undergoes upon the Na + insertion reaction. Hence, taking into account the results presented here, it can be concluded that metallic Na is not a good counter electrode, and it is necessary to find new Na-intercalation materials to use as counter electrodes in order to investigate the real electrochemical, interfacial and transport properties of the Na-intercalation electrode materials under study.
Figure 1 .
Figure 1.(a) Voltage profile of the full-cell NTO-C-FC/NaFePO 4 (blue curve), negative electrode NTO-C-FC (black curve) and positive electrode NaFePO 4 (green curve); orange/pink points highlight the charge states where X-ray photoelectron spectroscopy (XPS) experiments have been performed.(b) Comparison between half-cell (red) and full-cell (black) of the capacity and coulombic efficiency determined by the C-coated Na 2 Ti 3 O 7 negative electrode active material.
Figure 4 .
Figure 4. Na Auger parameter at different states of electrochemical cycling (OCV and 1 st Na + insertion and extraction) of NTO-C-FC and NaFePO 4 samples.Reference values are obtained from [39,40] and measured in our laboratory (*).
Figure 5 .
Figure 5. Nyquist plots of NTO-C-FC upon 1 st Na + insertion at 1.0 V, 0.90 V, 0.77 V, 0.65 V, 0.52 V, 0.40 V, 0.27 V, 0.15 V and 0.05 V vs. Na + /Na.(a) Impedance data in all frequency ranges (100 kHz to 5 mHz) and (b) zoom-out of the impedance data selected in the red area of the left panel.
R elec = bulk electronic, R C-coating = C-coating, R CT = charge-transfer, R SEI = SEI layer and R sol = electrolyte resistances.
Table 1 .
Assignments of the binding energy of the Solid Electrolyte Interphase (SEI) layer species of C 1s and O 1s spectra.
Table 2 .
Resistance values from the impedance fits upon the 1 st Na + insertion into Na 2 Ti 3 O 7 and the χ 2 values.
Table S1 :
The calculated concentration values (at.%) of the species observed in the C 1s spectrum at different charge states of NaFePO4 electrodes.
Table S2 :
The calculated concentration values (at.%) of the species observed in the O 1s spectrum at different charge states of NTO-C-FC electrodes. | 8,943.2 | 2017-05-10T00:00:00.000 | [
"Materials Science"
] |
Dataset on coherent control of fields and induced currents in nonlinear multiphoton processes in a nanosphere
We model a scheme for the coherent control of light waves and currents in metallic nanospheres which applies independently of the nonlinear multiphoton processes at the origin of the waves and currents. Using exact mathematical formulae, we calculate numerically, with a custom Fortran code, the effect of an external control field which enables us to change the radiation pattern and suppress radiative losses, or to reduce absorption, enabling the particle to behave as a perfect scatterer or as a perfect absorber. Data are provided in tabular, comma-delimited value format and illustrate narrow features in the response of the particles that result in high sensitivity to small variations in the local environment, including subwavelength spatial shifts.
Background & Summary
Recently several groups have been able to enhance light-matter interaction processes by controlling the near and far field optical response of nanostructures. Control methods include nonlinear 1 and linear control based on pulse shaping 2,3 , combination of adaptive feedbacks and learning algorithms 4 , as well as optimization of coupling through coherent absorption 5 , time reversal 6 and phase and polarization control 7 . Spatiotemporal control of surface plasmons in nanosystems has been described using ultrashort pulses [8][9][10][11] . Interference between fields was proposed in quantum optics as a way to suppress losses in beam splitters 12 and has been recently applied to show control of light with light in metamaterials 13,14 , and in graphene films 15 . Coherent control of second-harmonic generation using a second pump beam has been recently demonstrated numerically in particles with cylindrical symmetry 16 . For spheres, it was shown in ref. 17 that the directionality of the emission obtained combining two pump beams results from selection rules that depend on the order of specific process and on the size of the particles.
In a recent paper 18 , we model a scheme for the coherent control of scattering and absorption patterns in a nanosphere in a uniform background which applies independently of the multiphoton processes at the origin of scattering and absorption, as long as the pump beam is not depleted. We use a control beam coherent with the radiation produced by the nonlinear process: a simple way to realize this is by driving two nonlinear processes of the same order with the same pump, using the output of one of them to control the other. Using the Huygens-Fresnel principle, formally proved in the Stratton-Chu theorem 19 , we can understand this scheme in terms of the formation of equivalent surface currents, which are combinations of physical surface currents proportional to surface polarizations and tangent field components. These are due to the control field, incident on the surface of the sphere from the outside, and to the field generated by the nonlinear volume polarization, when this is present, which is incident on the surface from the inside. By forming equivalent surface currents that can radiate only outside or inside the particle, we induce the particle to behave as a perfect scatterer or a perfect absorber on the controlled modes. These equivalent surface currents depend linearly on the control field, so this is a linear control scheme.
The control is extremely sensitive to phase variations and produces a reduction of the absorption and variations in the scattered energy of several orders of magnitude. These features can be applied to detection of changes in the position of the particle far smaller than the particle itself, suppression of radiative losses, sensing of variations in the electric permittivity, ϵ, and magnetic permeability, μ, and optical switching.
For applications in which substrates are used, the theory as it stands can be applied only when the index between the substrate and the medium that contains the spheres is matched, and the thickness of the substrate is such that reflections from the lower face of the substrate and guided modes in the substrate can be neglected. When these conditions are not met, substrates remove the reflection symmetry and perturb the modes of the particle reducing the degeneracy among them 20,21 . From the point of view of applications, this is actually a beneficial effect, as degeneracy makes selecting the correct angles of incidence and observation more difficult and requires the use of a larger number of control beams. On the other hand, the theory will have to be performed using the modes of the particles in presence of the substrate and not the Mie's modes used in this paper.
When pump and control beams with broad spatial profiles are used, the relative phase differences are (almost) spatially periodic over the cross section of the pump, so that the optimal control conditions will be formed on an array of spatial points. On points in this array that are separated by at least a wavelength, the theory used here can be extended also to the control of arrays of spheres in which the interaction among different spheres is negligible. Control of arrays of interacting particles is also possible, but in that case the theory will have to be adapted by considering the modes of the array.
With appropriate control beams and pump, one can control the directionality of nonlinearly generated electromagnetic waves not only in a single sphere, but also in a regular array of spheres, for which both the radiation patterns and the spatial positions could be determined. This can be very useful for applications such as optical antennae and for surface enhanced spectroscopy, providing a reference of regularly spaced optical nano beacons for the localization of molecules.
The data stored in the repository (access details are provided in Data Citation 1) allow one to verify and test the results shown in the figures published in the Scientific Reports paper 18 and in this paper.
Methods
The theory behind this work is explained in detail in a Scientific Reports paper by the same authors 18 and relies on the ability to determine the effect of both surface and volume nonlinearities by considering the boundary conditions at the surface of the sphere. Here we give the equations necessary to reproduce the results published in that paper. Surface 22,23 and volume nonlinearities appear in the boundary conditions at frequency ω as in equations (1,2,3), and analogously for the other fields. $E_i$ and $E_s$ are the combinations of particle modes (solutions of the homogeneous equations without nonlinear polarizations) that fulfill the boundary conditions. The modes' amplitudes depend upon the left-hand sides of equations (1,2,3), which, for any $E_B$, $H_B$ and $P_S$, enable us to find the form of $E_c$, $H_c$ necessary to control the interaction of light with the particle through the amplitudes of the internal and scattering modes, regardless of the nature of the underlying nonlinear processes.
For the sake of simplicity, we concentrate here on the control of two modes and outline later how the theory generalizes to an arbitrary number of modes. As a consequence of the rotational invariance, the only modes that are spatially correlated at the surface of a sphere are internal and scattering electric or magnetic multipoles with the same value of $l$ (total angular momentum) and $m$ (angular momentum along the direction of propagation of the pump). Electric (magnetic) multipoles have magnetic (electric) fields with null radial component 24 . We recall that there are another two types of multipolar waves for the external medium that are relevant to this work: incoming waves, which propagate inward and have a divergence at the center, and regular waves, which are used to expand waves with amplitudes bounded everywhere, such as plane waves. All types of electric or magnetic multipoles with the same indexes $l$ and $m$ have the same angular dependence in spherical coordinates 24 but different radial dependence. In our notation, $f_c$ and $f_{NL}$ are the surface vector functions of the control field and of the nonlinear (NL) sources that appear in the boundary conditions, equations (1,2,3), for a pump of amplitude $a_p = 1$ in arbitrary units. The real amplitude and phase of $f_c$ are encoded in the complex amplitude $a_c$. For any pair of internal and scattering modes, $i_{lm}$, $s_{lm}$, for which we adopt the same notation as for $f_c$, the amplitudes $a_{i_{lm}}$, $a_{s_{lm}}$ are given by the corresponding overlap formula, where the scalar product indicates the sum of the overlap integrals (i.e., the spatial correlations) of all the components, with $a_{NL} = (a_p)^N$ the amplitude of $f_{NL}$ and $N$ the order of the nonlinear process. Note that $s_{lm}$, $i_{lm}$ are either transverse electric or transverse magnetic, but for ease of notation we do not specify which type they are. The biorthogonal mode 25 $s'_{lm}$ ($i'_{lm}$) is orthogonal to all modes other than $s_{lm}$ ($i_{lm}$). For spheres, the biorthogonal modes can be found analytically; they depend on all internal and scattering modes with the same $l$ and $m$, correlated at the surface of the sphere, according to the stated formula with $u_1 = s_{lm}$, $u_2 = i_{lm}$, where $G^{-1}$ is the inverse of the (Gram) matrix with elements $u_{ij} = (u_i \cdot u_j)$ and we sum over repeated indexes. When longitudinal modes are present 26 , we can include them simply by defining $u_3$ as the longitudinal mode spatially correlated to $s_{lm}$ and $i_{lm}$. Generalizing equation (6) to include any number of modes and external incident waves is straightforward, as the amplitude of each mode requires only the scalar product of its biorthogonal mode with the sum of all the fields incident on the surface and the surface polarization. For any set of incident electromagnetic waves, $\{f^{ex}_j\}$, the first column of the matrix in equation (6) is replaced by two matrices: the matrix $S$ with elements $S_{ij} = s'_i \cdot f^{ex}_j$ and the matrix $I$ for the internal modes with elements $I_{ij} = i'_i \cdot f^{ex}_j$. When $f_{NL} = 0$, the amplitudes of the modes are given by the product of these two matrices with the amplitudes of the incident waves. When $f_{NL} \ne 0$, the amplitudes of the modes are given by the product of the augmented matrices $\tilde{S}$ and $\tilde{I}$ with a column vector containing the amplitudes of the incident waves and of $f_{NL}$, where $\tilde{S}$ ($\tilde{I}$) is obtained by adding to $S$ ($I$) the column $s'_i \cdot f_{NL}$ ($i'_i \cdot f_{NL}$). Control of the amplitudes of $N$ modes can be achieved with $N - 1$ control beams when $f_{NL} \ne 0$ and the matrix $[\tilde{I}; \tilde{S}]^T$ is invertible.
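As a numerical illustration of the Gram-matrix construction and of the control condition it implies, the sketch below builds biorthogonal vectors for a small set of discretized surface modes and solves for the control amplitude $a_c$ that cancels a chosen scattering mode; the mode vectors here are random stand-ins, not actual Mie surface functions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for surface vector functions sampled at quadrature points:
# u[0] ~ s_lm (scattering), u[1] ~ i_lm (internal), spatially correlated modes.
u = rng.standard_normal((2, 50)) + 1j * rng.standard_normal((2, 50))

# Gram matrix u_ij = (u_i . u_j) and biorthogonal modes, so that
# (u'_i . u_j) = delta_ij, as in the text's formula for s'_lm and i'_lm.
G = u.conj() @ u.T
u_bi = np.linalg.inv(G).T @ u
assert np.allclose(u_bi.conj() @ u.T, np.eye(2))

s_bi = u_bi[0]                        # biorthogonal of the scattering mode
f_nl = rng.standard_normal(50) + 0j   # stand-in for the nonlinear source term
f_c = rng.standard_normal(50) + 0j    # stand-in for the control-field surface function

# Scattering amplitude a_s = s' . (a_c * f_c + f_NL); choose a_c to suppress it.
a_c = -(s_bi.conj() @ f_nl) / (s_bi.conj() @ f_c)
print(abs(s_bi.conj() @ (a_c * f_c + f_nl)))  # ~0: the controlled mode is switched off
```

The linearity of the amplitudes in the control field is what makes this a single division rather than an optimization, which is the sense in which the scheme is a linear control scheme.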
Code availability
We used the Fortran90 code Sphere.f90, version 347, which calculates the formulae given above evaluating spherical Bessel and Hankel functions using subroutines supplied with the book A. Doicu et al., Light Scattering by Systems of Particles, Springer (2006) 27 . We are happy to pass on the part of this code that we wrote to other researchers who can then use it if they have the required subroutines. We cannot provide these subroutines due to copyright restrictions.
Data Records
Numerical data have been generated with a custom Fortran90 code and are available on PURE, the repository of the University of Strathclyde (see Data Citation 1). For the control of a gold sphere, we have used the previous analytical equations and the Lorentz-Drude model for the dielectric function of gold. Data are given in tabular, comma delimited value format, with columns named according to the quantity plotted in the corresponding figures of the Scientific Reports paper 18 and in this paper.
LD.csv contains data for the dielectric function of gold calculated with a Lorentz-Drude model 28. We control the internal and scattering modes i_10 and s_10 of the electric dipole to generate the data in Fig2a.csv and Fig2b.csv. In Fig2a.csv the amplitude of the control beam is chosen so that the amplitude of s_10, a_{s_10}, can vanish at the appropriate phase; the data columns are the intensity of the field scattered in a direction orthogonal to both pump and control: other multipoles do not emit in this direction, so the intensity has the same dependence as the amplitude a_{s_10} and shows an extremely sharp variation. The ratio of the amplitudes a_{s_10} and a_{i_10} shows that we find the condition for a perfect scatterer in Fig2a.csv and for a perfect absorber in Fig2b.csv, while the amplitudes of the other modes are not affected by the control beam. By removing the dominant internal mode, we can minimize the total absorption, which is very useful to reduce heating and, as a consequence, increase stability in experiments.
Data in Fig3.csv show the radiation patterns with and without control in the equatorial plane θ = 90° of the sphere.
In Fig4a.csv, we give data for the intensity of the field scattered in a direction at π/2 with respect to the control beam and at π/4 with respect to the pump. In Fig4b.csv we give the amplitudes of the modes excited, showing that the control beam affects only the modes l = 2, m = ±2. Even in this case we can observe a subwavelength variation of the intensity.
In Fig5a.csv we give the intensity scattered in the same direction as for the data in Fig4a.csv, but using an incoming multipolar wave with l = 2, m = 2 as the control beam. In this case the variation of the intensity is smaller than in Fig4a.csv because the multipolar control wave affects only the l = 2, m = 2 mode, as can be seen by plotting the data in Fig5b.csv. This shows that using incoming multipolar waves (which are extremely hard to realize experimentally) is not necessarily more effective than using plane waves. Finally, plotting the data in Fig6a.csv and Fig6b.csv shows how the sensitivity to phase variation can be applied to monitor small variations in the dielectric permittivity of the host medium; similar results could be achieved with variations of the magnetic permeability. With the intensity and phase of the pump and control beams optimised to suppress the s_10 mode for a particular environment, ϵ_ex (corresponding to Δϵ_ex = 0 in Fig. 6), we observe a strong sensitivity of the scattered intensity to small changes in ϵ_ex. As the modes of the system depend upon the local environment, the relative phase and amplitude of the control beam required to maintain suppression of the modes change with it. When we vary the optimised amplitude of the control field by ±20%, we observe in Fig6a.csv that the curve of the scattered intensity drifts, so that the minimum no longer occurs at Δϵ_ex = 0, and the sensitivity decreases slightly. In Fig6b.csv we observe that the sharpness of the feature in the scattering intensity reduces significantly when the relative phase of the control beam, Φ_c, is changed from the optimised value, but the position of the minimum in this case does not drift. Finally, we give the data used to validate the numerical code, plotted in the figures of this data descriptor.
spheres-dielec.csv contains values for the complex dielectric function of gold 29 , found using a fit to data in P.B. Johnson and R.W. Christy 30 , used to generate the following files.
spheres.csv contains the data for the extinction efficiencies against wavelength for gold spheres of radii r = 10, 25, 50 and 400 nm, in a host medium of water (n = 1.3), calculated using the code bhmie.f, see below, with the fitted dielectric function for gold.
spheres-test.csv contains the data for the extinction efficiencies against wavelength for gold spheres of radii r = 10, 25, 50 and 400 nm, in a host medium of water (refractive index n = 1.3), calculated using our sphere code with the fitted dielectric function for gold.
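Since all records are plain comma-delimited files with named columns, they can be inspected with any csv reader; a minimal sketch is given below (the column names are carried in each file's header row, which is not reproduced here, so they should be inspected first):

```python
import csv

# Load one of the records described above; each file carries its own
# header row naming the quantities plotted in the corresponding figure.
with open("spheres.csv", newline="") as fh:
    reader = csv.DictReader(fh)
    rows = list(reader)
print(reader.fieldnames)   # inspect the column names before plotting
```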
Technical Validation
The datasets referenced in this descriptor were validated by comparison with results from the literature. All calculations were performed for gold particles using a Lorentz-Drude oscillator model for the complex dielectric function 28, in which ω is the frequency, ω_p is the plasma frequency, k is the number of oscillators, f_j is the oscillator strength, ω_j is the oscillator frequency and 1/Γ_j is the oscillator lifetime. The terms with j = 0 are associated with the intraband transitions. This model, including the relevant values for the oscillator strengths, frequencies and dampings, was taken from ref. 28 and implemented as a custom Fortran90 code. The output of this code was validated by reproducing the values of the complex dielectric function, plotted against oscillation energy, in ref. 28. The results are presented in Fig. 1. The linear part of the numerical code used to calculate the fields for a system with local response was validated against the Mie code written by C.F. Bohren and D.R. Huffman (bhmie.f) in 'Absorption and Scattering of Light by Small Particles', New York, Wiley (1983) 31, which is widely available online. We calculated the extinction efficiencies, defined as Q_ext = σ_ext/(πr²), where r is the particle radius and σ_ext is the extinction cross-section, plotted against the wavelength λ of the incident field. For a direct comparison of the results, we normalize the values we calculated by a factor of 4π/3. The (normalized) values calculated by both codes are plotted in Figs. 1 and 2.
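For readers reimplementing the validation, a sketch of a Lorentz-Drude dielectric function of the form described above is given below. It is written in Python rather than Fortran90, assumes an exp(−iωt) time convention, and the oscillator parameters must be taken from ref. 28; the numbers in the example call are placeholders, not the published fit.

```python
import numpy as np

def eps_lorentz_drude(w, wp, f, wj, gj):
    """Complex dielectric function: one Drude (intraband, j = 0) term plus
    Lorentz oscillators (interband).  w: frequency; wp: plasma frequency;
    f, wj, gj: oscillator strengths, frequencies and dampings, all in
    consistent units (e.g. eV)."""
    eps = 1.0 - f[0] * wp**2 / (w * (w + 1j * gj[0]))      # intraband (j = 0)
    for fk, wk, gk in zip(f[1:], wj[1:], gj[1:]):          # interband terms
        eps += fk * wp**2 / (wk**2 - w**2 - 1j * w * gk)
    return eps

# Placeholder call -- substitute the gold parameters from ref. 28:
w = np.linspace(0.5, 6.0, 500)   # photon energy grid, eV
eps = eps_lorentz_drude(w, wp=9.0, f=[0.8, 0.1], wj=[0.0, 3.0], gj=[0.05, 0.3])
```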
Usage Notes
We have used Gnuplot for plotting the data, but any software package able to read data in csv (comma separated value) format should produce the same results.
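As one example of such a package, the Gnuplot workflow translates directly to matplotlib; the file and column choices below are illustrative only and should be adapted to the record of interest:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("Fig2a.csv")          # any of the records listed above
x, y = df.columns[0], df.columns[1]    # first two named columns
plt.plot(df[x], df[y])
plt.xlabel(x)
plt.ylabel(y)
plt.show()
```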
"Physics"
] |
Afabicin, a First-in-Class Antistaphylococcal Antibiotic, in the Treatment of Acute Bacterial Skin and Skin Structure Infections: Clinical Noninferiority to Vancomycin/Linezolid
Afabicin (formerly Debio 1450, AFN-1720) is a prodrug of afabicin desphosphono, an enoyl-acyl carrier protein reductase (FabI) inhibitor, and is a first-in-class antibiotic with a novel mode of action to specifically target fatty acid synthesis in Staphylococcus spp. The efficacy, safety, and tolerability of afabicin were compared with those of vancomycin/linezolid in the treatment of acute bacterial skin and skin structure infections (ABSSSI) due to staphylococci in this multicenter, parallel-group, double-blind, and double-dummy phase 2 study.
In the mITT population, the demographic and baseline characteristics were comparable among the three treatment groups, although in the vancomycin/linezolid group, the percentage of male patients and the percentage of patients with wound infections were slightly higher and the mean areas of the primary lesions were slightly larger (Table 1).
The overall mean durations of i.v. and oral treatment in the mITT population were 1.1 days (means ranged from 1.0 days for HD afabicin to 1.2 days for LD afabicin) and 6.6 days (mean, 6.6 days for all treatment groups), respectively. Concomitant antibiotics were used more frequently in the afabicin treatment groups (23.9% and 24.2% of patients) than in the vancomycin/linezolid treatment group (16.8%). Amoxicillin was the most common concomitant antibiotic, administered to 13.0%, 12.1%, and 13.9% of patients in the LD afabicin, HD afabicin, and vancomycin/linezolid treatment groups, respectively. The use of short-acting antibiotics within 24 h prior to randomization was infrequent (5.3% of patients overall). Of the patients with polymicrobial infections with a staphylococcal and a nonstaphylococcal species, 23.5% (8/34) of patients in the afabicin groups and 30.8% (8/26) of patients in the vancomycin/linezolid group received a concomitant antibiotic. Primary efficacy outcome. The primary efficacy outcomes of early clinical response at 48 to 72 h postrandomization, as specified in the FDA guidelines (16), were comparable among treatment groups (94.6%, 90.1%, and 91.1% for LD afabicin, HD afabicin, and vancomycin/linezolid, respectively) in the mITT population (Table 3). Both LD afabicin and HD afabicin were found to be noninferior to vancomycin/linezolid (difference, −3.5% [95% confidence interval {CI}, −10.8 to 3.9%] for LD afabicin; difference, 1.0% [95% CI, −7.3 to 9.2%] for HD afabicin). There were 23 patients who had not responded to treatment at the primary endpoint (n = 5, n = 9, and n = 9 for LD afabicin, HD afabicin, and vancomycin/linezolid, respectively) (Table 3).
All patients with polymicrobial infections involving a nonstaphylococcal pathogen in the afabicin groups (n = 19 in the LD group, n = 15 in the HD group) were responders for the primary endpoint at 48 to 72 h postrandomization, compared with the response rate in the vancomycin/linezolid group (Table 4). In the LD and HD afabicin groups, the baseline pathogen of nonresponders was S. aureus only. In the vancomycin/linezolid group, baseline pathogens were either S. aureus only or polymicrobial (see footnotes in Table 3). Among the patients with polymicrobial infections in the afabicin groups, 16 were infected with Gram-positive species only (n = 8 in the LD group, n = 8 in the HD group) and 18 were infected with Gram-positive and Gram-negative species (n = 11 in the LD group, n = 8 in the HD group). In the vancomycin/linezolid group, 10 patients were infected with Gram-positive species only and 12 patients were coinfected with Gram-positive and Gram-negative species (Table 5). Secondary outcomes. The secondary efficacy outcomes of clinical outcomes at 48 to 72 h postrandomization, end of treatment (EOT), and STFU are presented in Table 3. At 48 to 72 h, clinical success rates were comparable between treatment groups. The clinical success rates at EOT were similar in the LD afabicin and vancomycin/linezolid groups; however, at STFU, the rate was higher in the vancomycin/linezolid group (92.1%) than in the LD afabicin and HD afabicin groups (84.8% and 83.5%, respectively). The clinical success rates at EOT were marginally higher than at 48 to 72 h postrandomization. At STFU, clinical success rates in the two afabicin groups were comparable to each other. Clinical failure rates at 48 to 72 h postrandomization were slightly lower in the vancomycin/linezolid group (13.9%) than in the LD afabicin (16.3%) and HD afabicin (18.7%) groups (Table 3). For each of the treatment groups, the most frequent reason for clinical failure was the requirement for further antibiotic treatment of the original site of infection, due to new signs, symptoms, or complications attributable to the ABSSSI (12.0%, 8.8%, and 7.9% in the LD afabicin, HD afabicin, and vancomycin/linezolid groups, respectively).
Clinical success rates in the PP population were marginally higher than in the mITT population, and the largest differences were seen at STFU (Table 3).
Few postbaseline samples were taken from skin lesions: n = 54 at 48 to 72 h postrandomization; n = 5 at EOT; and n = 2 at STFU. Therefore, microbiological eradication rates presented in Table 6 were based largely on the investigator's assessment of clinical outcome. At 48 to 72 h postrandomization, of the 54 patients who had a lesion sample taken, 5/17 in the LD afabicin group, 3/15 in the HD afabicin group, and 6/22 in the vancomycin/linezolid group had a microbiological outcome of documented eradication. Of these, two patients had a superinfection at 48 to 72 h postrandomization: one patient in the LD afabicin group (MSSA at baseline, MRSA at 48 to 72 h postrandomization) and one patient in the vancomycin/linezolid group (MRSA and S. constellatus at baseline, S. epidermidis at 48 to 72 h postrandomization). No decrease in afabicin desphosphono activity was observed in the collected isolates compared with that at baseline.
To evaluate the clinical response, the area of the primary lesion was measured at the screening visit to provide a baseline value, at the primary endpoint of 48 to 72 h after randomization, at EOT, and at STFU. A summary of lesion area and change from baseline in the mITT population is presented in Table 7. Overall, the mean (SD) lesion area at baseline by ruler and digital photography was 349.889 (254.2175) cm² and 241.383 (168.9697) cm², respectively. The changes from baseline were comparable across the three treatment groups. Overall, the maximum change from baseline by ruler was observed at STFU (approximately 98%). The mean (SD) percent change from baseline by ruler at STFU was −99.221% (3.0001), −98.433% (7.2591), and −98.635% (4.7646), respectively, for the low-dose afabicin group, high-dose afabicin group, and control group. The percentages of change from baseline in lesion area over time by ruler and digital photography were comparable across the three treatment groups. As part of the secondary endpoints, clinical success was assessed for different lesion types, i.e., wound, major abscess, and cellulitis. However, the comparison did not show statistically significant differences in success rates (Table 8).
Safety. Overall, of the 324 patients in the safety population (Fig. 1), 144 experienced at least one treatment-emergent adverse event (TEAE) (Table 9). More patients in the HD afabicin and vancomycin/linezolid groups experienced a TEAE than in the LD afabicin group (45.8%, 46.7%, and 40.9%, respectively). The most frequently reported TEAE in each treatment group was headache, which was experienced by a higher percentage of patients in the HD afabicin group (17.8%) than in the LD afabicin group (9.1%) and vancomycin/linezolid group (10.3%). The percentages of patients that experienced a treatment-emergent infusion site reaction were similar between treatment groups (4.5% [5/110], 4.7% [5/107], and 4.7% [5/107] in the LD afabicin, HD afabicin, and vancomycin/linezolid groups, respectively). Of the patients with TEAEs, most (97.9%, 141/144) had mild or moderate events. Three patients (all in the afabicin treatment groups) had severe TEAEs: two were considered not related to study medication (one case of cellulitis and one case of heroin overdose [considered a serious adverse event (SAE)]), and one patient in the LD afabicin group experienced nephrolithiasis and renal colic, both of which were related to study medication. Cardiac TEAEs were reported by two patients in the vancomycin/linezolid group (angina pectoris and nodal arrhythmia) and one patient each in the LD afabicin group (sinus tachycardia, considered unrelated to study medication) and the HD afabicin group (mild QT interval prolongation considered related to study medication). A postdose maximum QT interval with Fridericia's correction (QTcF) of >500 ms was detected in two patients in the vancomycin/linezolid group (none in the afabicin groups) and of 480 to ≤500 ms in three patients in each of the HD afabicin and vancomycin/linezolid groups (none in the LD afabicin group).
No hepatic TEAEs were reported; however, seven patients experienced elevated liver enzymes, mostly at >2 times the upper limit of normal (2× ULN). None of the patients in the HD afabicin group had an alanine transaminase (ALT) concentration at >3× ULN; however, ALT at >3× ULN was detected in three patients in the LD afabicin group and one patient in the vancomycin/linezolid group. One patient in each treatment group had an aspartate transaminase (AST) concentration at >3× ULN. None of the patients in the study had a total bilirubin level of >1.5× ULN, and there were no cases that met Hy's law.
Four patients experienced an SAE (Table 9) and discontinued study medication. Two patients in the LD afabicin group experienced moderately severe treatment-emergent SAEs of worsening of the primary ABSSSI, one of which was considered related to the i.v. study medication (neither patient received oral study medication). Both patients had MRSA isolated from their lesions at baseline. One patient in the HD afabicin group experienced moderate bacteremia (baseline blood culture positive for S. aureus) which was considered not related to the study medication (blood sample taken prior to first dose of afabicin) or study procedure. One patient in the vancomycin/linezolid group developed moderate cellulitis during follow-up which was not related to the study medication (i.v. or oral) or study procedure.
One patient, with a history of drug abuse and hepatitis C and who was on diamorphine at the time of randomization, died on day 3 of the study due to a heroin overdose. The patient was in the HD afabicin group and had received two doses of i.v. afabicin and one dose of oral afabicin. The investigator did not consider this to be related to the study medication or study procedure, and no autopsy was performed.
DISCUSSION
In this phase 2 trial, afabicin, administered BID at two dose levels, was noninferior to vancomycin/linezolid in the treatment of patients with ABSSSI due to staphylococci. Furthermore, both dosing levels of afabicin were well tolerated.
The primary efficacy endpoint used in this study of lesion response at 48 to 72 h is part of the FDA guidelines for the development of drugs for the treatment of ABSSSI (16). While the highest early clinical response (ECR) rate observed in the mITT population was for the LD afabicin group (94.6%), rates for HD afabicin and vancomycin/linezolid were 90.1% and 91.1%, respectively. Overall, these ECR rates compared favorably with those of ceftaroline, delafloxacin, iclaprim, linezolid, and tedizolid (17–22). Rates of investigator-assessed clinical success at EOT and STFU were also high (>87% and >83%, respectively). At the STFU time point, clinical success was lower in the LD and HD afabicin groups (84.8% and 83.5%, respectively) than in the vancomycin/linezolid group (92.1%). However, of the patients who missed the STFU visit in the mITT population, seven in the afabicin group and one in the vancomycin/linezolid group had an outcome of clinical success at EOT. Furthermore, clinical success rates were comparable between treatment groups among patients who reached STFU without major protocol deviations, that is, in the PP population (97.0%, 98.2%, and 100% for LD afabicin, HD afabicin, and vancomycin/linezolid groups, respectively). All patients in the present study with polymicrobial infections involving a nonstaphylococcal pathogen in the afabicin groups showed an early clinical response, suggesting that in the context of polymicrobial infections with bacteria that are not susceptible to the agent, specifically targeting the staphylococcal pathogen with afabicin led to a positive outcome of the infection.
The efficacy of afabicin desphosphono, the active moiety of afabicin, has been previously demonstrated by Hafkin et al. (15) in a phase 2 study of oral afabicin desphosphono for the treatment of patients with ABSSSI. Following oral administration of afabicin desphosphono at 200 mg BID, 82.9% of patients in the microbiologically evaluable population achieved a ≥20% decrease in the area of erythema on day 3 (15), which was lower than the early clinical response rate reported in the current study. The differences between the two studies include the initial routes of administration and different formulations (afabicin versus afabicin as the desphosphono salt) and the use of two dosing regimens, with no requirement for fasting, in the current study. Importantly, these factors are not expected to impact the pharmacokinetic (PK) profile between the studies, as phase 1 PK studies have demonstrated that afabicin (i.v. infusion or oral dosing) is rapidly converted to the active moiety, afabicin desphosphono (time to maximum concentration of drug in serum [Tmax] at 2 to 4 h postdose) (14). Of note, the 240 mg BID afabicin (HD afabicin) dose regimen represents approximately 200 mg BID afabicin desphosphono, which was the dose tested in the previous ABSSSI study (15). Taken together, the two phase 2 studies have shown that the active moiety of afabicin is efficacious in the treatment of patients with ABSSSI.
The development of afabicin has led to i.v. and oral formulations of the antibiotic. Following a minimum of two doses of i.v. afabicin, approximately two-thirds of patients were assessed by the investigator as ready to step down to oral therapy. The advantages of an early change to oral therapy include the opportunity for an earlier hospital discharge, which benefits the patient as well as reducing health care costs (23). Furthermore, afabicin provides an opportunity for patients to step down from an i.v. to an oral formulation of the same antibiotic, which is thought to be less complicated than other strategies (24). This is not an option for a number of agents approved for the treatment of ABSSSI, such as daptomycin, ceftaroline, dalbavancin, and oritavancin, for which oral formulations are not available, and vancomycin, whose oral formulation is not systemically absorbed (1). Afabicin is therefore a promising agent for the treatment of serious ABSSSI due to staphylococci that require i.v. therapy.
Risk factors for treatment failure among patients with an ABSSSI include drug/alcohol abuse, obesity, age, and involvement of difficult-to-treat pathogens (23). The demographic characteristics of patients in this study were generally well balanced across treatment groups; however, the study population included patients who were potentially difficult to treat due to their medical histories. For example, the average lesion size at baseline in this study exceeded 300 cm²; other clinical studies of ABSSSI patients have reported similarly large lesion sizes (17-19, 22, 25, 26). A large proportion of patients (≥82.2% in each treatment group) in this study had a history of drug abuse and therefore were more likely to have more advanced infections due to delays in seeking medical care (27). Furthermore, a high proportion of patients in the study were obese (body mass index [BMI] ≥ 30 kg/m²). Both drug abuse and obesity have been associated with recurrent emergency department visits (28). Despite the potential complications of a population with such comorbidities, 84.8% of randomized patients completed the study up to STFU. This figure compares favorably with other studies; for example, even with lower percentages of drug abusers (56.7% and 50.7%), the REVIVE 1 and 2 studies of i.v. iclaprim versus vancomycin for the treatment of ABSSSI due to Gram-positive pathogens reported comparable completion rates at the same time point (92.0% and 90.7%) (18,19).
The IDSA guidelines for the treatment of severe ABSSSI include the use of vancomycin, linezolid, daptomycin, and telavancin (3); however, there are safety concerns related to these agents (4-7), underscoring the need for extending the available treatment options. In addition, more recently approved agents such as dalbavancin can be considered in this indication (29). As with all new agents, and especially those of a new chemical class, the safety of afabicin requires close scrutiny until more clinical data are available. However, in this study, afabicin, at both dosing levels, was generally well tolerated in patients with ABSSSI. The most commonly reported TEAE was headache. Treatment-emergent headaches were experienced by a higher proportion of patients in the HD afabicin group than in the LD afabicin and vancomycin/linezolid groups; all were mild or moderate in intensity. Comparison of the incidences of TEAEs between the LD and HD afabicin groups indicates that the safety profile of LD afabicin is marginally more favorable than that of HD afabicin. Four patients had SAEs, only one of which was considered related to i.v. (LD afabicin) study medication (exacerbation of skin infection; moderate in intensity). There were no deaths considered related to the study medication.
Afabicin is a potent inhibitor of staphylococcal FabI. In contrast to broad-spectrum antibiotics, this selective spectrum of activity is expected to reduce the impact of afabicin on the intestinal microbiota (30). Indeed, in the current study, treatment-emergent diarrhea was experienced by half as many patients in each of the LD and HD afabicin groups as in the vancomycin/linezolid group. Furthermore, data from a phase 1 drug-drug interaction study during which 16 healthy subjects received oral afabicin (240 mg BID for 20 days) showed no impact of the agent on gut microbiota richness and diversity (31). Taken together, these studies indicate that, owing to its narrow-spectrum activity, afabicin has the potential to eradicate pathogens while preserving commensal microbiota. The use of afabicin therefore has the potential to reduce the incidence of complications caused by microbiota dysbiosis, such as antibiotic-induced colitis or candidiasis, which can occur following broad-spectrum antibiotic therapy (11,12,32).
In conclusion, this study has shown that afabicin is efficacious and well tolerated in the treatment of ABSSSI due to staphylococci. In vitro studies have demonstrated that environments rich in fatty acids can favor the emergence of S. aureus variants that are resistant to afabicin desphosphono (33,34). However, the results of the present study indicate that targeting FabI appears to be a valid approach in the ABSSSI setting. The availability of both i.v. and oral formulations of afabicin offers the possibility of using the same agent when changing from i.v. to oral treatment, which is advantageous when the patient is responding to treatment. The narrow spectrum of activity of afabicin is not only beneficial for the patient but also well aligned with antimicrobial stewardship, as it is believed that preservation of gut microbiota may also reduce the spread of antibiotic resistance (35). Both the efficacy and safety data from this study support further development of afabicin for the treatment of ABSSSI and potentially pave the way for treatment of other types of staphylococcal infections such as bone and joint infections (36).
MATERIALS AND METHODS
Study design. This was a multicenter, randomized, parallel-group, double-blind, and double-dummy phase 2 study to evaluate the efficacy, safety, and tolerability of i.v. and oral afabicin compared with i.v. vancomycin and oral linezolid in the treatment of clinically documented ABSSSI due to staphylococci susceptible or resistant to methicillin (ClinicalTrials registration number NCT02426918; https://clinicaltrials.gov/ct2/show/NCT02426918).
Main inclusion criteria. Patients eligible for inclusion were between 18 and 70 years of age and of either sex. Patients had a clinically documented ABSSSI (specifically, wound infection, cutaneous abscess, burn, or cellulitis) that was suspected or documented to be caused by a staphylococcal pathogen by either Gram staining showing Gram-positive cocci in clusters or a registered rapid diagnostic test. Patients had ABSSSI that were accompanied by clinical signs of erythema, edema, or induration measuring at least 75 cm². The primary infected lesion had to show at least two of the following: significant pain or tenderness to palpation, purulent or seropurulent drainage or discharge, fluctuance, and/or heat or localized warmth. Patients had at least one of the following signs and symptoms of systemic inflammation or complicating factors: documented or reported fever of ≥38.0°C, white blood cell (WBC) count of >10,000 cells/mm³, >15% immature neutrophils irrespective of total WBC, local or regional lymphadenopathy, or elevated C-reactive protein. Patients who had received an antibiotic with activity against Gram-positive cocci within the 14 days preceding randomization were included if they met one of the following criteria: their causative Gram-positive pathogen from the ABSSSI lesion was Staphylococcus that was resistant in vitro to the antibiotic(s) administered (with clinical progression), they had documented failure to previous ABSSSI antibiotic therapy, or they had a single dose or a single course of short-acting antibiotic (4-to 6-h half-life) with potent antistaphylococcal activity within 24 h of randomization (limited to 25% of patients randomized).
Main exclusion criteria. Prior exposure to afabicin or afabicin desphosphono precluded enrollment into the study, as did any Gram-positive antibacterial therapy during the preceding 14 days or any other investigational medication during the preceding month (with some exceptions, as defined in the inclusion criteria). Patients excluded from the study were as follows: those with advanced disease with infected nonhealing wounds in peripheral sites, an abscess not drained or a wound infection involving foreign material not removed within 24 h of starting study medication, infected abdominal wounds that were unable to be surgically closed, necrotizing or gangrenous infections, or infected bites, or those with the primary site of infection on a limb that was likely to need amputation during the study. Patients with a primary infection (including erysipelas) due to suspected or documented streptococci or infection with a Gram-negative pathogen without concomitant staphylococcal infection or with a pathogen that was nonsusceptible to either study medication were also excluded, as were patients with sepsis or a nonskin source of infection and those not expected to survive for at least 60 days.
Procedures. Patients were randomized in a 1:1:1 ratio to receive either afabicin i.v. 80 mg BID followed by oral afabicin 120 mg BID (low dose [LD] afabicin), afabicin i.v. 160 mg BID followed by oral afabicin 240 mg BID (high dose [HD] afabicin), or vancomycin i.v. 1 g or 15 mg/kg BID followed by oral linezolid 600 mg BID (vancomycin/linezolid). Patients received their first dose of study medication on day 1 of the study. Following two doses of i.v. treatment, they were assessed by the investigator and were switched to oral treatment on day 2 if the acute toxicity of infection had resolved (resolution of fever, reduced/stable lesion size), the patient could tolerate fluids and a regular diet, and the investigator confirmed the patient no longer needed i.v. treatment. If needed, patients continued with i.v. dosing until they were ready for the switch to oral dosing. Treatment with study medication (i.v. and oral dosing) lasted between 7 days (14 doses; minimum treatment period for the patient to be evaluable) and 10 days (20 doses; maximum treatment period). Patients were assessed at baseline, at 48 to 72 h after randomization, within 24 h of the last dose (end of treatment [EOT]), and 7 to 14 days after EOT (short-term follow-up [STFU]).
During the screening period (within 48 h prior to randomization), the following were obtained for each patient: informed consent, eligibility verification, medical history, and demographic data. Two blood cultures were obtained from each patient at screening, and blood cultures were repeated if the patient remained febrile for >48 h. If they were positive for a pathogen, further blood cultures were obtained at least every 48 h until negative. If the patient's repeated 48-h sample was positive for the baseline pathogen, he/she was to be discontinued from the study and would be considered a failure for the primary endpoint. The patient would then be offered an alternative antibiotic treatment. If the patient's blood cultures at any point after 48 h became positive for the baseline pathogen, he/she would be discontinued from the study and considered a failure. Lesions were assessed for area of erythema, edema/swelling, and induration at screening, 48 to 72 h postrandomization, at EOT, and at STFU. Lesion samples (including purulent wound exudates, skin lesion biopsy specimens, tissue samples, and aspirates of abscess cavities) were collected at screening for microbiological culture, Gram staining, identification, and susceptibility testing. After screening, lesion samples for microbiological assessment were taken only from wounds that had not healed and were not taken after day 3 unless there was a relapse.
All blood and lesion samples collected for microbiology were processed and analyzed by local laboratories according to their routine procedures. These analyses included Gram staining, species isolation and identification, and susceptibility testing. All clinically relevant bacterial isolates were shipped to the central laboratory for confirmation of species identification by matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) and susceptibility testing according to CLSI guidelines (37,38) and for molecular characterization of resistance and virulence genes by PCR and pulsed-field gel electrophoresis (PFGE) typing when appropriate. Data from the central laboratory were used in this study.
Concomitant medications were recorded daily until EOT and at STFU. The protocol was amended such that amoxicillin was administered to all patients with cellulitis, irrespective of treatment group. Nonstudy antibiotics with little or no activity against Staphylococcus spp. were permitted throughout the study, as was a single dose or a single course of short-acting antibiotic (4-to 6-h half-life) with potent antistaphylococcal activity within 24 h of randomization (limited to 25% of patients randomized). Treatment with vancomycin was allowed 6 to 12 h prior to screening (no later than 7.5 days prior to screening in patients with renal dysfunction).
Analysis populations. Four analysis populations were defined. The intent-to-treat (ITT) population included all randomized patients. The microbiological intent-to-treat (mITT) population comprised all randomized patients who had an identified baseline staphylococcal pathogen (S. aureus and/or a pathogenic coagulase-negative Staphylococcus [CoNS], including S. epidermidis, S. haemolyticus, and S. lugdunensis]) and received at least one dose of study drug. The per-protocol (PP) population comprised all patients in the mITT population who completed the study up to STFU without any major protocol deviations. The safety population comprised all patients who received at least one dose of study drug.
Efficacy outcomes. The primary efficacy endpoint was the early clinical response rate at 48 to 72 h following randomization in the mITT population, as specified in the FDA guidelines (16). Responders were patients whose primary ABSSSI lesion involving erythema, edema, or induration had decreased by ≥20% in area from baseline. Nonresponders were patients who did not meet the criteria for clinical response, who required systemic concomitant antibiotic therapy that was potentially effective against the baseline staphylococcal pathogen, who required unplanned incision and drainage of the ABSSSI within 48 to 72 h following randomization, who required unplanned major surgery due to failure of study medication, or who died prior to evaluation of the primary efficacy endpoint.
Secondary efficacy endpoints were clinical and microbiological outcomes. Clinical outcome (success or failure) was based on the investigator's assessment of the patient's signs and symptoms of infection in the mITT and PP populations at 48 to 72 h following randomization, EOT, and STFU. Clinical success was defined as the resolution or near resolution of most disease-specific signs and symptoms, no new sign, symptoms, or complications, and no requirement for further antibiotic therapy for the treatment of the original site of infection at EOT or STFU. Clinical failures were the following: patients who did not meet all the criteria for clinical success, patients in whom unplanned incision and drainage of the ABSSSI was performed within 48 to 72 h following randomization or in whom unplanned major surgery was required due to failure of study medication, and patients who developed osteomyelitis after baseline.
Microbiological outcomes were determined at 48 to 72 h following randomization, EOT, and STFU for all patients in the mITT population. Documented eradication was defined as the absence of baseline pathogens in follow-up cultures of the original site of infection. Conversely, documented persistence was the presence of baseline pathogens in follow-up cultures of the original site of infection. Presumed eradication and presumed persistence were assigned in cases where samples were not available for culture (lesion samples were not taken from wounds that had healed) and involved an investigator assessment of clinical outcome. A superinfection, at 48 to 72 h postrandomization and EOT, was defined as a new pathogen at the original site of infection during treatment in the presence of signs and/or symptoms of infection. A new infection at STFU was defined as a new pathogen at the original site of infection after treatment, in the presence of signs and/or symptoms of infection.
Safety. Treatment-emergent adverse events (TEAEs) and serious adverse events (SAEs) were reported by treatment group and were evaluated for severity and relationship to study medication (by i.v. treatment, oral treatment, and study procedure separately). All adverse events (AEs) were monitored until they were resolved, any abnormal laboratory values had returned to baseline levels or stabilized, or until there was a satisfactory explanation for the changes observed. An SAE was defined as any untoward medical occurrence that, at any dose, resulted in death, was life-threatening, required inpatient hospitalization, or resulted in persistent or significant disability/incapacity. Study procedures for safety assessments included the following: recording of AEs (every visit), clinical laboratory tests (hematology, serum chemistries, and coagulation tests within 24 h of screening, 48 to 72 h postrandomization, and at EOT), vital signs (blood pressure, pulse measurements, body temperature, and respiration rate at screening, before the first and second i.v. doses, 48 to 72 h postrandomization, after the first oral dose and last morning dose, at EOT/early termination [ET], and STFU), electrocardiograms (at screening, 48 to 72 h postrandomization and prior to and 2 to 4 h after the last morning dose at EOT), and physical examination findings (at screening, EOT, and STFU).
Statistics. The study was designed to demonstrate noninferiority between afabicin and vancomycin/linezolid at the primary efficacy endpoint. A sample size of at least 231 patients in the mITT population was required to demonstrate noninferiority using a noninferiority margin of 15%, a two-sided type I error of 5%, and power of 80%, when the early clinical response rate was assumed to be 87.5% in all treatment groups. Noninferiority was established if the upper bound of the two-sided 95% confidence interval for the difference in ECR rates (vancomycin/linezolid ECR rate minus afabicin ECR rate) was <0.15.
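As an illustration of this noninferiority criterion, a simple Wald-type confidence interval for the difference in response rates can be computed as below. The counts are hypothetical and the study's exact interval method is not specified here, so this is a sketch of the logic rather than the study analysis:

```python
import math

def noninferior(x_test, n_test, x_ref, n_ref, margin=0.15, z=1.96):
    """Two-sided 95% Wald CI for (reference rate - test rate); the test
    drug is noninferior if the upper bound lies below the margin."""
    p_t, p_r = x_test / n_test, x_ref / n_ref
    diff = p_r - p_t
    se = math.sqrt(p_t * (1 - p_t) / n_test + p_r * (1 - p_r) / n_ref)
    return diff, diff - z * se, diff + z * se

# Hypothetical counts mimicking the LD afabicin comparison (94.6% vs 91.1%):
d, lo, hi = noninferior(87, 92, 92, 101)
print(f"diff = {d:.3f}, 95% CI ({lo:.3f}, {hi:.3f}), noninferior: {hi < 0.15}")
```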
For secondary outcomes and safety assessments, descriptive analyses were performed in the mITT, PP, and safety populations, respectively.
Ethical conduct. The protocol and informed consent form were reviewed and approved by an institutional review board or independent ethics committee at each study center prior to study initiation, and the study was conducted according to the protocol and any subsequent amendments. Informed consent was obtained from each patient before any study-related investigations were performed. The study was conducted according to the ethical principles of good clinical practices as defined in the U.S. Code of Federal Regulations, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) E6 Good Clinical Practice, and local ethical and legal requirements.
"Biology",
"Medicine"
] |
Electrical Properties of Yttria-Stabilized Zirconia, YSZ Single Crystal: Local AC and Long Range DC Conduction
Widely used complex plane analysis of impedance data is insufficiently sensitive to characterize fully the bulk properties of YSZ single crystal. Instead, more extensive data analysis is needed which uses a combination of parallel, admittance-based formalisms and series, impedance-based formalisms. Bulk electrical properties are measured at higher frequencies and contain contributions from both long range conduction and local dielectric relaxation. At lower frequencies, electrode–sample contact impedances are measured and are included in full equivalent circuit analysis. The impedance of a YSZ crystal of composition 8 mol% Y2O3 in the (110) orientation, with Pt electrodes, was measured over the temperature range 150–750°C and frequency range 0.01 Hz–3 MHz. Full data analysis required (i) a parallel constant phase element (CPE)–resistance (R) combination to model the electrode response, (ii) a series R-C element to represent local reorientation of defect dipoles and (iii) an R-C-CPE element to represent long range oxide-ion conduction; (ii) and (iii) together model the bulk response. The dielectric element underpins all discussions about the defect structure and properties of YSZ but has not been included previously in analysis of impedance data. The new equivalent circuit that is proposed should allow better separation of the bulk and grain boundary impedances of YSZ ceramics.
Yttria-stabilized zirconia (YSZ) is a very well-known oxide ion conductor that is used as the solid electrolyte in solid oxide fuel cells and oxygen gas sensors. 1-3 It usually takes the form of a high-density ceramic in which the bulk resistance in series with a grain boundary resistance gives the overall sample resistance. 4 In almost all cases, the grain boundary resistance is present and cannot be eliminated readily by attention to ceramic processing conditions. The nature of the grain boundary impedance is often unclear, although significant compositional differences from the bulk, associated with dopant segregation, may be involved. 5,6 In order to measure sample impedances, sample-electrode contacts are necessary and therefore, appropriate consideration of their associated impedances forms part of the overall impedance analysis. For YSZ, contact impedances include contributions from the blocking of oxide ions at the sample-electrode interface, charge transfer resistances associated with the O2−/O2 redox couple and the diffusion of O2 molecules between the surrounding atmosphere and the sample-electrode interface. Usually, electrode contact impedances are well-separated on a frequency scale from bulk/grain boundary impedances because they have much higher associated capacitances, C: typically (1–10) × 10⁻⁶ F for the electrode contact compared with ∼1 × 10⁻¹⁰ F for a grain boundary capacitance and (2–3) × 10⁻¹² F for a bulk capacitance. Relaxation frequencies, ω, are given ideally by ωτ = 1, where τ = RC and therefore, the frequency maxima of impedance semicircles, arcs or peaks associated with bulk/grain boundary impedances are usually separated by several decades from those of electrode-sample contact impedances. This allows the visualization and characterization of sample properties without the necessity to eliminate sample-electrode contact impedances but, of course, a full analysis of properties, including the modelling of sample-electrode impedances, can also be carried out, as shown here.
Conductivity Arrhenius plots for oxide ion conduction in YSZ have been reported on many occasions and we do not give a comprehensive survey of the early literature here. Many data sets, especially for compositions that exhibit high oxide ion conductivity, show distinct curvature of the Arrhenius plots, which is attributed to the trapping at lower temperatures of mobile oxide ion vacancies in vacancy-dopant defect complexes. 3 Activation energies are typically ca 1.10 eV at low temperatures and ca 0.85 eV at high temperatures; the difference of ca 0.25 eV is often regarded as the dissociation enthalpy of the defect complexes. 11-15 It is standard practice in the analysis of impedance data of YSZ ceramics and single crystals to present data in the form of impedance complex plane plots, Z″ vs Z′, and to obtain bulk conductivity data from the low frequency intercept of the high frequency arc or (distorted) semicircle on the real, Z′, axis; sometimes, the data are fitted to a semicircle whose center is depressed below the Z′ axis. 16,17 Impedance complex plane, Z*, plots represent a good method to separate bulk and grain boundary resistances since the appropriate equivalent circuit for data analysis is a series combination of the parallel RC elements that represent the bulk and grain boundary components. However, Z* plots on linear scales give undue weighting to the largest resistances in a sample and effectively exclude from view any low resistance components, such as those associated with inhomogeneous ceramics that may have conductive grain cores but resistive grain boundaries.
A more comprehensive analysis of impedance data that avoids such weighting is obtained by presenting the same impedance data in at least two of the four formalisms: impedance Z*, admittance Y*, permittivity ε* and electric modulus M*, using the interconversions 18,19 M* = jωC₀Z*, Y* = 1/Z* and ε* = 1/M*, where C₀ is the vacuum capacitance of the conductivity cell, without a sample in place, and ω is the angular frequency. Each of these formalisms has real and imaginary components, e.g. Z* = Z′ − jZ″. Following these interconversions, data may be plotted as either complex plane (or Nyquist) plots, e.g. Z″ vs Z′, or as spectroscopic (or Bode) plots, e.g. Z″, M″ vs log f. It was shown recently that accurate fitting of impedance data of YSZ ceramics to the traditional equivalent circuit consisting of a series combination of bulk and grain boundary impedances was not entirely successful. Instead, better agreement was obtained on introduction of an additional series RC element, attributed to a localized dipolar reorientation process, into the equivalent circuit; 20 this element was placed in parallel with the element representing long range dc conduction through the sample.
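These interconversions are one-liners once Z*(ω) and C₀ are known; a minimal sketch using the conventions just given (not the analysis software used in this work) is:

```python
import numpy as np

def formalisms(freq, Z, C0):
    """Convert measured complex impedance Z*(f) into the other three
    immittance formalisms; C0 is the vacuum capacitance of the cell."""
    w = 2 * np.pi * np.asarray(freq)
    Y = 1.0 / Z                # admittance, Y* = 1/Z*
    M = 1j * w * C0 * Z        # electric modulus, M* = jwC0Z*
    eps = 1.0 / M              # relative permittivity, eps* = 1/M*
    return Y, M, eps
```

With Z* = Z′ − jZ″, the spectroscopic plots follow directly, e.g. plotting −Z.imag and M.imag against log f.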
The objectives of the present work were to (1) obtain impedance data on a YSZ single crystal that therefore, did not contain any grain boundary contribution, (2) find the most appropriate equivalent circuit to model the bulk impedance data by considering the possibility of a parallel combination of both long range and local conduction processes, (3) extract values for the various circuit parameters as a function of temperature and (4) model and characterize the impedance response of the sample-electrode interface.
Experimental
Single crystals of yttria-stabilized zirconia of composition 8 mol% Y2O3 (8YSZ) were obtained from Pi-Kem. The crystals were provided as plates parallel to lattice planes of the set {110} with dimensions 5 × 5 × 0.5 mm. The crystals were already polished on a pair of opposite plate faces and were used as-received. Electrodes were fabricated from Pt paste on opposite plate faces; the paste was dried and hardened by heating at 900 °C for 2 h. The crystals were then attached to the Pt leads of a conductivity jig which was placed inside a horizontal tube furnace.
Impedance measurements in air were obtained over the temperature range 150 to 750 °C and recorded using two instruments, an Agilent 4294A over the frequency range 40 Hz to 3 MHz and a Solartron 1260A over the frequency range 0.01 Hz to 1 MHz. Most of the data were collected using the Agilent, but use of the Solartron enabled an extra three decades at low frequency to be accessed, especially for measurements at high temperatures; the nominal ac voltage used was 100 mV. At each temperature, the system was allowed to equilibrate for 1 h, without voltage applied, prior to impedance measurements. Data were analyzed using Zview (Scribner Associates Inc.) software. Impedance data were corrected for crystal geometry and electrode contact area; this allowed resistance and capacitance to be reported in resistivity and permittivity units of Ω cm and F cm⁻¹, respectively. Open circuit measurements of an empty jig were used to obtain the blank parallel capacitance, C₀, of the jig and leads, which was subtracted from the values obtained with a sample present in the jig. In order to obtain the C₀ value, the jig was assembled with hardened Pt electrodes of similar dimension, but without a sample in place. Closed circuit measurements were obtained by connecting the two electrodes directly and used to correct for the series jig resistance.
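For reference, the geometry correction amounts to scaling by the factor A/t; a sketch using the nominal crystal dimensions quoted above (and assuming the electrodes cover the full 5 × 5 mm faces) is:

```python
# Geometric factor for a plate sample: electrode area A over thickness t.
A = 0.5 * 0.5      # cm^2, 5 x 5 mm plate faces (assumed fully electroded)
t = 0.05           # cm, 0.5 mm plate thickness
g = A / t          # cm

def to_resistivity(R_ohm):
    """Measured resistance (ohm) -> resistivity (ohm cm)."""
    return R_ohm * g

def to_capacitance_per_cm(C_farad):
    """Measured capacitance (F) -> capacitance per unit (F cm^-1)."""
    return C_farad / g
```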
The first objective in data analysis was to find the most appropriate equivalent circuit to represent data sets; to achieve this, data were presented in various formats so as to gain, visually, an overview of the various components shown by the data. We found the following presentations to be particularly appropriate, and they were the ones used here.
First, Z″ vs Z′ plots served to highlight the main resistive components but had the disadvantage that small additional resistances were effectively hidden. As also shown later, Z″ vs Z′ plots were not a good discriminator between the possible equivalent circuits.
Second, log Y′ vs log f plots gave equal weighting to the various conducting elements and, in particular, highlighted the presence of a high frequency dispersion which was modelled using the bulk constant phase element, CPE.
Third, combined Z″/M″ vs log f plots were examined to see whether the main resistance, shown by the largest arc in plots of Z″ vs Z′ or the largest peak in plots of Z″ vs log f, represented the sample bulk. If it did, the Z″ peak should coincide approximately with the largest peak in M″ vs log f, which corresponds to the bulk response since it represents the element with the smallest (i.e. bulk) capacitance.
Fourth, log C′ vs log f [C′ ≡ ε′C₀] plots were examined since they gave equal weighting across the frequency spectrum to the various capacitive elements, including the limiting high frequency capacitance, any intermediate frequency capacitances and low frequency, electrode-sample contact capacitances.
The second objective in data analysis was to fit the experimental data to possible equivalent circuits. It is essential to identify the most appropriate equivalent circuit in order to have correct equations to evaluate the component R, C and CPE parameters. Fit quality and accuracy were assessed by both visual inspection of the data in various formalisms, as indicated above and the residuals between experimental and fitted data.
The third objective was to determine the dependence of the various circuit component values on temperature and interpret the component parameters in terms of sample characteristics.
Results and Discussion
Impedance data are shown for the (110) orientation in Figure 1 as (a) a Z* complex plane plot and (b) Y′, (c) M″/Z″, and (d) C′ spectroscopic plots at one representative temperature, 306 °C.
At this temperature, data are dominated by the sample response, whereas sample-electrode contact impedances start to appear in the data at the lowest frequencies. The Z* data (a) show a slightly distorted high frequency arc with a low frequency inclined spike (inset). The initial interpretations of these data are as follows. Using conventional complex plane analysis, the high frequency arc (a) can be fitted to an appropriate function and the dc resistance value of the sample obtained from the low frequency intercept on the Z′ axis. The capacitance associated with the sample resistance can be obtained from the arc maximum using the relation ω_max RC = 1. As expected for a single crystal, there is no additional arc at lower frequencies associated with a grain boundary impedance. The low frequency spike represents the onset of the sample-electrode contact impedance.
On presenting the data in other formats (b–d), additional features are seen. Y′ data (b) show a low frequency plateau corresponding to the dc conductivity, which is also obtained from the Z″/Z′ plots, and in addition, a high frequency, power law dispersion. Such dispersions are a characteristic feature of all ionically-conducting materials (and probably, many semiconducting materials as well) and are a manifestation of Jonscher's Universal Dielectric Response. 21 The dispersions correspond to regions of the frequency/time domain where local conduction processes occur but on shorter timescales than dc processes at lower frequencies. With increasing frequency in the dispersion region, increasingly easier conduction processes are detected and the measured ac conductivity rises.
Over the years, various empirical functions had been used to model the dispersion region until the seminal work of Jonscher, who recognized the universal occurrence of a power law dependence of ac conductivity on frequency. Most recently, Almond and co-workers demonstrated that such power law dependence is a natural consequence of an equivalent circuit that consists of a large resistor-capacitor network. 22,23 Until their demonstration, the significance of the characteristic slope, n, of the log conductivity-log frequency plots was not well appreciated, but it is now regarded simply as the ratio between the numbers of capacitive and resistive connections in the network. This high frequency power law conductivity dispersion is modelled in equivalent circuits by inclusion of a CPE whose admittance takes the form Y*(CPE) = B(jω)ⁿ. The bulk electrical properties of many ionic conductors are modelled well by an equivalent circuit A, Figure 2, that consists of a parallel combination of a resistance, R, which represents the dc conductivity, a capacitance, C, which represents the limiting high frequency permittivity, often given the symbol ε∞, and a CPE which represents the power law dispersion.
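A quick way to see how circuit A behaves is to evaluate its impedance numerically; the sketch below (parameter values illustrative only, not fits to the present data) shows that the low frequency limit of Z′ recovers the dc resistance R₁ while the CPE produces the high frequency power law in Y′:

```python
import numpy as np

def z_circuit_a(freq, R1, C1, B, n):
    """Impedance of circuit A: R1 in parallel with C1 and a CPE whose
    admittance is Y* = B(jw)^n."""
    w = 2 * np.pi * np.asarray(freq)
    Y = 1.0 / R1 + 1j * w * C1 + B * (1j * w) ** n
    return 1.0 / Y

f = np.logspace(0, 6, 200)
Z = z_circuit_a(f, R1=1e6, C1=2e-12, B=1e-10, n=0.8)
print(Z[0].real)   # low frequency limit of Z' approaches the dc resistance R1
```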
M″/Z″ plots, Figure 1c, show one main peak in each spectrum with the peak maximum at slightly higher frequency for M″ than for Z″. The peak maximum of an ideal, Debye-like M″ peak is inversely proportional to the capacitance of the R-C element responsible for the peak, M″_max = C₀/2C, where C₀ is the capacitance of the empty jig that contains electrodes in the same geometrical arrangement. Since the smallest capacitance in an equivalent circuit usually represents the bulk component, the M″ peak, and the associated Z″ peak, enable assignment of these peaks to the bulk sample conductivity. The observed small separation in peak maximum frequencies (c) is a direct consequence of the presence of the CPE in the equivalent circuit A, Figure 2. In addition, the CPE causes the M″, Z″ peaks to broaden asymmetrically: the M″ peak is Debye-like at frequencies lower than the peak maximum but broadened at higher frequencies, whereas the Z″ peak is Debye-like at frequencies above the peak maximum but broadened at lower frequencies. 24 C′ data, Figure 1d, show two dispersions, at high and low frequency, with some evidence for both a limiting high frequency plateau at ∼2 pF cm⁻¹ and a poorly-resolved intermediate frequency plateau at ∼6 pF cm⁻¹. The high frequency plateau in C′ corresponds to a permittivity of ∼25, using ε′ = C′/e₀, where e₀ is the permittivity of free space, 8.854 × 10⁻¹⁴ F cm⁻¹. This ε′ value is attributed to the bulk permittivity, ε∞, of the crystal. C′ data at lower temperatures show this plateau more clearly, Figure 1e. The intermediate frequency plateau has an effective permittivity of ∼70. It is not immediately obvious how this should be assigned, since data obtained from single crystals should be free from any grain boundary or surface layer impedances and, in any case, such a capacitance value of ∼6 pF cm⁻¹ would represent a significant volume fraction of the sample and be much smaller than expected for a grain boundary or surface layer. 19 It therefore seems likely to represent an additional parallel element in the equivalent circuit.
The high values reached by the low frequency dispersion in C′, Figure 1d, and the observed low frequency spike in Z*, (a), are attributed to blocking capacitance effects at the crystal-electrode interface and, in particular, are associated with the oxide-ion conduction of the YSZ crystal. 25 Further interpretation of the impedance data required fitting to possible equivalent circuits, to establish the most appropriate circuit. The assessments of the validity of the possible equivalent circuits were carried out in various ways: (1) by visual comparison of fitted and experimental data over the whole frequency range, using data presented in the different formalisms described above, and (2) from the residuals between experimental and fitted data. Although impedance data covering 8 decades of frequency were obtained, this was insufficient to fully fit a complete equivalent circuit at any single temperature. Consequently, partial circuits were used for three temperature ranges: (i) low, (ii) intermediate and (iii) high, which were then combined at the end of the analysis to give a master circuit. Finally, at the highest temperatures, (iv), modification to the circuit was required to include the introduction of instrumentation-related inductive effects.
(i) Low temperature data, 170 to 220 °C

The first step to establish the most appropriate equivalent circuit was to find a partial circuit that fitted the lowest temperature data sets since, at these temperatures, only the bulk response was detected over the measuring frequency range. The partial circuit A shown in Figure 2 covers the high frequency data associated with the bulk response and includes both the dc conduction and the ac conductivities associated with short range, power law effects. 21,26 An excellent fit of low temperature data to this partial circuit was obtained, as shown at 190 °C in Figure 3.
The presence of a CPE in the equivalent circuit was readily apparent in two ways. First, as shown in log Y' vs log f, the CPE represents the power law dispersion at high frequencies, with slope n, Figure 3c. Second, in plots of log C' vs log f, the CPE contributes a power law dispersion of slope (n−1) at lower frequencies because $C' = Y''/\omega = B\omega^{n-1}$; this is seen over the frequency range ∼10⁴-10⁶ Hz in Figure 1d. In the analysis of high frequency data, it is essential that both CPE 1 and C 1 are included in the equivalent circuit. 27 A CPE alone cannot account for experimental data in which both a frequency-independent ε∞ is detected at high frequencies and a frequency-dependent C' at lower frequencies.
Unfortunately, this point is often not recognized in the literature, perhaps because data may not extend to frequencies that are high enough to detect ε∞ and therefore, equivalent circuits that are used to represent the bulk response may contain only R 1 and CPE 1 . An alternative reason may be that data presentation is often limited to the use of Z'' vs Z' complex plane plots. These are completely insensitive to the presence of high frequency, power law impedances, which occur at frequencies close to the origin of Z'' vs Z' plots.
(ii) Intermediate temperature data, 260 to 440 °C

The second stage in finding an appropriate equivalent circuit was to consider data obtained at increasingly higher temperatures; additional impedance components became apparent in the lower frequency C' data and required inclusion of additional element(s) in the equivalent circuit. The effect of including various possible additional circuit elements was tested based on two strategies. One was to add a second element in series with the bulk element shown in circuit A, Figure 2. This would represent a second series impedance associated with the single crystal and correspond to an electrical inhomogeneity of some kind. Given the small value of the intermediate frequency capacitance seen in Figures 1d, 1e, this electrical inhomogeneity would correspond to a significant volume fraction of the crystal. The second strategy was to consider an additional impedance in parallel with the bulk conductivity represented by circuit A, Figure 2; in order for this to be detected as a separate element, it should have dielectric character and involve a series R-C combination.
It was found that partial circuit B, Figure 2, containing an additional parallel impedance, gave the best fit to the experimental data at intermediate temperatures. This partial circuit has the logical simplicity of combining, in parallel, a conductive element, R 1 -C 1 -CPE 1 , and a dielectric element represented by the C 2 -R 2 series combination. Circuit B also contains a series element CPE 3 to represent the onset of impedances associated with the sample-electrode interface. Fits to experimental data recorded at 306 °C are shown in Figures 4a-4d; the residuals are shown in (e) and are small over the entire frequency range.
The suitability of several other plausible equivalent circuits was tested, as shown in Figure 5 and Table I, circuits (D) to (J). Each of these circuits contains the same element, CPE 3 to represent the onset of the electrode-sample interfacial impedance and, therefore, the circuits differ only in the element(s) that represent the bulk response. The results, Figure 5, show that all of these circuits were unsatisfactory for various reasons, as follows.
Circuits (D) and (E) are simple circuits that have a single conducting element to represent the sample bulk. Circuit (D) has the parallel element R 1 -CPE 1 whereas (E) has the parallel element R 1 -CPE 1 -C 1 . Both of these are often used in the literature to represent a bulk conductive response; they are in series with element CPE 3 to represent the sample-electrode interface. Residuals and fits for these circuits are not good.
Circuit (F) is the classic circuit used to represent many ceramics with a series combination of elements such as grains, grain boundaries and surface layers. Poor quality of the fits, of C' in particular, as well as poor residuals, show that this circuit is unsuitable.
Circuits (G), (H), (I) and (J) are other possible circuits that combine conductive and dielectric components, although none has the logical consistency of a conducting element R 1 -C 1 -CPE 1 in parallel with a separate dielectric element R 2 -C 2 , as is present in circuit B. Moreover, these circuits all gave poorer residuals than circuit (B), as well as an unrealistically high value of C 1 in the case of circuit (I).
(iii) High temperature data, 500 to 600 °C

The final step in obtaining a circuit that represents the complete range of impedance data was to fully characterize the sample-electrode contact impedance that is seen with increasing temperature and at lower frequencies. The complete, or master, equivalent circuit that includes partial circuits A and B is shown as circuit C in Figure 2, although at these high temperatures, it was not possible to have a sufficiently wide range of frequencies to include refinement of the parameters R 2 , C 2 and C 1 in data fitting. Element CPE 3 , which represents the sample-electrode interface at high temperatures, is modified by the addition of a parallel resistance, R 3 . Consequently, this interfacial impedance has a finite resistance, as shown by an extrapolated limiting low frequency intercept on the real Z' axis of the impedance complex plane plot, Figure 6a. Since, at high temperatures, data do not extend to frequencies that are high enough to include a significant contribution from elements C 1 and R 2 -C 2 , circuit C is simplified to give the partial circuit shown in Figure 6a. Fits of Y' and C' spectroscopic plots to this partial circuit are shown in Figures 6b, 6c, with residuals in 6d.
(iv) Highest temperature data, 650 to 750 °C, with inductive effects

Impedance data at the highest temperatures were similar to those shown in Figure 6, but with one main difference. Instead of the impedance data showing the onset of the R 1 -CPE 1 high frequency arc, an inductive effect was seen in which the impedance data at high frequencies cross the Z' axis to give positive values of Z''. This is shown in Figure 7a together with an equivalent circuit containing a series inductance L 1 , which gave a good fit to the data at these high temperatures. The effect of the inductance on log Y' at the highest frequencies is shown in (b) and is also seen as a resonance effect in the log C' data (c). There was no evidence for an inductive effect in lower temperature data, < 700 °C, and therefore the inductance is not included in the master circuit C.
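The inductive modification amounts to adding one series term to the impedance; a minimal sketch (the inductance value would be a placeholder chosen by the user):

```python
import numpy as np

def add_series_inductance(Z_circuit, freq_hz, L):
    """Total impedance with the instrumentation inductance in series.

    Z_total* = i*omega*L + Z_circuit*; at the highest frequencies the
    inductive term dominates and the imaginary part of Z changes sign,
    reproducing the crossing of the Z' axis seen in Figure 7a.
    """
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    return 1j * omega * L + Z_circuit
```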
Arrhenius plots for the conductivities σ 1 , σ 2 and σ t obtained from fitting to circuit B are shown in Figures 8a, 8b. The Arrhenius plot for σ t , (b), is not linear, consistent with that reported for various YSZ samples on many other occasions. 3,7,8,10,28,29 This non-linearity is widely attributed to trapping of oxygen vacancies in vacancy-dopant complexes at low temperatures; 30 at higher temperatures, dissociation of the complexes occurs and the trapping enthalpy is not included in the activation energy. The total conductivity, σ t , at high temperatures would therefore represent the hopping of free vacancies.
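Extracting activation energies from the linear regimes of such plots reduces to a straight-line fit; a sketch assuming the simple σ = σ₀ exp(−Ea/k_BT) form (a σT pre-exponential form is also common in the literature):

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(T_kelvin, sigma):
    """Activation energy (eV) from a linear fit of ln(sigma) vs 1/T.

    For a curved Arrhenius plot, apply this separately to the low- and
    high-temperature linear regimes.
    """
    slope, _ = np.polyfit(1.0 / np.asarray(T_kelvin), np.log(np.asarray(sigma)), 1)
    return -slope * k_B
```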
The temperature dependence of R 3 is shown in Figure 8c and that of the CPE 1 parameters and C 1 is in Figure 8d. Resistance R 3 controls the total dc resistance of the sample-electrode arrangement and is associated, in some way, with the sample-electrode-air interface. It also has a very high activation energy, 2.5(1) eV. The two main processes taking place in the vicinity of the interface are redox electron transfer between oxygen species and the diffusion of O 2 molecules through the Pt electrode between the surrounding atmosphere and the sample-electrode interface, both of which could have a significant associated impedance. These processes may be significantly different for the flat, single crystal surfaces used here and the higher surface area, intrinsically rough, surfaces of most YSZ ceramics. Further work is required to better understand the nature of the interface reactions and their effect on resistance R 3 .
The Arrhenius plot for σ 1 is parallel to that for σ t at low temperatures and therefore has the same activation energy. The similarity of the conductivity data for σ 1 and σ t at low temperatures is rationalized using the equation for Y* of circuit B. The bulk component of circuit B has four elements in parallel: R 1 , C 1 , CPE 1 and R 2 -C 2 , and therefore its admittance, Y*, can be written as the summation of their individual admittances:

$$Y^* = \frac{1}{R_1} + i\omega C_1 + B(i\omega)^n + \frac{i\omega C_2}{1 + i\omega R_2 C_2}$$

In the low frequency limit, as ω → 0, Y* = 1/R 1 = σ 1 = σ t and therefore σ t contains no contribution from the dielectric resistance R 2 . Consequently, R 2 makes no contribution to the intercept values of R t in impedance complex plane plots and could not be detected by standard impedance complex plane analysis.
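A numerical check of the low-frequency limit of this expression (with placeholder element values) makes the invisibility of R 2 explicit:

```python
import numpy as np

def circuit_B_bulk_admittance(freq_hz, R1, C1, B, n, R2, C2):
    """Bulk admittance of circuit B: R1 || C1 || CPE1 || (R2-C2 in series)."""
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    Y_dielectric = 1j * omega * C2 / (1.0 + 1j * omega * R2 * C2)  # series R2-C2 branch
    return 1.0 / R1 + 1j * omega * C1 + B * (1j * omega) ** n + Y_dielectric

# As omega -> 0 the C2 branch blocks dc, so Y* -> 1/R1 and R2 drops out:
Y_low = circuit_B_bulk_admittance(1e-3, R1=1e6, C1=2e-12, B=1e-10, n=0.65,
                                  R2=5e5, C2=6e-12)
print(abs(Y_low - 1.0 / 1e6))  # tiny compared with 1/R1 = 1e-6 S
```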
The Arrhenius plot for σ 2 has a lower activation energy than that for σ 1 , and is similar to that of σ t at high temperatures. σ 2 appears to represent the reorientation of the vacancy-dopant complexes. The interpretation of its lower activation energy would be that dipole reorientation, represented by the series element R 2 -C 2 , does not require dissociation of the complexes; hopping of the oxygen vacancies within the complexes is therefore similar to the hopping of free vacancies at high temperature, without the need in either case for vacancy-complex dissociation.
In order to investigate the effect of heating to high temperatures and of the subsequent cooling rate on the conductivity, (110)-oriented YSZ single crystals were annealed in air at 1200 °C for 90 minutes and cooled at different rates. Figure 9a shows the Arrhenius plots of the samples cooled at different rates. All three data sets show non-linear Arrhenius plots, but slight differences can be observed in both the high temperature (b) and low temperature (c) ranges. At high temperature, the sample quenched in liquid N 2 shows slightly lower conductivity than the samples cooled at 10 and 0.5 °C/min, which show similar conductivities. Conversely, at low temperature, the quenched sample shows higher conductivity values than the samples cooled at the intermediate and low cooling rates.
The equivalent circuit analysis results reported above have enabled us to identify the most appropriate equivalent circuits to represent the experimental data sets. At lower temperatures, where the bulk response of the crystals can be seen in the available frequency range, it is clear that the bulk response contains two components which are in parallel, rather than a series-connected circuit which is usually appropriate for ceramic materials consisting of grain and grain boundary components. Thus, as expected, there is no evidence of a component attributable to grain boundaries or, indeed, to a surface layer or crystal inhomogeneity. We are therefore now in a position to consider the possible mechanistic origins of the two parallel components, one of which represents long range conduction and the other of which appears to represent local conduction or a dielectric relaxation process.

Figure 6. Impedance spectra for 8YSZ single crystal with the field perpendicular to (110) measured at 500 °C. (a) Experimental and fitted data shown for impedance complex plane plot with the equivalent circuit used, (b) Y' spectroscopic plot, (c) C' spectroscopic plot and (d) residuals.
The traditional explanation of curvature in conductivity Arrhenius plots of YSZ ceramics and single crystals is the so-called 'dipole-trapping model', which invokes the trapping of mobile oxygen vacancies by the Y acceptor dopants. The trapping arises because the dipole components have charges of opposite sign, i.e., using Kröger-Vink notation, they are Y′_Zr and V_O••. At lower temperatures, an additional dissociation enthalpy is required to enable long range conduction of the oxygen vacancies, and this gives rise to an activation energy which contains terms for both dipole dissociation and vacancy migration. At higher temperatures, above the region of curvature in the Arrhenius plots, it is presumed that a sufficient number of dipoles are dissociated and therefore the observed lower activation energy contains only the vacancy migration term.
Our equivalent circuit B is at least partly consistent with this model; the series element R 2 -C 2 represents hopping of oxygen vacancies within the dipoles and therefore leads to dipole reorientation but not long range vacancy migration. The dipole reorientation is an ac process only, but occurs at the same time as long range dc conduction; therefore, R 2 does not contribute to the total crystal resistance R t (= R 1 ). From the difference in activation energies of σ 1 and σ 2 , a value of ∼0.2 eV may be assigned to the dissociation enthalpy. This value is similar to those reported in the literature [11][12][13][14][15] based on high and low temperature activation energies, whereas here both values are obtained from the same, low temperature, data sets. This simple dipole trapping model has certain drawbacks. As pointed out by Ahamer et al., 15 these YSZ materials cannot be regarded as dilute defect systems since the dopant Y concentrations are far too high; it is difficult to imagine how genuinely-free oxygen vacancies could arise, since there will always be Y dopants in the near vicinity of the oxygen vacancies. It is also difficult to explain the differences in conductivity observed between quenched and slow-cooled crystals using this model. As shown by Ahamer et al. 15 and also in Figure 9, quenched crystals have a higher conductivity at low temperatures, followed by a slightly smaller conductivity at high temperatures. The increased conductivity at lower temperatures in the quenched samples could be interpreted reasonably as an increase in the number of free oxygen vacancies arising from dipole dissociation at high temperatures prior to quenching. However, the higher temperature data imply a reduction in mobile vacancy concentration, which cannot be explained by a simple model of dipole dissociation.
Given the large concentration of both Y dopants and oxygen vacancies, other reversible structural changes may occur as a function of temperature which influence the mobile carrier concentration at high temperatures. High temperature neutron diffraction studies on YSZ powders showed additional broad diffuse scattering peaks which disappeared above 650 °C, and a discontinuity in thermal expansion coefficient data was used as evidence for the occurrence of a second order phase transition. 31

We now consider the recently-proposed 'two different barrier heights' model 15 in which the conduction pathway involves a sequential combination of hops over two different barrier heights. This model can also account for curvature in the Arrhenius plots. Thus, at low temperatures the higher barrier limits the long range conductivity. With increasing temperature, the higher barrier becomes less important since it has a higher activation energy than that for hops over the lower barriers. Consequently, at high temperatures, the lower barriers limit the long range conductivity.
A drawback of this model is that it is a series model and therefore, impedance data at low temperatures should fit an equivalent circuit that has two R-C components, representing the two barrier heights, placed in series. In addition, the spectrum of conductivity, Y' vs frequency, should show two plateaux, one representing the overall conductivity at low frequencies and a second one at higher frequencies that includes the conductivity of the easier hops. In our impedance data, there is no evidence of a second series component in the equivalent circuit nor of two plateaux in the conductivity, Y', spectra. As with the dipole dissociation model, there is also the difficulty in explaining the differences in conductivity of quenched and slow-cooled samples.
In conclusion, there is a closer fit of the dipole model to the equivalent circuit B that contains two parallel conduction pathways, but the dipole model is a significant approximation to what must be complex, co-operative conduction mechanisms in which two activation barriers can be identified. Further, the nature of the defect clusters may change at a second order transition and involve more structural reorganization than simple dipole dissociation.
Crystallographic evidence for defect clusters has been obtained by single crystal neutron diffraction studies on YSZ crystals with a range of Y contents 32 and on Sc-doped YSZ ceramics. 33,34 An important cluster appears to be a pair of oxygen vacancies, separated by a cation, in the <111> direction. These are reported to be stable to high temperatures, close to melting. However, the precise nature of the structural changes to the defect complexes responsible for curvature in the Arrhenius plots is, at present, unknown. Possibly, two separate cluster formation mechanisms are involved; one involves Y′_Zr-V_O•• pairs and the other involves oxygen vacancy pairs. The increase in concentration of one kind of cluster may be at the expense of the second kind, and this may be reflected in the conductivity data showing an enhanced conductivity at lower temperatures at the expense of a reduced conductivity at higher temperatures. However, this is speculation and requires further study.
Conclusions
Accurate representation of bulk impedance data of single crystal YSZ samples requires the presence of a dielectric element in the equivalent circuit in addition to the usual element that represents the bulk conductivity. The circuit that best fits the bulk response is a parallel combination of the R 1 -C 1 -CPE 1 conducting element with the R 2 -C 2 dielectric element, Figure 10. R 1 represents the dc resistance of the sample and is the same as the total resistance R t obtained by conventional complex plane analysis. R 2 represents the resistance to defect complex reorientation and has an activation energy similar to that of the total resistance at high temperatures. We are therefore able to determine the parameters for local hopping or dipole reorientation separately from the long range conductivity parameters.
Previous studies on YSZ ceramics showed the need for inclusion of the dipole element but a full assessment of the most appropriate circuit to represent the data was not made. 20 Here we show that, with single crystal data and no contribution from grain boundary impedances, it is possible to identify unambiguously the most appropriate equivalent circuit. Circuit B is also the most logical circuit since it represents the two parallel processes of conduction and dielectric relaxation.
Choice of the most appropriate equivalent circuit to fit and analyse data requires data presentation in numerous ways, so as to give equal weighting to all impedance components over the entire frequency range. Conventional impedance complex plane plots on linear scales, Z'' vs Z', which have been widely used previously to analyse YSZ impedance data, are insensitive to impedance phenomena at high frequencies and were unable to discriminate between the various equivalent circuits that were considered and tested. It was found to be particularly useful to present impedance data as log Y' vs log f, which showed the distribution of conductivities, and log C' vs log f, which showed the distribution of capacitances. These presentations were sensitive to additional impedance components because the equivalent circuit, Figure 10, has a parallel combination of contributing elements that are best separated using admittance-based formalisms; this is an example of the truism: admittances add in parallel whereas impedances add in series.
As far as we are aware, the contribution of short range dielectric processes in parallel with long range ionic conduction has not been well-recognized previously in the analysis of impedance data of YSZ. However, the frequency-dependent, power law ac conductivity at high frequencies, such as shown in Figures 1b, 3c, etc., is widely attributed to the occurrence of series-based, local ac conduction processes as part of overall, long range dc conduction. Such processes are usually represented by a CPE, which can be deconvoluted into resistive and capacitive components, whose relative contribution is given by the CPE parameter, n. From the present results, both CPE 1 and the dielectric processes, R 2 -C 2 , contribute to the overall impedance response of YSZ materials.
The activation energy for σ 2 , which represents dipole reorientation, is similar to that of σ t at high temperatures, where it is presumed that oxygen vacancies require no dissociation energy in order to move. The higher activation energy of σ t at low temperatures therefore contains a contribution from dipole dissociation, estimated at ∼0.2 eV. This simple model of dipole dissociation needs modification to take account of, first, structural studies of defect complexes, including temperature-dependent cluster formation in diffuse scattering neutron diffraction data and, second, the high concentration of dopants and oxygen vacancies, which greatly exceeds the limit for considerations using dilute defect equilibria.
These results on a single crystal sample show that the bulk response contains two components, representing dielectric and conduction processes. Recognition and modelling of this complexity may help to shed light on grain boundary contributions to the impedance of ceramic samples. The intermediate frequency capacitance plateau that we identify with the dielectric component C 2 has been in evidence in the impedance response of numerous other single crystal and ceramic samples, 35,36 not only of YSZ; it may therefore be a common feature of the impedance data of many ionic conductors.

| 8,947.4 | 2018-08-25T00:00:00.000 | [ "Materials Science" ] |
Beta-type functions and the harmonic mean
For arbitrary $f:(a,\infty)\rightarrow(0,\infty)$, $a\ge 0$, the bivariable function $B_{f}:(a,\infty)^{2}\rightarrow(0,\infty)$, related to the Euler Beta function, is considered. It is proved that $B_{f}$ is a mean iff it is the harmonic mean H. Some applications to the theory of iterative functional equations are given.
In this paper we are interested in answering when a beta-type function is a bivariable mean in (a, ∞). Our main result says that the beta-type function of a generator f : (a, ∞) → (0, ∞) is a mean iff $f(x) = 2x\,e^{\alpha(x)}$, where α : ℝ → ℝ is an additive function, or equivalently, that $B_f$ is the harmonic mean (Theorem 2). This substantially improves the result of [1], where the homogeneity of the beta-type function is assumed.
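A quick numerical confirmation of this statement (assuming the beta-type definition $B_f(x,y) = f(x)f(y)/f(x+y)$ recalled below, and using the additive function α(x) = cx):

```python
from math import exp

def beta_type(f, x, y):
    """Beta-type function of generator f: B_f(x, y) = f(x) f(y) / f(x + y)."""
    return f(x) * f(y) / f(x + y)

def harmonic(x, y):
    return 2.0 * x * y / (x + y)

c = 0.7                                  # alpha(x) = c*x is additive on R
f = lambda x: 2.0 * x * exp(c * x)       # generator of the form 2x e^{alpha(x)}

for x, y in [(1.0, 2.0), (0.3, 5.0), (4.0, 4.0)]:
    assert abs(beta_type(f, x, y) - harmonic(x, y)) < 1e-12
```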
In the preliminary Sect. 2 we recall the notions of mean, premean and reflexivity of a function, and some of their properties. The increasingness of the beta-type function $B_f$ is equivalent to the concavity of the function log ∘ f in the sense of Wright (Proposition 1). In Sect. 3 we prove the main results on beta-type functions and reflexivity. At the end we propose a unique and natural extension of the harmonic bivariable mean to ℝ².
The case of k-variable beta-type functions, k ≥ 3, will be considered in our next paper.
Definition 3. Given $f:(a,\infty)\rightarrow(0,\infty)$, $a\ge 0$, we call the function $B_f(x,y) := \frac{f(x)\,f(y)}{f(x+y)}$, $x, y \in (a,\infty)$, the beta-type function of generator f.
Note that the beta-type function $B_f$ of a generator $f:(a,\infty)\rightarrow(0,\infty)$ is well defined, since f takes only positive values. In this context one could also consider the functions of beta-type of generators f defined on the intervals (−∞, a) with values in (−∞, 0).
Remark 4.
Replacing f in Definition 3 by 1/f we get $B_{1/f} = 1/B_f$. We shall prove the following.

Proposition 1. Let f : (0, ∞) → (0, ∞) be a continuous function. The following two conditions are equivalent: (i) $B_f$ is an increasing mean; (ii) $B_f$ is reflexive and log ∘ f is concave.

Proof. Assume (i). Then, as log is increasing in (0, ∞), the function log ∘ B_f is increasing in each variable. So, by the definition of $B_f$, for all x, y, z ∈ (0, ∞) with y ≤ z,
$$\log f(y) + \log f(x+z) \le \log f(z) + \log f(x+y).$$
Choosing arbitrary u, v > 0, u < v, and t ∈ (0, 1), and taking y = u, z = (1−t)u + tv, x = (1−t)(v−u), we obtain that the above implication is equivalent to the following one: for all u, v > 0 and t ∈ (0, 1),
$$\log f(tu + (1-t)v) + \log f((1-t)u + tv) \ge \log f(u) + \log f(v),$$
which shows that $B_f$ is increasing if, and only if, log ∘ f is concave in the sense of Wright. Since f is continuous, in view of a theorem of Ng [6], log ∘ f is Wright concave if, and only if, log ∘ f is concave. Since $B_f$ is a mean, it is reflexive. Thus we have shown that (ii) holds true. The converse implication follows from Remark 3.
Beta-type functions and reflexivity
Applying the method of the theory of iterative functional equations by Kuczma [4], we prove the following result for f : (0, ∞) → (0, ∞). The function f satisfies the functional equation
$$f(2x) = \frac{f(x)^{2}}{x}, \qquad x \in (0, \infty), \qquad (3.3)$$
which expresses the reflexivity of $B_f$, if, and only if, f is determined by an arbitrary function $f_0 : [1, 2) \rightarrow (0, \infty)$ through the extension formula (3.4), obtained by iterating (3.3) on the intervals $[2^{n}, 2^{n+1})$, n ∈ ℤ.
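To illustrate the construction, here is a small Python sketch that extends a prescribed f₀ on [1, 2) to (0, ∞) using the reflexivity equation f(2x) = f(x)²/x as reconstructed above; it is an illustration of the Kuczma-style extension, not the paper's formula (3.4) verbatim.

```python
from math import sqrt, log2, floor

def extend_generator(f0, x):
    """Extend f0, given on [1, 2), to (0, infinity) via f(2x) = f(x)**2 / x.

    Upward   (x >= 2):     f(x) = f(x/2)**2 / (x/2)
    Downward (0 < x < 1):  f(x) = sqrt(x * f(2x))
    """
    n = floor(log2(x))                   # x lies in [2**n, 2**(n+1))
    if n == 0:
        return f0(x)
    if n > 0:
        half = extend_generator(f0, x / 2.0)
        return half * half / (x / 2.0)
    return sqrt(x * extend_generator(f0, 2.0 * x))

# Consistency check with f(x) = 2x, for which f(2x) = f(x)**2 / x holds exactly:
print(extend_generator(lambda x: 2.0 * x, 12.0))    # 24.0
print(extend_generator(lambda x: 2.0 * x, 0.125))   # 0.25
```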
(ii) By induction, we shall prove that (3.10) holds for all n ∈ ℕ₀. Take arbitrary x ∈ (0, 1). There exists a unique n ∈ ℕ₀ such that $x \in \left[\tfrac{1}{2^{n+1}}, \tfrac{1}{2^{n}}\right)$. If n = 0, then 1 ≤ 2x < 2 and, by the definition of f₀, (3.10) holds true for n = 0. Assume that (3.10) holds true for some n ∈ ℕ₀.
Taking arbitrarily $x \in \left[\tfrac{1}{2^{n+1}}, \tfrac{1}{2^{n}}\right)$, we have $2x \in \left[\tfrac{1}{2^{n}}, \tfrac{1}{2^{n-1}}\right)$; thus, applying (3.10), we obtain the value of f(2x). Hence, by (3.3), (3.10) holds true for $x \in \left[\tfrac{1}{2^{n+1}}, \tfrac{1}{2^{n}}\right)$, which means that (3.10) holds for n + 1. By the induction principle, formula (3.10) holds true for all x ∈ (0, 1). Both reasonings prove the validity of (3.4), which is the second statement of our theorem. Arguing similarly as in the previous case, we can prove the converse of (ii). In part (iii), since one implication is obvious, we must show that the continuity of f₀ and (3.5) imply the continuity of f. By (3.4), the continuity of f₀ on (1, 2) implies that f is continuous on $\bigcup_{n\in\mathbb{Z}} \left[2^{n}, 2^{n+1}\right)$. It remains to show the continuity of f at the point 2ⁿ for all n ∈ ℤ.
Applying in turn (3.4) and (3.5), we obtain the continuity of f at each point 2ⁿ. The validity of the remaining assertion is obvious since, by (3.4), f is defined on the interval $[2^{n}, 2^{n+1})$ and the composition and multiplication of continuous functions are continuous. This finishes the proof. Krull's theorem [2,3] (see also [4], pp. 114-115) gives us the existence of a (unique up to a constant) convex solution h to (3.12). Since, for any real constant k, the function $u \longmapsto e^{-u} + k$ satisfies the functional equation (3.12) and is convex, it follows that, for some k, $h(u) = e^{-u} + k$, u ∈ ℝ.

| 1,239 | 2017-10-23T00:00:00.000 | [ "Mathematics" ] |
Supersymmetry Searches in GUT Models with Non-Universal Scalar Masses
We study SO(10), SU(5) and flipped SU(5) GUT models with non-universal soft supersymmetry-breaking scalar masses, exploring how they are constrained by LHC supersymmetry searches and cold dark matter experiments, and how they can be probed and distinguished in future experiments. We find characteristic differences between the various GUT scenarios, particularly in the coannihilation region, which is very sensitive to changes of parameters. For example, the flipped SU(5) GUT predicts the possibility of $\tilde{t}_1-\chi$ coannihilation, which is absent in the regions of the SO(10) and SU(5) GUT parameter spaces that we study. We use the relic density predictions in different models to determine upper bounds for the neutralino masses, and we find large differences between different GUT models in the sparticle spectra for the same LSP mass, leading to direct connections of distinctive possible experimental measurements with the structure of the GUT group. We find that future LHC searches for generic missing $E_T$, charginos and stops will be able to constrain the different GUT models in complementary ways, as will the Xenon 1 ton and Darwin dark matter scattering experiments and future FERMI or CTA $\gamma$-ray searches.
Introduction
Recent years have provided a plethora of new experimental and cosmological information that places important constraints on possible extensions of the Standard Model (SM). Recent LHC results, including the Higgs measurements [1,2,3,4], severely constrain some of the simplest scenarios. We know, however, that there must be some physics beyond the SM. For example, massive neutrinos cannot be accommodated within the SM, nor can the observed baryon asymmetry of the universe or the origin of Cold Dark Matter (CDM) be explained. In looking for possible extensions of the SM that address these issues, supersymmetry (SUSY) continues to provide significant theoretical advantages, especially if we believe in unification beyond the SM. Most notably for the purposes of this paper, the lightest supersymmetric particle (LSP) can explain the origin of CDM [5,6].
Reconciling the amount of CDM deduced from the data of the Wilkinson Microwave Anisotropy Probe (WMAP) [7,8] and the Planck satellite [9,10] with the predictions of supersymmetric models has been a major challenge in recent years. Although the minimal constrained supersymmetric extension of the SM (CMSSM) is still compatible with the LHC and WMAP predictions [11], the allowed parameter space is severely constrained, since a Higgs mass $m_h \sim 125$ GeV implies a relatively heavy sparticle spectrum. However, the allowed regions may change significantly in different versions of the MSSM, especially in the coannihilation strips where, e.g., $m_\chi \sim m_{\tilde{\tau}_1}$ or $m_\chi \sim m_{\tilde{t}_1}$. Because of their narrow widths, these coannihilation strips are particularly sensitive to changes in the input model parameters.
In this work, we study these changes in various scenarios that go beyond the CMSSM, in which soft SUSY-breaking terms are assumed to be universal at the GUT scale, using dark matter considerations as a probe of different theoretical constructions. In general, GUT scenarios that favour particular degeneracies in the sparticle spectrum will lead to additional contributions to coannihilations, thus enhancing their efficiency. Conversely, experimental signals sensitive to these degeneracies can provide information about the gauge unification group. Following previous studies of models with non-universal soft SUSY-breaking Higgs mass parameters (NUHM1,2) [12,13,14,15,16], we analyse the predictions of various SUSY GUT models, including SO(10) [17,18,19,20,21], minimal SU(5) [22,23] and flipped SU(5) [24,25,26,27].
Our paper is structured as follows: In Section 2 we review the features of GUT models that are relevant for our studies. In Section 3 we discuss our sampling methodology for searching for regions of the parameter space compatible with the data. In Section 4 we discuss the implications of non-universalities for the different mechanisms that reproduce the correct relic CDM density. In Section 5 we discuss in more detail how the results of our scans depend on the relation between the relic abundance mechanisms, the value of the Higgs boson mass and the supersymmetric contribution to $\delta a_\mu = [(g_\mu - 2)/2]_{\rm exp} - [(g_\mu - 2)/2]_{\rm SM}$. In Section 6, we present the cross sections for direct and indirect CDM detection. In Section 7, we study how sparticle searches at the LHC impose further constraints on our models. Finally, in Section 8 we summarise our results and discuss future prospects.
Relevant Features of Supersymmetric GUT Models
We assume that SUSY breaking occurs at some scale M X above M GU T , and is induced by a mechanism that generates generation-blind soft terms. Between the scales M X and M GU T , the renormalization group equations (RGE) and additional interactions associated with flavour, e.g., Yukawa interactions, might induce non-universalities in the soft terms, which we do not consider here, while the theory still preserves the GUT symmetry. Below M GU T , the effective theory is the MSSM with SUSY masses that are common for fields in the same representation of the GUT group. Our approach is therefore to assume a pattern of soft terms with common soft masses for all the particles that belong to the same representations of the GUT group under consideration, while allowing different common masses for inequivalent representations.
The simplest possibility arises within an SO(10) GUT [17,18,19,20,21]. In this case, all quarks and leptons are accommodated in the same 16 representation, while we assume that the up and down Higgs multiplets are in a pair of 10 representations. Since this assignment also determines common sfermion mass matrices and beta functions, similar behaviour under RGE running is to be expected. Consequently, in this GUT model there is a common soft SUSY-breaking mass for all sfermions (squarks and sleptons) and two different masses, $m_{h_u}$ and $m_{h_d}$, for the Higgs multiplets. From this point of view, the SO(10) scenario can be identified with the NUHM2 studied previously [13], and we include it as a reference for comparison with other GUT groups.
The situation changes significantly in the case of the SU(5) group [22,23]. In this case, the multiplet assignments are as follows: each generation fills a 10 containing the quark doublet Q and the singlets $u^c$ and $e^c$, and a $\bar{5}$ containing the lepton doublet L and the singlet $d^c$, with the singlet neutrino $\nu^c$ in a 1 of SU(5). The soft terms that we assume are the same for all the members of the same representation at the GUT scale, but are different for the 10 and $\bar{5}$ in general, and we assume that the singlet neutrinos decouple at the GUT scale and therefore do not affect our analysis. A similar approach was followed in [28], but the main aim of that work was to reconcile the correct prediction of $m_h$ (requiring a heavy SUSY spectrum) with a supersymmetric contribution that could explain the discrepancy of the SM prediction for $(g_\mu - 2)$ with its experimental value. In contrast, the main aim of this work is to analyse the relic density predictions and to extend the full analysis also to the case of flipped SU(5) [24,25,26,27], which leads to predictions that differ significantly and has some distinctive features. The particle assignments are different in flipped SU(5) [24,25,26,27]: the 10 contains Q, $d^c$ and $\nu^c$, the $\bar{5}$ contains L and $u^c$, and $e^c$ sits in a singlet. The impacts of these assignments on the evolution of sparticle masses, and hence on the coannihilation strips, are discussed in detail below. As before, we assume that the singlet neutrinos have already decoupled at the GUT scale. In both SU(5) models we assume that the Higgs doublets $H_u$ and $H_d$ of the MSSM arise from 5 and $\bar{5}$ SU(5) representations, respectively.
The soft SUSY-breaking scalar terms for the fields in an irreducible representation r of the unification group are parametrised as multiples of a common scale $m_0$, $m_r = x_r\, m_0$, while the trilinear terms are defined as $a_r = a_0\, Y_r\, m_0$, where $Y_r$ is the Yukawa coupling associated with the representation r, and we use the standard parametrization with $a_0$ a dimensionless factor, which we assume to be representation-independent. Since the two Higgs fields of the MSSM arise from different SU(5) representations, they have different soft masses in general. The situation in the different GUT groups is then as follows:

• SO(10): In addition to the CMSSM parameters, we introduce two new parameters $x_u$ and $x_d$ defined as follows: $m_{16} = m_0$, $m_{h_u} = x_u\, m_0$, $m_{h_d} = x_d\, m_0$. Similarly, the A-terms are parametrised by the single common scale $a_0\, m_0$, as in minimal SO(10) with fermion fields in a common 16 representation and two Higgs fields in different 10 representations.

• SU(5): Here we use as reference the common soft SUSY-breaking mass for the fields of the 10, $m_{10} = m_0$. The masses for the other representations are then defined as $m_{\bar{5}} = x_5\, m_0$, $m_{h_u} = x_u\, m_0$, $m_{h_d} = x_d\, m_0$, and the A-terms are specified via a common mass scale.

• Flipped SU(5): Here we have $m_{10} = m_0$, $m_{\bar{5}} = x_5\, m_0$, $m_{1} = x_R\, m_0$, $m_{h_u} = x_u\, m_0$, $m_{h_d} = x_d\, m_0$, where $x_R$ refers to the SU(2)-singlet fields. Similarly, the A-terms are specified as universal.

A similar parametrization of GUT scalar non-universality in SO(10) and SU(5) was used also in Ref. [29]. For our analysis in the following Sections we assume a common unification scale $M_{GUT}$ defined as the meeting point of the $g_1$ and $g_2$ gauge couplings. The GUT value for $g_3$ is obtained by requiring $\alpha_s(M_Z) = 0.1187$. Above $M_{GUT}$ we assume a unification group that breaks at this scale. We also assume that SUSY is broken above $M_{GUT}$ by soft terms that are representation-dependent but generation-blind.
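To make the representation bookkeeping concrete, the following hypothetical helper (names and structure are ours, not the paper's code) maps the scaling factors $x_i$ onto GUT-scale soft masses for each MSSM field, following the assignments described above:

```python
def soft_masses(group, m0, x):
    """GUT-scale soft scalar masses per MSSM field for a given GUT group.

    The representation assignments follow Section 2; the overall scale is m0
    and the x_i are the dimensionless non-universality factors.
    """
    if group == "SO(10)":
        rep = {"Q": 1.0, "U": 1.0, "D": 1.0, "L": 1.0, "E": 1.0,   # all matter in the 16
               "Hu": x["xu"], "Hd": x["xd"]}
    elif group == "SU(5)":
        rep = {"Q": 1.0, "U": 1.0, "E": 1.0,                        # 10 = (Q, u^c, e^c)
               "D": x["x5"], "L": x["x5"],                          # 5bar = (d^c, L)
               "Hu": x["xu"], "Hd": x["xd"]}
    elif group == "FSU(5)":
        rep = {"Q": 1.0, "D": 1.0,                                  # 10 = (Q, d^c, nu^c)
               "U": x["x5"], "L": x["x5"],                          # 5bar = (u^c, L)
               "E": x["xR"],                                        # singlet e^c
               "Hu": x["xu"], "Hd": x["xd"]}
    else:
        raise ValueError("unknown group: " + group)
    return {field: factor * m0 for field, factor in rep.items()}

# A light flipped-SU(5) slepton/stop-friendly corner (illustrative numbers):
print(soft_masses("FSU(5)", 1000.0, {"x5": 0.5, "xR": 1.5, "xu": 1.2, "xd": 0.8}))
```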
Sampling Methodology and Constraints
We work within the three different GUT scenarios SO(10), SU(5) and flipped SU(5) described above, mapping the areas of the parameter space allowed by WMAP, Planck and other constraints onto the ratios of GUT values of the soft terms for each representation. We perform scans of the model parameter spaces using their matter representation patterns as guides for the soft scalar terms at the GUT scale, assuming common gaugino masses. For our searches for regions of the parameter spaces compatible with the data, we use a Bayesian approach based on the MultiNest algorithm [30]. The likelihood function that drives our exploration to regions of the parameter space where the model predictions fit the data well is built from the following components:

$$\mathcal{L} = \mathcal{L}_{EW} \times \mathcal{L}_{B} \times \mathcal{L}_{\Omega_\chi h^2} \times \mathcal{L}_{LUX} \times \mathcal{L}_{Higgs} \times \mathcal{L}_{SUSY},$$

where L EW is the part corresponding to electroweak precision observables, L B to B-physics constraints, L Ωχh 2 to measurements of the cosmological DM relic density, L LUX to the constraints from direct DM detection searches (dominated by the LUX experiment) and L Higgs (L SUSY ) to Higgs (sparticle) searches at colliders. We now discuss each component in turn.

L EW : We implement the constraints on the effective electroweak mixing angle sin²θ_eff and the total width of the Z-boson, Γ_Z, from the LEP experiments [42]. For the mass of the W boson, m_W, we use the Particle Data Group value [43], which combines the LEP2 and Tevatron measurements. We assume Gaussian likelihoods for all these quantities, with means and standard deviations as given in Table II of [44].
L B : We consider flavor observables related to B physics, assuming Gaussian likelihoods for all of them; for most of them we use the measurements shown in Table II of [44]. However, the experimental values assumed for BR(B_s → µ⁺µ⁻) and BR(B_d → µ⁺µ⁻) are (2.9 ± 0.8) × 10⁻⁹ and (3.6 ± 1.55) × 10⁻¹⁰, respectively, where we quote the total uncertainties found by adding in quadrature the theoretical [45] and experimental [46,47] uncertainties.
L Ωχh 2 : We include the constraint on the DM relic abundance from the Planck satellite, assuming that the lightest neutralino is the dominant DM component. We use as central value the result from Planck temperature and lensing data Ω χ h 2 = 0.1186 ± 0.0031 [48], with a (fixed) theoretical uncertainty τ = 0.012, following Refs. [14,16,61], to account for the numerical uncertainties entering in the calculation of the relic density.
L LUX : For direct DM detection, we include the upper limit from the LUX experiment [49], as implemented in the LUXCalc code [50], including both the spin-independent and spin-dependent cross-sections in the event rate calculation. We adopt hadronic matrix elements determined by lattice QCD [51,52].
L Higgs : The likelihood for the Higgs searches has two components. The first implements bounds obtained from Higgs searches at LEP, Tevatron and LHC via HiggsBounds [53], which returns whether a model is excluded or not at the 95% CL. The second component constrains the mass and the production times decay rates of the Higgs-like boson discovered by the LHC experiments ATLAS [1] and CMS [2]. For this we use HiggsSignals [54], assuming a theoretical uncertainty in the calculation of the lightest Higgs mass of 2 GeV.
L SUSY : The constraints from SUSY searches at LEP and Tevatron are evaluated following the prescription proposed in [55]. The present limits from the Run 1 of LHC are displayed in the corresponding Figures in Section 4.
L g−2 : We adopt for the discrepancy between the experimental value of the anomalous magnetic moment of the muon and the value calculated in the Standard Model $\delta a_\mu^{\rm SUSY} = (28.7 \pm 8.2) \times 10^{-10}$ [56], where experimental and theoretical errors have been added in quadrature. This corresponds to a 3.6σ discrepancy with the value predicted in the Standard Model, and relies on e⁺e⁻ data for the computation of the hadronic loop contributions to the Standard Model value. The likelihood function is assumed to be Gaussian.
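Schematically, each Gaussian component contributes a χ²-like term to the total log-likelihood; a minimal sketch (the m_h central value and errors below are illustrative stand-ins for the HiggsSignals machinery, not the paper's exact implementation):

```python
import math

def gaussian_loglike(pred, mean, sigma_exp, sigma_th=0.0):
    """Gaussian log-likelihood with experimental and theoretical errors in quadrature."""
    s2 = sigma_exp ** 2 + sigma_th ** 2
    return -0.5 * (pred - mean) ** 2 / s2 - 0.5 * math.log(2.0 * math.pi * s2)

def total_loglike(point):
    """Sum the Gaussian components for a parameter-space point (a dict of predictions)."""
    return (gaussian_loglike(point["omega_h2"], 0.1186, 0.0031, 0.012)     # Planck + tau
            + gaussian_loglike(point["delta_amu"], 28.7e-10, 8.2e-10)      # (g-2)/2
            + gaussian_loglike(point["mh"], 125.1, 0.3, 2.0))              # illustrative m_h
```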
In each case, we run the MultiNest algorithm until we reach a sample of about 3 × 10⁴ points. Our focus in this work is to scan the parameter space of the new models, in order to study their phenomenology and identify regions compatible with the data. Performing any statistical (frequentist or Bayesian) interpretation of our results, based on global fits and confidence or credibility level regions, is beyond the scope of this paper and would require samples that are orders of magnitude larger than the ones we have gathered. Instead, we present scatter plots showing the correlations of pairs of parameters and/or observables in various planes. In doing this, we select from the full samples only those points predicting the values of all the observables within the 2σ interval (with σ obtained by summing in quadrature the experimental and theoretical errors, as explained in the previous paragraphs). If for an observable only an experimental exclusion limit exists, then the theoretical value is required to be within the 90/95% CL exclusion limits. After applying these cuts, the number of points in the samples is substantially reduced. In particular, in none of the GUT models do we find points with a supersymmetric contribution to δa_µ within the 2σ interval. However, we highlight the points in our samples whose contributions to the anomalous magnetic moment of the muon lie in the 3σ interval. We discuss this issue in Section 5.
Non-Universality Parameters and Relic Density Mechanisms
It is well known that, if the required amount of relic dark matter is provided by neutralinos, then particular mass relations must be present in the supersymmetric spectrum. In addition to mass relations, we use the neutralino composition to classify the relevant points of the supersymmetric parameter space. The higgsino fraction of the lightest neutralino mass eigenstate is characterized by the quantity $h_f = |N_{13}|^2 + |N_{14}|^2$, where the $N_{ij}$ are the elements of the unitary mixing matrix that correspond to the higgsino mass states. Thus, we classify the points that pass the constraints discussed in Section 3 according to the following criteria (a schematic version of this classification is sketched below).

Higgsino neutralino: In this case, the lightest neutralino is higgsino-like and, as we discuss later, the lightest chargino $\chi^\pm_1$ is almost degenerate in mass with $\chi^0_1$. The couplings to the SM gauge bosons are not suppressed and $\chi^0_1$ pairs have large cross sections for annihilation into W⁺W⁻ and ZZ pairs, which may reproduce the observed value of the relic abundance. Clearly, coannihilation channels involving $\chi^\pm_1$ and $\chi^0_2$ also contribute.
A/H resonances: The correct value of the relic abundance is achieved thanks to s-channel annihilation, enhanced by the resonant A propagator. The thermal average ⟨σ_ann v⟩ spreads out the peak in the cross section, so that neutralino masses for which $2m_\chi \simeq m_A$ is not exactly realized can also experience resonant annihilations.
$\tilde{\tau}$ coannihilations: The neutralino is bino-like, annihilation into leptons through t-channel slepton exchange is suppressed, and coannihilations involving the nearly-degenerate $\tilde{\tau}_1$ are necessary to enhance the thermal-averaged effective cross section.
$\tilde{\tau}-\tilde{\nu}_\tau$ coannihilations: Similar to the previous case, but in addition the $\tilde{\nu}_\tau$ is nearly degenerate in mass with the $\tilde{\tau}_1$.
$\tilde{t}_1$ coannihilations: The $\tilde{t}_1$ is light and nearly degenerate with the bino-like neutralino. These coannihilations are present in the flipped SU(5) model.

We have performed the parameter-space scans in the three GUT groups with two different sets of ranges, as detailed in Table 1. The first one (Set 1) is broader, sampling soft terms up to 10 TeV and all the $x_i$ in the ranges $0 < x_i < 2$. The MultiNest sampling of Set 1 finds that the data are more easily accommodated with a heavy spectrum, where the higgsino neutralino and A funnel mechanisms dominate, and only few points in the coannihilation areas are found within the 3 × 10⁴ points sample.
Therefore, in order to zoom in on the low mass spectrum, where coannihilations are expected to show up and which is also favoured by the δa_µ constraint, we performed a separate scan (Set 2), where we decreased the upper limits on $m_0$ and $m_{1/2}$. Furthermore, the ranges of the parameters $x_i$ are also restricted, since the coannihilation regions depend on the values of the $x_i$ in a known way, as we explain below.
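A schematic version of the mechanism classification used for the scatter plots might look as follows; the degeneracy windows and the higgsino threshold are illustrative choices on our part, not the actual cuts (12)-(16) of the paper:

```python
def classify(point, window=0.15):
    """Assign a relic-density mechanism label to a spectrum point.

    point: dict with the neutralino mass m_chi, higgsino fraction h_f and the
    masses m_stau, m_snu, m_stop, m_A (all in GeV).
    """
    if point["h_f"] > 0.5:                                    # illustrative threshold
        return "higgsino"
    if abs(point["m_A"] - 2.0 * point["m_chi"]) < window * point["m_A"]:
        return "A/H funnel"
    stau_deg = (point["m_stau"] - point["m_chi"]) < window * point["m_chi"]
    snu_deg = (point["m_snu"] - point["m_chi"]) < window * point["m_chi"]
    if stau_deg and snu_deg:
        return "stau-sneutrino coannihilation"
    if stau_deg:
        return "stau coannihilation"
    if (point["m_stop"] - point["m_chi"]) < window * point["m_chi"]:
        return "stop coannihilation"
    return "other"
```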
[Table 1: ranges of the input parameters for the Set 1 and Set 2 scans in the SO(10), SU(5) and flipped SU(5) models; in Set 1, 100 GeV ≤ m_0 ≤ 10 TeV.]

In the following plots, the points corresponding to the above mechanisms will be presented using different symbols and colours, as specified in the legend of Fig. 1. Although, in the scatter plots that we present, points of different types appear superimposed, we found that by applying the selection rules (12)-(16) to the whole samples, the obtained sets do not intersect. Only a few points that have $h_f > 0.1$ and do not satisfy any of the above conditions are found in the Set 1 scans. These points lie in the areas of both the A/H resonances and a higgsino-like neutralino. Since they do not form a distinct region, they will not be shown in the figures, for reasons of clarity. Fig. 1 shows the correlations between the non-universal parameters and the relic density mechanisms for each GUT group. The parameter $x_d$ has no particular correlation with the above mechanisms, while in all cases higgsino DM corresponds to $x_u > 1$, and the same is also true for almost all the A/H funnel points, as seen in the upper left, upper right and lower left panels of Fig. 1. These mechanisms are independent of the GUT model choice, because the Higgs mass parameters follow the same pattern in the three scenarios. They are also influenced by the other soft terms due to renormalization group running, but this effect is not apparent in scatter plots like those presented here. In SO(10) and SU(5) most of the $\tilde{\tau}$ coannihilation points have $x_u < 1$. In SU(5), the flexibility allowed by the $x_5$ parameter allows $\tilde{\tau}-\tilde{\nu}$ coannihilations that are not present in SO(10). As seen in Fig. 1, these points lie in the $x_5 < 1$ region. This is due to the fact that the lepton doublet belongs to the 5 representation while the quarks and the lepton singlet are in the 10. Then, to satisfy the $m_h$ constraint, the squark masses have to be large and the same is true for the right sleptons. However, left-slepton soft masses driven by a small value of $x_5$ may lead to sleptons in the coannihilation range. The lower panels of Fig. 1 show that in the flipped SU(5), $\tilde{\tau}$ and $\tilde{\tau}-\tilde{\nu}$ coannihilations are located in the quadrants defined by $x_u < 1$, $x_5 < 1$, $x_R > 1$. We also see that a $\tilde{t}_1$ coannihilation area is present. This is possible in flipped SU(5) because the right-handed squarks are in the 5 representation, so the stop mass decreases with $x_5$. On the other hand, $x_R$ cannot be very small or the lightest stau becomes tachyonic. By restricting it to be > 1 we avoid this situation, and also increase the left component in the lighter stau, since $x_5 < x_R$.
For simplicity, and to avoid any discussion about cosmological constraints on tachyons, we consider here only positive values for $m^2_{H_u}$ and $m^2_{H_d}$ at the GUT scale. However, some negative values may be allowed, and have been included in some other analyses, leading to small differences. For example, in the case of the NUHM2, the authors of [13] also consider negative values of $m^2_0$, $m^2_{H_u}$ and $m^2_{H_d}$ and find their best fit point for $m_0 < 0$. In [57] the authors find a small stop island at 95% CL in the NUHM1 ($m^2_{H_u} = m^2_{H_d} \neq m^2_0$) and a larger one in the NUHM2; this can be attributed to negative values of $m^2_{H_u}$, which enters in the RGE for $m^2_{\tilde{t}_R}$ and decreases its value. Because of our restriction to positive values of $x_u$ and $x_d$, in our analysis flipped SU(5) is the only scenario where this mass can become low, for low values of $x_5$, due to its presence in the 5 instead of the 10. In [28], $M_A$ and µ are taken as free parameters at low energies and the RGEs are used to obtain the corresponding GUT values for $m^2_{H_u}$ and $m^2_{H_d}$, as described in [15]. This is a way to avoid sampling points that fail the electroweak symmetry breaking test, although some of these points correspond to negative values for $m^2_{H_u}$ and/or $m^2_{H_d}$.
Relic Density, Higgs Mass and $\delta a^{\rm SUSY}_\mu$
We now discuss in more detail the relations found in our scans between the relic abundance mechanisms, the value of the Higgs boson mass and the supersymmetric contribution to δa_µ. Most of the turquoise points in Fig. 2, with a higgsino $\chi^0_1$, are confined in a thin strip with mass around 1 TeV, independently of the gauge group, and are only present in the upper panels of Fig. 1 where $x_u > 1$, as discussed previously. A higgsino-like neutralino with mass around 1 TeV is a general prediction driven by the relic density bound and has been emphasized before in many analyses [58], [59], [60], [61], [62].
Most of the A/H resonance points have a $\chi^0_1$ mass larger than 800-900 GeV. They are numerous in the Set 1 scans (upper panels of Fig. 2), whereas they are reduced substantially in the Set 2 scans (lower panels of Fig. 2). In fact, for parameters within the ranges of Set 2, the A/H mass is smaller than in Set 1; therefore its decay width is smaller and the condition (13) is more difficult to satisfy.
The coannihilation areas are different in the various models and, as is well known, they feature upper limits on the $\chi^0_1$ mass. In the case of SO(10), the $\tilde{\tau}_1$ area (orange circles) is well defined, with the neutralino mass in the approximate interval 300-600 GeV. In the case of SU(5), $\tilde{\tau}_1-\tilde{\nu}_\tau$ coannihilations (green triangles) are also involved, and the upper limit increases to ∼ 1.1 TeV. The number of $\tilde{\tau}_1$ points is reduced drastically in the Set 1 scan of flipped SU(5) and in the Set 2 scan of SO(10). In the former case, the right-handed slepton mass is determined by the parameter $x_R$, which in Set 1 is free to vary over values larger than 1. In the latter case, the reduction is due to tension between the contribution to δa_µ, which needs relatively light sleptons and gauginos, and a Higgs mass around 125 GeV, which pushes the preferred values of $m_0$ and $m_{1/2}$ towards higher values. The coannihilations in flipped SU(5) are recovered in the scan with Set 2, as seen in the bottom-right panel, where we also note the appearance of the $\tilde{t}_1$ strip (dark blue squares). In SU(5) the situation is intermediate. The number of coannihilations does not vary so strongly, but in passing from Set 1 to Set 2 a greater concentration of points with light neutralino masses can be observed.

Figure 2: The neutralino relic density $\Omega_\chi h^2$ as a function of the neutralino mass, using the symbols defined in the legend of Fig. 1. The points surrounded by black squares satisfy the constraint δa_µ at 3σ.
The black squares highlight the points for which the supersymmetric contribution to δa_µ differs from the central value by less than 3σ. The typical values of δa_µ for the black squares are found to be in the range $4-6 \times 10^{-10}$, which is similar to the best-fit points in the NUHM1 and NUHM2 models [12,13]. All the black squares are in the region of $\tilde{\tau}_1$ coannihilation, with a minority also featuring $\tilde{\tau}_1-\tilde{\nu}_\tau$ coannihilations. The scan with the largest number of δa_µ-friendly points occurs in flipped SU(5) Set 2, due to the lightness of the spectrum and the additional freedom in the choice of parameters, as compared to SO(10) and SU(5).
In all our models, despite the larger freedom in the scalar sector allowed by the new parameters, it is hard to fully explain the δa_µ anomaly with a supersymmetric contribution. In this respect the situation is thus similar to other models with gaugino and sfermion mass unification such as the CMSSM, NUHM1 and NUHM2 models [12,13]. The difficulty of explaining the anomaly at the 2σ level in non-universal scalar GUT models was also recently discussed in [63], while, by relaxing also the condition of gaugino universality, the authors of [64] find models with a supersymmetric contribution at 2σ. In any case, the GUT-boundary conditions employed in both these studies are different from ours.

Figure 3: The spin-independent neutralino-nucleon cross section as a function of the neutralino mass. The dotted line is the current exclusion curve from the LUX experiment [49]. The projected sensitivities at 90% confidence level for the XENON 1 ton experiment [65,66] (dashed line) and the DARWIN experiment [67] (full line) are taken from [68]. See the legend.
Direct and Indirect Dark Matter Searches
In Figure 3 we show scatter plots of the spin-independent neutralino-nucleon cross section as a function of the neutralino mass. The present limits from the null result of the LUX experiment [49] (dotted line) already exclude points with a higgsino-like neutralino and some points in the A/H funnel area. The projected sensitivity of the XENON 1 ton experiment [65,66] shows that it could probe most of these areas, while coannihilations could be fully probed only with a multi-ton mass experiment like the DARWIN project [67], with an exposure of 500 t × y. These sensitivity curves are deduced from the recent study in [68].
We also show in Fig. 4 the present situation of the indirect dark matter search through γ-ray emission from annihilations in the halos of dwarf galaxies of the local group [69], by showing the total non-relativistic neutralino-neutralino annihilation cross section times the relative velocity in dark matter halos, $\sigma_{ann} v_r$, as a function of the neutralino mass. The three curves are all limits from the combined analysis of the FERMI satellite with 6 years of data [70], obtained assuming that ττ, bb̄ and WW final states dominate. Gamma rays may result from the decays and hadronization of any of these final states, and in principle these limits apply only to points where these channels dominate neutralino annihilation; they can therefore be compared with the higgsino and resonance regions. We see that at present the curves do not touch the favoured regions of parameter space. Future data from FERMI or CTA arrays [61], [71] may possibly probe the turquoise and red points (higgsino and A/H funnel regions). As could be expected, the annihilation cross section in the slepton coannihilation region is too small to be probed by this kind of indirect search. The dashed line corresponds to the usual benchmark value of $\langle\sigma_{eff} v_{rel}\rangle \simeq 2-3 \times 10^{-26}\ {\rm cm}^3/{\rm s}$ for a weakly-interacting massive particle with a relic abundance $\Omega h^2 \simeq 0.1$. We remark that the values of $\sigma_{ann} v_r$ shown in Figure 4 coincide with those of the thermal average at freeze-out $\langle\sigma_{eff} v_{rel}\rangle$ only when there are no coannihilation channels and the product $\sigma_{ann} v_r$ is a constant independent of the relative velocity/temperature at freeze-out.

Figure 4: The total non-relativistic $\chi^0_1$ annihilation cross section times relative velocity as a function of the neutralino mass. The purple lines are the present exclusion limits from a FERMI analysis of gamma-ray emission from dwarf spheroidal galaxies [70]; see the legend for the specific final states. The horizontal black dashed line corresponds to the usual benchmark value of $\langle\sigma_{eff} v_{rel}\rangle \simeq 2-3 \times 10^{-26}\ {\rm cm}^3/{\rm s}$.
LHC Searches
LHC missing energy searches: The coannihilation areas yield a light supersymmetric spectrum that is already partially probed by the first years of operation of the LHC. In Figs. 5 and 6, we show the distributions in the $(m_{1/2}, m_0)$ and $(m_{\tilde{q}}, m_{\tilde{g}})$ planes for the various GUT models. The present LHC 95% CL exclusion limits from missing $E_T$ searches [57] are depicted as solid lines, while the projected sensitivity with 300 fb⁻¹ at 14 TeV [57] is represented by dashed lines. We see that the present exclusion limits already constrain the $\tilde{t}$-coannihilation area of flipped SU(5) (blue squares), and graze the $\tilde{\tau}$-coannihilation points favoured by the δa_µ constraint in all GUT models. The projected LHC missing $E_T$ sensitivity covers most of the coannihilation areas, but leaves practically untouched the higgsino and resonance areas.

Heavy Higgs and charginos: We now discuss the sensitivity of other search channels at the LHC to supersymmetric particles in the models we study. The present exclusion curve in the (tan β, $m_A$) plane is shown as a solid purple line in Fig. 7. We see that the mass of the pseudoscalar neutral Higgs A is generally larger than 1 TeV, except for a few points in the higgsino and resonance regions, which are excluded by this constraint. If the sensitivity in this plane could be pushed to masses up to 2-2.5 TeV, most of the $\tilde{\tau}_1$ coannihilation areas could be probed, as seen in the lower panels of Fig. 7.
More interesting is the search for the lighter chargino, $\chi^\pm_1$, shown in the $(m_\chi, m_{\chi^\pm_1})$ plane in Fig. 8. In the Set 1 scans (upper plots), the points are distributed in the region where $m_\chi \lesssim m_{\chi^\pm_1} \lesssim 2m_\chi$. In the coannihilation regions, the neutralino is bino-like, $m_\chi \simeq M_1$, whereas the chargino is gaugino-like with $m_{\chi^\pm_1} \simeq M_2 \simeq 2M_1$. On the other hand, for a higgsino-like neutralino we have $m_\chi \simeq \mu$, since the mass of the lightest chargino is dominated by the $\mu$ mass parameter. The A/H funnel resonance, with bino-higgsino neutralino mixing, lies in the area between the two extreme cases above. This behaviour results from the interplay between the universality of the gaugino masses at the GUT scale and the constraints imposed by the relic abundance.
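The relation $m_{\chi^\pm_1} \simeq M_2 \simeq 2M_1$ invoked above can be made explicit with the standard one-loop renormalisation-group result for universal gaugino masses, a textbook relation rather than a result specific to this analysis:

$$\frac{M_i(Q)}{g_i^2(Q)} \;=\; \frac{m_{1/2}}{g_{\rm GUT}^2}\,,\quad i=1,2,3 \quad\Longrightarrow\quad M_1 : M_2 : M_3 \;\approx\; 1 : 2 : 6 \;\;\text{at the TeV scale}\,,$$

so a bino-like LSP with $m_\chi \simeq M_1$ is naturally accompanied by a wino-like chargino near $2m_\chi$.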
The solid indigo line is the present limit from the search for $\chi^\pm_1 \chi^0_2$ production and decays into W/Z and missing $E_T$, and the solid red line shows the limit from the search for gaugino pair production, $\chi^\pm_1 \chi^0_2$ and $\chi^\pm_1 \chi^\mp_1$, with multi-$\tau$ final-state decays and missing energy. In both cases, the dashed lines indicate the projected sensitivity with 3000 fb$^{-1}$ at 14 TeV; all the limits are taken from Refs. [72] and [57]. In the latter channel, the LHC will be able to probe parts of the $\tilde\tau_1$ coannihilation strips (orange circles and green triangles) and, in the case of flipped SU(5), most of the $\tilde t_1$ coannihilation strip.

Figure 8: Scatter plots of non-universal GUT models in the $(m_\chi, m_{\chi^\pm_1})$ plane, with the same legend as in Fig. 1. The current LHC 95% CL exclusion (solid purple line) and projected sensitivity (dashed purple line) are taken from [57]. See the text for details. The projected lines correspond to the sensitivity with 3000 fb$^{-1}$.

Figure 9: Scatter plots of non-universal GUT models in the $(m_\chi, m_{\tilde t_1})$ plane, with the same legend as in Fig. 1. The solid and dashed purple lines are the present limit and projected sensitivity [73,74,57]. See the text for details. The projected line corresponds to the sensitivity with 3000 fb$^{-1}$.

Figure 10: Scatter plots of non-universal GUT models in the $(m_\chi, m_{\tilde b_1})$ plane, with the same legend as in Fig. 1. The solid purple line is the ATLAS 95% CL limit from [73,74]. See the text for details.
Third-generation squarks: As seen in Fig. 9, the stop mass in the models we study is generally larger than 800 GeV, and the present limits from searches in $\tilde t_1 \to t \chi^0_1$ do not reach such values. On the other hand, the projected sensitivity with 3000 fb$^{-1}$ will partly cover the $\tilde\tau_1$ coannihilation regions. The flipped SU(5) Set 2 scan displays the stop coannihilation strip, where both the $\tilde t_1$ and neutralino masses are in the range 200-600 GeV. However, we see that the $\tilde t_1$ strip is not affected by the above-mentioned search, though we have already seen that it is constrained indirectly by the limits in Figs. 5, 6 and 8.
We see in Fig. 10 that the lighter sbottom squark is heavier than 1 TeV in all the panels. The present 95% CL limit for $\tilde b_1$ pair production decaying to $b\chi$ [73,74] does not reach the favoured regions, and the searches for this sparticle are not competitive with the other channels.
Complementarity of searches: Under the assumption that the lightest neutralino constitutes all of the observed relic abundance, Figs. 3, 4, 5 and 6 show the complementarity of dark matter experiments and LHC searches for supersymmetric particles. The GUT-inspired models and their respective parameter spaces, as studied in our work, can be fully probed or excluded by combining 300 fb$^{-1}$ of data accumulated by LHC missing-energy searches (coannihilation areas) with the next generation of ton-scale direct-detection experiments. This is consistent with the results of Refs. [61] and [57], where similar complementarity was found in studies of the CMSSM, NUHM1, NUHM2 and pMSSM10.
Summary and Conclusions
In the following, we summarize the principal conclusions of this work.
• We have identified different patterns of soft SUSY-breaking terms at the GUT scale, depending on the grand unification group, which we have used to distinguish different GUT scenarios via their dark matter predictions and the constraints from LHC searches.
• We have calculated the SUSY spectra for the different gauge groups, finding that the models predict different spectra for the same LSP mass, connecting possible future observations with the structure of the underlying unified theory.
• None of the GUT models studied offers good prospects for substantially reducing the $a_\mu$ discrepancy via a SUSY contribution.
• In general, scenarios that favour degeneracies in the sparticle spectrum lead to additional contributions to coannihilations, thus enhancing the efficiency and importance of these processes.
• We have studied the different relic density predictions and determined upper bounds for the neutralino mass in the different GUT scenarios. We have also computed the cross sections for direct and indirect dark matter detection in each case, combining the bounds from different dark matter experiments with those from LHC searches.
• We have found that SO(10), SU(5) and flipped SU(5) lead to very different predictions for dark matter and LHC experiments, and thus are distinguishable in future searches. Among other differences, flipped SU(5) predicts $\tilde t_1$-$\chi$ coannihilations that are absent in the other groups within the parameter ranges studied here, but can be explored by LHC searches.
• Direct searches for astrophysical dark matter scattering show interesting prospects for the XENON1T and DARWIN experiments, and models with a higgsino-like LSP or A/H resonance annihilation may offer prospects for future FERMI or CTA γ-ray searches.
• The LHC searches for generic missing E T , charginos and stops are quite complementary, and future LHC runs will be able to constrain the models in several different ways.
The interesting prospects for exploring the parameter spaces of different SUSY GUT models found in this paper, and the fact that their potential signatures are quite distinctive, whet our appetites for data from LHC Run 2 and searches for astrophysical dark matter.
Superchiral near fields detect virus structure
Optical spectroscopy can be used to quickly characterise the structural properties of individual molecules. However, it cannot be applied to biological assemblies because light is generally blind to the spatial distribution of the component molecules. This insensitivity arises from the mismatch in length scales between the assemblies (a few tens of nm) and the wavelength of light required to excite chromophores (≥150 nm). Consequently, with conventional spectroscopy, ordered assemblies, such as the icosahedral capsids of viruses, appear as indistinguishable isotropic spherical objects. This limits potential routes to rapid high-throughput portable detection appropriate for point-of-care diagnostics. Here, we demonstrate that chiral electromagnetic (EM) near fields, which have both enhanced chiral asymmetry (referred to as superchirality) and subwavelength spatial localisation (∼10 nm), can detect the icosahedral structure of virus capsids. Thus, they can detect both the presence and relative orientation of a bound virus capsid. To illustrate the potential uses of the exquisite structural sensitivity of subwavelength superchiral fields, we have used them to successfully detect virus particles in the complex milieu of blood serum. A technique that uses twisted light fields to detect biomolecular structures could find application as a low-cost clinical tool for screening viruses. The protein coatings around many viruses, such as the turnip yellow mosaic virus (TYMV), have complex polyhedral shapes that are difficult to resolve with conventional optical microscopes. Malcolm Kadodwala from the University of Glasgow and colleagues in the United Kingdom now report that 'superchiral' light — localized fields generated by metal nanostructures that spiral as they travel — is sensitive to the asymmetric polyhedral structure of TYMV. By spectroscopic measurements of particle rotations in superchiral light at different frequencies, the team identified specific asymmetric signals that correlated with virus alignment on gold photonic substrates. This approach was then used to determine TYMV levels in human blood serum spiked with the virus.
Introduction
One of the markers of the transition from chemistry to biology is when individual molecular building blocks self-assemble into complex biological architectures. Optical spectroscopy provides a means of characterising the static and dynamic structural properties of individual molecules through probing of quantised states. However, optical spectroscopy cannot generally do the same for molecular assemblies 1,2. Thus, characterisation of the static and dynamic structural properties of biological assemblies is achieved through alternative techniques, such as diffraction and NMR, which lack the ease of use and rapidity of optical spectroscopy. In this work, we seek to span this length-scale gap in the spectroscopic toolbox. We show that electromagnetic (EM) near fields of subwavelength extent that have enhanced chiral asymmetry (superchirality) can probe the structure of biomolecular assemblies. To validate this hypothesis, we used chiral near fields to sense the chiral structure of a model biological assembly, a plant virus with an icosahedral capsid (turnip yellow mosaic virus (TYMV)), and thus detect its relative alignment on a surface.
Spectroscopy is sensitive to the structure of individual free-floating (i.e. orientationally averaged) molecules in solution because they have electronic and vibrational states that provide structurally sensitive spectroscopic fingerprints 3. In general, when molecules aggregate into larger assemblies, the electronic states of the monomer are not perturbed, and therefore the spectroscopic response reflects the monomer rather than the aggregate structure 4. There are exceptions to this, such as J-aggregates, which have a different spectroscopic response than the individual component monomer 4. However, this requires wavefunction mixing, which occurs for relatively simple aromatic molecules and creates new electronic states, correlated with the structure of the aggregate, that provide a spectroscopic fingerprint. This does not occur for structurally and compositionally more complex biological assemblies (e.g. virus capsids formed by the assembly of protein molecules). In the absence of electronic perturbations upon aggregation, spectroscopy can discriminate between molecular assemblies that are strongly anisotropic (e.g. rod-like structures with high aspect ratios) and other assemblies. This arises because anisotropic aggregates have molecular polarisabilities, with respect to the molecular reference frame, that are not equivalent. Consequently, polarisation-dependent spectroscopic techniques, such as oriented circular dichroism (CD) 5,6, linear dichroism (LD) 7-9 and polarised Raman 10, can be useful. However, in the general case of aggregates that are not strongly anisotropic, alignment does not provide a route to additional information. For example, spectroscopy is insensitive to the details of the icosahedral structures adopted by a vast array of viruses. This is because the size of the capsid is much smaller than the wavelength of light required to excite the chromophores of the coat proteins (IR/Vis/UV). Consequently, the electric field is uniform throughout the virus particle, and the spectroscopic response is insensitive to the spatial distribution of the coat proteins (CPs) in the capsid. Effectively, to an IR/Vis/UV photon, icosahedral capsids are spherical objects. Hence, spectroscopic characterisation of virus capsids is limited to fingerprinting the secondary structure and folds of the coat protein subunits with techniques such as UV/Vis CD 11, vibrational CD (VCD) 12 and Raman optical activity (ROA) 13-16.
Near fields are localised nonpropagating EM fields created by light scattering from nanostructures. They vary spatially on a length scale 1-2 orders of magnitude smaller than the wavelength of the light from which they are generated. Light scattering from chiral nanostructures creates near fields that, in local regions of space, possess chiral asymmetries greater than that of the incident light, a property sometimes referred to as superchirality 17-20. These chiral near fields display strong spatial variations of both the intensity and the chiral asymmetry on length scales comparable to or smaller than the size of the virus capsid. It is this combination of subwavelength localisation and enhanced chiral asymmetry to which the enhanced structural sensitivity is ascribed.
To validate the structural sensitivity of chiral near fields, we demonstrate that the interaction of a superchiral near field depends on the alignment of the TYMV particle. Any dependency on alignment provides definitive evidence that the superchiral near fields are sensitive to the structural details of the icosahedral virus capsid. Our results presage a spectroscopic approach for characterising the static and dynamic structural properties of viruses.
Plant viruses are ideal for this study because they are readily available in large quantities, are non-pathogenic to humans, have well-characterised structures, and can be immobilised onto a surface with relatively well-defined orientations using biochemical techniques. TYMV has an unenveloped icosahedral capsid with a quasi-symmetry of T = 3, assembled from 180 subunits of an identical-sequence coat protein (CP) with a molecular mass of 20,600 Da (~20 kDa); 60 subunits form 12 pentamers and 120 subunits form 20 hexamers, and the capsid has a diameter of 28 nm (Fig. 1). RNA is organised within the interior of the protein capsid, with little or no penetration into the coat protein 21, and exhibits icosahedral order 22. Virus particles display a chiral structure on two length scales: the secondary and tertiary structures of the protein subunits and the quaternary structure of the icosahedral capsid. The chirality of the icosahedral capsid assembly (point group I) is derived from the mirror-symmetry breaking of the protein subunits. Measurements were carried out using TYMV particles adsorbed from solution directly onto the substrate. It is assumed that TYMV will nonspecifically bind to the substrate, producing random orientations and hence creating an overall isotropic distribution. Varying levels of alignment of TYMV have been achieved using two surface immobilisation strategies (Fig. 2). The first approach involves functionalising lysines located on the pentamers and hexamers of the capsid surface (see supplementary information) with thiol groups that can bind the virus to a Au surface. This approach gives a higher level of alignment than nonspecific binding, with a mixture of capsids presenting either the pentamer or the hexamer next to the surface (in a 3:5 ratio). These thiolated particles will subsequently be referred to as TYMV-Thiol. The second approach utilises surface-immobilised fragment antibodies (Fab') to specifically orient the virus particles with respect to the surface. This method produces the greatest level of alignment. Gold metafilms formed on a nanostructured polycarbonate template were used in this study 23. They were ~100 nm thick and consisted of either left-handed (LH) or right-handed (RH) "shuriken"-shaped indentations (Fig. 1) that possessed six-fold rotational symmetry and were arranged in a square lattice. These substrates are referred to as "template plasmonic substrates" (TPSs) for brevity. The nanoscale indentations in the surface of the polycarbonate substrate have a depth of ~80 nm, are 500 nm in diameter from arm to arm, and have a pitch of 700 nm. A detailed discussion of the chiral and optical properties of these substrates can be found elsewhere 24.
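The subunit counting for the T = 3 capsid quoted earlier in this section can be checked directly against the Caspar-Klug quasi-equivalence relation (a standard structural result, included here only as a worked check):

$$N_{\rm subunits} = 60\,T = 60\times 3 = 180\,, \qquad \underbrace{12\times 5}_{\text{pentamers}} + \underbrace{20\times 6}_{\text{hexamers}} = 60 + 120 = 180\,.$$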
Theory
The chiral asymmetry of a near field of frequency ω can be parametrised by the optical chirality density parameter (C) 19,25:

$$C = \tfrac{1}{2}\left(\mathbf{B}\cdot\dot{\mathbf{D}} - \mathbf{D}\cdot\dot{\mathbf{B}}\right), \qquad (1)$$

where D is the displacement field, B is the magnetic induction, and $\dot{\mathbf{D}}$ and $\dot{\mathbf{B}}$ are their respective time derivatives. In free space, the optical chirality density is conserved; however, in localised regions of space, C can be higher than that of equivalent circularly polarised light (CPL), a property that has been referred to as superchirality 19,20. Numerical simulations have been performed to calculate the C of the near fields, as shown in Fig. 3a, which locally display superchirality (C > 1). Both the spatial extent and the chiral asymmetries vary on a length scale comparable to the size of TYMV (Fig. 3b). The interaction of EM fields with chiral dielectrics (such as biomaterials) can be understood through the following constitutive equations:

$$\mathbf{D} = \varepsilon_0\varepsilon_r\mathbf{E} + i\xi\mathbf{B}, \qquad (2)$$

$$\mathbf{H} = \mathbf{B}/(\mu_0\mu_r) + i\xi\mathbf{E}. \qquad (3)$$

Here, $\varepsilon_0$ ($\varepsilon_r$) is the permittivity of free space (relative permittivity), and $\mu_0$ ($\mu_r$) is the permeability of free space (relative permeability). E is the complex electric field, and H is the magnetic field. ξ(λ) is a wavelength-dependent second-rank complex tensor describing the chiral molecular properties; its sign depends on the handedness, and it is zero for achiral media. In the case that electric dipole-magnetic dipole interactions are the sole contributor to optical activity, ξ(λ) takes a diagonal form with elements $\xi_{aa}(\lambda)$. For the TYMV particle, the chiral response derives from a combination of the assembled coat proteins and the RNA:

$$\xi^{\rm eff}_{aa}(\lambda) = \xi_{aa}(\lambda)_{\rm Capsid} + \xi_{aa}(\lambda)_{\rm RNA}, \qquad \xi_{aa}(\lambda)_{\rm Capsid} = \sum_{i=1}^{n}\xi_{aa}(\lambda)_{{\rm Protein},i}\,,$$

where $\xi^{\rm eff}_{aa}(\lambda)$ (a = x, y, z) are the effective tensor elements of the virus particle, $\xi_{aa}(\lambda)_{\rm Capsid}$ and $\xi_{aa}(\lambda)_{\rm RNA}$ are the individual contributions of the capsid and RNA, and $\xi_{aa}(\lambda)_{\rm Protein}$ are the tensor elements for the individual protein subunits, with n = 180. In the case of TYMV, both $\xi_{aa}(\lambda)_{\rm Capsid}$ and $\xi_{aa}(\lambda)_{\rm RNA}$ have identical symmetry properties, both reflecting that of the T = 3 icosahedron.
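As a concrete illustration of Eq. (1), the minimal Python sketch below evaluates the time-averaged optical chirality of complex field amplitudes, $C = -(\omega/2)\,\mathrm{Im}(\mathbf{D}^*\cdot\mathbf{B})$, and normalises it against circularly polarised light of the same amplitude. The time-harmonic form, the `chirality` helper and the placeholder near-field values are assumptions for illustration, not outputs of the simulations reported here.

```python
# Sketch: time-averaged optical chirality density for time-harmonic fields,
# C = -(omega/2) * Im(D* . B), normalised to circularly polarised light.
import numpy as np

eps0 = 8.8541878128e-12   # vacuum permittivity [F/m]
c = 2.99792458e8          # speed of light [m/s]

def chirality(E, B, omega, eps_r=1.0):
    """Optical chirality density for complex field vectors E and B."""
    D = eps0 * eps_r * E
    return -(omega / 2.0) * np.imag(np.vdot(D, B))  # vdot conjugates D

wavelength = 700e-9
omega = 2 * np.pi * c / wavelength
k = omega / c

# Reference: left circularly polarised plane wave propagating along z.
E0 = 1.0
E_cpl = (E0 / np.sqrt(2)) * np.array([1.0, 1j, 0.0])
B_cpl = (k / omega) * np.array([-E_cpl[1], E_cpl[0], 0.0])  # B = (k x E)/omega

C_cpl = chirality(E_cpl, B_cpl, omega)

# A hypothetical near-field point from a simulation (placeholder values):
E_nf, B_nf = 3.0 * E_cpl, 2.0 * B_cpl
print(chirality(E_nf, B_nf, omega) / C_cpl)  # |ratio| > 1 => superchiral
```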
For the case of interaction with propagating light, $\xi_{xx}(\lambda) = \xi_{yy}(\lambda) = \xi_{zz}(\lambda)$ for the icosahedral TYMV capsid (7). However, for the case of near fields with subwavelength spatial extent, for capsids aligned on a surface, $\xi_{xx}(\lambda) = \xi_{yy}(\lambda) \neq \xi_{zz}(\lambda)$ (8). Previously, the effects of chiral dielectrics, such as biomolecular layers, on the optical properties of chiral plasmonic materials were modelled using numerical EM simulations 24,26-29. Constitutive Eqs. (2) and (3) were used in these simulations, and it was assumed that the chiral dielectric layers were continuous unstructured slabs. To account for anisotropic/isotropic material properties, the relationships in Eqs. (7) and (8) were used. These equations have been used to simulate anisotropic 27,30 and isotropic 24,26,28,30 layers on chiral structures. These previous studies demonstrated that anisotropic layers induce larger asymmetries in the optical properties of LH and RH plasmonic structures than isotropic layers do. Due to computational limitations, we cannot numerically simulate the interaction of chiral near fields with the nanoscale icosahedral virus capsids. However, the above theory provides a framework for understanding the presented experimental results in terms of the level of structural anisotropy within the virus layer. The concept of isotropic and anisotropic layers in the context of immobilised viruses is illustrated in Fig. 2. An isotropic layer arises when an ensemble of TYMVs adopts random orientations on the surface. An anisotropic layer is one in which TYMV has a well-defined alignment with respect to the surface.
In the current study, we have focussed on the asymmetry induced in the ORD spectra, an approach used in previous experimental 17,31 and modelling 28 studies, which has been parametrised using

$$\Delta\Delta\lambda = \Delta\lambda_{RH} - \Delta\lambda_{LH}\,,$$

where $\Delta\lambda_{LH/RH}$ are the shifts induced (compared to a reference) in the positions of the bisignate ORD peaks for the left-handed (LH) and right-handed (RH) structures by the introduction of a chiral dielectric (TYMV). If there is a nonchiral change in the dielectric environment of the near-field region, then ΔΔλ = 0. In this study, we have derived ΔΔλ from the two extremes of the bisignate line shape, referred to as peaks 1 and 2, which are labelled ΔΔλ1 and ΔΔλ2, respectively.
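A minimal helper corresponding to this parametrisation might look as follows; the peak positions are invented placeholders, and the sign convention $\Delta\Delta\lambda = \Delta\lambda_{RH} - \Delta\lambda_{LH}$ is the one reconstructed above. The average shift returned alongside it is the quantity used in the Results to gauge the amount of adsorbed virus.

```python
# Sketch: asymmetry parameters from ORD peak positions (all values in nm).
# The numerical peak positions below are illustrative placeholders.

def asymmetry(peak_lh_ref, peak_lh, peak_rh_ref, peak_rh):
    """Return (DDl, average shift) for one bisignate ORD peak,
    assuming the convention DDl = d_RH - d_LH."""
    d_lh = peak_lh - peak_lh_ref   # shift on the left-handed structure
    d_rh = peak_rh - peak_rh_ref   # shift on the right-handed structure
    return d_rh - d_lh, 0.5 * (d_lh + d_rh)

ddl, avg = asymmetry(705.0, 706.2, 704.8, 707.1)
print(f"DDl = {ddl:+.2f} nm, average shift = {avg:.2f} nm")
```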
Results
The optical properties of the TPS are sensitive to chiral materials, displaying equal and opposite asymmetries in optical properties when exposed to molecular enantiomers 24 . Figure 4 shows the optical rotatory dispersion (ORD) spectra collected from LH and RH TPSs immersed in PBS buffer. The ORD spectra display a bisignate line shape, which, as expected, switches sign between the LH and RH structures. ORD spectra for TYMV nonspecifically bound to unfunctionalised TPSs, TYMV-Thiol and TYMV specifically bound to the mixed-Fab' layer are shown in Figs. 4-6. The corresponding ΔΔλ 1,2 parameters derived from these data are displayed in Fig. 7. The asymmetry parameters for nonspecifically bound TYMV and TYMV-Thiol are calculated relative to the positions of the ORD resonances for unfunctionalised TPSs in buffer, while the specifically bound TYMV shifts are relative to the functionalised layer. Data were obtained for three types of virus layers deposited from solutions that contained 0.01, 0.10 and 1.00 mg/ml TYMV.
Turning first to the nonspecific case of TYMV binding directly to Au (Fig. 4), greater amounts of virus adsorb with increasing concentration, based on the average shift of peak 1, $\Delta\lambda_{AV} = (\Delta\lambda_{LH} + \Delta\lambda_{RH})/2$ (Supplementary Fig. 3).
However, the magnitudes of the asymmetries are small, being just greater than the standard error. The small asymmetries are consistent with the experimental and modelling results of previous studies on nonspecifically bound, structurally isotropic proteins on the same TPSs 27,30 .
The binding of TYMV-Thiol (Fig. 5) is similar to that of the unthiolated case, with similar amounts deposited at the three concentrations ( Supplementary Fig. 4). The level of asymmetry is higher than that for TYMV, with a consistent (negative) asymmetry being observed at the three concentrations studied. This is indicative of the chiral field detecting a degree of structural anisotropy in the TYMV-Thiol layer caused by a level of preferential alignment.
The mixed-Fab' layers in isolation produce asymmetries in the ORD spectra, which is indicative of the expected well-defined orientation of the immobilised Fab' (Supplementary Fig. 5). The level of asymmetry is comparable to that obtained in a previous study of proteins bound in specific orientations, albeit using a His-Tag immobilisation strategy 31.
Significantly greater asymmetries are observed for specifically bound TYMV than for both nonspecifically bound TYMV and TYMV-Thiol (Fig. 6). The amount of TYMV adsorbed onto the surface cannot be accurately gauged from the Δλ_AV values due to the large asymmetries produced. Indeed, the values of Δλ_RH and Δλ_LH for peaks 1 and 2 have similar magnitudes but opposite signs. Similar ΔΔλ1,2 values are observed for the three concentrations used, indicating that the amounts of specifically bound TYMV are similar for all three concentrations. This is to be expected, as the amount of immobilised TYMV is controlled by the strength of the specific Fab'-TYMV binding interaction. To illustrate a potential application of the enhanced structural sensitivity of superchiral near fields, we used them to detect TYMV spiked into blood serum. Serum is a complex biological fluid comprising all the components of blood apart from blood cells and clotting agents. This complex milieu contains >1000 components, spanning nine orders of magnitude in concentration. It includes many different types of chiral molecules, such as serum proteins, antibodies, antigens, hormones and sugars. When a TPS with mixed-Fab' layers is exposed to serum, some serum proteins will nonspecifically interact with the Fab' component 30. This arises because the specificity of Fab' is to some extent degraded by immobilisation on the Au surface of the TPS. Consequently, when immersed in serum, the functionalised TPS surface will be covered in a disordered, probably multicomponent, protein layer, often referred to as a protein corona. The thickness of this layer can be estimated to be equal to the molecular dimensions of a constituent protein (∼10 nm). When a functionalised TPS is immersed in serum spiked with TYMV, the virus particles will displace the nonspecifically bound layer. TPSs functionalised with the mixed-Fab' layers were exposed to serum spiked with TYMV (0.01, 0.10 and 1.00 mg/ml), plus a control of nonspiked serum, for 120 min, after which ORD spectra were collected. Subsequently, to remove any potentially nonspecifically bound TYMV, the TPSs were washed in copious amounts of nonspiked serum, and then ORD spectra were collected in the presence of nonspiked serum. The ORD spectra for these two experiments are shown in Figs. 8 and 9. The ΔΔλ1,2 parameters, shown in Fig. 10, were calculated relative to the mixed-Fab' layer in buffer. Nonspiked serum gave rise to a small asymmetry, which can be attributed to the structurally disordered (isotropic) blood protein-Fab' complexes formed through nonspecific interactions. For TYMV-spiked serum, significant asymmetries were observed, which are all within the experimental error of the values obtained for the virus immobilised in buffer. There was no significant difference between the data collected in the presence of spiked serum and after replacement with unspiked serum. These measurements indicate that structurally well-defined TYMV-Fab' complexes are formed even in a serum milieu.
Discussion
The primary and most significant result of this work is the sensitivity of chiral near fields to the higher order (quaternary) structure of the virus capsid, established by the correlation between the level of orientational order and the magnitude of the optical asymmetries. The inherent novelty of the current work lies in the ability of chiral near fields to detect the icosahedral shape of a virus, a complex self-assembled biological aggregate, which is invisible to conventional spectroscopic phenomena that utilise CPL. This goes beyond the previous examples of using chiral near fields to probe/detect the lower-order structure of simpler biomolecules. Clearly, the reported phenomenon does not provide the rich structural information of high-resolution crystallography. However, the icosahedral TYMV capsid is sensed by the chiral fields, rather than appearing as an isotropic spherical object as in optical spectroscopic techniques. In effect, the chiral near field can detect not only the presence of a virus particle but also its orientation. This unique capability can be used to enhance the effectiveness of immunoassays for pathogenic virus detection. A virus particle binds to an immobilised antibody element via an epitope on the capsid surface. Consequently, the relative orientation of the virus is determined by where the epitope is on the capsid surface. The novel structural sensitivity of chiral near fields provides an additional discrimination mechanism that could offer a means of differentiating between, for instance, two closely related virus pathogens that may either bind to a recognition element via different epitopes or produce specific and nonspecific binding. The combination of the novel structural sensitivity with the binary binding/nonbinding functionality of conventional immunoassays offers the potential to mitigate false positive results. If one wished, the reported experiments could be performed using a combination of expensive quartz substrates fabricated using electron beam lithography and commercially available spectrometers 17 . However, given the typical spectral acquisition times of ∼10 min, combining our low-cost disposable polycarbonate substrates with portable reflectance polarimetry 33 would create an effective field diagnostic technology.
In summary, using TYMV as a model, it has been demonstrated that superchiral EM fields with subwavelength spatial extent are, in contrast to normal light, sensitive to the higher-order icosahedral symmetry of the virus capsid. We believe that, in the context of the biophysical toolbox, the sensitivity of superchiral near fields is best exploited as a spectroscopic "triaging" tool, providing speedy low-resolution assessment of materials, which, if required, can subsequently be studied in more detail with costly low-throughput and high-resolution techniques. The ability of superchiral near fields to detect virus particles within serum provides a proof-of-concept of the potential applications, presaging a novel label-free simple spectroscopic measurement for immunoassays for detecting viruses. The enhanced structural incisiveness of superchiral fields provides an additional parameter for discriminating between target and off-target interactions with an immobilised recognition element.

Fig. 9: Spectra (solid) collected in serum for functionalised TPSs exposed to serum spiked with TYMV (0.10 and 1.00 mg/ml) and washed with copious serum. Spectra for functionalised TPSs in buffer (dashed) are shown for comparison.
Materials and methods
Optical rotatory dispersion (ORD) and reflectivity measurements

ORD spectra were collected using a custom-made polarimeter that measures the light reflected from our samples. The design is similar to that of a basic reflected-light microscope, with a tungsten-halogen light source (Thorlabs), Glan-Thompson polarisers (Thorlabs) and a ×10 objective (Olympus). A camera (Thorlabs) is used to position the sample, and spectra are collected using a compact spectrometer (Ocean Optics USB4000). ORD spectra are obtained using the Stokes method, with the intensity of light measured at four analyser angles (0°, ±45° and 90°). LH and RH pairs of ORD spectra are collected in 10 min.
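As a sketch of the Stokes analysis described above: the intensities at the four analyser angles give the linear Stokes parameters, from which the polarisation azimuth follows, and the optical rotation is the change in azimuth between sample and reference measurements. The function and intensity values below are illustrative assumptions, not the instrument's actual processing code.

```python
# Sketch: polarisation azimuth from intensities at four analyser angles
# (0, +45, -45, 90 degrees) via the linear Stokes parameters S1 and S2.
import numpy as np

def azimuth_deg(i0, i45, i_m45, i90):
    s1 = i0 - i90        # S1: horizontal vs vertical
    s2 = i45 - i_m45     # S2: +45 vs -45 degrees
    return 0.5 * np.degrees(np.arctan2(s2, s1))

# Placeholder intensities for a sample and a reference measurement.
psi_sample = azimuth_deg(1.00, 0.52, 0.48, 0.02)
psi_ref = azimuth_deg(1.00, 0.50, 0.50, 0.02)
print(f"optical rotation = {psi_sample - psi_ref:+.3f} deg")
```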
Simulations
Electromagnetic (EM) simulations were performed using a commercial finite-element package (COMSOL v4.4, Wave Optics module). Periodic boundary conditions were used to emulate the metafilm arrays. Perfectly matched layer conditions were used above and below the input and output ports. Linearly polarised EM waves were applied at normal incidence onto the films. COMSOL uses the finite-element method to solve Maxwell's equations for a specified geometry with the fields and optical chirality being measured at predefined surfaces above, within, and below the films.
TYMV purification
Brassica rapa var. pekinensis plants were grown under greenhouse conditions at 21°C. Turnip yellow mosaic virus (TYMV) in crude sap extract, mixed with abrasive celite, was rub-inoculated onto a true leaf of each Brassica rapa pekinensis plant grown to the first two-true-leaf stage of development. After 4 weeks, the plants had developed strong systemic symptoms and were harvested, and total TYMV was isolated according to the protocols established by Leberman 34, with the modifications suggested by Katouzian-Safadi and Berthet-Colominas 35, with high-speed centrifugations carried out in a Beckman SW 41Ti rotor at 28,000 rpm for 3 h. The pelleted virus was resuspended in 10 mM Tris-HCl, pH 7.5, with 0.1 M EDTA and was subjected to CsCl density-gradient centrifugation to obtain samples with densities of 1.26, 1.36 and 1.46 g/ml, which were loaded into a 13.2 ml Thinwall Ultra-Clear tube (Beckman Coulter) and centrifuged in a SW 41Ti rotor at 28,000 rpm for 3 h. The upper band, corresponding to the natural top component (empty virus), and the lower band, corresponding to the bottom component (virus containing genome), were visualised under white light and isolated using a needle attached to a syringe. The samples were diluted, pelleted by centrifugation in an SW 41Ti rotor as before, resuspended in an appropriate buffer, and then pelleted and resuspended once more to remove CsCl. The concentrations of the TYMV forms were estimated according to the protocols of Tamburro et al. 36.
TYMV-thiol production
The thiol groups were attached to the lysines at the virus surface by means of N-hydroxysuccinimide (NHS) ester chemistry, also called amine-reactive cross-linker chemistry. At physiological pH, the amines of a protein are positively charged and therefore sit at the protein surface, where they become available for conjugation reagents. A fluorometric thiol assay was used to confirm virus thiolation.
TYMV-specific F(ab') 2 production
TYMV-specific rabbit polyclonal IgG was purchased from DSMZ (AS-0125), and antibody specificity against the TYMV coat protein (CP) was confirmed by western blot analysis (see Supplementary Fig. 2). TYMV-specific F(ab')2 fragments were generated using the Pierce™ F(ab')2 preparation kit (Thermo Fisher Scientific: 44988) following the manufacturer's instructions. Briefly, a TYMV-specific IgG sample was loaded onto a prewashed Zeba Spin Column using digestion buffer (20 mM sodium acetate, pH 4.4; 0.05% sodium azide) and centrifuged at 5000 × g for 1 min (Eppendorf Centrifuge 5415D) to collect the desalted sample. The immobilised pepsin resin was prepared by adding 65 µL of the 50% slurry into a 0.8 mL spin column, followed by centrifugation at 5000 × g for 1 min. The buffer was discarded, and the resin was washed with 130 µL of digestion buffer, followed by centrifugation at 5000 × g for 1 min. The flow-through was discarded, and 125 µL of desalted TYMV-IgG was loaded onto the bottom-capped spin column containing the pepsin resin. The sample was briefly mixed using a vortex. The digestion reaction was incubated for 2 h using an end-over-end mixer in a static 37°C incubator. Next, the bottom cap was removed, and the spin column was placed into a microcentrifuge tube. The sample was centrifuged at 5000 × g for 1 min to separate the digest from the immobilised pepsin. The resin was washed twice with 130 µL of PBS (0.1 M sodium phosphate, 0.15 M sodium chloride; pH 7.2) by centrifugation at 5000 × g for 1 min. The flow-through containing F(ab')2 fragments was collected, and the immobilised pepsin was discarded. To remove any undigested IgG, the F(ab')2 fragments were further purified using a NAb Protein A Plus Spin Column. The equilibrated column was washed twice with 400 µL of PBS by centrifugation at 5000 × g for 1 min. The bottom of the column was capped, and the sample was added to the column. The sample and resin were resuspended by inversion, followed by a 10 min incubation at room temperature with an end-over-end mixer. Next, the column was placed in a new collection tube, and the flow-through containing F(ab')2 was collected by centrifugation at 5000 × g for 1 min. The column was washed twice with 200 µL of PBS by centrifugation at 5000 × g for 1 min, and both wash fractions were added to the sample. The protein concentration was determined using the Pierce™ BCA Protein Assay Kit (Thermo Fisher Scientific: 23225) following the manufacturer's instructions. To assess digestion and purification, the samples were analysed by SDS-PAGE using non-reducing loading dye, NuPAGE™ 4-12% Bis-Tris protein gels (Thermo Fisher Scientific: NP0336BOX) and SimplyBlue™ SafeStain (Invitrogen™).
Antibody fragments
Polyclonal antibodies are an ensemble of antibodies that can recognise multiple epitopes on an antigen. A monoclonal antibody, by contrast, displays greater specificity, binding uniquely to a single epitope of a macromolecular antigen such as a protein. In the current study, we immobilised onto the TPS a fragment derived from polyclonal rabbit IgG that had been produced against TYMV, referred to as poly anti-TYMV-IgG. Western blot analysis was used to confirm antibody specificity for the 20 kDa TYMV-CP (Supplementary Fig. 1). Surface-immobilised IgG has been used as a recognition element in previous studies of plasmonic-based sensor platforms, where immobilisation significantly degraded the performance of IgG. This is attributed to a combination of structural heterogeneity and the large size of IgG facilitating denaturation upon adsorption. Loss of functionality was minimised by immobilising functionally active fragments of IgG rather than the whole molecule 37,38. Immobilised Fab' fragments adopt more homogeneous adsorption structures and are less susceptible to denaturing than the whole IgG molecule 39,40. The IgG was treated with pepsin to produce smaller 88 kDa F(ab')2 fragments, with a small portion of the digest representing Fab' fragments, most likely due to over-digestion (Supplementary Fig. 2). In addition, we suggest that the majority of the F(ab')2 fragments will cleave into Fab' fragments after immobilisation onto the TPS. The Fab' fragments have free sulfhydryl moieties that facilitate attachment to the gold surfaces of the TPSs. Importantly, this produces a consistent attachment point for the poly anti-TYMV-Fab' fragments and should significantly limit effects due to the binding orientation. To minimise the potential denaturing of the Fab' fragment through interaction with the Au surface, it is coadsorbed with a thiol, triethylene glycol mono-11-mercaptoundecyl ether (EG-thiol) 41. EG-thiol is a neutral spacer molecule with biorepellent properties, so any interactions between Fab' molecules and the surface will be minimised 20,42. This layer will subsequently be referred to as a mixed-Fab' layer. When tested against buffered solutions of single proteins, similar mixed-Fab' layers were observed to not only retain specificity to the original target but also display nonspecific interactions with other types of protein molecules 30. The Fab' will bind to a limited range of epitopes (or a single epitope) of TYMV, thereby generating a narrow distribution of orientations relative to the nonspecific interactions of the unfunctionalised surface.
Virus adsorption from solution
The TPSs were enclosed in a 100 μL cell, with TYMV and TYMV-Thiol adsorbed from 10 mM PBS buffer, pH 7.4. Stock serum solutions with a total protein concentration of 60 mg ml−1 were produced by dissolving lyophilised human blood serum (ERM® certified reference material, Sigma-Aldrich) in distilled water. Serum solutions (both nonspiked and spiked) with a total protein concentration of 1 mg ml−1 were produced by diluting the stock serum solutions with 10 mM PBS buffer, pH 7.4. Spiked serum samples were produced by adding the relevant amount of TYMV to the 1 mg ml−1 serum solution.
"Physics"
] |
Gonadotropin Releasing Hormone (GnRH) Triggers Neurogenesis in the Hypothalamus of Adult Zebrafish
Recently, it has been shown in adult mammals that the hypothalamus can generate new cells in response to metabolic changes, and tanycytes, putative descendants of radial glia, can give rise to neurons. Previously we have shown in vitro that neurospheres generated from the hypothalamus of adult zebrafish show increased neurogenesis in response to exogenously applied hormones. To determine whether adult zebrafish have a hormone-responsive tanycyte-like population in the hypothalamus, we characterized proliferative domains within this region. Here we show that the parvocellular nucleus of the preoptic region (POA) labels with neurogenic/tanycyte markers vimentin, GFAP/Zrf1, and Sox2, but these cells are generally non-proliferative. In contrast, Sox2+ proliferative cells in the ventral POA did not express vimentin and GFAP/Zrf1. A subset of the Sox2+ cells co-localized with Fezf2:GFP, a transcription factor important for neuroendocrine cell specification. Exogenous treatments of GnRH and testosterone were assayed in vivo. While the testosterone-treated animals showed no significant changes in proliferation, the GnRH-treated animals showed significant increases in the number of BrdU-labeled cells and Sox2+ cells. Thus, cells in the proliferative domains of the zebrafish POA do not express radial glia (tanycyte) markers vimentin and GFAP/Zrf1, and yet, are responsive to exogenously applied GnRH treatment.
Introduction
Adult neurogenesis plays an important role in controlling brain functions allowing the generation and maturation of new neurons and their integration into neural circuits [1,2]. Neurogenic niches have been identified in specific regions of the adult brain in mammals and fish [3][4][5], including the hypothalamus [6,7]. Unlike mammals, fish produce new neurons along the entire rostrocaudal brain axis throughout life [8] with well-described neurogenic niches in the telencephalon, retina, midbrain, cerebellum, and spinal cord [9]. More recent studies have compared and contrasted the development of mammalian and teleost hypothalamus [10,11], yet few studies have characterized this region in the adult zebrafish.
In mammals, the hypothalamus is characterized by the presence of tanycytes, a specialized type of glial cell that line the wall of the third ventricle in the median eminence, where they contact the cerebrospinal fluid [12,13]. Many studies now suggest that tanycytes have neural stem cell properties and, due to their unique location, may respond to metabolic and/or reproductive cues by modulating hypothalamic neurogenesis [6,12]. Tanycytes can be classified into four groups (α1, α2, β1, and β2) according to their location, morphology, and gene expression [12,14], where only the α2 tanycytes have the neurogenic capacity [15][16][17]. Because the hypothalamus is involved in basic functions, such as controlling metabolism, feeding, body temperature, circadian rhythms, and reproduction [18], the possible role of neurogenesis in controlling these processes is of great interest, especially to develop stem-cell therapies for the treatment of neurological disorders [18,19].
Gonadotropin-releasing hormone (GnRH), secreted by neurons of the preoptic area of the hypothalamus, is essential for controlling puberty and reproduction [20,21]. A decrease in GnRH secretion causes a reproductive syndrome known as Hypogonadotropic Hypogonadism (HH). The clinical manifestation of HH is expressed mainly at puberty, with profound impacts on sexual development, generally resulting in infertility [22]. However, some patients with HH who undergo hormone therapy using testosterone, GnRH, or both show a reversion of HH accompanied by a restoration of pulsatile GnRH release that is maintained after removal of hormone treatment [23,24]. While the mechanism that restores the GnRH level is unknown, the hormone therapy results suggest that testosterone and GnRH could stimulate the differentiation of GnRH neurons in the hypothalamus of adult humans.
Effects of testosterone on neurogenesis have been reported in mammals, where androgens can regulate adult neurogenesis in the hippocampus [25] and the subventricular zone (SVZ) of the lateral ventricle [26]. The potential role of GnRH in hypothalamic neurogenesis is apparent in the aging process, where the loss of GnRH function is correlated with decreased neurogenesis [27,28]. Furthermore, our lab has demonstrated that hypothalamic-derived neurospheres of adult zebrafish respond to GnRH with increased neurogenesis, including increases in GnRH neurons in culture [29,30]. These results suggest that testosterone or GnRH stimulates the generation of GnRH neurons from hypothalamic neural progenitors in adult zebrafish. Nevertheless, the effect of GnRH and testosterone on the generation of new GnRH neurons has not been demonstrated in vivo.
Zebrafish are an excellent model system for investigating human diseases, given that approximately 70% of human genes have at least one zebrafish orthologue [31]. They are readily amenable to gene editing [32]; thus, targeted mutagenesis approaches have provided powerful tools to better understand the genetic and developmental basis of human diseases. Because of their rapid development and ability to generate many embryos, zebrafish present a unique opportunity for high-throughput drug screening to uncover new drugs for treating human diseases. This study characterizes neurogenesis in the preoptic area (POA) of the adult zebrafish, classifying different neurogenic domains within the region. To elucidate the potential role of testosterone/GnRH in hypothalamic neurogenesis, adult fish were treated with testosterone/GnRH, and effects on proliferation were quantified, revealing that only GnRH treatment significantly increased proliferation in the posterior parvocellular preoptic nucleus of the POA.
Hypothalamic Neural Progenitors Are Located in the POA
To characterize neural progenitor cells in the preoptic area (POA) of the adult zebrafish, we first describe the anatomy of the POA, which in fish is defined as the region between the medial region of the ventral telencephalon and the optic chiasm (Figure 1A, red box). Forty-six paraffin transverse sections of 5 µm were obtained from the POA of three fish, and representative sections of the parvocellular preoptic nucleus (PP: PPa0, PPa1, PPa2, PPa3, PPa4.1, PPa4.2, PPp1, and PPp2) were analyzed (Figure 1B-I). The anterior parvocellular preoptic nucleus (PPa) begins at the anterior commissure, which is divided into the pars dorsalis (Cantd) and the pars ventralis (Cantv) (Figure 1B,C). These nuclei are localized adjacent to the diencephalic ventricle (DiV) (Figure 1C-G). In the ventral region of the PPa4 sections, cells with fusiform nuclei were found along the wall of the DiV (Figure 1G, boxed, arrow). Cells with fusiform nuclei were no longer apparent in the posterior parvocellular preoptic nucleus (PPp)/suprachiasmatic (SC) sections (Figure 1H,I), in agreement with Wulliman [33]. Tanycytes, glial-like cells found lining the ventricle in the mammalian hypothalamus, are characterized by a variety of markers, including glial fibrillary acidic protein (GFAP) (a standard marker of glial cells), the intermediate filament marker vimentin [17], and Sox2 [34]. Thus, to characterize potential alpha-tanycyte-like progenitor cells in the POA, we selected five representative transverse sections and examined the expression of anti-vimentin (Vim), anti-Zrf1 (homologous to GFAP), and anti-Sox2. In PPa1-PPa2 sections sampled from four fish, Vim+/Zrf1+ cells with the morphology of radial glia were observed in the dorsal wall of the DiV (Figure 2A,B, boxed areas). These cells extend their processes into the PPa (Figure 2A',B', arrows). Additionally, we identified Vim+/Zrf1+ cells in the ventrolateral region of the POA with long processes that extend towards the DiV (Figure 2A,B, asterisks). In PPa3 sections, there were fewer Vim+/Zrf1+ cells lining the wall of the DiV (Figure 2C, boxed area; C', arrowhead), with Vim+/Zrf1+ cells in the ventrolateral region of the POA (Figure 2C, asterisk). Moreover, a group of Vim+/Zrf1− cells were observed ventrally in the POA, contacting the floor of the DiV with a short process (Figure 2C, green, arrowhead). In posterior sections of the POA (PPa4), few Vim+/Zrf1+ cells were observed (Figure 2D), and cells with fusiform nuclei located in the ventral region were observed (Figure 2D, boxed area; D', asterisk). In PPp1, Vim+/Zrf1+ cells lining the DiV were observed dorsally in the region of the magnocellular nucleus (PM) and PPp (Figure 2E, boxed area; E', arrowhead), but not in the region of the suprachiasmatic nucleus (Figure 2E, SC). Although the Vim+/Zrf1+ cells in PPp1 (Figure 2E', arrow) were similar to those observed in PPa1 and PPa2 sections, these cells extended their processes toward the lateral side of the POA.
In mammals, subsets of tanycytes also express Sox2, a transcription factor essential for neural stem cells, consistent with the observations that tanycytes can also act as neural stem/progenitor cells. We observed cells immunopositive for Sox2 throughout sections of the PPa/PPp. This expression had a homogeneous distribution along the wall of the DiV (Figure 2F-J; n = three fish). In the wall of the DiV, Vim+ cells were also Sox2+ (Figure 2F'-H' and J, arrow). However, the Vim+ cells located in the ventrolateral region were Sox2− (Figure 2F-H, green, arrowheads). In PPa4, the cells with fusiform nuclei were Sox2+ (Figure 2I, boxed area; I', arrow). Thus, the POA contains Vim+/Zrf1+/Sox2+ cells with the morphology of radial glia and ventrally located Sox2+ cells with fusiform nuclei.
Cytoplasmic Sox2 Cells Express Fezf2:GFP in Ventral Region of the POA
The transcription factor Sox2 is expressed in stem cells and progenitors, as well as in differentiated neurons and glia [35]. Previously, we described a group of HuC-positive neurons with cytoplasmic anti-Sox2 labeling in the parvocellular nuclei [29]. Here we identified Sox2+ cells adjacent to the DiV, located in PPp1 sections, that had large nuclei (diameter: 6.41 ± 1.2 µm) and labeling in the cytoplasm (Figure 3A; six fish). Because PPp1 corresponds to the neurosecretory preoptic area (NPO), these cells with cytoplasmic Sox2+ labeling may have a neuroendocrine function. We next determined whether the cytoplasmic Sox2+ cells co-localized with markers for neuroendocrine cell specification. In zebrafish, the gene forebrain embryonic zinc finger (fezf2) regulates Orthopedia (Otp) [36], a transcription factor essential for neuroendocrine cell specification [37]. To determine whether cytoplasmic Sox2+ cells co-localized with Fezf2:GFP, anti-Sox2 antibody labeling was done in Tg(fezf2:gfp) adults [38] (n = three fish). In the PPp1 sections, two types of cells were GFP+: large cells with low GFP expression (Figure 3B,C,E,F, arrows) and small cells with high GFP expression (Figure 3B,C,E,F, arrowheads). The large cells positive for Fezf2:GFP were also Sox2+ (Figure 3C,F, arrowhead). In contrast, the small cells with high expression of Fezf2:GFP were negative for cytoplasmic Sox2 (Figure 3B,C,E,F, arrow). These results suggest that cytoplasmic Sox2 is found in a group of endocrine cells located in the NPO.
Proliferative Cells in the POA
It is well known that fish generate neurons throughout life [8], yet the neurogenic potential of the hypothalamus is not well described in adult zebrafish. To identify potential Vim+ tanycyte-like precursor cells in the POA, sectioned tissue was double-labeled for anti-Vim and anti-proliferating cell nuclear antigen (PCNA; Figure 4A-E,D',D''; three fish). We observed PCNA+ cells lining the DiV in almost all sections (Figure 4A-E, arrow); however, these cells were Vim−. The majority of PCNA+ cells (62%) were located in PPa4 sections (Figure 4D, rectangle; Figure S1), including cells with fusiform nuclei (Figure 4D'-D'''). To further characterize potential proliferative capacity, sections were double-labeled for anti-Sox2 and anti-PCNA. In all sections, we observed PCNA+/Sox2+ cells (Figure 4F-J, arrow). Similar to what was observed previously (see Figure 4D,D'',D''', arrow), the greatest number of PCNA+ cells were observed in the ventral region (Figure 4F-J, arrow), particularly in the PPa4 (Figure 4I, boxed region; I'-I''', arrowhead), where all PCNA+ cells were also Sox2+. Therefore, these results indicate that the Sox2+ proliferative cells are located mainly in the PPa4 region. The principal sources of neurons in the forebrain of adult mammals are GFAP-expressing progenitors [39]. To determine whether the Zrf1+ cells in the POA were proliferative, we first assayed markers of cell proliferation. Because the anti-PCNA antibody can detect distinct populations of cells, S-phase positive and S-phase non-positive [40], we compared PCNA labeling and anti-BrdU labeling in fish that were treated with BrdU (group 1 = one pulse on day 1; group 2 = two pulses of BrdU on days 1 and 7; three fish analyzed per group; Figure 5). In fish treated with one pulse, 1% of cells were labeled with BrdU (Figure 5A; Figure 5L,N). Thus, the labeling pattern does differ between anti-PCNA and anti-BrdU, and these data confirm that the POA generates new cells during this 7-day interval. To determine whether the radial glia-like Zrf1+ cells were proliferative (BrdU+), we double-labeled for these markers and quantified the number of BrdU+/Zrf1+ cells. The patterns of proliferation in the POA region of the adult fish were quantified by two-pulse BrdU tracking (days 1 and 7; three fish per condition). About 74% of the total labeled cells were BrdU+/Zrf1− (Figure 6A-G, green), 23% were BrdU−/Zrf1+ (Figure 6A-G, red), and only 3% were BrdU+/Zrf1+ (Figure 6B, magnified image of boxed area, arrows; G, yellow). In analyzing the distribution of the labeled cells (Figure 6H), the BrdU+/Zrf1+ cells were located primarily in the PPa2 (Figure 6H, yellow). These results suggest that most proliferative cells are not Zrf1+ (GFAP+), a finding different from what is observed in mammals.
GnRH and Testosterone Treatments Cause Neurogenesis in the POA
Previously, we have shown that GnRH or testosterone, when added to neurospheres cultured from the adult hypothalamus, can trigger differentiation of neurons in vitro [29]. To test the effects of GnRH and testosterone on neurogenesis in the POA in vivo, adult fish were injected intraperitoneally with GnRH or testosterone, and changes in the number of BrdU+ and cytoplasmic Sox2+ cells were scored. A dose-response curve was generated to determine optimal concentrations: 15 µL per gram of 1 µM GnRH and 1 mg/mL testosterone (Figure S2). In animals treated with testosterone, we observed a slight increase in the number of BrdU+ cells (Figure 7A'-E', red) relative to controls (Figure 7A-E, red). In controls, the BrdU+ cells were distributed mainly along the lining of the DiV in PPa4 (Figure 7D), a pattern that was conserved in testosterone-treated animals (Figure 7D'). Analysis of the overall number of BrdU+ cells in the testosterone-treated animals showed a slight, but insignificant, increase (Figure 7F,H; testosterone-treated: 349 ± 8 cells, eight fish; control: 232.7 ± 58 cells, 10 fish). Because the cytoplasmic Sox2+ cells represent potential neuroendocrine cells, we also quantified their response to testosterone. No significant differences were found when comparing testosterone-treated with control animals (Figure 7G,I; testosterone-treated: 76.3 ± 16.2 cells, seven fish; control: 67.2 ± 11.9 cells, six fish). In animals treated with GnRH (Figure 8), we observed a more than two-fold increase in the number of BrdU+ cells (Figure 8A'-E', red) relative to controls (Figure 8A-E, red). As in the testosterone experiment, BrdU+ cells in controls were distributed mainly along the lining of the DiV in PPa4 (Figure 8D). In contrast to testosterone treatment, in fish treated with GnRH, BrdU+ cells were observed in nuclei adjacent to the DiV (Figure 8D'). Changes in the number of cytoplasmic Sox2+ cells were observed in the GnRH-treated animals (Figure 8F,F'), with increased numbers of cytoplasmic Sox2+ cells in the PPp1 region of the POA (Figure 8F', arrowheads, J). Quantification showed a significant increase in BrdU+ cells in GnRH-treated animals (Figure 8G; 409.7 ± 60.6 cells, nine fish; control: 173.8 ± 38.1 cells, nine fish). Significant increases in BrdU+ cells were observed in sections PPa1, PPa3, and PPa4 (Figure 8I). Again, in contrast to treatment with testosterone, the number of cytoplasmic Sox2+ cells increased significantly in GnRH-treated animals (Figure 8H; 176.6 ± 26.7 cells, eight fish; control: 81.7 ± 16.1 cells, nine fish). Increases in cytoplasmic Sox2+ cells were observed in PPa4 and PPp1, with only the changes in PPp1 reaching significance (Figure 8J). Thus, GnRH treatment promoted the proliferation of precursors (Figure 8E) and presumptive neuroendocrine cells, primarily in the PPp1 of the POA.
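For readers wishing to check the reported comparisons, a minimal sketch of a significance test on the quoted summary statistics is shown below. It assumes that the ± values are standard errors of the mean and that Welch's t-test is an appropriate choice; neither assumption is stated explicitly in the text, so this is an illustration rather than the authors' actual analysis.

```python
# Sketch: Welch's t-test from reported summary statistics, assuming the
# quoted +/- values are standard errors of the mean (SEM).
import numpy as np
from scipy.stats import ttest_ind_from_stats

def welch_from_sem(mean1, sem1, n1, mean2, sem2, n2):
    sd1, sd2 = sem1 * np.sqrt(n1), sem2 * np.sqrt(n2)  # SEM -> SD
    return ttest_ind_from_stats(mean1, sd1, n1, mean2, sd2, n2,
                                equal_var=False)

# GnRH-treated vs control BrdU+ counts quoted in the text (9 fish each).
t, p = welch_from_sem(409.7, 60.6, 9, 173.8, 38.1, 9)
print(f"t = {t:.2f}, p = {p:.4f}")
```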
Discussion
Here we characterized progenitors located in the anterior/posterior parvocellular nucleus (PPa/PPp) of the POA (Figure 9). Within this highly proliferative area, the colocalization of cytoplasmic Sox2+ with Fezf2:GFP suggests that a subpopulation of the progenitors may be committed to a neuroendocrine fate. Furthermore, this region is responsive to hormone treatment: in vivo GnRH treatment resulted in increased cell division and an increase in cytoplasmic Sox2+ cells. In contrast, testosterone treatment did not significantly affect the proliferative activity within this region of the hypothalamus. These results reveal for the first time a neurogenic effect of GnRH treatment in the hypothalamus of adult zebrafish in vivo.
Tanycytes as Adult Neural Progenitors
In mammals, hypothalamic tanycytes are characterized by the expression of progenitor cell markers, including vimentin, nestin, and Sox2 (Figure 9A, green, red; [17,34,41]), with GFAP found in neural progenitors of the mammalian forebrain [39] and in α2-type tanycyte neural progenitors in the hypothalamus (Figure 9A, yellow; [17]). We identified the expression of vimentin and GFAP in the dorsal regions lining the DiV, and these cells had radial glia-like morphology (Figure 9B, yellow). However, the Zrf1+ cells we observed had limited proliferative capacity.

Figure 9: (A) In mammals, α2 tanycytes with proliferative capacity express Vim, nestin [34], Sox2, and GFAP [17], while the non-proliferative tanycytes do not express GFAP. Ependymal cells are distributed in the dorsal region. (B) In zebrafish, we describe a low-proliferative Vim+/Sox2+/Zrf1+ cell population and highly proliferative Sox2+ cells, previously reported to express nestin [42]. Cytoplasmic Sox2/Fezf2:GFP cells were observed in the region previously shown to express cytoplasmic Sox2 [29]. (C) Cytokeratin-positive ependymal-like cells, similar to those seen in mammals, are found in the ventral region of the POA [43].
In zebrafish, radial glia-like (GFAP-positive) progenitor cells are thought to be the predominant neurogenic cell type in the adult brain [8,9,44], although GFAP-negative progenitor populations have been identified in the ventral telencephalon [45]. In rats [46] and zebrafish [7], neural progenitor proliferation has been confirmed in the POA through BrdU labeling in adult animals. Here we identified Sox2+ progenitors distributed along the DiV, as observed in the TelV of adult zebrafish [47] and in the DiV of mammals [48]. With BrdU labeling, we identified a group of proliferative cells with fusiform nuclei lining the ventral DiV (Figure 9B, grey) that are similar to the neuroepithelial-like progenitors previously described by electron microscopy in the ventral region of the PPa [49] and to the Nestin:GFP-expressing (but vimentin- and GFAP-negative) progenitors in the ventral telencephalon [45]. Neuroepithelial (NE) progenitor cells express the progenitor markers Sox2 and nestin [9] and play a role in neural regeneration in the cerebellum of adult zebrafish [50]. Our results agree with the observation that radial glia-like progenitor cells have low proliferation (Figure 9B, yellow), and that the neuroepithelial cells lining the ventral DiV are the principal proliferative population in the PP of the POA.
Cytoplasmic Sox2 Cells
The cell-fate-determining transcription factor SOX2 plays an important role in development, stem cell biology, and cancer [51] and thus undergoes situation-specific protein modifications that can affect the nuclear-cytoplasmic localization of the protein [52]. We found cytoplasmic Sox2+ cells in the neurosecretory preoptic area (NPO), a region with peptidergic neurons, such as those containing vasotocin and isotocin [53,54]. These cytoplasmic Sox2+ cells were Fezf2:GFP+; Fezf2 is a transcription factor required for the development of neuroendocrine neurons in the POA [38,55,56]. We therefore propose that the cytoplasmic Sox2+ cells are neurosecretory neurons.
Interestingly, our results showed that GnRH triggered a significant increase in cytoplasmic Sox2+ cells in the PPp1, yet the significant increases in BrdU+ cells were in the adjoining PPa4 region. Both migration and transdifferentiation could explain this difference. For example, in zebrafish, the dopaminergic TH+ cells in the PPp migrate from their site of origin in the ependymo-radial glia of the anterior ventricle; thus, the increase in Sox2+ cells could be the result of migration [57]. Alternatively, the transdifferentiation of support cells into hair cells in the zebrafish ear is Sox2-dependent and occurs in the absence of mitotic division [58]. Further experiments are needed to better understand the mechanisms of GnRH-induced proliferation in the POA of the adult brain.
Hormone Treatment and Neurogenesis
Here we have shown that GnRH, and not testosterone, significantly increased cell proliferation in the POA of the adult zebrafish. These results contrast with studies in other teleosts (tilapia), where testosterone treatment increased proliferation in the periventricular regions of the brain; however, as in the findings presented here, the BrdU labeling was not found in radial glia cells [59]. Although we found a slight increase in proliferation with testosterone treatment, consistent with our previous in vitro study of hypothalamic neural progenitor cells, the effect was minimal, with only a small, but significant, increase in neurons in cultured hypothalamic progenitor cells [59]. Similarly, administration of testosterone and its metabolite 5α-dihydrotestosterone (DHT) has no effect on neurogenesis in the hypothalamus (VMH) of meadow voles [60] or hamsters [61]. Thus, consistent with other studies to date, there is a limited neurogenic effect of testosterone on the hypothalamus of zebrafish.
GnRH, Neurogenesis, and Longevity
Consistent with our previous studies, where GnRH had a significant effect on the differentiation of cultured hypothalamic progenitor cells from adult zebrafish [29], the results presented here showed that GnRH increased proliferation in general, and in neuroendocrine cell types in particular. GnRH ligands and receptors are found throughout the brain in all vertebrates, and they are essential in both central and peripheral reproductive regulation, as well as in higher functions such as learning and memory, feeding behavior [62], and sleep/circadian rhythms [18]. In mice, aging is correlated with a decline in hypothalamic GnRH expression, where activated IKK-β and NF-κB inflammatory crosstalk between microglia and neurons significantly down-regulates GnRH transcription [27,28]. Additionally, machine learning for predicting lifespan-extending compounds has identified GnRH therapies as one of the principal pathways for postponing the onset of many age-related diseases [63]. Hypothalamic neurogenesis, as well as other age-related phenotypes, can be restored by injection of GnRH [28,64]. Our findings that GnRH can increase the proliferation of cells in the POA of adult animals, coupled with the development of zebrafish as a model for neurodegeneration in the context of aging [65] and for aging-related alterations in sleep and rhythmicity patterns [66], open the door for investigations into how inhibition of inflammation and/or GnRH therapy could revert symptoms of aging-related diseases.
Animals
Wild-type (WT) fish of the Cornell strain (derived from Oregon AB) were raised and maintained in the Whitlock laboratory in a re-circulating system (Aquatic Habitats Inc., Apopka, FL, USA) at 28 °C under a 14 h light/10 h dark cycle. All protocols and procedures employed were reviewed and approved by the Institutional Committee of Bioethics for Research with Experimental Animals, University of Valparaiso (#BA084-2016). The Tg(fezf2:gfp) line was kindly provided by Su Guo [38].
Trichromic Stain in Paraffin Sections
Male zebrafish, 1-2 years old, were sacrificed, heads collected, and fixed in Bouin's solution for 24 h at 4 °C, then decalcified in 0.2 M EDTA solution, pH 7.6, for 7 days at 4 °C. Heads were rinsed in 70% ethanol, dehydrated in an increasing ethanol series to 95% ethanol, cleared in butanol, and embedded in Paraplast Plus (Sigma Chemical Co., St. Louis, MO, USA). Serial sections (5 µm) of the POA were obtained with a Leica RM 2155 microtome, mounted on slides, de-paraffinized, and rehydrated. Sections processed for histology were stained with a trichromic stain (Hematoxylin/Erythrosine B-Orange G/Methyl blue) (Sigma Chemical Co., USA). The sections were then dehydrated and mounted with Entellan (107961-Merck Millipore, Burlington, MA, USA).
Immunocytochemistry in Paraffin Sections
Male zebrafish, 1-2 years old, were processed as above (trichromic stain). Serial sections (5 or 10 µm) of the POA were mounted on poly-L-lysine (Sigma)-coated slides. The sections were de-paraffinized, rehydrated, and incubated in citric acid, pH 6, for 30 min at 90 °C for antigen retrieval. To visualize cytoplasmic Sox2, antigen retrieval was not performed.
Immunocytochemistry in Cryosections
Brains processed for cryostat sectioning were collected at 10 a.m. and fixed in 4% paraformaldehyde (PFA) for 24 h at 4 °C. The heads were decalcified in 0.2 M EDTA solution, pH 7.6, for 48 h at 4 °C, embedded in 1.5% agarose/5% sucrose blocks, and submerged in 30% sucrose overnight at 4 °C. Blocks were frozen at −20 °C with O.C.T. Compound (Tissue-Tek®, Sakura Finetek, Torrance, CA, USA). Sections (20 µm) were then cut using a cryostat.
Antibodies
Information on primary antibodies is summarized in Table 1. Sections were incubated in primary antibody overnight at 4 °C and visualized using Alexa-labeled secondary antibodies (1:500; Invitrogen, Carlsbad, CA, USA). Nuclei were stained with DAPI (1:1000; Invitrogen). Sections stained for BrdU were pretreated with 2 M HCl for 15 min at 37 °C, washed with 1 M phosphate (PO4) buffer, and incubated in anti-BrdU antibody (see Table 1). The labeling was visualized using Alexa-labeled secondary antibodies (1:250, Invitrogen).
Hormone Injection and BrdU Incubation
The intraperitoneal injection procedure used was modified from [72]. The day before the experiment (day 0), male adult zebrafish (1-2 years old) were separated from females. On days 1, 3, and 7, they were anesthetized with tricaine (0.168 mg/mL) (A5040, Sigma-Aldrich, St. Louis, MO, USA), immobilized using a sponge, and injected intraperitoneally, using a #70 Hamilton syringe, with 15 µL per gram of body weight of either 1 µM GnRH (Sigma-Aldrich; L4897) diluted in saline solution or 1 mg/mL testosterone (Sigma-Aldrich; T1500) diluted in 30% methanol/70% saline. Controls were injected with the solution used to dilute each hormone. On days 3 and 7, fish were placed in water containing 10 mM BrdU (B5002, Sigma-Aldrich) for 15 h at 28 °C, as previously described [73]. On day 8, the fish were sacrificed for immunocytochemistry.
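As a quick sanity check of the dosing, the sketch below works out the molar dose implied by this protocol; the 0.5 g body mass is an assumed typical value for an adult zebrafish, not a number stated above.

```python
# Hypothetical worked example of the injected GnRH dose.
injection_volume_per_gram = 15e-6  # L of solution per gram of body weight (15 uL/g)
gnrh_concentration = 1e-6          # mol/L (1 uM)
body_mass_g = 0.5                  # grams; assumed typical adult zebrafish mass

moles_per_gram = injection_volume_per_gram * gnrh_concentration  # mol per gram
total_dose_mol = moles_per_gram * body_mass_g

print(f"GnRH dose: {moles_per_gram * 1e12:.1f} pmol per gram of body weight")
print(f"Total dose for a {body_mass_g} g fish: {total_dose_mol * 1e12:.1f} pmol")
```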
Microscopy
Light field photomicrographs were taken using a Leitz-Leica DMRBE microscope (Wetzlar, Germany) equipped with a DFC290 digital camera. Fluorescent images were obtained using an Olympus BX-DSU Spinning Disc microscope (Olympus Corporation, Shinjuku-ku, Tokyo, Japan) equipped with an ORCA IR2 Hamamatsu camera (Hamamatsu Photonics, Higashi-ku, Hamamatsu City, Japan) and Olympus Cell-R software (Olympus Soft Imaging Solutions, Munich, Germany). Z-stacks with 0.5 µm steps were collected. The images were processed using the deconvolution software AutoQuantX 2.2.2 (Media Cybernetics, Bethesda, MD, USA) and ImageJ® software (National Institutes of Health, Bethesda, MD, USA). Images shown in Figure 4D-F were acquired using a Nikon Eclipse 80i confocal microscope and analyzed with the EZ-C1 program, version 3.90 (Nikon).
Statistical Analyses
Quantification of BrdU+ and cytoplasmic Sox2+ cells was performed for every condition, and the normal distribution of the data was checked under each condition using the Shapiro-Wilk test. Statistical significance was evaluated by paired t test. Statistical analyses and graphs were done using GraphPad Prism version 4.0 (GraphPad Software, San Diego, CA, USA). All data were graphed with the standard error of the mean (SEM).
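A minimal sketch of this pipeline in Python/SciPy is given below; the cell counts are made-up stand-ins for the per-fish data, and the pairing of samples is assumed to follow the matched design implied by the paired t test.

```python
# Sketch of the statistical workflow described above, with hypothetical counts.
from scipy import stats

control = [150, 210, 175, 190, 160, 205]  # assumed per-fish BrdU+ counts
treated = [380, 420, 455, 390, 410, 405]  # assumed per-fish BrdU+ counts

# Check normality of each condition with the Shapiro-Wilk test.
for name, data in [("control", control), ("treated", treated)]:
    w, p = stats.shapiro(data)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Evaluate significance with a paired t test, as described in the text.
t, p = stats.ttest_rel(treated, control)
print(f"paired t test: t = {t:.2f}, p = {p:.4f}")

# Report mean +/- SEM, as used for the graphs.
print(f"treated: {stats.tmean(treated):.1f} +/- {stats.sem(treated):.1f} (SEM)")
```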
Data Availability Statement:
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Constraining the formation of the Milky Way: Ages
We present a new approach for studying the chemodynamical evolution of the Milky Way, which combines a thin disk chemical evolution model with the dynamics from an N-body simulation of a galaxy with properties similar to those of our Galaxy. A cosmological re-simulation is used as a surrogate in order to extract ∼11 Gyr of self-consistent dynamical evolution. We are then in a position to quantify the impact of radial migration at the Solar Vicinity. We find that the distribution of birth radii, r0, of stars ending up in a solar neighborhood-like location after ∼11 Gyr of evolution peaks around r0 = 6 kpc due to radial migration. A wide range of birth radii is seen for different age groups. The strongest effect from radial migration is found for the oldest stars, and it is connected to an early merger phase typical of cosmological simulations. We find that while the low end of our simulated solar vicinity metallicity distribution is composed of stars with a wide range of birth radii, the tail at larger metallicities (0.25 < [Fe/H] < 0.6) results almost exclusively from stars with 3 < r0 < 5 kpc. This is the region just inside the bar's corotation (CR), which is where the strongest outward radial migration occurs. The fraction of stars in this tail can, therefore, be related to the bar's dynamical properties, such as its strength, pattern speed, and time evolution/formation. We show that one of the main observational constraints on this kind of model is the time variation of the abundance gradients in the disk. The most important outcome of our chemodynamical model is that, although we used only a thin-disc chemical evolution model, the oldest stars that are now in the solar vicinity show several of the properties usually attributed to the Galactic thick disc. In other words, in our model the MW "thick disc" emerges naturally from stars migrating from the inner disc very early on due to strong merger activity in the first couple of Gyr of disc formation, followed by further radial migration driven by the bar and spirals at later times. These results will be extended to other radius bins and more chemical elements in order to provide testable predictions once more precise information on ages and distances becomes available (with Gaia, asteroseismology, and future surveys such as 4MOST).
INTRODUCTION
The power of Galactic Archaeology has been threatened both by observational and theoretical results, showing that stars most probably move away from their birthplaces, i.e., migrate radially. Observational signatures of this radial migration (or mixing) have been reported in the literature since the 1970's, with the pioneering works by [11] and [12]. Grenon identified an old population of super-metal-rich stars (hereafter SMR), presently at the Solar vicinity, but with kinematics and abundance properties indicative of an origin in the inner Galactic disc (see also [30]). These results were extended by [13], who showed, by re-analyzing the Geneva-Copenhagen Survey data, that the low- and high-metallicity tails of the thin disc are populated by objects whose orbital properties suggest an origin in the outer and inner Galactic disc, respectively. In particular, the so-called SMR stars show metallicities which exceed
those of the present-day ISM and of young stars at the solar vicinity. As discussed by [8] (see also Table 5 of [1]), the metallicity at the solar vicinity is not expected to increase much since the Sun's formation, i.e., in the last ∼4 Gyr, due to the rather inefficient star formation rate at the solar radius during this period, combined with continuous gas infall into the disc. Hence, as summarized in [9], pure chemical evolution models for the MW thin disc cannot explain stars more metal-rich than ∼0.2 dex with respect to the Sun, and radial migration has to be invoked. N-body simulations have also long shown that radial migration is unavoidable (e.g. [24], [28], [25], [23], [21]), although its main driver is still a hotly debated topic in the literature (see [5], [20], and references therein).
It thus seems that the only way to advance in this field is by developing chemodynamical models tailored to the Milky Way (MW) in the cosmological framework. Only then can a meaningful comparison with the large amounts of current and forthcoming observational data (RAVE, SEGUE, APOGEE, Gaia-ESO, and future planned surveys such as 4MOST) be carried out. This was the main goal of our work, Minchev, Chiappini and Martig [19] (where more details can be found): to develop a chemodynamical model for the MW able to quantify the importance of radial mixing throughout the evolution of our Galaxy.
A NEW APPROACH FOR BUILDING A CHEMO-DYNAMICAL MODEL
Despite the recent advances in the general field of galaxy formation and evolution, there are currently no self-consistent simulations that have the level of chemical implementation required for making detailed chemical predictions. This situation has led us to look for a novel way to approach this complex problem. We will show that our new approach works encouragingly well, explaining not only current observations, but also leading to a clearer picture regarding the nature of the MW thick disc.
The novelty of our approach is: a) we assume that each particle in our N-body simulation represents one star. Dynamically, this is a good assumption, since the stellar dynamics is collisionless; and b) we implement the exact star formation history and chemical enrichment from a thin-disk chemical model into our simulated galactic disc. This is done by, at each time output, randomly selecting newly born stars so as to match the star formation history corresponding to our chemical evolution model in each radial bin, as sketched below. In this way we are able to insert the dynamics of our simulation into the chemical model.
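A schematic sketch of this fusion step is given below; the array names, bin choices, and the chemical-model interface are our own illustrative assumptions, not the actual implementation of [19].

```python
# At each simulation output, subsample newly born N-body particles per radial
# bin so their counts match the chemical model's star formation history.
import numpy as np

rng = np.random.default_rng(0)

def resample_newborn(r_birth, n_target_per_bin, bin_edges):
    """r_birth: birth radii (kpc) of particles born at this output;
    n_target_per_bin: target number of new stars per radial bin from the
    chemical evolution model; returns indices of the selected particles."""
    selected = []
    for i, (lo, hi) in enumerate(zip(bin_edges[:-1], bin_edges[1:])):
        in_bin = np.where((r_birth >= lo) & (r_birth < hi))[0]
        n = min(n_target_per_bin[i], in_bin.size)
        selected.append(rng.choice(in_bin, size=n, replace=False))
    return np.concatenate(selected)

# Toy usage: 10,000 newborn particles, 2 kpc wide bins out to 16 kpc.
r_birth = rng.uniform(0.0, 16.0, 10_000)
bin_edges = np.arange(0.0, 18.0, 2.0)
n_target = np.array([900, 1200, 1300, 1100, 800, 500, 300, 150])
idx = resample_newborn(r_birth, n_target, bin_edges)
print(idx.size, "particles tagged with the chemical model's abundances")
```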
For the chemical evolution model we assume a thin-disk-only model. For a detailed description see [19]. Here we just recall a few important points, namely: a) our code follows in detail a large number of chemical elements, properly taking into account the stellar lifetimes and the Type Ia supernova rate; b) we assume a gas accretion rate that decreases exponentially in time, which, combined with a star formation law dependent on the gas density, produces a star formation history for the solar vicinity that is in agreement with observations. In other words, given our star formation history, we predict the observed gas and stellar mass densities at the present time; c) our star formation history leads to predicted Type Ia and Type II supernova rates and a deuterium abundance in agreement with observations. Outside the solar vicinity we have far fewer constraints, and this is the reason why different chemical evolution models that otherwise agree at the present time diverge in their predictions of, for instance, the evolution of the abundance gradients (see [7] for a discussion). As the purpose here is to use pure thin-disk chemistry, we assume a chemical evolution model without pre-enrichment from a thick disk, but just primordial-composition gas accretion, which as a consequence leads to gradients that flatten with time, similar to other pure thin-disk chemical evolution models in the literature (e.g. [14]). The impact of other alternative chemical evolution models will be studied in forthcoming papers.
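Schematically, the two ingredients in point b) can be written as (our notation; the Schmidt-law exponent k ≈ 1.5 is an illustrative choice, not a value quoted in [19]):

Σ̇_infall(r, t) = A(r) e^{−t/τ(r)},   Σ̇_*(r, t) = ν Σ_gas^k(r, t),  k ≈ 1.5,

where A(r) is fixed by the present-day total surface density profile and the infall timescale τ(r) increases outward, producing inside-out disc growth.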
The simulation used in this work is part of a suite of numerical experiments first presented by [17], where the authors studied the evolution of 33 simulated galaxies from z = 5 to z = 0 using the zoom-in technique described in [16]. The galaxy we have chosen has a number of properties consistent with the MW, namely: (i) it has an approximately flat rotation curve with a circular velocity Vc ∼ 210 km/s at 8 kpc (slightly lower than the MW); (ii) the bulge is relatively small, with a bulge-to-total ratio of ∼1/5; (iii) it contains an intermediate-size bar at the final simulation time, which develops early on and grows in strength during the disc evolution; (iv) the disc grows self-consistently as the result of cosmological gas accretion from filaments and (a small number of) early-on gas-rich mergers, as well as merger debris, with the last significant merger concluding ∼9-8 Gyr ago; (v) the disc gas-to-total mass ratio at the final time is ∼0.12, consistent with the estimate of ∼0.14 for the solar vicinity; and (vi) the radial and vertical velocity dispersions at r ≈ 8 kpc are ∼40 and ∼20 km/s, respectively, in good agreement with observations. Maps of the density distribution of our simulated galaxy at different times are shown in Fig. 1. A full description of our new approach can be found in Minchev, Chiappini and Martig [19]. Here we just concentrate on its limitations and advantages.
The main limitations of our approach from the chemical-evolution point of view are: i) we neglect radial gas flows and SN-driven galactic winds, possibly resulting in flatter abundance gradients than we currently find. In a forthcoming paper, we show that gas flows are important mostly at the disc boundaries and do not significantly affect the present results; ii) we assume that stars do not contribute to the chemical evolution outside the zone where they are born: they either contribute only to the chemical enrichment within 2 kpc of their birthplace or never die. The latter assumption is valid for most of the stars because (a) massive stars die essentially where they were born, due to their short lifetimes, and (b) low-mass stars live longer than the age of the galaxy (never die). We do not expect this simplification to affect our results by more than 10% for chemical elements made in low- and intermediate-mass stars, and even less for those coming from massive stars, as is the case for oxygen.
From the dynamical point of view, the limitations are: i) the resampling of the simulation's star formation history according to the chemical evolution model, and ii) the difference between the gas-to-total mass ratio expected from the chemical model and that attained by the simulation. Although the discs in both the chemical and dynamical models grow inside-out, there are some offsets at particular times. These differences in the star formation histories are unavoidable, since the chemical and dynamical models are not tuned to reproduce the same star formation, although they are quite similar for most of the evolution. While the difference between the assumed (chemical model) and actual (dynamical model) stellar and gas densities can introduce some inconsistencies in the resulting dynamics (see details in [19]), this would generally have the tendency of bringing more stars from the inner disc outward, due to a larger bar expected at earlier times. As will become clear later, a larger fraction of old stars coming from the bar's CR region to the solar vicinity would only strengthen our results.
Overall, we do not anticipate the simplifications of our approach to affect significantly any of our results. On the other hand, what we gain with the above simplifications is a new tool to study the chemodynamics of our Galaxy, which is complementary to fully self-consistent models, and where the overall complex problem of galaxy assembly and evolution can be understood in pieces (i.e., the same chemistry applied to different simulations and different chemistry applied to the same simulation). We anticipate that this new approach will also be very useful for gaining insights that can later be used in fully self-consistent simulations. Finally, our approach certainly represents an improvement over previous parameterized chemodynamical models (e.g. [27]), as we extract the dynamics from a state-of-the-art simulation of disc formation and fuse it with a chemical model tailored to the MW.
IMPACT ON THE MAIN CONSTRAINTS OF CHEMICAL EVOLUTION MODELS: THE AMR, ABUNDANCE GRADIENTS AND [O/FE] VS. [FE/H]
Here we focus on the main results of our chemodynamical model that are more related to age and chemistry.
The distribution of birth radii, r0, of stars ending up in a solar neighborhood-like location after ∼11 Gyr of evolution peaks around r0 = 6 kpc due to radial migration (left panel of Fig. 2). The strongest effect from radial migration is found for the oldest stars and is connected to an early merger phase in our simulation, where the last important merger happened ∼9 Gyr ago. Locally born stars of all ages can be found in the solar neighborhood. While a wide range of birth radii is seen for different age groups, the majority of the youngest stars are born at, or close to, the solar neighborhood bin. While the low end of our simulated metallicity distribution is composed of stars with a wide range of birth radii, the tail at larger metallicities (0.25 < [Fe/H] < 0.6) results almost exclusively from stars with 3 < r0 < 5 kpc. This is the region just inside the bar's CR, which is where the strongest outward radial migration occurs. The fraction of stars in this tail can, therefore, be related to the bar's dynamical properties, such as its strength, pattern speed, and time evolution/formation. For this reason it is crucial to have better knowledge of the properties of the Milky Way bar at the present time (see Section 5). Examining the effect on the age-metallicity relation (AMR), we find that some flattening is observed, mostly for ages ≳ 5 Gyr (Fig. 3). More interestingly, although significant radial mixing is present, a slope in the AMR is preserved, with a scatter compatible with recent observational work (e.g. [6]).
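The kind of post-processing behind the statement about the metal-rich tail can be illustrated as follows; the mock data, column names, and the assumed radial [Fe/H] gradient below are ours, for illustration only.

```python
# Split the solar-bin metallicity distribution by birth radius to isolate
# the super-metal-rich tail (0.25 < [Fe/H] < 0.6).
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
r0 = rng.uniform(1.0, 14.0, n)                 # mock birth radii (kpc)
feh = 0.6 - 0.1 * r0 + rng.normal(0, 0.15, n)  # mock gradient of -0.1 dex/kpc

tail = (feh > 0.25) & (feh < 0.6)
frac_inside_cr = np.mean((r0[tail] > 3.0) & (r0[tail] < 5.0))
print(f"fraction of tail stars born at 3 < r0 < 5 kpc: {frac_inside_cr:.2f}")
```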
We found a strong flattening in the [Fe/H] radial profiles of the older populations; the younger ones, however, are much less affected (see Fig. 4). For stars younger than 2 Gyr the final gradient is very similar to the initial one out to ∼12 kpc, justifying its use as a constraint for our chemical model. We predict that the [O/Fe] radial profiles are essentially preserved for the chemical model we use. The [O/Fe] profiles for different age groups result straightforwardly from the adopted variation of the infall law with radius (and hence the SFHs at different positions) and thus provide a way to constrain different chemical evolution models. In the near future, it should be possible to measure these by combining the good distances and ages expected from the CoRoT mission (see [18] for a first step in this direction) with abundance ratios obtained by spectroscopic follow-up surveys. For the young populations, this should already be obtainable from observations of open clusters, e.g., with the ongoing Gaia-ESO or APOGEE surveys.
We find no bimodality in the [Fe/H]-[O/Fe] stellar density distribution. However, when selecting particles according to the kinematical criteria used in high-resolution samples to define the thin and thick discs, we recover the observed discontinuity in the [O/Fe]-[Fe/H] plane (Fig. 5). This is in agreement with the recent observational results by [4], where a smooth [Fe/H]-[O/Fe] distribution was obtained after correcting for the spectroscopic sampling of stellar sub-populations in the SEGUE survey.
THE FORMATION OF THE THICK DISK
When focusing on the properties of the oldest stars (age ≳ 10 Gyr) in our model that are found in the solar neighbourhood at the present time, we find:
- a metallicity distribution which peaks at [Fe/H] ∼ −0.5 and has a metal-poor tail down to [Fe/H] ∼ −1.3 (Fig. 3, upper right panel);
- [O/Fe] values spanning the range 0.2-0.4, with a peak around 0.3 (Fig. 3, lower right panel);
- a lag in the rotational velocity of ∼50 km/s compared to the young stars (Fig. 7, dashed red line).
All of these properties are strikingly reminiscent of what we call the "thick disc" of our Galaxy, despite the fact that we have used pure thin-disc chemistry. Within the framework of the model we present here, the MW thick disc has emerged from (i) stars born hot and heated by mergers at early times and (ii) radial migration (from mergers at early times and the bar/spirals later on) transporting these old stars from the inner disc to the solar vicinity.
To illustrate this more clearly, we performed another model where we start the chemistry implementation 2.7 Gyr later, thus avoiding the early massive merger phase. We then integrate the simulation for an additional 2.7 Gyr so that we again have 11.2 Gyr of evolution. To keep the correct location with respect to the bar's resonances, we downscale the disc radius by a factor of 2.1 to account for the bar's slowing down. Figure 7 shows a comparison of the vertical velocity dispersion, σz(r) (solid lines), and rotational velocity, V(r) (dashed lines), evolution resulting from the new realizations (black and blue lines) with our standard model (red curves). The oldest stars have σz about a factor of two smaller than when the merger is present. The maximum radial velocity dispersion values (not shown) also drop to ∼43 km/s, which is reflected in the higher V values. These smaller velocity dispersions and the smaller rotational velocity difference between young and old stars are unlike the values observed in the solar neighborhood. This again suggests that an early phase of mergers is a crucial step in developing thick disks.
Such a conclusion is in agreement with most (seemingly contradictory) models of thick-disc formation, which expect a contribution from only/mostly one of the following: (i) mergers, (ii) early formation in gas-rich, turbulent, clumpy discs or gas-rich mergers, and (iii) radial migration driven by internal instabilities. A combination of these mechanisms working together is required, where strong heating and migration occur early on from external perturbations (our case) and/or turbulent gas clumps, followed by radial migration taking over the disc dynamics at later times. Yes, mergers are important, but we also need radial migration (unavoidable if a bar, spiral structure, and/or mergers are present) to transport out old, hot stars with thick-disc chemical characteristics. Yes, migration is important, but the old stars need to be "preheated" by being born hot and/or heated by mergers at high redshift (also unavoidable from our current understanding of cosmology). The high stellar birth velocity dispersions at high redshift that we find in our simulation (∼30-50 km/s) are consistent with recent works [3, 10]. An important dynamical consequence of this is that the disc becomes less susceptible to satellite perturbations (common at high redshift), making it easier for it to survive until today.
A different conclusion was reached by [27], who proposed that pure stellar radial migration can provide a mechanism for the formation of thick discs. They assumed a certain migration efficiency and chemical enrichment scheme in order to fit the current ISM gradient in the MW, the metallicity distribution function (MDF), and the stellar velocity dispersions. This work suggested for the first time that galactic discs can be heated by radial migration, thus excluding the need for merger activity during the MW disc evolution. It is important to note that the heating in this model was achieved by the explicit assumption that migrating stars preserve their vertical energy; thus, outward migrators populated a thick-disc component. Following [27], the increase of disc thickness with time found in the simulation by [25] was attributed to migration in the work by [15].
However, how exactly radial migration affects disc thickening in dynamical models had not been demonstrated until the work by [22], where it was shown for the first time that the conserved quantity for a migrating population is not the vertical energy but the vertical action. More recently, [20] presented an extensive study of six galaxy models, using two completely different simulation techniques, to show that internally driven radial migration does not contribute significantly to the increase in disc thickness (except in the disc outskirts), regardless of the migration efficiency. The authors showed that, while outward migrators contribute to some disc heating, inward ones cool the disc, so the overall effect is negligible for most of the disc extent. It was thus concluded that radial migration in the absence of external perturbations fails to produce discs thick enough to explain observations and instead results in substantial flaring.
The recent work by [26] also considered the effect of migration on disc thickening, by analyzing the N-body/SPH simulation previously studied by [25] and [15]. The authors concluded that radial migration and internal heating thicken coeval stellar populations by comparable amounts, thus challenging the results by [20]. However, the analysis of [26] focused only on the outward migrators, i.e., the ones that contribute to some disc thickening, in agreement with what was found in [20]. As shown in the latter paper, it is the overall contribution from migrators in a given radial bin that should be considered, not the time evolution of a given population, if one were to tackle the question of how much migrators contribute to the thickening of the disc. Notice that both groups find a similar radial variation of the disc scale height, indicating that, when all the migrators are taken into account, flaring results and the thickening is not enough to make a thick disc.
It is clear that more observational constraints are needed in order to shed light on the disc thickening mechanism and the role of the bar and mergers in the formation and evolution history of our Galaxy. So far, most of the observational constraints are confined to a small volume around our Sun. A first step in overcoming this limitation was taken by RAVE, SEGUE, and APOGEE, which sample a larger region of our Galaxy (although most of the data are still confined to ∼2-3 kpc from us).
FUTURE OBSERVATIONS: WHAT WILL WE LEARN?
Tighter constraints on the formation of the MW disk can only be obtained through kinematical and metallicity data covering as large a disk area as possible. In the near future, Gaia will deliver high astrometric accuracy over a large volume of our Galaxy, providing direct distance estimates out to 10 kpc with roughly 10% accuracy. However, Gaia needs to be complemented by multi-object spectroscopy. This is one of the aims of the 4-m multi-object spectroscopic telescope (4MOST) to be installed at the VISTA telescope, currently in a conceptual design selection phase for ESO, with a decision expected in spring 2013.
With 4MOST we aim to maximize the scientific return of Gaia, thanks to additional chemical abundance information for stars fainter than ∼14 mag, and radial velocities with a precision better than 2 km/s for stars in the 14 < V < 20 range. By coupling Gaia proper motions and parallaxes with the radial velocity, metallicity, and detailed chemical abundance information provided by the 4MOST low-resolution (R = 5000) and high-resolution (R = 20,000) modes, we will be in a position to fully trace the position-metallicity-velocity space throughout the disk, finally providing stringent constraints on chemodynamical models of the Milky Way. With its large number of fibers (around 1600 for low resolution and 800 for high resolution), 4MOST will be able to obtain spectra of around 30 million objects, with a large impact on both Galactic and extragalactic science.
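As a rough plausibility check of these numbers (the exposure time and observing efficiency below are entirely our own assumptions, not survey specifications):

```python
# Back-of-envelope survey throughput for ~30 million spectra.
fibers = 1600 + 800           # low- plus high-resolution fibers per pointing
targets = 30e6                # objects quoted in the text
exposure_h = 1.0              # assumed average exposure per pointing (hours)
hours_per_night = 8.0         # assumed usable dark time per night

pointings = targets / fibers
nights = pointings * exposure_h / hours_per_night
print(f"~{pointings:,.0f} pointings, ~{nights:,.0f} observing nights")
```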
Finally, asteroseismology can also bring crucial information to this field. Thanks to asteroseismology, solar-like pulsating red giants turn out to represent a well-populated class of accurate distance indicators, spanning a large age range, which can be used to map and date the Galactic disc in the regions probed by the CoRoT and Kepler space telescopes. When combined with spectroscopic constraints, one can estimate the mass and radius of these evolved stars and hence also obtain their distances and ages. These data will be very important in providing crucial constraints not only on the age-velocity and age-metallicity relations at different Galactocentric radii and heights above the plane, but also on the abundance gradients and their time evolution.
Figure 1. Face-on (upper row) and edge-on (bottom row) density maps of the stellar component of the simulated galaxy at different times, as indicated. The contour spacing is logarithmic (see [19] for details).
Figure 2. Left: Birth radii of stars ending up in the "solar" radius (green bin) at the final simulation time. The solid black curve plots the total r0-distribution, while the color-coded curves show the distributions of stars in six different age groups, as indicated. The dotted red and solid blue vertical lines indicate the positions of the bar's CR and OLR at the final simulation time. A large fraction of old stars comes from the inner disc, including from inside the CR. Right: [Fe/H] distributions for stars ending up in the green bin (left), binned by birth radius into six groups, as indicated. The total distribution is shown by the solid black curve. The importance of the bar's CR is seen in the large fraction of stars with 3 < r0 < 5 kpc (blue line).
Figure 3. Top: the left panel plots age versus [Fe/H] for different radii, resulting from our input chemical model. The middle panel shows stellar density contours of the resulting relation after fusing with the dynamics, for the "solar" radius (7 < r < 9 kpc). The overlaid lines show the input chemistry for the same radii as in the left panel. The right panel plots the metallicity distributions for different age bins. Bottom: same as above, but for the [Fe/H]-[O/Fe] relation. There is some contribution from stars born at r ≲ 2 kpc and hardly any from r > 14 kpc, consistent with the birth radii distributions shown in Fig. 2.
Figure 4. The effect on the initial [Fe/H] (top) and [O/Fe] (bottom) gradients for different stellar age groups. The solid and dotted color curves show the initial and final states, respectively. Note that, while strong flattening is observed for the older populations, the metallicity gradient for the youngest stars (age < 2 Gyr) is hardly affected at r ≲ 12 kpc, thus justifying the use of our chemical model, which uses this as a constraint.
Figure 5. Selection effects can result in a bimodality in the [Fe/H]-[O/Fe] plane. The top panel shows the unbiased stellar density distribution, as in Fig. 3. In the bottom panel we have applied the selection criteria used by [2].
Figure 7. Vertical velocity dispersion σz(r) (solid lines) and rotational velocity V(r) (dashed lines) evolution for a) our standard model, where an early merger phase plays an important role in the formation of the thick disk (red curves), and b) a model where the radial migration is driven essentially by internal evolution (black and blue curves).
Vacuum Decay in Real Time and Imaginary Time Formalisms
We analyze vacuum tunneling in quantum field theory in a general formalism by using the Wigner representation. In the standard instanton formalism, one usually approximates the initial false vacuum state by an eigenstate of the field operator, imposes Dirichlet boundary conditions on the initial field value, and evolves in imaginary time. This approach does not have an obvious physical interpretation. However, an alternative approach does have a physical interpretation: in quantum field theory, tunneling can happen via classical dynamics, seeded by initial quantum fluctuations in both the field and its momentum conjugate, which was recently implemented in Ref. [1]. We show that the Wigner representation is a useful framework to calculate and understand the relationship between these two approaches. We find there are two, related, saddle point approximations for the path integral of the tunneling process: one corresponds to the instanton solution in imaginary time and the other one corresponds to classical dynamics from initial quantum fluctuations in real time. The classical approximation for the dynamics of the latter process is justified only in a system with many degrees of freedom, as can appear in field theory due to high occupancy of nucleated bubbles, while it is not justified in single particle quantum mechanics, as we explain. We mention possible applications of the real time formalism, including tunneling when the instanton vanishes, or when the imaginary time contour deformation is not possible, which may occur in cosmological settings.
I. INTRODUCTION
The subject of quantum mechanical tunneling is an essential topic in modern physics, with a range of applications, including nuclear fusion [2], diodes [3], atomic physics [4], quantum field theory [5], cosmological inflation [6], etc. In the context of a possible landscape of classically stable vacua in field theory, motivated by considerations in string theory [7], it is essential to determine the quantum tunneling rate from one vacuum to the next. This has ramifications for the stability of our current electroweak vacuum [8], as well as for the viability of inflationary models [9], and may have ramifications for the cosmological constant problem [10].
In ordinary non-relativistic quantum mechanics of a single particle, quantum tunneling can be calculated in principle by a direct solution of the time dependent Schrödinger equation. However, our interest here is that of quantum field theory. In this case, a direct solution of the Schrödinger equation is notoriously difficult, and so approximation schemes are needed. The most famous approximation method, which is analogous to the WKB approximation in non-relativistic quantum mechanics, involves the computation of the Euclidean instanton solution from one vacuum to another [11,12]. This leads to the well-known estimate for the decay rate per unit volume Γ ∝ e^{−S_E}, where S_E is the bounce action of a solution of the classical equations of motion in imaginary time. This method is generally thought to be accurate when the bounce action S_E is large, which evidently corresponds to exponentially suppressed decay rates.
Since the above method involves non-intuitive features, namely a restriction to Dirichlet boundary conditions on the field and dynamics in imaginary time, it begs the question whether there may be other formulations of the tunneling process. Furthermore, if one moves to more general settings, such as in cosmology, there may not always be the usual instanton solution, so one wonders whether other formulations can be employed instead. In this paper we will investigate under what circumstances an alternative approach to tunneling, from classical evolution of fields whose initial conditions are drawn from some approximation to the initial wave-function, can provide an alternative formulation for decay. This work was motivated by the very interesting work of Ref. [1]. In that work, they numerically obtained a tunneling rate from a false vacuum in 1+1 dimensional spacetime by solving for the classical dynamics of a scalar field starting from initial conditions generated by a Gaussian distribution. The method was to consider many realizations of initial conditions and then to calculate the ensemble-averaged tunneling rate. For their choice of parameters, they found that the tunneling rate was similar to the one calculated by the instanton method.
This leads to several natural questions: (i) what is the relationship between these two approaches? One is in an imaginary time formalism, the other is in a real time formalism; so how, if at all, are they related? (ii) Under what circumstances are the rates comparable to each other? (iii) Under what circumstances are these approaches valid? It is known that the instanton method requires the bounce action to be large to justify a semi-classical approximation, but what is the corresponding statement for the other real time method?
In this paper, we address these questions. We will argue that this real time analysis from classical dynamics is not identical to, but is very closely related to, the instanton tunneling process. We will show that for simple choices of parameters, the two rates are parametrically similar. However, things are more complicated for potentials with unusual features, which we will discuss, and there can be advantages to the real time formulation in special circumstances. We will make use of the Wigner representation as it will provide a general formalism to cleanly identify these two complementary approaches. We will discuss under what circumstances the classical dynamics is justified, explain why this would fail in single particle quantum mechanics, and discuss some cosmological applications.
Our paper is organized as follows: In Section II we recap the standard instanton contribution to the decay. In Section III we present a more general formalism, using the Wigner representation, which allows us to describe these two approaches within a single framework. In Section IV we discuss the conditions under which the classical dynamics method is applicable. In Section V we estimate and compare the tunneling rates. Finally, in Section VI we discuss our findings.
II. STANDARD EUCLIDEAN FORMALISM
Let us begin by recapping the standard approach to vacuum decay, which occurs within the confines of a Euclidean, or imaginary time, formalism. In this approach the decay rate can be calculated from the imaginary part of the vacuum energy E_0 as

Γ = 2 |Im E_0|,  (1)

where

E_0 = −lim_{T→∞} (1/T) ln Z,  (2)

and Z is defined by

Z = ⟨φ_i| e^{−Ĥ T} |φ_i⟩.  (3)

Here we denote the (approximate) energy eigenstate around a false vacuum as |φ_i⟩. One may neglect the quantum fluctuation around the false vacuum and approximate the energy eigenstate by the eigenstate of the operator φ̂. In this case, Z can be written as

Z = ∫_{φ(0)=φ_i}^{φ(T)=φ_i} Dφ e^{−S_E[φ]}.  (4)

The path integral can be approximated by the contribution from a saddle point, which is known as the instanton solution. One can also calculate the Gaussian integral for the perturbations around the instanton solution. The result is given by the well-known formula:

Γ/V ≃ (S_E[φ_bounce]/2π)^{(d+1)/2} |det′ S_E″[φ_bounce] / det S_E″[φ_FV]|^{−1/2} e^{−S_E[φ_bounce]},  (5)

where φ_bounce is the so-called bounce solution in an "upside-down" potential, with V → −V (det′ denotes the fluctuation determinant with zero modes removed). Therefore, the decay rate can be calculated from the path integral with imaginary time T. Strictly speaking, however, Eq. (5) is not a tunneling rate from the false vacuum energy eigenstate, because the boundary condition for the path integral Eq. (4) implies a transition between eigenstates of the operator φ̂. The difference between the energy eigenstate and the eigenstate of the operator φ̂ is negligible only if the zero-point fluctuation around the local minimum is much smaller than the typical scale of the potential.
In quantum field theory, the number of effective degrees of freedom can be large and hence the quantum fluctuations can accidentally overcome the potential barrier. This accidental arrangement and subsequent barrier penetration was seen in the simulations of Ref. [1]. Hence, to only focus on initial conditions that are eigenstates of the field operator, as the usual instanton approach does, is not guaranteed to be the most natural choice of boundary conditions. Therefore, we would like to utilize a formalism that can accommodate general initial conditions on the fluctuations for a more complete analysis of tunneling. In the next section, we analyze tunneling within the Wigner representation as it will allow us to systematically study these different possibilities.
III. MORE GENERAL FORMALISM
Since vacuum decay is a time-dependent process, it is natural to calculate it by using a real time formalism (or Schwinger-Keldysh formalism), which can describe the time evolution of observables. As we will see, this will allow us to more systematically identify initial conditions for the decay, rather than restricting to only those that are useful in the standard imaginary time analysis.
The time evolution of the expectation value of an observable Ô can in principle be calculated from

⟨Ô(t)⟩ = Tr[ ρ̂ (T_time e^{−i∫₀ᵗ dt′ Ĥ})† Ô (T_time e^{−i∫₀ᵗ dt′ Ĥ}) ],  (6)

where ρ̂ is an initial density operator and T_time is Schwinger's time-ordering operator. The operator Ô may be taken to be an order parameter of the phase transition. Instead, one may use an operator that gives zero around the false vacuum and nonzero around the true vacuum. This equation implies that the contour for the time integral is given by the one shown in Fig. 1. We can define two kinds of fields: forward (φ_f) and backward (φ_b) fields, depending on the direction of time evolution. It is convenient to then define

φ_c = (φ_f + φ_b)/2,  (7)
φ_q = φ_f − φ_b,  (8)
π_c = (π_f + π_b)/2,  (9)
π_q = π_f − π_b,  (10)

where π is the canonical conjugate of the field φ. Here φ_c and π_c are effectively classical fields, as we will explain shortly, while φ_q and π_q are effectively quantum fluctuations. The path integral can then be written as

⟨O(t)⟩ = ∫ Dφ_{c,0} Dπ_{c,0} W_0[φ_{c,0}, π_{c,0}] ∫ Dφ_c Dφ_q Dπ_c Dπ_q O_W(φ_c, π_c) exp[ i∫ dt d^d x (π_q φ̇_c + π_c φ̇_q) − i∫ dt H_W ],  (11)

with H_W ≡ H[φ_c + φ_q/2, π_c + π_q/2] − H[φ_c − φ_q/2, π_c − π_q/2], where the Wigner function W_0 is defined as the Weyl transform of the density matrix ρ̂ in the field representation:

W_0[φ_c, π_c] = ∫ Dφ_q e^{−i∫ d^d x π_c φ_q} ⟨φ_c + φ_q/2| ρ̂ |φ_c − φ_q/2⟩.  (12)

The remaining ingredient is given by

O_W(φ_c, π_c) ≃ O(φ_c, π_c),  (13)

where O(φ, π) is the function obtained from the operator Ô by direct substitution φ̂ → φ and π̂ → π. For related work, see Refs. [15,16].
A. Classical Approximation
Now we shall rewrite the above path integral by using some approximations. We will discuss the meaning and justification of these assumptions in the next section.
If the quantum fluctuations are much smaller than the classical quantities, we can approximate H_W as

H_W ≈ ∫ d^d x [ π_c π_q + ∇φ_c·∇φ_q + V′(φ_c) φ_q ].  (14)

Then we can perform the integrations over φ_q and π_q, which give delta functions of the form

δ[ φ̇_c − π_c ] δ[ π̇_c − ∇²φ_c + V′(φ_c) ].  (15)

This means that {φ_c, π_c} indeed obey the classical equations of motion. The integrals over the fields are determined by the delta functions, and the result is given by

⟨O(t)⟩ ≈ ∫ Dφ_{c,0} Dπ_{c,0} W_0[φ_{c,0}, π_{c,0}] O(φ_cl(t), π_cl(t)),  (16)

where (φ_cl, π_cl) is the solution of the classical equations of motion with initial data (φ_{c,0}, π_{c,0}). If the initial quantum fluctuations are sufficiently small at the false vacuum, we can approximate the potential by a quadratic form; we will return to discuss under what conditions this approximation is valid. We denote the mass parameter at the false vacuum as m, i.e., V″(0) = m². The ground-state wave-function can then be approximated as

Ψ_0[φ] ∝ exp( −(1/2) ∫ d^d k/(2π)^d ω_k |φ_k|² ),  (17)

where ω_k = √(m² + k²). The initial Wigner distribution W_0 can then be estimated by the one for a free field:

W_0(φ_k, π_k) ∝ exp( −ω_k |φ_k|² − |π_k|²/ω_k ).  (18)

This can be regarded as a probability distribution for the initial field values, with φ and π treated as independent random variables. For quantum field theory in d + 1 spacetime dimensions, the full result for the initial Wigner distribution is approximated as

W_0 ∝ exp( −∫ d^d k/(2π)^d [ ω_k |φ_k|² + |π_k|²/ω_k ] ),  (19)

where we integrate over all k-modes.
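A minimal 1+1 dimensional numerical sketch of this procedure (in the spirit of Ref. [1]) is given below: initial field and momentum fluctuations are drawn from the Gaussian Wigner distribution of Eq. (19) and evolved classically. The toy potential, all parameters, and the lattice normalization conventions are our own schematic assumptions, not the authors' implementation.

```python
# Draw initial conditions from a free-field Gaussian Wigner distribution and
# evolve them with the classical equation of motion (leapfrog integrator).
import numpy as np

rng = np.random.default_rng(0)
N, L, m, dt = 256, 50.0, 1.0, 0.005
dx = L / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
omega = np.sqrt(m**2 + k**2)

# Gaussian mode amplitudes with <|phi_k|^2> ~ 1/(2 omega_k) and
# <|pi_k|^2> ~ omega_k/2 (normalization here is schematic).
phi_k = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(1.0 / (4.0 * omega))
pi_k = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(omega / 4.0)
phi = np.fft.ifft(phi_k).real * np.sqrt(N / dx)
pi = np.fft.ifft(pi_k).real * np.sqrt(N / dx)

def v_prime(f):
    # toy potential with a false vacuum at phi = 0 (assumed form):
    # V = phi^2/2 - 0.5 phi^3 + 0.1 phi^4 in units with m = 1
    return m**2 * f - 1.5 * f**2 + 0.4 * f**3

def laplacian(f):
    return (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx**2

for _ in range(50_000):  # leapfrog: kick-drift-kick
    pi += 0.5 * dt * (laplacian(phi) - v_prime(phi))
    phi += dt * pi
    pi += 0.5 * dt * (laplacian(phi) - v_prime(phi))

# a large field excursion signals that a bubble of true vacuum has nucleated
print("max |phi| over the lattice:", np.abs(phi).max())
```

In the actual method one repeats this over many realizations of the initial conditions and estimates the decay rate from the ensemble-averaged nucleation statistics.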
B. Relation to the Instanton Calculation
The same result can be obtained by the saddle point approximation for Eq. (11). In the classical limit, the path integral can be approximated by saddle points of the exponent of the integrand. Varying it with respect to φ_c, φ_q, π_c, and π_q, and eliminating π_c and π_q, we obtain

φ̈_c − ∇²φ_c + (1/2)[ V′(φ_c + φ_q/2) + V′(φ_c − φ_q/2) ] = 0,
φ̈_q − ∇²φ_q + V′(φ_c + φ_q/2) − V′(φ_c − φ_q/2) = 0.  (20)

One of the solutions to this equation is φ_q = 0, with φ_c being the solution to the usual classical equation of motion, with initial conditions drawn from the initial Wigner distribution. This saddle point corresponds to Eq. (16). We refer to this as the real time formalism from classical dynamics, seeded by non-trivial initial conditions that ultimately arise from a choice for the initial wave-function. Indeed, we note that this process is absent if we were simply to assume trivial initial conditions W_0 = δ(φ_{c,0})δ(π_{c,0}), which would be the "purely" classical behavior. Now we show that there is another, related, contribution to Eq. (11) that is non-zero even if we were to set φ_{c,0} = π_{c,0} = 0. Assuming W_0 = δ(φ_{c,0})δ(π_{c,0}), we rewrite Eq. (11) as

⟨O(t)⟩ = ∫ Dφ_f Dφ_b O_W(φ_c) e^{i S_f[φ_f] − i S_b[φ_b]},  (21)

where we assume that O_W is independent of π_c and

S_f[φ_f] = ∫ dt d^d x [ (1/2)φ̇_f² − (1/2)(∇φ_f)² − V(φ_f) ],  (22)
S_b[φ_b] = ∫ dt d^d x [ (1/2)φ̇_b² − (1/2)(∇φ_b)² − V(φ_b) ],  (23)

are the actions for the forward and backward fields, respectively. This can be rewritten as

⟨O(t)⟩ = ∫ dφ O_W(φ) | ∫_{φ_f(0)=0}^{φ_f(t)=φ} Dφ_f e^{i S_f[φ_f]} |².  (24)

This path integral can be calculated by the standard instanton method by deforming to imaginary time. Since we assume φ_{c,0} = π_{c,0} = 0 in this calculation, this saddle point corresponds to the transition from vanishing initial classical fields. It also corresponds to a vanishing initial quantum field, φ_q = 0, though it leaves the initial condition for the conjugate momentum π unspecified. This is identical to the one calculated by the instanton method discussed earlier in Section II. It is therefore associated with going from a field eigenstate with Dirichlet boundary conditions and again returning, in imaginary time, to a field eigenstate with Dirichlet boundary conditions. It is the so-called bounce solution in imaginary time. Importantly, the difference from the saddle point solution corresponding to Eq. (16) is the initial condition (or the boundary condition at t = 0). Let us comment on how to rotate the time variable into the imaginary plane. If we naively take φ_q = 0 in Eq. (11), the exponent vanishes. This is not consistent with Eq. (24), where the action does not vanish and gives the Euclidean action in imaginary time. This inconsistency comes from the naive analytic continuation of the time variable. We can use the epsilon prescription to specify a possible way to deform the integration contour into the imaginary plane. The Hamiltonian should include an imaginary mass term that specifies the way to deform the integration contour. Therefore, the time variable for the Hamiltonian for φ_f should be rotated in the opposite way to the one for φ_b. This is the reason that we obtain a nonzero exponent even if we take φ_q = 0 in Eq. (11). Later we will comment on more general situations, which may occur in cosmology, where this rotation to imaginary time may be more problematic.
C. Comparison
In summary, there are two basic sets of initial conditions one may utilize to implement the saddle point for the path integral Eq. (11). The first one is given by Eq. (16), where the initial condition is given by some approximation to the initial wave-function and the time evolution is purely given by the classical equation of motion. The second one is given by the saddle point of Eq. (24), where the initial condition is φ = 0 and the time evolution is deformed into the complex plane to the imaginary time axis.
At first sight it may seem surprising that the first should be associated with tunneling. But tunneling can indeed occur, because the non-trivial initial conditions can make rare events take place even within the framework of classical dynamics. This is the tunneling process that was calculated in Ref. [1]. In this sense, this contribution is complementary to the instanton contribution, though in appropriate regimes, which we will discuss, they can approximate each other quite well.
IV. CONDITIONS FOR THE CLASSICAL APPROXIMATION
In this section we discuss the conditions for calculating a tunneling rate via Eq. (16) in the context of quantum field theory. We first note that the distinction between quantum and classical mechanics comes from the commutation relations of quantum operators. In particular, the commutation relation between creation and annihilation operators is given by

[ â_i, â_j† ] = δ_ij.  (25)

However, the effect of the right-hand side is negligible when the occupation numbers ⟨â_i† â_i⟩ are large. We then expect that the high-occupancy limit corresponds to the classical limit of quantum systems. This implies that the approximation Eq. (14) is justified when the number of particles in the system is extremely large.
In Ref. [17], we have shown that the expectation values of quantum operators are approximated by a corresponding classical ensemble average over many classical micro-states, with initial conditions drawn from the initial quantum wave-function. Eq. (16) is a mathematical expression of this statement. It can be understood as an extension of this discussion to the quantum regime, where the initial state is not a high occupancy state, but a (quasi) vacuum state with zero point fluctuations. Due to the possible production of bubbles, which arises due to rare accidental arrangements from the non-trivial initial conditions, the occupation number can be large enough to use the classical description.
In this case, the approximation φ_q ≪ φ_c is satisfied, except for the initial condition, and we can evolve φ_c by the classical equation of motion. In the regime before the tunneling, φ_q ≪ φ_c may not be satisfied. However, we can still use Eq. (16) if the amplitude of fluctuations is small enough to neglect terms in the potential that are higher order than quadratic. This is because the Wigner approximation is exact for a free-field theory. We will discuss situations in which the neglect of these higher-order terms may not be valid.
A. Tension and Pressure
Eq. (16) can describe the classical dynamics of the field after bubble nucleation. This is different from the instanton method, where we need to connect the Lorentzian and Euclidean regimes to describe the dynamics of the bubble after nucleation. The approach of Eq. (16) can therefore describe the tunneling process itself as well as the dynamics of the nucleated bubble afterwards.
Since the nucleated bubble obeys the classical equations of motion, its behavior can be understood easily, particularly in the thin-wall case. The bubble wall tends to shrink to a point due to its tension, while it tends to expand due to the pressure of the vacuum energy. As we evolve the field classically from a given initial condition, many small bubbles are nucleated, but most of them do not have enough pressure to overcome the tension of the wall. In order for the bubble to expand after nucleation, the pressure of the vacuum energy should overcome the tension of the bubble. For a thin-wall bubble, this requires

ε V_d R^d > σ A_d R^{d−1},  (26)

where R is the radius of the bubble, σ is the tension of the wall, and ε is the difference of the vacuum energy between the false and true vacua.
Here we define the area and volume factors of the unit d-dimensional sphere:

A_d = 2π^{d/2}/Γ(d/2),  V_d = π^{d/2}/Γ(d/2 + 1) = A_d/d.  (27)

A similar type of inequality is expected to be satisfied for a thick-wall bubble.
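As a consistency check (our rearrangement, not part of the original text), combining Eq. (26) with the relation A_d = d V_d gives the familiar critical radius:

ε V_d R^d > σ A_d R^{d−1}  ⟺  R > R_c ≡ σ A_d/(ε V_d) = d σ/ε,

so only bubbles nucleated with R > R_c grow, while smaller ones collapse under their own tension.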
B. Occupation Number
Now we examine under what conditions the occupation number of the quanta describing nucleated bubbles is much larger than unity. In this case the nucleated bubble is essentially coherent and can be treated within the framework of classical field theory.
We estimate the occupation number of the nucleated bubble in two simple cases. First we consider the case where the scalar potential is described by typical values of the curvature scale around the vacua m, the field value v, the height of the potential barrier V_h, and the difference of the vacuum energy ε (see Fig. 2). We note that V_h must be smaller than of order v^{2(d+1)/(d−1)} for d > 1 because of the unitarity bound (e.g., in 3+1 dimensions this is related to the familiar idea that the quartic coupling of λφ⁴ theory obeys λ ≲ 1 in the weakly coupled regime).
Thin-Wall
We assume ε ≪ V_h and use the thin-wall approximation for now. In this case, the wall tension is given by σ ∼ v √(V_h). Using Eq. (26), we obtain a typical radius of the nucleated bubble as R_b ∼ d σ / ε. Let us first report the Euclidean instanton action associated with the bubble, as it is the standard quantity used to compute tunneling in the literature: S_E ∼ σ R_b^d. However, in order to justify the alternative real-time tunneling from classical dynamics, we need to compute the bubble's occupation number. We define it by the gradient energy of the bubble in units of m, N ∼ m^(-1) ∫ d^d x (∇φ)^2 (if we were to reinstate factors of ħ, the actual occupation number would be this divided by ħ).

[Fig. 2: Schematic picture of a typical potential and the parameters describing its shape.]

It is roughly given by

N ∼ σ R_b^(d-1) / m ∼ (σ / m^d) (m R_b)^(d-1).

Note that every factor on the right-hand side is larger than unity for weakly coupled field theories, so the occupation number can in fact be quite large.
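As a concrete order-of-magnitude illustration (our own arithmetic, based on the schematic estimate above, so the O(1) factors should not be trusted), consider d = 3 with the natural barrier height V_h ∼ m^2 v^2, so that σ ∼ m v^2:

```latex
\begin{align}
  N \sim \left(\frac{\sigma}{m^{3}}\right)(m R_b)^{2}
    \sim \frac{v^{2}}{m^{2}}\,(m R_b)^{2}
    \sim \frac{(m R_b)^{2}}{\lambda} \gg 1,
\end{align}
% using lambda ~ m^2/v^2 << 1 for a weakly coupled quartic theory
% and m R_b >= 1 for a thin-wall bubble.
```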
Thick-Wall
The above suggests that the occupation number can be as small as of order unity when the vacuum energy is not degenerate and the difference of the VEVs is as small as m. This implies that the bubble is of thick-wall type, so we should re-examine the above analysis. Here the scalar self-coupling could be as large as O(1). We check that the occupation number is larger than unity in this extreme case, too. We consider a potential of the schematic form

V(φ) = U_0 [ (φ/Λ)^2 − λ_3 (φ/Λ)^3 ],

where U_0 and Λ (with U_0 ≲ Λ^(2(d+1)/(d-1))) are dimension-full parameters and λ_3 (≲ O(1)) is a dimensionless constant. Since φ = 0 is a false vacuum, it can tunnel to the other side of the potential hill. The tunneling action and the occupation number scale as S_E ∼ c_S / λ_3^2 and N ∼ c_N / λ_3^2, where the numerical constants are given by c_S ≃ 4.1 × 10 and c_N ≃ 1.0 × 10^2 for d = 3. Even in this extreme case, the tunneling action and the occupation number are larger than O(10/λ_3^2). This justifies that the nucleated bubble can be described classically (or as a scalar condensate).
C. Single Particle Quantum Mechanics
One may wonder if these arguments can extend to the problem of single particle tunneling in ordinary nonrelativistic quantum mechanics. In this case there is obviously no such thing as a "bubble" that can be formed. So there is no obvious sense in which there is any object at high occupancy.
Nevertheless, we can formally view this problem as quantum field theory in 0+1 dimensions. So, for the sake of completeness, let us formally take the result in Eq. (35) and take d → 0. Then we formally obtain N ∼ ε/m. Now we should note that in this case m, which is the square root of the curvature of the potential at the false vacuum in quantum field theory, is just the characteristic frequency of oscillation ω_0 of the particle around the meta-stable minimum in quantum mechanics. However, what is important is that in quantum mechanics, energy conservation tells us that the tunneling process requires the particle to tunnel to a point at the same potential energy as its starting value. Hence ε here should be the potential height difference, so it is in fact just ε = 0. This implies N = 0. So there is no sense in which one is at high occupancy. This means that this procedure of sampling from some Gaussian approximation to the wave-function and using it to determine tunneling will typically fail in ordinary quantum mechanics; we will return to this point again later. Conversely, it can be applicable in field theory in higher dimensions with bubbles of high occupancy, as we discussed above.
V. TUNNELING RATE
Now we shall compare the tunneling rate (per unit volume) via the standard Euclidean instanton to the tunneling rate via classical dynamics with initial conditions drawn from some approximation to the wave-function. The latter can be estimated in the following way.
Ideally the relevant initial fluctuations that ultimately lead to the formation of the bubble are sufficiently small that the potential can be approximated by a quadratic form around the false vacuum; we will revisit this shortly. Then the initial distribution of fluctuations is given by the Wigner distribution of a free massive scalar field with mass m around the false vacuum, Eq. (19). This distribution does not change much even if we allow the field to classically evolve in time. The tunneling rate can therefore be estimated by the probability that φ_c(k) and π_c(k) are large enough to nucleate a classical bubble. A classical bubble that expands after nucleation must satisfy the condition Eq. (26), and hence its radius must be larger than R_b ∼ σ/ε. Now in order to completely determine the probability for tunneling, one should perform a simulation of this non-linear system of classical equations of motion, with the appropriate initial conditions specified above. However, we can give an estimate of the probability of bubble formation by utilizing the initial wave-function's statistical distribution as a guide; we will return to this shortly. In order to form a bubble there are two conditions that need to be satisfied: (a) the field needs to be on the far side of the barrier and (b) the bubble needs to have sufficient energy to avoid collapse. Let us estimate these probabilities in turn.

First, in order for the bubble to be on the other side of the barrier, we need that the field value in position space obeys φ_c ≳ v. In the k-space representation this condition means that we need φ_c(k) > φ^(th) for a bubble of radius R_n to be nucleated. The probability P_a that φ_c(k) exceeds this threshold can be estimated from the Wigner distribution as

P_a ∼ exp(−a ω_b v^2 R_b^d),

where ω_b ∼ √(k_b^2 + m^2) is some characteristic frequency associated with the bubble, with a characteristic wavenumber k_b ∼ 1/R_b. However, this is not a sufficient condition for nucleation, because if the bubble appears on the other side of the barrier with arbitrarily low energy then it can collapse. Suppose that a small bubble with radius R_ini and kinetic energy ∼ R_ini^d π_c^2 forms due to fluctuations. The kinetic energy must be larger than the energy of the bubble with radius R_b so that the bubble can expand after nucleation. This means we need π_c > π_c^(th). Note that this π_c^(th) is the conjugate momentum in position space. It can be written in terms of π(k) in momentum space as π^(th)(k) ∼ R_ini^d π_c^(th). The probability P_b to have sufficient energy can then again be estimated from the Wigner distribution as

P_b ∼ exp(−b ε R_b^d / ω_b).

A. Tunneling Rate

According to the Wigner representation, the variables φ_c and π_c are taken as independent random variables. This says that the probability that both (a) and (b) occur is the product P_a P_b. This allows us to estimate the tunneling rate (per unit volume) within this real-time formalism as

Γ_R ∼ R_b^(−(d+1)) exp(−γ_R),  with  γ_R = a ω_b v^2 R_b^d + b ε R_b^d / ω_b,

where a and b are O(1) prefactors. We can compare this to the usual result for tunneling using the Euclidean imaginary-time formalism given earlier in Eq. (5),

Γ_I ∼ R_b^(−(d+1)) exp(−γ_I),  with  γ_I ∼ σ R_b^d.

The instanton rate is calculated for the thin-wall case, but the result is not qualitatively different for the thick-wall case once we identify ε as the difference of the vacuum energy between the false vacuum and the tunneling point. Since this involves an extra factor of R_b compared to the scaling in γ_R, we need an estimate for the bubble radius, which is roughly R_b ∼ σ/ε. This allows us to make the estimate for the instanton tunneling exponent.
In this final expression we have still kept a factor of R_b^d for convenience, since this is a common factor that appears in Eq. (45) also.
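To spell out where the exponent in P_a comes from, here is our own short sketch using the free-field Gaussian statistics assumed above (normalizations are schematic):

```latex
% Wigner variance of the field coarse-grained over a region of size R_b,
% for a free field with characteristic frequency omega_b:
\begin{align}
  \langle \phi_c^2 \rangle \sim \frac{1}{2\,\omega_b R_b^{d}},
\end{align}
% so requiring the Gaussian random variable phi_c to exceed v gives
\begin{align}
  P_a \sim \exp\!\left(-\frac{v^2}{2\langle \phi_c^2 \rangle}\right)
      \sim \exp\!\left(-a\,\omega_b\, v^2 R_b^{d}\right).
\end{align}
```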
B. Examples
We now use the above results to compare the tunneling rates that we have estimated in these different formalisms.
Weakly Broken Z2 Symmetry
Let us consider a potential of the form

V(φ) = (λ/4)(φ^2 − v^2)^2 + δV(φ),

where δV(φ) is a term that weakly breaks the Z_2 symmetry. This potential is similar to the kind of potential shown in Fig. 2 with V_h ∼ m^4/λ. In this case the bubble thickness is approximately set by the Compton wavelength as λ_C ∼ 1/m. However, the bubble radius is at least this large, i.e., m R_b ≳ 1. This ensures the frequency ω_b can be approximated by the mass, ω_b ≃ m. By noting that ε is bounded to be of the order of, or much smaller than, V_h ∼ v^2 m^2, we can conclude that P_a ≲ P_b. Hence the rate exponent γ_R is approximated as γ_R ≃ a m v^2 R_b^d. Then from Eq. (49), with σ ∼ V_h/m ∼ m v^2, we have γ_I ∼ γ_R.
SM Higgs
As another example, let us consider the Higgs potential in the minimal SM. Upon RG running of the Higgs self-coupling λ, the top mass, and other couplings, one finds that the Higgs potential turns over and then goes negative. This happens at around v ∼ 10^11 GeV, or so. In this example, the potential is dominated by the quartic term near the tunneling point. In this case, we can use the above formulae by taking ε → V_h ∼ λ v^4. The bubble radius is now of order or larger than R_b ∼ 1/(√λ v) (using λ ∼ 0.01 in this regime). This radius is much, much smaller than the Compton wavelength of the Higgs, which is m^(-1) ∼ 10^(-2) GeV^(-1). Hence we are now in a regime in which ω_b ∼ 1/R_b. In this regime, both P_a and P_b are comparable, and they both give an exponent ∼ 1/λ (we naturally focus here on the physical case of 3+1 dimensions). This is comparable to the instanton exponent γ_I ∼ 1/λ, so we again have γ_I ∼ γ_R. We note that in this case, with R_b ≪ m^(-1), giving ω_b ≫ m, and probing deep into the quartic term in the potential, it was not guaranteed that the Gaussian approximation based on the free theory would suffice. However, parametrically it is of the right order.
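As a quick numerical check of these scalings (our own arithmetic using the schematic estimates above):

```latex
% lambda ~ 0.01 and v ~ 10^11 GeV:
\begin{align}
  R_b &\sim \frac{1}{\sqrt{\lambda}\,v}
       \sim \frac{1}{0.1\times 10^{11}\,\mathrm{GeV}}
       \sim 10^{-10}\,\mathrm{GeV}^{-1}
       \ll m^{-1}\sim 10^{-2}\,\mathrm{GeV}^{-1}, \\
  \gamma_R &\sim \omega_b\, v^2 R_b^{3}
       \sim \frac{v^2 R_b^{3}}{R_b}
       = v^2 R_b^{2}
       \sim \frac{v^2}{\lambda v^2}
       = \frac{1}{\lambda}\sim 10^{2}.
\end{align}
```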
Flat Hill-Top
Suppose the hill-top is very flat, more so than it appears in Fig. 2. To be clear, let us imagine that it is so flat that V_h ≪ m^2 v^2, the naive value based on dimensional analysis. Such a potential is perhaps unusual from the microscopic point of view, but it is allowed in principle. In this case the instanton gives an exponent (normalized to bubble volume) that is linear in the barrier width, γ_I / R_b^d ∝ v. On the other hand, if we turn to the real-time formalism we obtain different estimates. From Eq. (43) the contribution from the kinetic energy effect gives γ_R / R_b^d ∝ v^0, which is too small. On the other hand, from Eq. (41) the contribution from the need to be on the other side of the barrier gives γ_R / R_b^d ∝ v^2, which is too large.
In this case, the Gaussian approximation for the initial wave-function is not accurate, since it assumes that the field's mass is m, but for such a potential the effective mass in the barrier region is smaller. Instead we need to alter our simple estimates: we essentially need to replace the frequency of the bubble by an appropriate effective mass, derived from the effective curvature of the potential, which then gives an estimate of the order of the instanton result.
On the other hand if we persist with the original Wigner distribution, we believe that it is plausible that a simulation can arrive at roughly the correct tunneling rate anyhow. This is because even though the initial distribution is not an accurate representation of the false vacuum eigenstate, these initial conditions may be partially washed away in the simulation, leading to the appropriate rate.
VI. DISCUSSION
We have used a general Wigner representation to establish two formulations of tunneling with slightly different boundary conditions and dramatically different dynamics: in addition to the usual formulation of the imaginary time saddle point contribution to the decay amplitude with Dirichlet boundary conditions on the field, there is another real time formulation based on classical dynamics with initial conditions set by some estimate for the initial wave-function. While the former one is the familiar one from the instanton action, the latter one is an ensemble average of classical field theory dynamics seeded by quantum zero point fluctuations. We note that this ensemble average can be practically realized by a spatial average of a single simulation by appealing to a form of ergodic theorem.
A. Classicality
In order to justify the classicality of the field in this latter approach, the quantum fluctuations have to organize into a bubble and the occupation number has to be much larger than unity. This can be realized only if the number of degrees of freedom in the system is large enough, as it is in quantum field theory for the nucleated bubbles. We note, however, that much of the universe would remain at low occupancy, so it is not entirely guaranteed that the classical dynamics is extremely accurate; it may be only roughly accurate. Furthermore, this approach is ordinarily not valid in single-particle quantum mechanics, as the notion of high occupancy there does not seem to apply.
One may wonder if the tunneling rate depends sensitively on the initial fluctuations. This is actually the case when the number of degrees of freedom in the system is not much larger than of order unity. However, we are interested in the tunneling process in quantum field theory, where the number of relevant degrees of freedom can be quite large. In this case, all relevant modes may interact with each other somewhat chaotically and the distribution will be randomized after an ergodic time. So one expects dynamical evolution to wash away some features of the initial condition. However, our simple estimates for the tunneling rates did involve a dependence on the mass of the field defined around the initial false vacuum, as it affects the initial Gaussian approximation to the wave-function. So these simple estimates involve some sensitivity to initial conditions, especially in the case of potentials with extreme features. But in the case in which the bubble has characteristic wavenumbers k ≲ m, we are not sensitive to the UV behavior of the initial conditions. Furthermore, more general estimates could be made in more extreme situations also.
B. Applications
As an application of these results, suppose there is an AdS vacuum between two dS vacua. The tunneling rate from a dS vacuum to the other dS vacuum cannot be calculated by using the standard instanton method because there is no instanton solution. However, the transition rate must be nonzero because anything can happen in quantum theory according to the path integral expression [18]. In fact, the "classical tunneling" discussed in this paper is expected to give a nonzero transition rate. This is the only practical way we are aware of to calculate the transition rate in such a case. This transition process is complementary to the standard instanton tunneling process. In this sense, the result gives a lower bound on the tunneling rate.
As another application, consider a dynamical setting, such as during preheating after inflation. In this case a field may exhibit a strongly time dependent effective potential from its interactions with the inflaton or the metric etc. If such a field is also trapped in a type of false vacuum then it may be highly non-trivial to implement the standard instanton tunneling procedure, as this requires deforming the contour to the imaginary time axis. If there is (quasi) periodic behavior in the time domain it will re-organize into growing exponential behavior in imaginary time, which may be an obstruction to an efficient implementation of the Euclidean instanton analysis. Furthermore, if there is some form of non-analytic structure to the time dependence, such as from a step-like time behavior, then this may be an obstruction to deforming the contour. In these cases it may be more intuitive and more practical to perform a real-time analysis.
Static Regulation and Dynamic Evolution of Single‐Atom Catalysts in Thermal Catalytic Reactions
Abstract Single‐atom catalysts provide an ideal platform to bridge the gap between homogenous and heterogeneous catalysts. Here, the recent progress in this field is reported from the perspectives of static regulation and dynamic evolution. The syntheses and characterizations of single‐atom catalysts are briefly discussed as a prerequisite for catalytic investigation. From the perspective of static regulation, the metal–support interaction is illustrated in how the supports alter the electronic properties of single atoms and how the single atoms activate the inert atoms in supports. The synergy between single atoms is highlighted. Besides these static views, the surface reconstruction, such as displacement and aggregation of single atoms in catalytic conditions, is summarized. Finally, the current technical challenges and mechanistic debates in single‐atom heterogeneous catalysts are discussed.
Introduction
More than 80% of chemical reactions are related to catalytic processes. [1] Typical catalysts involve homogeneous and heterogeneous catalysts. Homogeneous catalysts usually exhibit higher activity and selectivity compared with their heterogeneous counterparts. Moreover, the uniform active sites and the controllable coordination environment in homogeneous catalysts enable a deep understanding of catalytic mechanisms. However, homogeneous catalysts generally suffer from poor stability and complex separation procedures. Although heterogeneous catalysts are able to avoid these disadvantages, their lower atomic utilization efficiency and ill-defined active sites relative to homogeneous catalysts bring about both economic and academic concerns. To this end, single-atom catalysts have aroused wide interest from researchers, promisingly bridging the gap between homogeneous and heterogeneous catalysts. [2]
Synthetic Approaches
Fabrication of single-atom catalysts is the prerequisite for further investigation of their catalytic performance and mechanisms. The extremely high surface energy of isolated metal atoms makes it challenging to prevent them from aggregating under harsh preparation or catalytic conditions. To this end, great efforts have been devoted to physically or chemically confining and stabilizing isolated metal atoms so that they do not aggregate into clusters or nanoparticles. Recently, several reviews have covered the synthesis of single-atom catalysts. [9] In this work, we briefly discuss these approaches. Typical synthetic approaches involve two steps: coordinating the ensemble composed of active metal species with the supports, and removing the residual ligands from the single metal sites (Figure 1).
The coordination step requires the deliberate selection of supports and delicate synthetic operation. Supports afford the physical isolation or chemical stabilization of metal single atoms. Typical supports for single-atom catalysts include microporous matrices, metal-containing supports, and metal-free supports. Microporous matrices such as zeolites, metal-organic frameworks, and covalent-organic frameworks offer micropores to physically confine or chemically graft metal single atoms. [10] Metal-containing supports involve metal nanocrystals, metal carbides, metal oxides, metal sulfides, and so on. Metal surfaces such as Cu, Ni, and Au anchor single atoms via strong metal-metal bonds. [4e,11] Such interaction also enables the stabilization of single atoms on metal-like materials such as α-MoC, WCx, and TiC. [12] Reducible metal oxides (e.g., TiO2, CeO2, FeOx, CoOx, and WOx) stabilize single atoms through defects such as oxygen vacancies. [3a,13] Single atoms can also be embedded in metal sulfides such as MoS2 via doping. [14] When metal-free materials such as graphene, g-C3N4, BN, and HSC serve as the supports, chemical bonds are formed between single metal atoms and their coordinating atoms (e.g., C, O, N, and S). [15] In addition, the synthetic experiments also need to be delicately operated. In wet-chemistry methods, one can achieve the deposition of single atoms by either decreasing the amount of metal loading or controlling the precursor reduction at a proper rate to prevent self-nucleation. [16] Another approach that can precisely control the synthesis of single-atom catalysts is atomic layer deposition, which relies on sequential self-terminating reactions between a solid surface and gas-phase precursor molecules. [17] More importantly, the experimental operation needs to be adapted to the selection of supports. Datye and co-workers trapped Pt single atoms on CeO2 at high temperature. This approach requires a supply of mobile atoms and a support that can bind the mobile species. [18] The residual ligands can saturate the coordination of metal single atoms to decrease their catalytic activity, or destabilize the metal atoms to trigger aggregation. A typical method to remove ligands is combustion under an O2 atmosphere. [17a,19] In addition to such harsh treatment, Zheng and co-workers reported the removal of Cl− ligands on Pd single atoms under mild conditions via a photochemical route. [20] Once Pd atoms coordinated with Cl− ligands on TiO2 were exposed to ultraviolet (UV) irradiation, electron-hole pairs were generated on the TiO2 nanosheets. Electrons were trapped in Ti 3d orbitals to form Ti3+ sites, while holes broke the Ti-O bonds between glycolate and TiO2, leading to the formation of ethylene glycolate (EG) radicals. The UV-generated EG radicals promoted the removal of Cl− on Pd and the stabilization of individual Pd atoms via Pd-O bonds.
Characterization Methods
To fully convince the fabrication of single-atom catalysts, we need to not only directly visualize the single atoms, but also guarantee the absence of clusters or nanoparticles. Accordingly, the characterization of single-atom catalysis requires a series of complementary techniques to prove the exclusive existence of single atoms. These techniques include atomic resolution aberration-corrected scanning transmission electron microscopy (STEM), X-ray absorption fine structure (XAFS), infrared (IR) spectroscopy, and theoretical calculations (Figure 2).
Atomic-resolution aberration-corrected STEM has been utilized as a powerful technique for the direct visualization of atomically dispersed metal atoms on the supports. The contrast in a HAADF-STEM image is associated with the atomic number of the observed atom. When metal atoms are singly dispersed in the detected region, the spots with sharp contrast are separated far apart instead of aggregating into patches. [21] However, the observation of single atoms in STEM images is only a necessary, not a sufficient, condition for identifying atomically dispersed catalysts. Specifically, a STEM image only offers information about a partial region of a catalyst and cannot guarantee that this local area is representative of the overall structure of the synthesized catalyst. Moreover, STEM cannot distinguish metal atoms (e.g., Cu and Zn atoms) with similar atomic numbers.
As a typical complement to STEM, XAFS affords information about the statistical average of the overall structure. A XAFS spectrum generally involves X-ray absorption near-edge spectroscopy (XANES) and extended XAFS (EXAFS). XANES reflects the oxidation state and symmetry (e.g., octahedral coordination) of the absorbing atom. [22] EXAFS offers the species of atoms coordinated with the absorbing one, the coordination number, and the bond distance. [22] For a single-atom catalyst, only coordination with foreign atoms is detected, in the absence of bonding with the same metal atoms. [21] Another technique to identify single-atom catalysts is IR spectroscopy with the help of probe molecules. By detecting the vibrational intensity and frequency of the probe molecules, we obtain the oxidation states and coordination environment of the active centers. For example, CO molecules can be used to distinguish Pt single atoms from Pt clusters. [23] The typical IR characteristics of Pt single atoms are as follows. First, the stretching frequency (2080-2170 cm−1) for CO on single Ptδ+ atoms is shifted 40-50 cm−1 higher than that (2030-2100 cm−1) for CO linearly bonded on Pt0 in clusters. Second, the peak (1750-1950 cm−1) for bridge-bonded CO on two Pt atoms in clusters is absent for Pt single atoms. Moreover, the stretching frequency for CO on isolated Pt atoms is independent of the coverage of CO, due to the spatial separation of the Pt atoms. In single-atom catalysts, there are no adjacent CO molecules in close enough proximity to induce the dipole-dipole coupling responsible for frequency shifts. As for CO adsorbed on Pt clusters, the stretching frequency red-shifts with decreasing CO coverage due to dipole-dipole coupling: adjacent CO molecules vibrate in unison, giving rise to a lower vibrational energy at lower coverage.
Besides the experimental techniques, theoretical calculations also play a pivotal role in determining the specific configuration of the active sites. Compared with metal clusters
Metal-Support Interaction
Metal-support interaction not only plays a pivotal role in anchoring metal single atoms on the supports, but also makes a remarkable impact on catalytic performance. [24] Enhanced coordination of metal single atoms with a support varies the electronic properties of catalysts. In contrast, weak metal interaction suppresses catalytic processes that occur on multiatom sites. Optimizing the catalytic performance of single-atom catalysts requires proper metal-support interaction via deliberate selection of an appropriate support. For instance, Ma and co-workers demonstrated that the interaction between Pt single atoms and α-MoC enabled effective methanol-reforming reaction because of abundant surface hydroxyls produced on α-MoC.
[12a] Hutchings and co-workers found that different Au-Cl coordination in Au single-atom catalysts induced a varied Au(I):Au(III) ratio, resulting in different activity toward the production of vinyl chloride monomer. [25] From the perspective of metal single atoms, their electronic properties, such as the highest occupied state (HOS) and charge, are altered by the supports, which vary in band structure, coordination number, etc., to bind single atoms. [26-28] For example, Wang et al. quantitatively depicted the profile of metal-support interaction for single-atom catalysts from the perspective of the HOS. They dispersed Rh single atoms on the surface of VO2 (Rh1/VO2). [27] During NH3BH3 hydrolysis over Rh1/VO2, the activation energy decreased by 38.7 kJ mol−1 after the metal-insulator transition of the support from monoclinic VO2(M) to rutile VO2(R). The kinetic analysis indicated that the activation of the proton served as the rate-limiting step. Based on first-principles calculations, the doping of Rh in VO2(M) gives rise to new occupied states in the band gap of VO2(M), whereas the HOS of Rh1/VO2(R) lies at an energy comparable to the Fermi level of VO2(R) (Figure 3a). The divergence in the HOS between Rh1/VO2(M) and Rh1/VO2(R) was 0.49 eV (47.3 kJ mol−1), which is close to the difference (38.7 kJ mol−1) in activation energy. In this regard, the researchers associated the difference in apparent activation energy between the two phases of VO2 with the HOSs of the Rh single atoms. In addition, Flytzani-Stephanopoulos and co-workers reported that the Bader charge of Au in AuOx(OH)yNa9 was tuned by varying the number of electron-withdrawing groups (O/OH) on zeolites (Figure 3b). [28] From the perspective of the supports, initially inert atoms in the supports can also be activated by their coordinated metal single atoms. Bao and co-workers reported that Pt single atoms triggered the activity of the in-plane S atoms of MoS2 toward the hydrogen evolution reaction. [14b] Based on DFT calculations, the introduction of Pt single atoms increased the electronic states of the in-plane S sites below the Fermi level (Figure 3c). The activated in-plane S sites exhibited electronic states comparable to those of the edge S atoms, which by widespread consensus generally serve as the active sites for hydrogen evolution. [29] A similar phenomenon was observed by Li et al. in the study of neighboring Pt monomers on MoS2 (Pt1/MoS2) toward CO2 hydrogenation. [30] The molecular orbital analysis indicated electron transfer between the Pt atom and its bonded S atoms (Figure 3d). Pt atoms activated their vicinal S atoms to dissociate H2 and adsorb intermediates. In neighboring Pt monomers, some of the activated S atoms are shared or adjacent. Those S atoms bridged the ranges of influence exerted by the two Pt atoms and thereby reflected the synergetic interaction.
Synergetic Interaction
To avoid the aggregation of single atoms, the mass loading is generally kept relatively low. The low loading leads to single atoms being separated far apart, resulting in negligible interaction between them. However, shortening the distance between two single atoms gives rise to distinct catalytic performance. Goodman and co-workers found that a properly spaced pair of noncontiguous Pd sites on the Au(111) surface enabled the coupling between surface ethylenic and acetate species, and thereby exhibited higher activity for vinyl acetate formation than isolated Pd sites. [31] In addition, Yardimci et al. reported that Rh dimers in [Rh2(C2H5)2] species promoted the scission of the H-H bond, resulting in more efficient hydrogenation of ethylene than Rh monomers in [Rh(C2H4)2] complexes. [22a,32] Bao and co-workers revealed that single Fe sites embedded in a silica matrix enabled the direct, nonoxidative conversion of methane, whereas adjacent Fe sites led to C-C coupling, further oligomerization, and coke deposition. [33] Recently, Zeng and co-workers revealed the synergetic interaction between Pt monomers by facilely increasing the Pt mass loading up to 7.5% while still maintaining the atomic dispersion of Pt. In Pt1/MoS2, [30] Pt atoms replaced Mo atoms in the MoS2 nanosheets, wherein every Pt atom and its directly bonded S atoms composed an "active center." When two active centers partly overlapped or were adjacent, the two relevant Pt atoms were regarded as neighboring monomers. During CO2 hydrogenation, neighboring Pt monomers exhibited higher activity than isolated ones. The researchers further investigated the catalytic mechanisms of the different types of Pt monomers by combining temperature-programmed desorption (TPD), in situ diffuse reflectance infrared Fourier transform (DRIFT) spectroscopy, in situ X-ray photoelectron spectroscopy (XPS), and DFT calculations. They found that neighboring Pt monomers promoted the dissociation of H2 relative to isolated ones (Figure 4a). Moreover, CO2 was converted into methanol without the formation of formic acid intermediates over isolated Pt monomers (Figure 4b-e). In contrast, neighboring Pt monomers worked in synergy to alter the reaction pathway, with CO2 undergoing sequential transformation into formic acid and then methanol (Figure 4b-f).
Dynamic Evolution of Single-Atom Catalysts in Thermal Catalytic Reactions
During catalytic reactions such as CO2 hydrogenation, Fischer-Tropsch synthesis (FTS), and CO oxidation, the surface of a catalyst is unable to retain its original structure due to corrosion by acidic/alkaline environments, the adsorption of substrate molecules, the coordination of solvent molecules, or other factors. [34] For instance, during FTS, Fe-based catalysts tend to form iron-carbide phases such as cementite, Hägg carbides, and hexagonal carbides, owing to the high affinity of Fe atoms for C atoms cleaved from CO. [35] In addition, surface carbonization has been reported to account for the deactivation of Co-based catalysts during FTS. [36] Although single atoms are anchored on the support surface via strong metal-support interaction, the high surface energy of single atoms usually results in surface reconstruction, such as displacement and aggregation, under catalytic conditions. [37-43] Wang et al. reported that the adsorption of CO and H2 induced the displacement of Rh single atoms on the CoO (Rh1/CoO) surface during the hydroformylation reaction. [38] During the hydroformylation of propene, Rh1/CoO achieved a TOF of 2065 h−1 and a selectivity of 94.4% for butyraldehyde. They found that propene alone was weakly adsorbed on Rh1/CoO, whereas the adsorption of propene was prominently facilitated in an atmosphere containing both H2 and CO (Figure 5a,b). Based on theoretical calculations, the position of the Rh single atoms did not change during the adsorption of CO, H2, or propene alone (Figure 5c). Interestingly, the Rh single atoms deviated from the lattice point after the adsorption of both H2 and CO on Rh1/CoO (Figure 5c). The displacement of the Rh atoms promoted the adsorption of propene, as the adsorption energy of propene significantly increased by 0.34 eV. As such, Rh single atoms on CoO underwent displacement from their original positions during the hydroformylation reaction, facilitating the adsorption and activation of reactants.

[Figure 3 caption: a) Calculated projected densities of states (PDOS) of VO2(M), VO2(R), Rh1/VO2(M), and Rh1/VO2(R). Reproduced with permission. [26] Copyright 2017, Wiley-VCH. b) Bader charge of Au in AuOx(OH)yNa9 cluster sites. Reproduced with permission. [27] Copyright 2014, American Association for the Advancement of Science. c) Total DOS for one H adsorbed on Pt-MoS2, and PDOS of in-plane and edge S atoms from pure MoS2 and Pt-MoS2. Reproduced with permission. [28] Copyright 2015, Elsevier. d) The profile distributions of the LUMO of Pt1/MoS2. Reproduced with permission. [30] Copyright 2018, Nature Publishing Group.]
Reactant molecules can also induce mobility of metal single atoms and lead to agglomeration into clusters. Rousseau and co-workers theoretically predicted that CO adsorption gives rise to the reconstruction of Au nanoparticles into low-coordinated and mobile AuCO species. [39] This prediction was later confirmed by room-temperature STM experiments. [40] Further research revealed that the Au single atoms linked with O atoms existed only under operando CO oxidation conditions and returned to Au nanoparticles after the completion of the reaction. [41] Parkinson and co-workers demonstrated that the strong interaction between CO and Pt adatoms led to the formation of Pt carbonyls (Pt1CO) and weakened the Pt-O bonds. [42] Interestingly, Pt1CO monomers aggregated into clusters composed of different numbers of Pt atoms under different CO coverages. Based on in situ TEM, Corma and co-workers directly visualized the dynamic, reversible transformation between atomically dispersed Pt species and clusters/nanoparticles during CO oxidation at different temperatures. [43]

[Figure 4 caption (partial): c,d) In situ XPS spectra of C 1s and O 1s for 0.2%Pt/MoS2 and 7.5%Pt/MoS2. e,f) Optimized reaction paths in CO2 hydrogenation for isolated and neighboring Pt monomers on MoS2, respectively. Reproduced with permission. [30] Copyright 2018, Nature Publishing Group.]
Conclusions and Prospects
Single-atom catalysts behave similarly to homogeneous catalysts while retaining heterogeneous catalysts' advantage of recyclability, thereby bridging the huge gap between these two catalyst systems. Recent years have witnessed great progress in the synthesis and characterization of single-atom catalysts. Moreover, recent research has greatly advanced the understanding of how single atoms and supports interact mutually, whether two single atoms work individually or cooperate in synergy, and how single atoms dynamically evolve during catalytic reactions. However, various technical challenges and mechanistic debates remain in single-atom catalysis.
Advanced characterization technologies are always necessary for the investigation of single-atom catalysis. Direct visualization of different metal atoms (e.g., Cu and Zn atoms) with similar atomic numbers still remains a challenge. Besides, in situ techniques are required to bridge the pressure gap for a better understanding of the catalytic process, such as the rate-limiting step, the adsorbed intermediates, and the evolution of single atoms.
Precise control over the number of active metal atoms represents a pivotal prospect for the development of single-atom catalysis. Many reactions, such as C-C coupling, A3 coupling, esterification, etc., require more than one active site. More atoms mean more geometrical patterns, bringing larger challenges in both synthesis and mechanistic studies. Establishing a database of active ensembles containing different numbers of atoms is of vital importance to heterogeneous catalysis.
More attention can be turned to the atoms or ligands coordinated with the metal single atoms. Utilizing metal single atoms to trigger the activity of inert atoms in supports will largely extend the application of single-atom catalysis. Moreover, the study of this field also helps to determine the real active sites during catalytic reactions and to understand the mechanisms in a more comprehensive view.
dipm: an R package implementing the Depth Importance in Precision Medicine (DIPM) tree and Forest-based method
Abstract Summary The Depth Importance in Precision Medicine (DIPM) method is a classification tree designed for the identification of subgroups relevant to the precision medicine setting. In this setting, a relevant subgroup is a subgroup in which subjects perform either especially well or poorly with a particular treatment assignment. Herein, we introduce dipm, a novel R package that implements the DIPM method using R code that calls a program in C. Availability and implementation dipm is available under a GPL-3 licence on CRAN https://cran.r-project.org/web/packages/dipm/index.html and at https://ysph.yale.edu/c2s2/software/dipm. It is continuously being developed at https://github.com/chenvict/dipm. Supplementary information Supplementary data are available at Bioinformatics Advances online.
Introduction
In recent years, there has been a shift in medicine toward the more modern approach known as precision medicine (Ashley, 2016). Traditional evidence-based medicine collects data from meta-analyses and randomized controlled trials, from which mean estimates are derived to infer general recommendations, approximating the 'one size fits all' scenario (Beckmann and Lew, 2016). Precision medicine diverges from the traditional focus on average treatment effects and instead considers what the optimal treatment is for each individual. Moving toward a more targeted approach takes into greater consideration the heterogeneity that exists in patient populations. Overall, the aim of precision medicine is to better deliver safe and effective treatments to patients by identifying the best treatment for each individual. The Depth Importance in Precision Medicine (DIPM) method is a biostatistical approach to realizing the aims of precision medicine (Chen and Zhang, 2020, 2022). The DIPM method is a classification tree method designed to identify subgroups of patients that perform especially well or especially poorly with a particular treatment assignment. Currently, the DIPM method is built for the analysis of clinical datasets with either a continuous (Chen and Zhang, 2020) or right-censored survival outcome (Chen and Zhang, 2022) and two or more treatment groups. Candidate split variables supplied by the user are mined by the method in search of the most important ones. Motivated by the work done by Chen et al. (2007) and Zhu et al. (2017), the DIPM method uses a depth variable importance score to assess the importance of each candidate split variable at each node of the tree. Chen and Zhang (2022) applied the DIPM method to analyze a microarray dataset for breast cancer patients and identified new gene expression subgroups that are statistically meaningful. We have developed the dipm R package, which implements the DIPM method in addition to a method simpler in design with the same research aims. In this application note, we present an overview of the package and illustrate the usage of dipm through real datasets. The Supplementary Material contains a manual (the vignette for the package).
Methods and implementation
The DIPM method is designed for the analysis of clinical datasets with either a continuous or right-censored survival outcome variable Y and two or more treatment assignments (Chen and Zhang, 2020, 2022). Without loss of generality, higher values of Y denote better health outcomes. Note that this is also true for the survival case only when the event of interest is harmful, as longer times to the harmful event are more beneficial. When Y is a right-censored survival outcome, the data must also contain a status indicator d. When d = 1, this indicates that the event of interest has occurred, while d = 0 indicates that an observation is right-censored.
Candidate split variables are also part of the data and may be binary, ordinal or nominal. All of the learning data are said to be in the first or root node of the classification tree, and nodes may be split into two child nodes. Borrowing the terminology used in Zhu et al. (2017), at each node in the tree, a random forest of 'embedded' trees is grown to determine the best variable to split the node. Once the best variable is identified, the best split of the best variable is identified based on a calculated score. A flowchart outlining the general steps of the DIPM algorithm is provided in Figure 1.
In the DIPM method, the depth variable importance score is used to find the best split variable at a node. The score is a relatively simple measure that takes into account only two components: the depth of a node within a tree and the magnitude of the relevant effect. Using depth information makes use of the observation that more important variables tend to be selected closer to the root nodes of trees. Meanwhile, the strength of the split is also taken into account. This second component is a statistic specified depending on the particular analysis and data at hand. Recall that at each node in the overall classification tree, a random forest is constructed to find the best split variable at the node. Once the forest is fit, for each tree T in this forest, the following sum is calculated for each covariate j:

score_T(j) = Σ_{t ∈ T_j} 2^(−L(t)) G_t,

where T_j is the set of nodes in tree T split by variable j, and L(t) is the depth of node t. For example, the root node has depth 1, and the left and right child nodes of the root node have depth 2. G_t captures the magnitude of the effect of splitting node t. Depending on the type of data available, the test statistic G_t will vary. Next, the split criteria used are defined depending on the type of outcome variable and the number of treatment assignments as well. See the Supplementary Material for more details.
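To illustrate, here is a minimal R sketch of this score (our own illustration, not code from the package; it assumes the depths L(t) and statistics G_t for the nodes split by variable j have already been extracted from one embedded tree):

```r
# Depth variable importance for one embedded tree: sum over the nodes
# split by variable j of 2^(-depth) times the split statistic G.
depth_importance <- function(depths, G) {
  sum(2^(-depths) * G)
}

# Hypothetical example: variable j splits the root (depth 1, G = 10.2)
# and one depth-3 node (G = 4.1)
depth_importance(depths = c(1, 3), G = c(10.2, 4.1))  # 5.6125
```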
The dipm package contains two main functions: dipm and spmtree. The dipm function generates classification trees for the precision medicine setting as described above. The spmtree function is designed for the same aim but uses a simpler tree method. This method does not fit a random forest at each node; instead, the more classical approach of considering all possible splits of all candidate split variables is used, and the single split with the highest split criteria score is selected as the 'best' split of the node. For each method, the R code calls a C program to generate each tree. The C backend is used to take advantage of C's higher computational speed in comparison to R. Furthermore, the R package has been designed to remain consistent with existing R package implementations of tree-based methods such as rpart (Therneau and Atkinson, 2018) and partykit (Hothorn and Zeileis, 2015). Maintaining consistent function arguments across packages is helpful so that users can focus on the analysis at hand instead of spending excessive amounts of time deciphering the intricacies unique to each package. In addition, the package contains a pruning function, pmprune, that removes terminal sister nodes with the same optimal treatment. The package also contains the function node_dipm, which is specially designed for subgroup analysis and compatible with the plot method defined in the partykit package. It visualizes stratified treatment groups through boxplots for a continuous outcome and survival plots for a survival outcome, respectively.
Example usage
For both the dipm and spmtree functions, at a minimum, the user must supply a formula and a dataset. For the formula argument, the formula must take the format Y ~ treatment | X1 + X2 for data with a continuous outcome variable Y, and Surv(Y, d) ~ treatment | X1 + X2 for data with a survival outcome variable Y and a status indicator d. A format such as Y ~ treatment | . may be used when all variables in the data, excluding Y, d (if applicable), and the treatment variable, are to be used as candidate split variables. For the data argument, the supplied dataset must contain an outcome variable Y and a treatment variable. If Y is a right-censored survival time outcome, then there must also be a status indicator d. The types argument is optional. When the types argument is missing, the default is to assume all of the candidate split variables are ordinal, which includes numeric variables. If this is not the case, then all of the variables in the data must be specified with a vector of characters in the order that the variables appear. The possible variable types are: 'binary', 'ordinal', 'nominal', 'response', 'status' and 'treatment'. Detailed instructions, examples and returned output can be found in the Supplementary Material.
The weight change data (MASS::anorexia) for young female anorexia patients from the MASS package consist of 72 observations of 3 variables (Venables and Ripley, 2013). The data contain three treatment groups: (i) cognitive behavioral treatment, (ii) control, and (iii) family treatment. PreWeight is the weight in pounds of the patient before the study period. Similarly, PostWeight is the weight of the patient after the study period. In our analysis, we consider PostWeight as the response of interest and fit a tree based on the DIPM method. Figure 2 visualizes the tree using the function node_dipm. For both identified subgroups, family treatment is identified as the optimal treatment. However, the effect of cognitive behavioral treatment versus control is more profound in the subgroup of patients with higher weights before the study than in the subgroup of patients with lower weights before the study. The dataset (TH.data::GBSG2) from the package TH.data contains observations of 686 women from the German Breast Cancer Study Group (Sauerbrei et al., 2000). The treatment is hormonal therapy (0 for no, 1 for yes). Detailed descriptions of the other variables can be found in the Supplementary Material. We fit a survival tree based on the DIPM method and visualize the tree using the function node_dipm in Figure 3. Hormonal therapy is most effective except for patients with progesterone receptor less than 74 fmol and tumor grade I, or with progesterone receptor more than 74 fmol, tumor size greater than 26 mm, and progesterone receptor greater than 320 fmol.
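As an illustration, here is a minimal sketch of how the anorexia analysis above might be invoked (our own sketch: the renamed columns and the integer coding of the treatment arms are assumptions, and the exact arguments should be checked against the package documentation):

```r
library(MASS)  # provides the anorexia data
library(dipm)  # provides dipm() and node_dipm()

data(anorexia)
# Rename columns to match the text and code the three treatment arms
# (CBT, control, family treatment) as a numeric treatment variable
df <- data.frame(PostWeight = anorexia$Postwt,
                 PreWeight  = anorexia$Prewt,
                 treatment  = as.integer(anorexia$Treat))

# Continuous outcome: Y ~ treatment | candidate split variables
tree <- dipm(PostWeight ~ treatment | PreWeight, data = df)
tree
```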
Significance and conclusions
In summary, the dipm R package implements the DIPM classification tree method designed for the analysis of clinical datasets with a continuous or right-censored survival outcome variable and two or more treatment groups. A secondary, additional method is also included in the package that employs a much simpler approach to identifying the best split at a node. Both methods have been carefully evaluated in previous works (Chen and Zhang, 2020, 2022). Furthermore, we provide a plotting function that produces an image of each tree instead of solely a data frame of nodes. Overall, this package delivers a new and handy computational tool that implements the novel DIPM method in the search for subgroups relevant to the precision medicine setting.
CORPORATE GOVERNANCE MANAGEMENT TOWARDS COMPANIES INCLUDED IN THE LQ45 INDEX
This research investigates the effects of managerial ownership, institutional ownership, and the independent board of commissioners on earnings management, both simultaneously and partially. The population of this research is companies listed on the Indonesia Stock Exchange and registered in the LQ45 index from 2014-2018. This is a descriptive study using secondary data from financial reports. The research uses a multiple linear regression technique, with firm size as a control variable. The results show that, simultaneously, managerial ownership, institutional ownership, the independent board of commissioners, and firm size have a significant effect on earnings management. Partially, institutional ownership has no significant effect on earnings management, managerial ownership has no significant effect on earnings management, and the independent board of commissioners has no significant effect on earnings management.
INTRODUCTION
The income statement of a company depicts the real condition of the company's finances in the current year. The income statement should report the facts as they occur, because this helps internal and external parties make decisions. External parties, namely investors, are greatly assisted by clear and factual information in profit reports, while the internal party, namely the company's management, can readily use the information contained in the income statement as a form of accountability to external parties, i.e., investors and readers of the financial statements.
Investors always look at the information contained in the income statement, particularly profit. Profit information is often the target of engineering through opportunistic actions by management seeking to maximize its own satisfaction. These opportunistic actions are carried out by selecting specific accounting policies so that profit can be set, raised, or lowered as desired. Management behavior that adjusts reported profit to suit its wishes is known as profit management.
Profit management is an effort by the management of a company to adjust or influence the information in the financial statements in order to mislead stakeholders who want to know the performance and condition of the company (Sulistyanto, 2014). This action is very detrimental to many parties and, over a relatively long period of time, can disrupt and harm the company. Profit management, or the effort to influence the financial statements, is contrary to the purpose of the financial statements themselves, which, according to the Financial Accounting Standards (SAK), is to provide information concerning the financial position, performance, and changes in the financial position of a company that is useful to a wide range of users of financial statements in making economic decisions.
Agency Theory
Agency theory concerns the conflict that occurs with the separation between the company's owners or shareholders, as the principal, and the company's managers, as the agent. The conflict arises from the difference in interests between the two parties; according to the view of Jensen & Meckling (1976), the principal and the agent each seek to maximize their own prosperity.
Corporate Governance Theory
Corporate governance is the method and procedure used by the board of commissioners and executives to set strategic direction, establish expectations for the achievement of company objectives, monitor and evaluate risk management, and ensure that resources are used responsibly (IFAC, 2012). (Sutojo & Siswanto, 2013) explained that corporate governance is the means or mechanism used to assure providers of capital that they will obtain a return in accordance with their investment.
Institutional Ownership Theory
Institutional ownership is the percentage of shares owned by institutions. Institutional ownership is a tool that can be used to reduce conflicts of interest (Pasaribu, 2016), while according to (May Yuniati, 2016), institutional ownership is the level of shareholding by institutions within a company, measured by the proportion of institution-owned shares at the end of the year, expressed as a percentage.
Institutional ownership is a condition in which institutions hold shares in a company. These institutions can be government institutions or private institutions, domestic or foreign (Widarjo, 2010). According to (Widiastuti, 2013), institutional ownership is shareholding by external institutions. The institutional ownership structure is the percentage of shares owned by institutional parties out of the total shares of the company in circulation. Institutional parties include insurance companies, banks, investment firms, and other institutional owners. Institutional ownership here does not include shares held by the company's parent.
Managerial Ownership Theory
Managerial ownership is the percentage of votes attached to the shares and options owned by the board of directors and managers of the company (Boediono, 2005). The proportion of shares held by the company's management influences managers' behavior in managing the company's financial statements: if managers participate in share ownership, they will act like shareholders, which improves the quality of the financial statements (Mahariana, 2014). According to Jensen and Meckling (in Hikmah, 2013), profit management emerges because of an agency conflict, that is, a difference of interest between the owner (principal) and the management of the company (agent).
Independent Board of Commissioners
The independent board of commissioners consists of persons appointed to represent independent (minority) shareholders; they are appointed not in the capacity of representing any particular party but solely on the basis of their knowledge, experience, and professional background, to fully perform their tasks in the interest of the company (Agoes, 2014). Based on Financial Services Authority Regulation No. 55/POJK.04/2015, an independent commissioner is a member of the board of commissioners originating from outside the public company who has fulfilled the requirements stipulated in Financial Services Authority Regulation No. 33/POJK.04/2014.
Profit Management Theory
Profit management is the ability to "manipulate" the options available and make the right choices to reach the expected profit level (Stice, 2014). According to (Weston, 2014), "profit management is an intervention in the process of external financial reporting with the intention of obtaining personal gains." From the above, profit management can be defined as management action in the external financial reporting process, taken for the personal benefit of the manager or the company, by raising, smoothing, or lowering the reported profit of the unit for which management is responsible.
HYPOTHESIS DEVELOPMENT
Institutional ownership is company stock owned by institutions (insurance companies, banks, investment firms, and other institutional owners). The existence of institutional ownership is believed to provide a monitoring mechanism aimed at aligning the various interests within the company (Maharani, 2014). High institutional ownership can minimize profit management practices. Based on the above explanation, the first hypothesis in this study is:
H1: Institutional ownership has a negative and significant influence on profit management
Managerial ownership places management in a position equal to that of the company's owners, which can align or unite the interests of management and shareholders, so that management will act just like investors in general and will not engage in profit management; the presence of managerial ownership in the company is thus expected to reduce such management actions. Research has found that managerial ownership has a significant negative influence on profit management. Based on the explanation above, the second hypothesis in this study is:
H2: Managerial ownership has a negative and significant influence on profit management

Profit management can be minimized with better monitoring mechanisms. The independent board of commissioners is believed to improve corporate supervision. Including commissioners from outside the company increases the effectiveness of the board in supervising management, preventing the misstatement of financial statements or fraud in their presentation.
H3: The independent board of commissioners has a negative and significant influence on profit management
RESEARCH METHOD
This research was designed as hypothesis-testing research, conducted by combining cross-sectional and time-series studies. The population is companies included in the LQ45 index listed on the Indonesia Stock Exchange for the years 2014-2018. The sample was selected by the purposive sampling method, with the following criteria: 1. The sample consists of companies included in the LQ45 index on the Indonesia Stock Exchange each year from 2014 to 2018. 2. The sample consists of companies included in the LQ45 index that publish a full annual report, containing financial statements and audit reports, to the public.
Research Object
Based on these criteria, 33 companies in the LQ45 index met the purposive-sampling requirements, yielding 165 firm-year observations over the five-year research period. From Table 2, the VIF value of each independent variable is less than 10; accordingly, no independent variable shows symptoms of multicollinearity and the independent variables are not correlated with one another.
Heteroskedasticity Test
The research variables are measured as follows: 1. Institutional ownership is proxied by the ratio of shares owned by institutions to total shares outstanding: Institutional ownership = (shares held by institutions / shares outstanding) × 100%. 2. Managerial ownership is proxied by the ratio of shares owned by management to total shares outstanding: Managerial ownership = (shares held by management / shares outstanding) × 100%. 3. The independent Board of Commissioners is proxied by the ratio of independent commissioners to the total number of commissioners: Independent commissioners = (number of independent commissioners / total commissioners) × 100%. Table 2 presents descriptive statistics that give an overview of the characteristics of the research variables. The data are normally distributed and have passed the classical-assumption tests, including the multicollinearity, heteroskedasticity, and autocorrelation tests. From Table 3, the Durbin-Watson statistic for all independent variables lies in the region DU < DW < 4-DU, so it can be concluded that no variable in the panel-data research exhibits autocorrelation.
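To make the three proxy definitions concrete, the following sketch computes each ratio; all counts are hypothetical illustration values, not data from the study:

```python
# Illustrative computation of the three ownership proxies described above.
# All share and board counts are hypothetical example values.

def ownership_ratio(shares_held: int, shares_outstanding: int) -> float:
    """Ownership proxy: holder's shares over total shares outstanding, in percent."""
    return shares_held / shares_outstanding * 100

institutional = ownership_ratio(6_500_000, 10_000_000)   # 65.0%
managerial = ownership_ratio(250_000, 10_000_000)        # 2.5%

# The independent-commissioner proxy is a board ratio, not a share ratio:
independent_commissioners, board_size = 2, 5
independent_ratio = independent_commissioners / board_size * 100  # 40.0%

print(institutional, managerial, independent_ratio)
```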
Hypothesis Testing Results
With respect to the first hypothesis, institutional ownership has no effect on profit management. The t-test gives institutional ownership a t-statistic of 1.1198 with a significance probability of 0.2645, which is greater than 0.05 (p > 0.05). Higher institutional share ownership should reduce fraudulent manipulation of financial statements, but a large institutional stake cannot by itself be taken as a guarantee of lower profit management: if institutional shareholders perform poorly or do not carry out their monitoring function, profit management or manipulation of the financial statements can still occur for lack of institutional supervision over management.
With respect to the second hypothesis, managerial ownership has no effect on profit management. The t-test gives managerial ownership a t-statistic of -1.2319 with a significance probability of 0.2199, which is greater than 0.05 (p > 0.05). In principle, managerial ownership should reduce profit management, because owner-managers tend to be cautious and prepare financial statements as fairly as possible under accounting-standard rules, reducing manipulation of the reports for the benefit of investors and of themselves; the data, however, do not support a significant effect.
With respect to the third hypothesis, the independent Board of Commissioners has no effect on profit management. The t-test gives the independent Board of Commissioners a t-statistic of 1.4729 with a significance probability of 0.1431, which is greater than 0.05 (p > 0.05).
The size of the shareholding held by independent parties does not necessarily affect how management drafts the financial statements, and does not necessarily reduce profit management. A large proportion of independent commissioners does not guarantee strong performance of the supervisory role, and conversely a small proportion does not imply poor performance; what matters is whether the Commissioners' supervisory function is carried out to the fullest. The proportion of independent commissioners is therefore not by itself a reference for whether a company manages profit. Accordingly, in this research the independent Board of Commissioners has no influence on profit management.
CONCLUSION
The variables were used to detect profit management in the financial reporting of companies included in the LQ45 index on the IDX in 2014-2018. It can be concluded that institutional ownership, managerial ownership, and the independent Board of Commissioners, together with firm size as a control variable, simultaneously affect profit management and explain 7.30% of profit management measures, while the remaining 92.70% is explained by factors outside the study.
Partially, however, the independent variables (institutional ownership, managerial ownership, and the independent Board of Commissioners) do not affect profit management; only the control variable affects profit.
"Business",
"Economics"
] |
MultiStage Authentication to Enhance Security of Virtual Machines in Cloud Environment
The adoption of cloud computing in different areas has shown benefits and provided solutions for applications. The cloud provider offers virtualized platforms, through virtual machines, on which cloud users store data and perform computations. Due to the distributed nature of the cloud there are many challenges, and security is one of them. To address this challenge, a verification method is implemented to achieve a high level of security in the cloud environment. Many researchers have provided different authentication mechanisms to safeguard virtual machines from attacks. In this paper, Multi Stage Authentication is proposed to overcome threats from attackers targeting virtual machines. To authorize access to a virtual machine, multistage authentication incorporating factors such as username, email id, password, and OTP is carried out. A Mealy machine model is applied to analyze the state changes as factors are supplied at successive stages and trust is built at each stage. Experimental results show that the system is safe, achieving data integrity and privacy. The proposed work protects against unauthorized users and provides a secure environment for cloud users accessing virtual machines.
Keywords—Authentication; multi stage authentication; one time password; finite state machine; mealy machine
I. INTRODUCTION
In the cloud environment, many users deploy applications, and these applications are accessed by several users. Users' dependence on the cloud is increasing day by day [1] as the investment required is lower, which makes the cloud environment a growing target for security issues [2]. Illegal access, misuse of data, and the hacking of assets by malicious users are threats that must be addressed with priority. Proper authentication has to be in place to safeguard against these attacks [3]. Traditional authentication mechanisms such as password-based login suffer from security problems: password hijacking, stealing, and phishing attacks [4] are threats that burden the cloud environment. Hence resource access by attackers has to be prevented by good authentication.
Many well-known cloud computing environments such as Google, Amazon, and Microsoft have already adopted multifactor authentication. The major usability constraint falls on the users, who must make extra effort to log in by providing more factors. A further concern is safeguarding users' credentials, which are shared with the cloud environment in order to access its services. Cloud users perform computations using virtual machines, where they store data and continue working until completion. Virtual machines belonging to different users are stored on the same host, so the security of the virtual machine (VM) has to be treated with utmost importance. Before a VM is granted to or accessed by a cloud user, authentication has to be carried out. Authentication [5] helps prove the trustworthiness of cloud users. Single-factor authentication suffers from the problem that a forgotten or lost password completely prevents the legitimate user from accessing resources in the cloud. Multistage Authentication (MSA) gives an additional layer of security for accessing resources in the cloud, and the cloud provider gains extra security on top of the service level agreement. The first step for cloud users is signing the service level agreement with the cloud provider; the next is multistage authentication to access the cloud. Multistage authentication thus prevents attacks by compromised users. Compared with normal authentication, multistage authentication offers advantages such as increased security in the cloud environment and the exclusion of unauthorized users from the system.
A. Motivation
The main objective of the paper is to protect VMs in the cloud environment from illegal access and data theft using MSA. MSA in the cloud environment considers more than one factor from the cloud user's credentials, making authentication stronger: even if an attacker obtains one factor, gathering all the factors needed to enter the system is not easy. MSA offers a robust method of authenticating cloud users and provides an effective solution for authentication, delivering a high degree of security in the virtualized cloud environment and protection against the many cyber-attacks that occur. Using multiple factors [6] adds a step on the way to the sensitive and confidential data stored in the cloud provider's domain. It is common for many users to access virtual machines hosted on the same physical host, so it is the cloud provider's responsibility to meticulously control access to the virtual machines with an appropriate authentication mechanism. A number of incidents involving data theft and DoS attacks have been observed in the cloud.
The main research question considered is how registered users can access virtual machines in the cloud environment without hassle.
B. Contribution
The paper starts with the theoretical concepts of authentication and state machines, which provide a strong model for overcoming illegal access and protecting virtual machines from attacks. Authentication requirements and different authentication approaches, with their pros and cons, are explored in the paper.
Mealy Machine is presented to analyze the authentication process performed by the legitimate user to build the trusted environment and prevent the unauthorized user at all states.
The approach uses MSA to allow users to access VMs for completing the tasks assigned by their organization.
To protect every user's credential in the cloud provider's domain, robust MSA approach is applied guaranteeing the integrity and privacy.
This paper is organized as follows: Section 2 gives the background, Section 3 describes related work, Section 4 presents the proposed approach, Section 5 gives the evaluation of the algorithm, Section 6 explains the results and discussion, and Section 7 concludes the paper.
A. Traditional Authentication
Authentication [7] is the technique of proving the identity of a user accessing a system by providing details such as a password and username. Traditionally, single-factor authentication was used to enter a system, for example with an access card. Virtually every user obtaining a service from a provider uses password-based authentication [3], and this password-based approach is used across different applications and hosts. Passwords are a widely accepted mechanism because they involve no major complications: users memorize the password and supply it whenever authentication is required. Passwords can be plain text or combinations of various characters, including special characters and numbers. Users have suffered many attacks due to weak passwords, and carelessly chosen passwords can be cracked by attackers. Password attacks include the dictionary attack, brute-force attack, session hijacking, and so on; such attacks disrupt the normal functioning of the cloud environment. In the usual scenario, users register for the service with a certain password, which is stored on the cloud server. A claimant then has to provide the password to prove that they are an authorized user; if it matches the stored password, the claimant is authenticated. Systems usually advise choosing a strong password.
B. Authentication
A user has to prove legitimacy, and this is done using authentication; the usual method is a username and password to prove the user's identity. With advances in network security measures, two-factor and multifactor authentication [8] have been applied to defend against illegal users.
Authentication prevents unauthorized access [5] to sensitive information. Without it, an attacker may gain access to virtual machines and tamper with the stored information [9], which constitutes an integrity threat.
Password authentication: the user supplies a password composed of characters, symbols, and numbers. Users have to create strong passwords to avoid attacks, but many keep simple passwords to avoid remembering long, cryptic ones and are therefore at risk of password attacks.
Certificate authentication: user identity is confirmed by a digital certificate issued by a certification authority (the Aadhar card is a well-known identity example). Users present the digital certificate when using services or resources from a server; once the server verifies the certificate, the user is deemed legitimate.
Biometric authentication: the user is identified by biological characteristics. In some private firms, door access or system login is granted using biometric factors; a biometric feature can also be added as one of the factors in multifactor authentication.
Token-based authentication: user credentials are maintained and users receive tokens through one of those credentials; they present the tokens to prove their identity.
Multifactor authentication: users provide more than one factor to authenticate themselves to the server and access resources. Multifactor authentication (MFA) can take several factors at the same time or across multiple levels, and this method of authentication protects the system from various threats.
Among these authentication mechanisms [13], multifactor authentication is applied here as one of the most promising approaches. The multifactor authentication mechanism [6] defends against attacks with extra care. A factor is what a user provides to claim who they are. Consider an employee entering an organization by swiping a card: how do the doors know the person is authorized? Because the employee holds a smart access card usable for authentication. The card's integrated chip controls access to the office environment and normally stores the user's authentication data, user identification, and application data.
The different types of factors collected [14] from the user for authentication are: Knowledge factor: something the user knows, such as a password. This factor is shared between the user and the provider; once the user chooses it, it is stored in the provider's database server and validated each time the user enters it.
Possession factor: something the user has, such as a mobile phone or another device; it can carry a one-time password, or be a smart card or security token. If the user loses the device, it becomes difficult to authenticate the legitimate user.
Inherence factor: something the user is, such as a biometric feature, voice, or fingerprint; biometric factors are unique to each user.
The advantages of multifactor authentication [15] are: Improved security: system security is enhanced by introducing multifactor authentication, with each additional layer of authentication adding to the security.
Compliance: the necessary conditions of the organization are satisfied.
Flexibility: the options for authentication improve with more factors compared to traditional password authentication.
III. RELATED WORK
Ometov et al. [15] discuss multifactor authentication, starting from single-factor authentication. They explore different authentication methods, applications, and the challenges involved in implementing multifactor authentication, identifying operational concerns such as usability, robustness, integration, and security, and they describe the benefits of MFA for security. The authors propose a reversed approach in which the factors obtained from the users carry secrets such as a fingerprint or PIN. Considering n factors I_1, ..., I_n with corresponding secrets S_e1, ..., S_en, the factor-secret pairs can be written as
I_1 : S_e1, ..., I_n : S_en.
Secrets are provided by the user for authentication so that they can enter the system. Suppose there are four factors and the user forgets one of them: a trusted cloud party helps recover that factor so the user can still enter the system. Some biometric factors, such as fingerprints and faces, change over time, and the trusted party supports updating the feature in the database. A decision policy determines whether the user is authenticated.
B. B. Gupta et al. [16] propose a model for identity-based access control and mutual authentication using smart cards. The approach has five phases, from registration through authentication, including the updating of credentials, and applies hash functions to the data. It mitigates unauthorized access and eavesdropping and provides single sign-on with smart cards, and it also defends against DoS attacks, fake identities, and illegal use of smart cards.
C. Singh and T. Deep Singh [17] propose MFA based on three levels of authentication. At the first level, the login and password are stored with double encryption using SHA-1 and AES. The second level uses out-of-band authentication: after the first level, the server sends an OTP to the registered email, and the user proves legitimacy by providing the OTP to the server. At the third level, the user must click a certain number of images and buttons on the screen to be authenticated. The approach protects against attacks such as man-in-the-middle, brute force, and password guessing.
A. Bhanushali et al. [18] give a good overview of different authentication algorithms with respect to security, usability, space, and storage. They describe image-based algorithms such as draw-a-secret, grid selection, and déjà vu. In the draw-a-secret technique, the user is shown a drawing and must reproduce it by redrawing; in grid selection, the user is given a small grid and must draw a pattern for authentication; déjà vu is based on a seed value generated by the trusted server for the user, which the user must use to prove identity at authentication time. The authors' conclusion is that graphical approaches are more secure than textual ones.
Multilevel authentication [19] has been presented to enhance security for electronic devices. Three levels of security checks are performed to authenticate the legitimate user: the first uses a normal password, the second carries out biometric authentication, and the last is performed with an accelerometer.
The approach proposed in this paper does not need any trusted third party; the interaction is solely between the cloud provider and the cloud user, and with the MSA approach the user does not need to go through a sequence of images.
Some users may not be comfortable with drawings; here the user simply proceeds through the stages and provides input. The proposed work is therefore user-friendly and provides security with features such as confidentiality and privacy. The approaches implemented by various researchers, along with the security parameters they address, are given in Table II.
TABLE II. IMPLEMENTED AUTHENTICATION APPROACHES WITH SECURITY PARAMETERS
Authors | Description | Security parameters addressed
Ometov et al. [15] | Multifactor authentication | Highlights the operational concerns of robustness, security, and integration.
B. B. Gupta et al. [16] | Smart card authentication | Protection against unauthorized access, eavesdropping, and DoS attacks.
C. Singh and T. Deep Singh [17] | MFA based on three-level authentication | Defends against man-in-the-middle, brute-force, and password attacks.
A. Bhanushali et al. [18] | Survey of authentication algorithms | Graphical approaches provide better security.
A. Dinakar et al. [19] | Multilevel authentication | Three levels of security checks for authentication.
IV. PROPOSED APPROACH
The objective of the proposed work is to implement secure access to virtual machines using Multistage Authentication, in which more than one factor is considered. As the discussion of authentication mechanisms shows, a system using multiple factors is less prone to attacks.
A. Adversary Scenario
Attackers may try to gather the information stored on the cloud server [20] or to spoof and access VMs from the cloud provider. In Fig. 1, legitimate users are the authenticated users and the attacker [21] is the party trying to intrude into the system. In normal authentication, a login and password are enough to obtain the VMs; this traditional approach gives attackers opportunities to damage security properties such as data integrity, confidentiality, and availability [22].
A formulation for minimizing attacks on the virtual machines is obtained using automata theory. The model used to realize the approach is the Mealy machine; the details of the state machine and a use case of the Mealy machine are given below.
B. Introduction to State Machine
A state machine is a machine that works based on the behavior of the system: the output depends on the user's input. A machine with a finite number of states is known as a finite state machine. Consider the simple example of a light controlled by a switch: when the user presses the button the state changes to on, and otherwise it is off. This machine has two states, switch-on and switch-off, as depicted in Fig. 2.
C. Finite State Machine
A finite state machine is a computational model [23] with a defined number of states; the states represent the status of the system at a given instant. Consider a traffic signal: according to the light shown, pedestrians cross the road and vehicles navigate the traffic. This is a classic example of a finite state machine. A finite state machine contains a starting state, accepting states, and a final state; the output is either accept or reject, and a transition of state takes place on a specific input. All input symbols are drawn from the alphabet.
Formal definition of a finite state machine: it is represented by the triple shown in equation (1):
F = (V, I, t_r)    (1)
where V and I are non-empty finite sets and t_r : V × I → V is the state transition function. A Mealy machine is an FSM whose output depends on the present state and the input symbol; the system here is modeled with a Mealy machine.
D. Overview of Mealy Machine
A Mealy machine is a finite state machine [24][25] in which the output depends on the current input symbol and the current state. The simple Mealy machine [25] shown in Fig. 3 has one input driving the transition to the next state; only two states, Q0 and Q1, are present, with input symbols 0 and 1 and outputs 0 and 1. Mealy machines have fewer states than Moore machines, are secure to use, and respond to inputs faster. Mealy machines are used here so that trust increases at each level and only legitimate users are granted the VMs.
E. Use case with Mealy Machine
The factors considered for authentication are email id, password, phone number, and one-time password, and the Mealy machine exposes the trust chain. If the input at any stage is wrong, trust is broken, and a broken trust chain clearly indicates an attack performed by an intruder. The factors are provided as input symbols to the system: based on the factor and the present state, the user progresses to the next state, as shown in Fig. 4. First, the username and email id form the input symbol for a state change; the next input symbol is the password, followed by the phone number and the OTP. The system is multistage (multi-level), with stages that only a legitimate user can clear. Upon correct entry of the email, a password is generated and sent to the valid email address, and the user can log in with it. The user has to pass through the four stages to reach the final stage shown in Fig. 4; at every stage, both the input and the current stage matter for advancing to the next stage. Upon receipt of matching symbols (email id, phone number, password, and OTP), a transition to the next level takes place. The output states are 0 and 1: 0 represents failure with no transition, and 1 represents success with a transition to the next state. Three attempts are allowed in the proposed approach. As Fig. 4 shows, an intruder who attempts an attack at any stage cannot advance further or access the virtual machine.
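A minimal Python sketch of this Mealy-style trust chain follows; the stage names, the three-attempt limit, and the `read_input` callback are illustrative stand-ins, not the authors' implementation:

```python
# Sketch of the multistage trust chain as a Mealy machine: output 1 means
# "transition to the next state", output 0 means "no transition".
STAGES = ["email_username", "password", "phone_number", "otp"]
MAX_ATTEMPTS = 3

def authenticate(read_input, expected):
    """Advance through states V1..V5; three failures at a stage reach the trap state."""
    state = 0  # V1, the initial state
    for stage in STAGES:
        for _ in range(MAX_ATTEMPTS):
            # Mealy machine: the output depends on the current state AND input.
            if read_input(stage) == expected[stage]:
                state += 1  # output 1: transition V_i -> V_{i+1}
                break
            # output 0: wrong input, no transition; retry
        else:
            return "trap state"  # trust broken: access denied
    return "VM granted"  # final state V5 reached

# Example: a legitimate user supplying the right factor at every stage.
secrets_on_file = {"email_username": "alice@example.com", "password": "4821",
                   "phone_number": "5551234", "otp": "90210"}
print(authenticate(lambda stage: secrets_on_file[stage], secrets_on_file))
```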
F. Security Analysis with Mealy Machine
Attacks can be viewed and analyzed using automata theory [26][27]. The proposed MSA approach protects against the attacks [28] discussed below. Replay attack: even if an attacker somehow gathers the email id and username, guessing the secret password and obtaining the OTP is not possible, so the attacker cannot penetrate the cloud environment and access the virtual machines. With the Mealy machine, a wrong input at any stage prevents the state change; between V1 and V5, if the attacker gathers some factor and applies it at a random point, authentication cannot succeed because the Mealy machine also depends on the current state.
Spoofing attack: an attacker tries to impersonate a user in order to obtain the virtual machines. A fresh OTP is generated every time and is not easy for the attacker to obtain, and there are multiple factors to guess rather than just the login and password of traditional systems. An intruder who captures the email id and username and supplies a random password to authenticate himself is not accepted, because the real password is delivered to the legitimate user's email.
Data-theft resistance: the approach prevents data theft because illegal access does not happen. An attacker who tries to intrude at any stage has only three attempts; upon failure of the third and last attempt, the machine moves to the trap state. Brute-force attack: an intruder attacking any stage with partial information cannot succeed in entering the system, because there are multiple levels and a wrong input at any level prevents the state change.
Man-in-the-middle attack: the system is resistant to man-in-the-middle attacks.
From any intermediate state V_i ∈ V, where i ∈ {1, 2, 3, 4, 5}, it is not possible to enter the system directly. To validate the Mealy machine model, a simulation is performed and evaluated. The factors considered, the algorithm steps, and the implementation details are presented below.
G. Methodology
Cloud user credentials are collected by the cloud provider during the registration phase. When users want to access the resources, authentication is performed. Here multistage authentication is used by the cloud provider. The factors considered for authentication in this approach are presented below.
Email id: the user registers with a username and email id. A unique hash code is created for each user using the MD5 algorithm; MD5 [29] is used to hash the 4-digit random password generated for the user. The username and the 4-digit plain-text password are sent to the registered email id. The user logs into the email account and notes the unique password provided in the body of the message. A link is also sent to the registered email id, and the user must click it to activate the account and reach the login page.
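The registration step can be sketched as follows; the function name is illustrative, and MD5 appears only because the paper specifies it (MD5 is not collision-resistant and would not be a recommended choice today):

```python
import hashlib
import secrets

def issue_password() -> tuple[str, str]:
    """Generate a 4-digit random password and its MD5 digest, as in the
    registration step above: the plain text is emailed to the user, while
    only the digest needs to be kept server-side."""
    password = f"{secrets.randbelow(10_000):04d}"        # 4-digit random password
    digest = hashlib.md5(password.encode()).hexdigest()  # MD5, per the paper
    return password, digest

plain, stored = issue_password()
assert hashlib.md5(plain.encode()).hexdigest() == stored
```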
Username and password: once the login page appears, the user enters the username and password sent via email, and at this point also enters a contact number for the next level, OTP verification. If the username and password match the information stored in the cloud provider's database, the second factor is verified.
One-time password (OTP): once the user clicks the generate-OTP button after providing the phone number, a random 5-digit OTP is generated and sent to that phone number. The user enters the OTP, which is verified against the temporarily stored OTP; if verification succeeds, the user is granted access to the VM.
The multistage authentication shown in Fig. 5 is explained step by step in the algorithm below. In the first stage, the email id and username are provided; the cloud provider verifies them in the database and sends the password by email. Using the password, the user can log in and provide the phone number for receiving the OTP; this phone number is validated in the database and the OTP is sent. All these steps are shown in Fig. 5. The stages of authentication are the login, authentication, and verification phases, after which the user is granted the requested virtual machines, as depicted in Fig. 6.
1) Login Phase
Step 1: The cloud user requests allocation of a virtual machine from the cloud provider with whom the SLA has been signed, sending the username and email id.
Step 2: The cloud provider generates a four-digit password, stores its hash, and sends the password to the cloud user, along with an activation link to the user's email id.
Step 3: User clicks the link to activate his account in the cloud environment.
2) Authentication Phase
Step 1: User enters the password provided by the cloud provider in to the cloud system.
Step 2: Cloud Provider asks the user to enter the valid phone number.
Step 3: Cloud User Provides the phone number.
Step 4: The cloud provider generates a random five-digit one-time password, valid for 60 seconds, and sends it to the cloud user's phone.
Step 5: Cloud User has to enter the OTP to access the virtual machine allocated for him by the cloud provider.
3) Verification Phase
Step 1: OTP generated is stored in cloud provider's database.
Step 2: Cloud User entered OTP is compared with stored OTP.
Step 3: If the two match, the cloud user is granted the virtual machine.
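The OTP-related steps of the authentication and verification phases can be sketched as follows; the helper names are illustrative, and the 5-digit format and 60-second validity come from the steps above:

```python
import secrets
import time

OTP_TTL_SECONDS = 60  # validity window stated in the authentication phase

def generate_otp():
    """Authentication phase, step 4: a random 5-digit OTP plus its issue time."""
    return f"{secrets.randbelow(100_000):05d}", time.monotonic()

def verify_otp(entered: str, stored: str, issued_at: float) -> bool:
    """Verification phase: compare the entered OTP with the stored one,
    rejecting expired codes; compare_digest avoids timing leaks."""
    if time.monotonic() - issued_at > OTP_TTL_SECONDS:
        return False
    return secrets.compare_digest(entered, stored)

otp, t0 = generate_otp()         # sent to the user's phone in the real flow
print(verify_otp(otp, otp, t0))  # True -> VM granted
```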
V. EVALUATION OF ALGORITHM
The simulation considers many users and one cloud provider. The authentication system is implemented using PHP, HTML, and CSS at the front end, with MySQL and the XAMPP web server [30] solution stack at the back end; the backend database stores the user details. The system has three modules: registering user credentials (username and email id), the login page, and the OTP page. The algorithm uses several factors for authentication: even if an adversary guesses one factor, retrieving all of them is not easy, and the OTP is valid only for a limited time, so breaking it within that window is hard. The user signs up to access the cloud provider as shown in Fig. 7. After registration, the username and unique password are sent to the user's registered email id, as shown in Fig. 8. The user then knows the unique password and can access the login page shown in Fig. 9. The user receives the OTP on the valid phone number and enters it as shown in Fig. 10.
When the user submits the OTP, it is verified against the value stored in the cloud provider's database; once it is confirmed, the multistage authentication is complete and the user has passed all authentication checks. Because the OTP is regenerated every time, there is no standing credential through which an attacker could gain access to the cloud environment.
VI. RESULT AND DISCUSSION
Multistage authentication was checked over different time slots, recording successful and failed attempts; the successful logins correspond to legitimate users. Testing and login analysis are carried out using JMeter [31], an open-source Java-based tool for load and performance testing. Some of the failed attempts are intruders trying to enter the system by brute force. The system was tested for 10 to 100 users and was found to resist the attacks. The accuracy of the system is calculated using the formula below.
Accuracy = (Sl / Tl) × 100, where Sl is the number of successful logins and Tl is the total number of logins. The graph representation is shown in Fig. 11.
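As a worked example of this formula (the counts are hypothetical, not results from the study):

```python
def login_accuracy(successful_logins: int, total_logins: int) -> float:
    """Accuracy = (Sl / Tl) * 100, per the formula above."""
    return successful_logins / total_logins * 100

print(login_accuracy(93, 100))  # 93.0, i.e. 93% for an example test window
```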
The experiment is run for one hour, three hours, and six hours, and the overall login statistics of users are given in Table IV. The graph in Fig. 12 indicates that the MSA approach provides legal access to the cloud environment: after the SLA contract with the cloud provider, cloud users can request resources through MSA, which adds one more layer of security beyond the SLA. The system does not allow unauthorized access; hence the approach provides privacy and protection against intruders trying to access resources illegally.
VII. CONCLUSION
Cloud computing is a technology with many benefits for cloud users in terms of cost, accessibility, and scalability. In spite of these advantages there are many challenges, and security is one that must be addressed. To protect against attacks launched by illegal users, an MSA mechanism is applied. Different authentication mechanisms were discussed, and an adversary scenario showed how an attacker might gain access to cloud resources and disrupt the regular functioning of the cloud environment. The approach is validated with Mealy machine theory: the Mealy machine representation captures the stage changes and the flow of trust from one stage to another, making it possible to detect unauthorized access. MSA uses the factors username, email id, phone number, and OTP, and even registered users must authenticate every time they want to access the virtual machines. The proposed approach protects against attacks such as spoofing, replay, and data theft, and the results show how strong the authentication mechanism is with respect to the number of authenticated logins and the time duration. With the approach implemented, it is quite unlikely that unauthorized users gain access to a virtual machine meant for legitimate users. The security benefit comes with an overhead: the user must pass through multiple stages to authenticate and access the VM whenever it is required.
In future work, it is planned to consider roles when granting access to the virtual machines in the cloud environment.
"Computer Science",
"Engineering"
] |
A study of vectorization for matrix-free finite element methods
Vectorization is increasingly important to achieve high performance on modern hardware with SIMD instructions. Assembly of matrices and vectors in the finite element method, which is characterized by iterating a local assembly kernel over unstructured meshes, poses challenges to effective vectorization. Maintaining a user-friendly high-level interface with a suitable degree of abstraction while generating efficient, vectorized code for the finite element method is a challenge for numerical software systems and libraries. In this work, we study cross-element vectorization in the finite element framework Firedrake via code transformation and demonstrate the efficacy of such an approach by evaluating a wide range of matrix-free operators spanning different polynomial degrees and discretizations on two recent CPUs using three mainstream compilers. Our experiments show that our approaches for cross-element vectorization achieve 30% of theoretical peak performance for many examples of practical significance, and exceed 50% for cases with high arithmetic intensities, with consistent speed-up over (intra-element) vectorization restricted to the local assembly kernels.
Introduction
The realization of efficient solution procedures for partial differential equations (PDEs) using finite element methods on modern computer systems requires the combination of diverse skills across mathematics, programming languages and high-performance computing. Automated code generation is one of the promising approaches to manage this complexity. It has been increasingly adopted in software systems and libraries. Recent successful examples include FEniCS (Logg et al. 2012), Firedrake (Rathgeber et al. 2016) and FreeFem++ (Hecht 2012). These software packages provide users with high-level interfaces for high productivity while relying on optimizations and transformations in the code generation pipeline to generate efficient low-level code. The challenge, as in all compilers, is to use appropriate abstraction layers that enable optimizations to be applied that achieve high performance on a broad set of programs and machines.
One particular challenge for generating high-performance code on modern hardware is vectorization. Modern CPUs increasingly rely on SIMD instructions to achieve higher throughput and better energy efficiency. Finite element computation requires the assembly of vectors and matrices which represent differential forms on discretized function spaces. This process consists of applying a local function, often called an element kernel, to each mesh entity, and incrementing the global data structure with the local contribution. Typical local assembly kernels suffer from issues that can preclude effective vectorization. These issues include complicated loop structures, poor data access patterns, and short loop trip counts that are not multiples of the vector width. As we show in this paper, general purpose compilers perform poorly in generating efficient, vectorized code for such kernels. Padding and data layout transformations are required to enable the vectorization of the element kernels (Luporini et al. 2015), but the effectiveness of such approaches is not consistent across different examples. Since padding may also result in larger overheads for wider vector architectures, new strategies are needed as vector width increases for the new generation of hardware.
Matrix-free methods avoid building large sparse matrices in applications of the finite element method and thus trade computation for storage. They have become popular for use on modern hardware due to their higher arithmetic intensity (defined as the number of floating-point operations per byte of data transfer). Vectorization is particularly important for computationally intensive high order methods, for which matrix-free methods are often applied. Previous works on improving vectorization of matrix-free operator application, or equivalently, residual evaluation, mostly focus on exposing library interfaces to the users. Kronbichler and Kormann (2017) first perform a change of basis from nodal points to quadrature points, and provide overloaded SIMD types for users to write a quadrature-point-wise expression for residual evaluation. However, since the transformation is done manually, new operators require manual reimplementation. Knepley and Terrel (2013) also transpose to quadrature-point basis but target GPUs instead. Both works vectorize by grouping elements into batches, either to match the SIMD vector length in CPUs or the shared memory capacity on GPUs. In contrast, Müthing et al. (2017) apply an intra-kernel vectorization strategy and exploit the fact that in 3D, evaluating both a scalar field and its three derivatives fills the four lanes of an AVX2 vector register. More recently, Kempf et al. (2018) target high order Discontinuous Galerkin (DG) methods on hexahedral meshes using automated code generation to search for vectorization strategies, while taking advantage of the specific memory layout of the data.
In this work, we present a generic and portable solution based on cross-element vectorization. Our vectorization strategy, implemented in Firedrake, is similar to that of Kronbichler and Kormann (2017) but is fully automated through code generation like that of Kempf et al. (2018). We extend the scope of code generation in Firedrake to incorporate the outer iteration over mesh entities and leverage Loopy (Klöckner 2014), a loop code generator based loosely on the polyhedral model, to systematically apply a sequence of transformations which promote vectorization by grouping mesh entities into batches so that each SIMD lane operates on one entity independently. This automated code generation mechanism enables us to explore the effectiveness of our techniques on operators spanning a wide range of complexity and systematically evaluate our methodology. Compared with an intra-kernel vectorization strategy, this approach is conceptually well-defined, more portable, and produces more predictable performance. Our experimental evaluation demonstrates that the approach consistently achieves a high fraction of hardware peak performance while being fully transparent to end users.
The contributions of this work are as follows:
• We present the design of a code transformation pipeline that permits the generation of high-performance, vectorized code on a broad class of FEM models.
• We provide a thorough evaluation of our code generation strategy and demonstrate that it achieves a substantial fraction of theoretical peak performance across a broad range of test cases.
The rest of this paper is arranged as follows. After reviewing the preliminaries of code generation for the finite element method in Section 2, we describe our implementation of cross-element vectorization in Firedrake in Section 3. In Section 4, we demonstrate the effectiveness of our approach with experimental results. Finally, we review our contributions and identify future research priorities in Section 5.
Preliminaries
The computation of multilinear forms using the basis functions spanning the discretized function spaces is called finite element assembly. When applying matrix-free methods, one only needs to assemble linear forms, or residual forms, because matrix-vector products are essentially the assembly of linear forms which represent the actions of bilinear forms. Optimizing linear form assembly is therefore crucial for improving the performance of matrix-free methods. In Firedrake, one can invoke the matrix-free approach without changing the high-level problem formulation by setting solver options, as detailed by Kirby and Mitchell (2018).
The general structure of a linear form $L$ is
$$L = L(c_1, \dots, c_k; v), \qquad (1)$$
where $c_i \in V_i$, $i = 1 \dots k$, are arbitrary coefficient functions, and $v \in V$ is the test function. $L$ is linear with respect to $v$, but possibly nonlinear with respect to the coefficient functions.
Let $\{\phi_i\}_{i=1}^{n}$ be the set of basis functions spanning $V$. Define $v_i = L(c_1, \dots, c_k; \phi_i) \in \mathbb{R}$; then the assembly of $L$ constitutes the computation of the vector $v = (v_1, \dots, v_n)$. In Firedrake, this is treated as a two-step process: local assembly and global assembly.
Local assembly
Local assembly of linear forms is the evaluation of the integrals defined by the weak form of the differential equation on each entity (cell or facet) of the mesh. In Firedrake, users define the problem in the Unified Form Language (UFL) (Alnaes et al. 2014), which captures the weak form and the function space discretization. The Two-Stage Form Compiler (TSFC) (Homolya et al. 2018) then takes this high-level, mathematical description and generates efficient C code. As an example, consider the linear form of the weak form of the positive-definite Helmholtz operator:
$$L(u; v) = \int_\Omega \nabla u \cdot \nabla v + u v \,\mathrm{d}x. \qquad (2)$$
Listing 1. Assembling the linear form of the Helmholtz operator in UFL.
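A minimal Firedrake/UFL sketch of Listing 1, consistent with the description below (10 × 10 unit-square mesh, first-order Lagrange space) and assuming the Helmholtz form in (2); this is an illustration, not the authors' verbatim listing:

```python
from firedrake import *

mesh = UnitSquareMesh(10, 10)            # 10 x 10 triangulation of the unit square
V = FunctionSpace(mesh, "Lagrange", 1)   # first-order Lagrange approximation space

u = Function(V)                          # coefficient function
v = TestFunction(V)

# Linear form of the positive-definite Helmholtz operator: (grad u, grad v) + (u, v)
L = (dot(grad(u), grad(v)) + u * v) * dx

result = assemble(L)                     # the assembled residual vector
```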
Listing 1 shows the UFL syntax to assemble the linear form L into the vector result, on a 10 × 10 triangulation of a unit square. We choose the first-order Lagrange element as our approximation space. Listing 2 (the local assembly kernel for the linear form of the Helmholtz operator in C) shows a C representation of this kernel generated by TSFC. We note the following key features of this element kernel:
• The kernel takes three array arguments in this case: coords holds the coordinates of the current triangle, w_0 holds the coefficients u_i of u, and A stores the result.
• The first part of the kernel (line 7 to line 15) computes the inverse and the determinant of the Jacobian for the coordinate transformation from the reference element to the current element. This is required for pulling back the differential forms to the reference element. The Jacobian is constant for each triangle because the coordinate transformation is affine in this case; otherwise, the Jacobian would need to be computed at each quadrature point.
• The constant arrays t0, t9 are the same for all elements. t0 represents the tabulation of the evaluation of basis functions at quadrature points, t9 represents the quadrature weights. • The ip loop iterates over the quadrature points, evaluating the integrand in (2) and summing to approximate the integral. The j loops iterate over the degrees of freedom, once inside the quadrature loop, and once upon output to the assembled array A. The extents of these loops depend on the integrals performed and the choice of function spaces respectively. • TSFC performs optimization passes on the loop nests.
In particular, it applies loop-invariant code motion which pulls invariant expression out of the loop nests into temporary arrays. This reduces the number of operations required while changing the structure of otherwise perfectly nested loops.
Global assembly
During global assembly, the local contribution from each mesh entity, computed by the element kernel, is accumulated into the global data structure. In Firedrake, PyOP2 (Rathgeber et al. 2012) is responsible for representing and realizing the iteration over mesh entities, marshalling data into and out of the element kernels. The computation is organized as PyOP2 parallel loops, or parloops. A parloop specifies a computational kernel, a set of mesh entities to which the kernel is applied, and all data required by the kernel. The data objects can be defined directly on the mesh entities, or accessed indirectly through maps from the mesh entities. For instance, the signature for the global assembly of the Helmholtz operator is: parloop(helmholtz, cells, r(cell2vert, RW), coords(cell2vert, R), x(cell2vert, R)).
Here helmholtz is the element kernel shown in Listing 2, generated by TSFC; cells is the set of all triangles in the mesh; and r, coords and x are the global data objects from which the element kernel arguments are created: r holds the result vector, coords holds the coordinates of the vertices of the triangles (needed for computing the Jacobian), and x holds the vector representation of the function u (as weights of basis functions). These global data objects correspond to the kernel arguments A, coords and w_0 respectively. The map cell2vert provides the indirection from mesh entities to the global data objects, and each data argument is annotated with an access descriptor (R for read-only, RW for read-write access). In this example, all three arguments share the same map because the first-order Lagrange element on triangles only has degrees of freedom defined on the vertices, and the coordinate field is also defined on the vertices.
Listing 3. Global assembly code for the action of the Helmholtz operator in C.
Listing 3 shows the C code generated by PyOP2 for the above example. The code is then JIT-compiled when the result is needed in Firedrake. In the context of vectorization, this approach, with the inlined element kernel, forms the baseline in our experimental evaluation. We note the following key features of the global assembly kernel: • The outer loop is over mesh entities.
• For each entity, the computation is divided into three parts: gathering the input data from the global data structures (t3 and t4 in this case, which correspond to the kernel arguments coords and w_0), calling the local assembly kernel, and scattering the output data (t2) to the global data structure.
• The gathering and scattering of data use indirect addressing via base pointers (dats) and indices (maps).
• Different mesh entities may share the same degrees of freedom.
• Global assembly interacts with local assembly via a function call (line 23). This call can be inlined by the compiler, but it creates an artificial boundary for loop transformations at the source-code level. This is the software engineering challenge that previously limited vectorization to a single local assembly kernel. A schematic of the gather-compute-scatter pattern follows.
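The following is a minimal schematic of that pattern in Python with numpy; the argument names mirror the kernel arguments above, but the code is purely illustrative and not PyOP2's generated code:

```python
import numpy as np

def global_assembly(kernel, cells, cell2vert, coords, x, r):
    """Schematic gather-compute-scatter loop over mesh entities.
    `kernel` plays the role of the inlined element kernel."""
    for c in cells:               # outer loop over mesh entities
        verts = cell2vert[c]      # indirection map for this cell
        t3 = coords[verts]        # gather: vertex coordinates
        t4 = x[verts]             # gather: coefficient degrees of freedom
        t2 = kernel(t3, t4)       # local assembly (arguments coords, w_0 -> A)
        np.add.at(r, verts, t2)   # scatter-add: entities may share dofs
```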
Vectorization
As one would expect, the loop nests and loop trip counts vary considerably for different integrals, meshes and function spaces that users might choose. This complexity is one of the challenges that our system specifically, and Firedrake more generally, must face in order to deliver predictable performance on modern CPUs, which have increasingly rich SIMD instruction sets.
In the prior approach to vectorization in our framework, the local assembly kernels generated by TSFC are further transformed to facilitate vectorization, as described by Luporini et al. (2015). The arrays are padded so that the trip counts of the innermost loops are multiples of the length of the SIMD units. However, padding becomes less effective for low polynomial degrees on wide SIMD units: AVX512 instructions act on 8 double-precision floats, but the loops for degree-1 polynomials on triangles only have trip counts of 3, as shown in Listing 2. Moreover, loop-invariant code motion is very effective in reducing the number of floating-point operations, but hoisted instructions are not easily vectorized as they are no longer in the innermost loops. This effect is more pronounced on tensor-product elements, where TSFC is able to apply sum factorization (Homolya et al. 2017) to achieve better algorithmic complexity.
Cross-element vectorization and Loopy
Another strategy is to vectorize across several elements in the outer loop over the mesh entities, as proposed previously by Kronbichler and Kormann (2017). This approach computes the contributions from several mesh entities using SIMD instructions, where each SIMD lane handles one entity. This is always possible regardless of the complexity of the local element kernel because the computation on each entity is independent and identical. One potential downside is the increase in memory pressure as the working set is larger.
For a compiler, the difficulty in performing cross-element vectorization (or, more generally, outer-loop vectorization) is to automate a sequence of loop transformations and necessary data layout transformations robustly. This is further complicated by the indirect memory access in data gathering and scattering, and the need to unroll and interchange loops across the indirections, which requires significantly more semantic knowledge than what is available to the C compiler.
Loopy (Klöckner 2014) is a loop generator embedded in Python which targets both CPUs and GPUs. Loopy provides abstractions based on integer sets for loop-based computations and enables powerful transformations based on the polyhedral model (Verdoolaege 2010). Loop-based computations in Loopy are represented as Loopy kernels. A Loopy kernel is a subprogram consisting of a loop domain and a partially-ordered list of scalar assignments acting on multi-dimensional arrays. The loop domain is specified as the set of integral points in the convex intersection of quasi-affine constraints, as described by the Integer Set Library (Verdoolaege 2010).
To integrate with Loopy, the code generation mechanisms in Firedrake were modified as illustrated in Figure 1. Instead of generating source code directly, TSFC and PyOP2 are modified to generate Loopy kernels. We have augmented the Loopy internal representation with the ability to support a generalized notion of kernel fusion through the nested composition of kernels, specifically through subprograms and inlining. This allows PyOP2 to inline the element kernel such that the global assembly Loopy kernel encapsulates the complete computation of global assembly. This holistic view of the overall computation enables robust loop transformations for vectorization across the boundary between global and local assembly.
Listing 4 shows an abridged version of the global assembly Loopy kernel for the Helmholtz operator, with the element kernel fused. We highlight the following key features of Loopy kernels:
• Loop indices, such as n and i1, are called inames in Loopy; they define the iteration space. The bounds of the loops are specified by the affine constraints in domains.
• Loop transformations operate on kernels by rewriting the loop domain and the statements making up the kernel. In addition, each iname carries a set of tags governing its realization in generated code, perhaps as a sequential loop, as a vector lane index, or through unrolling.
• Multi-dimensional arrays occur as arguments and temporaries. The memory layout of the data can be specified by assigning tags to the array dimensions.
• Dependencies between statements specify their partial order. Statement scheduling can also be controlled by assigning priorities to statements and inames.
For example, to achieve cross-element vectorization (batching 4 elements into one SIMD vector in this example), we invoke the following sequence of Loopy transformations on the global assembly Loopy kernel, exploiting domain knowledge of finite element assembly (a sketch in Loopy's API follows the list):
• Split the outer loop n over mesh entities into n_outer and n_simd, with n_simd having a trip count of 4. The objective is to generate SIMD instructions for the n_simd loops, such that each vector lane computes one iteration of the n_simd loop.
• Assign the tag SIMD to the new iname n_simd. This tag informs Loopy to force the n_simd loop to be innermost, privatizing data by vector-expansion if necessary.
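A sketch of these two transformations using Loopy's public API (`split_iname` and `tag_inames`); the stock "vec" tag stands in here for the SIMD tag described in the text, which is part of the authors' extended Loopy:

```python
import loopy as lp

def vectorize_cross_element(knl, batch_size=4):
    """Apply the two transformations listed above to a fused global-assembly
    kernel: split the entity loop n, then tag the inner iname as the vector
    lane so each lane computes one mesh entity."""
    knl = lp.split_iname(knl, "n", batch_size,
                         outer_iname="n_outer", inner_iname="n_simd")
    # "vec" forces n_simd to be realized as a vector axis; the paper's SIMD
    # tag additionally forces n_simd innermost and vector-expands temporaries.
    return lp.tag_inames(knl, {"n_simd": "vec"})
```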
We highlight the change to the Loopy kernel after these transformations in Listing 5. Loopy supports code generation for different environments from the same kernel by choosing different targets. We introduced an OpenMP Target to Loopy which extends its existing C-language Target to support OpenMP pragmas, facilitating SIMD instruction generation.
Listing 6 shows the generated C code for the Helmholtz operator vectorized by grouping together 4 elements. Apart from the previously mentioned changes, we note the following details:
• The n_simd loops are pushed to the innermost level. Moreover, this transformation vector-expands temporary arrays such as t2, t3 and t4 by 4, with the expanded dimension labeled as varying fastest when viewed from (linear) system memory. This ensures that their accesses in the n_simd loops always have unit stride.
• Loopy provides a mechanism to declare arrays to be aligned to specified memory boundaries (64 bytes in this example).
• The n_simd loops are decorated with pragma omp simd to instruct C compilers to generate SIMD instructions. The exception is the write-back to the global array (line 36), which is sequentialized due to potential race conditions, as different mesh entities may share the same degrees of freedom.
• The remainder loop, which handles the case where the number of elements is not divisible by 4, is omitted here for simplicity.
After cross-element vectorization, all local assembly instructions (lines 24-36) are inside the n_simd loops, which always have trip counts of 4 and stride 1. All loop-varying array accesses have stride 1 in the fastest-moving dimension, and there are no loop-carried dependencies in the n_simd loops. As a result, the n_simd loops, and therefore all local assembly instructions, are vectorizable without further consideration of dependencies. This is verified by inspecting the x86 assembly code and by running the program with the Intel Software Development Emulator.
Vector extensions
A more direct way to inform the compiler to emit SIMD instructions without depending on an OpenMP implementation is to use vector extensions, which support vector data types. These were first introduced in the GNU compiler (GCC), but are also supported in recent versions of the Intel C compiler (ICC) and Clang. Analogous mechanisms exist in various vector-type libraries, e.g. VCL (Fog 2017). To evaluate and compare with the directive-based approach from Section 3.1, we created a new code generation target in Loopy to support vector data types. When inames and corresponding array axes are jointly tagged as vector loops, Loopy generates code to compute on data in vector registers directly, instead of scalar loops over the vector lanes. It is worth noting that the initial intermediate representation of the loop was identical in each case, and that the different specializations were achieved through code transformation. Listing 7 shows the C code generated for the Helmholtz operator vectorized by batching 4 elements using the vector extension target. Here all vectorized (innermost) loops for local assembly are replaced by operations on vector variables. For instructions which do not fit the vector computation model, most notably the indirect data gathering (Line 18), or instructions containing built-in mathematics functions which are not supported on vector data types (Line 32), Loopy defaults to generating scalar loops over vector lanes, decorated with pragma omp simd.
Listing 6. Global assembly code for action of the Helmholtz operator in C vectorized by batching 4 elements.
Performance Evaluation
We follow the performance evaluation methodology of Luporini et al. (2017) by measuring the assembly time of a range of operators of increasing complexity and polynomial degrees. Due to the large number of combinations of experimental parameters (operators, meshes, polynomial degrees, vectorization strategies, compilers, hyperthreading), we only report an illustrative portion of the results here, with the entire suite of experiments made available on the interactive online repository CodeOcean (Sun 2019a).
Experimental setup
We performed experiments on a single node of two Intel systems, based on the Haswell and Skylake microarchitectures, as detailed in Table 1. Because we observe that hyperthreading usually improves the performance by 5% to 10% for our applications, we set the number of MPI processes to the number of logical cores of the CPU to utilize all available computation resources. Experimental results with hyperthreading turned off are available on CodeOcean.
The batch size, i.e., the number of elements grouped together for vectorization, is chosen to be consistent with the SIMD length. We use three C compilers: GCC 7.3, ICC 18.0 and Clang 5.0. The two vectorization strategies described in Section 3 are tested on all platforms. We use the listed Base Frequency to calculate the peak performance in Table 1. In reality, modern Intel CPUs dynamically reduce frequencies on heavy workloads with AVX2 and AVX512 instructions, which results in lower achievable performance. Running the optimized LINPACK benchmark binary provided by Intel gives a reasonable indication of peak performance for real applications.
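As a reminder of how such nominal peaks are computed from base frequency, the back-of-the-envelope sketch below multiplies cores, frequency, SIMD lanes, and FMA throughput. The core counts and frequencies are illustrative stand-ins, not the values from Table 1.

```python
# Double-precision peak (GFLOP/s) = cores * GHz * FLOPs per cycle,
# where FLOPs/cycle = SIMD lanes * 2 (an FMA counts as 2 FLOPs)
#                     * FMA units per core (2 on Haswell/Skylake).
def peak_gflops(cores, ghz, simd_doubles, fma_units=2):
    return cores * ghz * simd_doubles * 2 * fma_units

print(peak_gflops(16, 2.6, 4))   # AVX2, Haswell-like node: 665.6
print(peak_gflops(40, 2.4, 8))   # AVX-512, Skylake-like node: 3072.0
```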
For the benefit of reproducibility, we have archived the specific versions of Firedrake components used for the experimental evaluation on Zenodo (Zenodo/Firedrake 2019). An installation of Firedrake with components matching the ones used for evaluation in this paper can be obtained by following the instructions at https://www.firedrakeproject.org/download.html. The evaluation framework is archived at (Sun 2019b). We measure the execution time of assembling the residual of five operators: the mass matrix ("mass"), the Helmholtz equation ("helmholtz"), the vector Laplacian ("laplacian"), an elastic model ("elasticity"), and a hyperelastic model ("hyperelasticity").
Listing 7. Global assembly code for action of the Helmholtz operator in C vectorized by batching 4 elements (using vector extensions).
We performed experiments on both 2D and 3D domains, with two types of mesh used for each case: triangles ("tri") and quadrilaterals ("quad") for 2D problems, tetrahedra ("tet") and hexahedra ("hex") for 3D problems. The arithmetic intensities of the operators are listed in Table 2. The memory footprint is calculated assuming perfect caching; it is thus a lower bound, which results in an upper-bound estimate of the arithmetic intensity. The triangular and tetrahedral meshes use an affine coordinate transformation (requiring only one Jacobian evaluation per element). The quadrilateral and hexahedral meshes use a bilinear (trilinear) coordinate transformation (requiring Jacobian evaluation at every quadrature point), which usually results in higher arithmetic intensities at low orders. In Firedrake, tensor-product elements (McRae et al. 2016) benefit from optimizations such as sum factorization to achieve lower asymptotic algorithmic complexity. They are therefore more competitive for higher-order methods (Homolya et al. 2017).
Table 2. Operator characteristics and speed-up summary, using GCC with vector extensions. AI: arithmetic intensity (FLOP/byte). D: trip count of loops over degrees of freedom. Q: trip count of loops over quadrature points. H: speed-up over baseline on Haswell, 16 processes, with vector extensions. S: speed-up over baseline on Skylake, 40 processes, with vector extensions.
We record the maximum execution time of the generated global assembly kernels over all MPI processes. This time does not include the time spent in synchronization and MPI data exchange for halo updates. Each experiment is run five times, and the average execution time is reported. Exclusive access to the compute nodes is ensured and threads are pinned to individual logical cores. Startup costs such as code generation time and compilation time are excluded. We use automatic vectorization by GCC without batching, compiled with the same optimization flags listed earlier, as the baseline for comparison. Compared with the cross-element strategy, the baseline represents the out-of-the-box performance of compiler auto-vectorization for the local element kernel. We note that cross-element vectorization does not alter the algorithm of local assembly except for the vector expansion, as illustrated by Listing 2 and Listing 6. Consequently, the total number of floating-point operations remains the same. The performance benefit from cross-element vectorization is therefore composable with the operation-reduction optimizations performed by the form compiler on the local assembly kernels.
Experimental results and discussion
Figures 2 to 5 show the performance of the helmholtz and elasticity operators on Haswell and Skylake, vectorized with OpenMP pragmas as described in Section 3.1, and with vector extensions as described in Section 3.2. We indicate the fraction of peak performance achieved on the left axis, and the fraction of the LINPACK benchmark performance on the right axis. Figures 6 and 7 compare the roofline models (Williams et al. 2009) of the baseline and our cross-element vectorization implementation using GCC and vector extensions on Haswell and Skylake. The speed-up achieved is also summarized in Table 2. We analyze the data from the following aspects:
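For reference, the roofline model caps attainable throughput at the lesser of peak compute and the memory-bandwidth-limited rate; the minimal sketch below uses illustrative numbers, not measurements from the paper.

```python
def roofline_gflops(ai, peak_gflops, peak_bw_gbs):
    """Attainable GFLOP/s at arithmetic intensity ai (FLOP/byte)."""
    return min(peak_gflops, ai * peak_bw_gbs)

# A kernel with AI = 4 FLOP/byte on a machine with 650 GFLOP/s peak
# and 60 GB/s memory bandwidth is memory-bound at 240 GFLOP/s:
print(roofline_gflops(4.0, 650.0, 60.0))
```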
Compiler comparison and vector extensions
When vectorizing with OpenMP pragma, ICC gives the best performance for almost all test cases, followed by Clang, while GCC is significantly less competitive. The performance disparity is more pronounced on Skylake than on Haswell. However, when using vector extensions, Clang and GCC improve significantly and are able to match the performance of ICC on both Haswell and Skylake, whereas ICC performs similarly with OpenMP pragma and with vector extensions.
We use the Intel Software Development Emulator to count the number of instructions executed at runtime for code generated by different compilers. The data indicate that although floating-point operations are fully vectorized by all compilers, GCC and Clang generate more load and store instructions between vector registers and memory when using OpenMP pragmas for vectorization. One possible reason is that GCC and Clang choose to allocate short arrays on the stack rather than in vector registers directly, placing more load on the memory subsystem.
In light of these results, we conclude that vectorization with vector extensions allows greater performance portability across compilers and CPUs for our application. It is, therefore, our preferred strategy for implementing cross-element vectorization, and is the default option for the rest of our analysis. On simple operators such as mass on tri and tet, the kernels have simple loop structures and the compilers can sometimes successfully apply other optimizations such as unrolling and loop interchange to achieve vectorization without batching elements in the outer loop. The pattern of speed-up is consistent across Haswell and Skylake. Higher speed-up is generally achieved on more complicated operators (e.g. hyperelasticity), and on tensor-product elements (quad and hex), which generally correspond to more complicated loop structures and higher arithmetic intensity due to the Jacobian recomputation at each quadrature point.
Achieved fraction of peak performance
We observe that the fraction of peak performance varies smoothly with polynomial degree for cross-element vectorization in all test cases. This fulfils an important design requirement for Firedrake: small changes in problem setup by the users should not create unexpected performance degradation. This is also shown in Figures 6 and 7, where the results for all operators (mass, helmholtz, laplacian, elasticity, hyperelasticity) on all mesh types (tri, quad, tet, hex) are more clustered on the roofline plots after cross-element vectorization. The baseline shows performance inconsistency, especially at low polynomial degrees. For instance, for the helmholtz operator with degree 3 on quad, the quadrature loops and the basis function loops all have trip counts of 4, which fits the vector length on Haswell and results in better performance.
On simplicial meshes (tri and tet), higher-order discretization leads to kernels with very high arithmetic intensity because of the quadratic and cubic increases in the number of basis functions, and thus in the loop trip counts. This is due to the current limitation that simplicial elements in Firedrake are not sum-factorized. In these test cases, we observe that the baseline approaches cross-element vectorization for sufficiently high polynomial degrees. This is not a serious concern for our optimization approach because the break-even degrees are very high except for simple operators such as mass, and ultimately tensor-product elements are more competitive for higher-order methods in terms of algorithmic complexity.
We also observe a small number of test cases where the achieved performance is marginally higher than the LINPACK benchmark on Skylake, as shown in Figure 7. One possible reason for this observation is thermal throttling, since our test cases typically run for a shorter period of time than LINPACK. We also note that these test cases correspond to the high-order hyperelasticity operator on tet, which is not a practically important use case, since using tensor-product elements requires far fewer floating-point operations at the same polynomial order.
Tensor-product elements
We observe higher and more consistent speed-up for tensor-product elements (quad and hex) on both Haswell and Skylake. This is because, on these meshes, more computation can be moved outside the innermost loop due to sum factorization, which results in more challenging loop nests for the baseline strategy which attempts to vectorize within the element kernel. The same applies to the evaluation of the Jacobian of coordinate transformation, which is a nested loop over quadrature points after sum factorization for tensor-product elements.
The base elements of quad and hex are interval elements in 1D, thus the extents of loops over degrees of freedom increase only linearly with respect to polynomial degrees, as shown in Table 2. As a result, the baseline performance does not improve as quickly for higher polynomial degrees on quad and hex compared with tri and tet, resulting in stable speed-up for cross-element vectorization observed on tensor-product elements.
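The growth rates behind these loop extents follow from standard Lagrange degree-of-freedom counts per element; the short sketch below uses the textbook formulas (the exact trip counts in Table 2 also depend on the operator and value shape).

```python
# Degrees of freedom of a degree-p Lagrange basis on one element.
def ndof(p, cell):
    if cell == "interval":  # base element of quad/hex: linear in p
        return p + 1
    if cell == "tri":       # quadratic in p
        return (p + 1) * (p + 2) // 2
    if cell == "tet":       # cubic in p
        return (p + 1) * (p + 2) * (p + 3) // 6
    raise ValueError(cell)

for p in (1, 2, 4, 8):
    print(p, ndof(p, "interval"), ndof(p, "tri"), ndof(p, "tet"))
```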
Conclusion and future work
We have presented a portable, general-purpose solution for delivering stable vectorization performance on modern CPUs for matrix-free finite element assembly for a very broad class of finite element operators on a large range of elements and polynomial degrees. We described the implementation of cross-element vectorization in Firedrake which is transparent to the end users. Although the technique of cross-element vectorization is conceptually simple and has been applied in hand-written kernels before, our implementation based on code generation is automatic, robust and composable with other optimization passes.
The write-back to global data structure is not vectorized in our approach due to possible race conditions. The newly introduced Conflict Detection instructions in the Intel AVX512 instruction set could potentially mitigate this limitation (Zhang 2016, Section 2.3). This could be achieved by informing Loopy to use the relevant intrinsics when generating code for loops with specific tags.
We have focused on the matrix-free finite element method because it is compute-intensive and more likely to benefit from vectorization. However, our methods and implementation also support matrix assembly. Firedrake relies on PETSc (Balay et al. 2017) to handle distributed sparse matrices, and PETSc requires certain data layouts for the input array when updating the global matrices. When several elements are batched together for cross-element vectorization, we need to generate code to explicitly unpack/transpose the local assembly results into individual arrays before calling PETSc functions to update the global sparse matrices for each element. Future improvements could include eliminating this overhead, possibly by extending the PETSc API.
The newly introduced abstraction layer, together with Loopy integration in the code generation and optimization pipeline, opens up multiple possibilities for future research in Firedrake. These include code generation with intrinsics instructions, loop tiling, and GPU acceleration, all of which are already supported in Loopy.
This work was supported by the US Office of Naval Research under grant number N00014-14-1-0117 and the US National Science Foundation under grant number CCF-1524433. AK gratefully acknowledges a hardware gift from Nvidia Corporation.
Supplemental material
Here we describe the operators used as the test cases for experimental evaluation. They are defined as bilinear forms, and we take their action in UFL to obtain the corresponding linear forms.
mass: $a(u, v) = \int_\Omega u \, v \, \mathrm{d}x$, where u and v are scalar-valued trial and test functions.
hyperelasticity: here I is the identity matrix, λ and µ are the Lamé parameters of the material, $F = I + \nabla u$ is the deformation gradient, $C = F^{\mathsf{T}} F$ is the right Cauchy-Green tensor, and $E = \tfrac{1}{2}(C - I)$ is the Euler-Lagrange strain tensor. We define the Piola-Kirchhoff stress tensors in terms of these quantities and arrive at the residual form r of this nonlinear problem, where b is the external forcing. To solve this nonlinear problem, we linearize the residual form at an approximate solution u, which gives the bilinear form a:
$$a = \lim_{\epsilon \to 0} \frac{r(u + \epsilon \, \delta u) - r(u)}{\epsilon},$$
where the trial function is δu, the test function is v, and u is a coefficient of the operator. We use the automatic differentiation of UFL to compute the operator symbolically.
"Engineering",
"Computer Science"
] |
Janus-Faced Myeloid-Derived Suppressor Cell Exosomes for the Good and the Bad in Cancer and Autoimmune Disease
Myeloid-derived suppressor cells (MDSCs) are a heterogeneous population of immature myeloid cells originally described as hampering immune responses in chronic infections. Meanwhile, they are known to be a major obstacle in cancer immunotherapy. On the other hand, MDSC can interfere with allogeneic transplant rejection and may dampen autoreactive T cell activity. Whether MDSC exosomes (Exo) share the dangerous and the potentially therapeutic activities of MDSC is not yet fully explored. After introducing MDSC and Exo, it will be discussed whether a blockade of MDSC-Exo could foster the efficacy of immunotherapy in cancer and mitigate the tumor progression-supporting activities of MDSC. It will also be outlined whether application of native or tailored MDSC-Exo might halt autoimmune disease progression. These considerations are based on the steadily increasing knowledge of Exo composition, their capacity to distribute throughout the organism combined with selectivity of targeting, and the ease of tailoring Exo, and they include open questions whose answers will facilitate optimizing protocols for an MDSC-Exo blockade in cancer as well as for strengthening their therapeutic efficacy in autoimmune disease.
Myeloid-Derived Suppressor Cells (MDSCs) and Cancer
Cancer is one of the most frequent causes of death (1), which in part is due to the resistance of tumor cells to chemo-, radio-, and immunotherapy (2)(3)(4). This implies that after tumor spread, which might prohibit surgical excision, the likelihood of curative therapy steeply declines. The disappointing efficacy of adjuvant cancer therapies applies particularly to immunotherapy, where frequently no or only weak responses are noted despite the presence of immunogenic tumor-associated antigens (5). In several tumor entities, MDSCs were found to account for resistance toward cancer immunotherapy (6) and additionally for poor responses to chemotherapy. Therefore, drugs were designed that, besides directly attacking the tumor cells, should hamper MDSC development or activation or drive MDSC into apoptosis (7). So far, therapeutic trials with a focus on MDSC elimination to improve chemotherapy or immunotherapy are rare, which also accounts for combinations of adjuvant therapeutics (8, 9). The current options and possible modes of improvement in attacking MDSC/MDSC-exosomes (MDSC-Exo) to support cancer chemo- and/or immunotherapy will be discussed.
MDSC in Autoimmune Disease and Allograft Transplantation
Autoimmune disease incidence is steadily increasing (10). Autoimmune diseases frequently exacerbate in young adults and progress in waves, which become more severe over time and may become life threatening (11). Corticosteroid therapy, frequently used in progressed disease stages (12), is burdened by severe side effects, including dampened immune responses against bacteria and viruses (13). The option to boost autoimmune disease therapy with MDSC (14) gained weight when it was realized that MDSC are a strong stimulus for regulatory T cell (Treg) activation, a deficit in Treg contributing to autoreactive T cell expansion (15). There are several trials integrating MDSC into the therapy of autoimmune diseases such as myasthenia gravis, arthritis, inflammatory bowel disease, and others, where good response rates were reported (14, 16-21).
Myeloid-derived suppressor cell-promoted downregulation of immune reactivity also is advantageous in allograft transplantation. This accounts for organ as well as hematopoietic stem cell (HSC) transplantation (22-26). Accordingly, drugs promoting MDSC expansion and/or activation and the transfer of MDSC were reported to support long-term allograft survival (27-30).
Having introduced the two faces of MDSC, this review will focus on MDSC and MDSC-Exo in cancer and autoimmune disease. After introducing MDSC and Exo, their mode of action in disease will be outlined. Knowledge on the crosstalk between MDSC/MDSC-Exo and their targets provides the fundament for established and forthcoming therapeutic interference.
MDSC: PHENOTYPIC AND FUNCTIONAL CHARACTERIZATION
Myeloid-derived suppressor cells are a heterogeneous group of cells characterized by myeloid origin, an immature state, and mostly suppressive functional activity. In humans, MDSC are still difficult to isolate due to an inconclusive surface marker expression profile. However, there is consensus on the differentiation between two subgroups, defined as monocytic MDSC (M-MDSC) and granulocytic MDSC (G-MDSC), which are distinguished on the basis of Ly6C high (M-MDSC) or Ly6G high (G-MDSC) expression, M-MDSC exerting stronger suppressive activity (31-33). MDSC account for T cell exhaustion in chronic infections (34, 35), play a crucial role in cancer progression (31, 36), and are a major hindrance in cancer immunotherapy, hampering T cell recruitment and activation while promoting M1 and Treg expansion (14, 37). On the other hand, MDSC are beneficial in overshooting immune reactions such as autoimmune diseases (24, 33) and allogeneic transplantation (18,33,38). Finally, though the activity of MDSC may vary with the pathophysiological conditions promoting their expansion, there is consensus that T cells are major targets and that the response of the adaptive immune system is most severely affected (39).
Taken together, MDSC are immature myeloid cells that hamper mostly T cell, but also B cell and NK activity, at least in part by supporting Treg expansion and activation. They are a severe hindrance in cancer immunotherapy and in chronic infections.
Mostly in cancer immunotherapy, drugs and drug combinations that prevent MDSC induction, activation, and targeting, as well as drugs that drive MDSC into apoptosis, are being explored experimentally and clinically to improve the efficacy of immunotherapy. Based on the same principle, MDSC activity is suited to controlling undesired immunoreactivity in transplantation and autoimmune disease, the transfer of MDSC being a therapeutic option.
EXOSOMES (Exo)
Exosomes are small, 40-100 nm vesicles delivered by most cells of an organism (50). They distribute throughout the body and are recovered in all body fluids (51). Exo express donor cell-derived components. This finding stimulated Exo research as a non-invasive/minimally invasive tool for diagnosis, prognosis, and therapy control (51, 52). Of particular importance was the notion that Exo components are function-competent and deliver their messages into target cells (53, 54), such that Exo binding and uptake can severely modulate target structures and suffices for reprogramming target cells (54-57). Furthermore, Exo can easily be modulated in vitro (58). Thus, Exo are a most powerful intercellular communication system and are expected to become a highly effective therapeutic tool in the near future (59, 60).
Exo Biogenesis
Exosome biogenesis starts with the formation of early endosomes (EE), which can derive from the trans-Golgi network or from different internalized membrane microdomains, such as clathrin-coated pits, tetraspanin- and glycolipid-enriched membrane domains (GEM), or proteolipids in cholesterol- and ceramide-rich compartments (61). EE move toward multivesicular bodies (MVB), the transport machinery varying for the different types of EE (62). During inward budding of EE into MVB, the resulting vesicles, called intraluminal vesicles (ILV), receive their cargo. Loading of the small Exo plasma, which could contain ~100 proteins and 10,000 nucleotides (63), with proteins, coding and non-coding RNA, and DNA is a non-random process (61). Sorting of proteins is facilitated by mono-ubiquitination, acylation, or myristoylation (64, 65). For GEM-derived Exo, higher-order oligomerization is important (66), whereby protein complexes and attached cytoplasmic components are retained (67). In raft-derived ILV, sphingolipids forming ceramide also contribute to vesicle loading (68). miRNA recruitment is guided by a zip code in the 3′-UTR and by coupling of the RNA-induced silencing complex to components of the sorting complex. A specific EXOmotif (GGAG) controls miRNA loading by binding to the heterogeneous ribonucleoprotein A2B1 (hnRNPA2B1), which binds to an RNA transport signal (A2RE) (69). Annexin-II plays a role in RNA sorting into ILV by binding specific RNAs (70). lncRNA also are selectively recruited by so far unknown mechanisms (71). Ras-related proteins regulate MVB movement toward the cell membrane (72). MVB fuse with the plasma membrane, and the released ILV are then called Exo (61).
Though open questions remain on the precise biogenesis pathways, it is important to remember that, due to differences in biogenesis, single cells can deliver different Exo (73,74). For judging potential diagnostic and therapeutic validity, information on Exo composition is a prerequisite.
Exo Composition
Exosomes are composed of a lipid bilayer, which contains transmembrane proteins. The intravesicular content is composed of proteins, coding and non-coding RNA and DNA.
The lipid envelope of Exo contains phosphatidylcholine, phosphatidylethanolamine, phosphatidylinositol, prostaglandins, and lysobisphosphatidic acid and is enriched in sphingomyelin, cholesterol, GM3, and phosphatidylserine (75). The high phosphatidylserine content allows differentiating Exo from microvesicles (76), and the lipid composition of tumor-derived Exo (TEX) may be suited for diagnosis (77,78). Progress in lipidomics will provide further information. Improvements in mass spectrometry (79) have greatly facilitated the characterization of Exo proteins, of which >7,000 have been identified so far (80). Constitutive Exo proteins are structural vesicle components or are involved in vesicle biogenesis and vesicle trafficking. Most abundant are tetraspanins (81), enriched 7- to 124-fold in Exo compared to the parental cells (82). Adhesion molecules, proteases, MHC molecules, HSPs, TSG101, Alix, annexins, cytoskeletal proteins, metabolic enzymes, cytosolic signal transduction molecules, and ribosomal proteins, some of which are recruited via their association with proteins engaged in biogenesis, are also abundantly recovered (83,84). Cell type-specific Exo proteins are so far most comprehensively explored for cancer and cancer stem cells (CSC), examples being MART1, EGFRvIII, multidrug resistance gene 1, EpCAM, MET, mutant KRAS, and tissue factor (73,(85)(86)(87)(88). Notably, due to their location in internalization-prone microdomains, all known CSC markers are recovered in TEX (89,90), which implies that recovery of CSC marker-expressing Exo in body fluids is most reliable for diagnosis.
Next-generation sequencing has allowed rapid progress in the identification of Exo DNA and coding and non-coding RNA (91). Previous studies were mostly concerned with miRNA, which constitutes only 1-3% of the human genome but, due to multiple targets, controls about 30% of the coding genes. miRNA cleaves mRNA via argonaute (AGO) (perfect base pairing) or represses translation (imperfect binding) (92). Knowledge on miRNA greatly fostered progress in oncology, where miRNA could be linked to prognosis, disease progression, local recurrence, and metastasis (93), particularly the miR-200 family playing an important role in epithelial-mesenchymal transition (EMT) (94). miRNA also accounts for CSC maintenance (95), angiogenesis (96), and chemoresistance (97). Other miRNA, such as miR-34, -34a, and -340, act as tumor and metastasis suppressors (98,99). miRNA also regulates tolerance induction and inflammation (100)(101)(102). A knockout of Dicer and Ago2 in HSC results in increased apoptosis and loss of hematopoietic cell reconstitution; Ago2 deletion is accompanied by deficient B and erythroid cell differentiation (103). A knockdown of Dicer results in T cell reduction (104,105) and a shortened survival rate and reduced antibody repertoire in B cells (106). Dicer and Drosha also are required for Treg regulation (107, 108), a knockout promoting a lethal inflammatory disease. MiR-155 regulates NK maturation and activation by suppressing suppressor of cytokine signaling 1 and phorbol-12-myristate-13-acetate-induced protein 1 (109). In MDSC, upregulated miR-494 and -21 target phosphatase and tensin homolog (Pten), miR-155 targets Ship-1, and miR-210 targets Arg1, whereas downregulated miR-17-5p and -20a target STAT3 (110). In asthma, miR-20b promotes G-MDSC accumulation associated with a decrease in IL-3 and IL-13 (111), and miR-223 suppresses Arg1 and STAT3 in multiple sclerosis and autoimmune encephalitis (112). Thus, miRNA, besides being important in oncogenesis and tumor progression, regulates T cells, B cells, and components of the innate immune system including MDSC, which has a severe bearing on inflammation and autoimmune diseases.
Taken together, the ongoing analysis of Exo composition has provided a plethora of information, which strongly sustains the initial hypothesis of Exo as important intercellular communicators allowing sessile cells systemic communication, equally important in physiology and pathology. For optimal therapeutic translation, further analyses with a focus on donor-dependent differences in Exo profiles are desirable.
Exo Targeting and Uptake
Answering the questions of how Exo find their targets and whether there are options to guide targeting is urgent for therapeutic considerations (122). Exo can bind to the extracellular matrix (ECM) or to cells via specific receptor-ligand pairs, where binding accounts for matrix and cell modulation (123,124). Exo uptake, also depending on target cell ligands, may require different target structures than binding and can have distinct consequences for the target cell (125,126).
Exosomes bind to and are taken up by selected target cells. Exo binding frequently involves (tetraspanin-associated) integrins (124,127), ICAM1 being one potential partner (128,129). Notably, different integrins bind distinct target cells. Thus, the α6β4 integrin binds cells in the premetastatic niche of the lung, whereas integrin αvβ5 binds cells in the premetastatic niche of the liver (127). A Tspan8-α4β1 complex binds to endothelial cells (EC) and EC progenitors, but a Tspan8-α6β4 complex hampers Exo uptake by EC (130). Other known binding partners are proteoglycans, prevalently binding to galectins, selectins, and sialic acid-binding lectins (131)(132)(133)(134). According to our experience, Exo binding is greatly facilitated by clusters of adhesion molecules in both Exo and target cells (84).
Exosome binding mostly is followed by uptake. There are two modalities for Exo uptake, fusion with the cell membrane (135,136) and, dominating, endocytosis, an active process that requires modulation of the actin cytoskeleton (135,(137)(138)(139). There are at least four modes of uptake, phagocytosis, macropinocytosis, clathrin-dependent endocytosis, and uptake by lipid rafts and caveolae. Phagocytosis proceeds via the formation of cup-like extensions, where the tips fuse and become internalized. Phagocytic markers like lysosomal-associated membrane protein 1 on Exo (140) and T-cell immunoglobulin and mucin domain containing (TIM)4 that recognizes phosphatidylserine on Exo facilitate the process (136,137,141). Exo uptake by macropinocytosis occurs, when lamellipodia fold back and fuse with the plasma membrane (142,143). Most frequently, Exo endocytosis proceeds via clathrin-coated pits, where dynamin contributes to the scission of clathrin-coated endocytosed pits (84,139,140). Finally, Exo can be internalized by rafts, cholesterol-, and glycolipid-enriched membrane microdomains, such as tetraspanin webs (84,139,144) or caveolae (145).
In brief, Exo uptake is an active process with a contribution of the cytoskeleton as well as fission and scission machineries to detach from the plasma membrane. Intracellular processing of the taken-up Exo varies between cells and requires further exploration (146). Though Exo may itinerate (147), they mostly are digested, their content modulating the target cell both directly and by stimulating signaling cascades, transcription, and silencing processes via the target cell's equipment (148)(149)(150)(151).
Exo and Target Cell Reprogramming
Whether Exo-induced changes in target cells are due to the transferred content of Exo or to transfer-induced target cell reactions is still disputed. When Exo bind to the ECM, the Exo membrane coat accounts for changes observed in matrix proteins and matrix structure. Instead, when Exo bind to, but are not taken up by, the target cell, target cell modulation is promoted by Exo-initiated signal transduction and/or cleavage of proteins on the target cell membrane. When Exo are taken up by the target cell, an unequivocal answer is more critical. There are examples demonstrating that changes in the target cell are directly due to the transferred Exo content. Thus, in prostate cancer cells, αvβ6 is transferred via Exo into an αvβ6-negative recipient cell and localizes to the cell surface, de novo αvβ6 expression by the recipient cell being excluded (152). Also, after dendritic cell (DC) loading with TEX, tumor antigens are processed and loaded into newly synthesized MHC molecules (137,153). The same principle will be valid for therapeutically tailored Exo loaded with large amounts of therapeutic drugs, miRNA, or signaling checkpoint inhibitors (154)(155)(156). However, whether the naturally available amount of one type of Exo contains sufficient load to directly modulate targets is questionable. First, the small Exo plasma houses a limited amount of proteins and nucleotides; second, a TEX preparation from a cloned tumor line distinctly affects tumor cells, fibroblasts, EC, and hematopoietic cells. Thus, an impact on signal transduction and/or transcription/translation likely represents the dominating mode of activity of taken-up Exo. The strong impact of DC-Exo uptake at the immune synapse also supports an initiator role of the transferred Exo content (157,158). The hypothesis is backed by activation or inhibition of B cells, NK, and neutrophils initiated by DC-, Mϕ-, stem cell (SC)-, or tumor cell-derived Exo (159)(160)(161)(162)(163). The important role of Exo in anterograde and retrograde information transfer via neurological synapses also argues for Exo-initiated activation of signal transduction, where Exo-promoted activation of signaling cascades was described to maintain plasticity under physiological conditions as well as to account for the spread of pathological proteins (164)(165)(166)(167). Thus, without excluding target modulation by the taken-up Exo content, in most instances an initiating push by Exo better covers the wide range of Exo activities.
Exo Transfer and the Life Span of Exo
Information on the natural life span of Exo and that of transferred Exo is an additional prerequisite for therapeutic trials.
Where required, therapeutic rescuing can be further improved by tailoring Exo with docking molecules (174,175). Tetraspanins and RGD peptides were described to target tumor cells or EC (176,177). Targeting oncogenic receptors or SC receptors offers an alternative strategy (178). Bacterially derived extracellular mimetics could additionally facilitate the generation of large quantities of homogeneous Exo for vaccination and drug delivery (179).
Taken together, available data strongly support the feasibility of therapeutic Exo application to interfere with cancer progression, to balance angiogenesis, blood coagulation, and to regulate native or adaptive immune system responses. MDSC-Exo are engaged in all these processes.
MDSC-Exo Characterization
Myeloid-derived suppressor cells are well characterized, and there is a wealth of information on the impact of TEX on MDSC. Information on MDSC-Exo is limited and was mostly collected using MDSC-Exo derived from tumor-induced, immunosuppressive MDSC, which resemble the inflammatory MDSC in chronic infections. Thus, these data are valid for the differentiation between resting and inflammatory MDSC-Exo in general.
Myeloid-derived suppressor cell exosomes contain common Exo components such as annexins, tetraspanins, glycosylphosphatidylinositol-anchored CD177, cytoskeletal proteins, proteins engaged in vesicle biogenesis, and HSP. There is an abundance of proteasome subunits, histone variants, elongation factors, and metabolic enzymes, whose recovery in MDSC-Exo mostly corresponds to their recovery in MDSC. Comparing inflammatory with conventional MDSC-Exo showed a decrease in 33 proteins, some of which are involved in innate immune responses, such as complement components and chemotactic proteins. In addition, some cytoskeletal proteins like spectrin, ankyrin, and tubulin were reduced in inflammatory MDSC-Exo. The 30 proteins increased in inflammatory MDSC-Exo included GTP- and ATP-binding proteins and proteins engaged in Exo biogenesis facilitating budding or sorting (180).
Information on the RNA and DNA load of MDSC-Exo is largely missing, but both are well explored in MDSC (188,189). To give a few examples, TGFβ promotes G- and M-MDSC induction and expansion via upregulation of miR-155 and miR-21, which target inositol phosphate-5-phosphatase D and Pten, leading to activation of STAT3 (190). During sepsis, miR-21 and miR-18b become strikingly upregulated, which is accompanied by pronounced immunosuppressive activity of MDSC prohibiting bacterial clearance (191). miR-9 overexpression enhances MDSC functional activity, which is due to miR-9 targeting Runx1, an essential transcription factor in promoting MDSC differentiation (192). Doxorubicin treatment promotes miR-126a induction in MDSC. miR-126a+ MDSC-Exo induce IL13+ TH2 cells and rescue MDSC from death in a S100A8/A9-dependent manner (193).
Taken together, the inflammatory MDSC-Exo membrane protein profile provides hints toward receptor-ligand pairs. Unfortunately, so far no selective ligands, e.g., for binding Treg or activated T cells, have been recovered. The reduced recovery of some inflammatory proteins in inflammatory MDSC-Exo suggests a possible contribution to the inefficacy of immune response induction in cancer and chronic infections. The abundance of proteasome subunits as well as of histones and HMGB1, which is inflammation-independent, is of great interest and should be further elaborated, some functional consequences being already defined. The finding that Doxorubicin treatment affects the MDSC-Exo miRNA profile with severe functional consequences also should spur research on this topic.
MDSC-Exo Activities in Cancer
As Exo are supposed to be most important intercellular communicators, can easily be modulated in vitro, and are simple to store for therapeutic application, detailed knowledge of MDSC-Exo activities will open a wide range of new and promising therapeutic applications. However, gaining insight is a demanding task. This relates to the heterogeneity of MDSC, the delivery of distinct Exo subpopulations by individual cells, and the differences between Exo delivered by MDSC during maturation in the BM versus "inflammatory" MDSC. In addition, Exo have more than one target, which is aggravated by the distribution of Exo throughout the body and the cooperativity of different cells/subpopulations, particularly in the immune system. This implies that a whole range of potential targets needs to be analyzed for MDSC-Exo-promoted alterations.
Though not directly approaching MDSC-Exo, there is compelling evidence that TEX induce and affect MDSC. TEX are taken up by myeloid cells in the BM and switch their differentiation toward MDSC. Also, the tumor growth-promoting activity of MDSC depends on PGE2 and TEX-provided TGFβ, which induce upregulation of Cox2, IL6, VEGF, and Arg1 in MDSC (194). Furthermore, TEX-associated Hsp72 triggers toll-like receptor (TLR)2/myeloid differentiation primary response gene 88 (MyD88)-dependent Stat3 activation in MDSC, which exert pronounced immunosuppressive activity (195). The finding was confirmed using MyD88-knockout mice, which additionally revealed a reduction in CCL2 (196). TEX also promote MDSC expansion in the BM through activation of STAT3 and upregulated iNOS, which strengthens the immunosuppressive capacity of MDSC (197). Breast cancer TEX distribute to the lung, are taken up by bone marrow-derived cells, and promote accumulation of MDSC in lung and liver. In addition, TEX inhibit, through activation of M-MDSC, T cell activation and TH1 cytokine production (198). BM stroma cell Exo, which are crucial in multiple myeloma development, are taken up by MDSC, induce their expansion and survival through STAT3 and STAT1 pathway activation and induction of the anti-apoptotic BclXL and Bcl2 family apoptosis regulators, and promote NO release by MDSC, increasing their suppressive activity on T cells (199).
Functional analysis of freshly ex vivo harvested MDSC-Exo was mostly restricted to the impact on myeloid cells. The authors report that the proinflammatory S100A8/9 heterodimer is chemotactic for MDSC (180). Furthermore, MDSC and, less prominently, MDSC-Exo convert tumoricidal M1-Mϕ to tumor growth-promoting M2-Mϕ by switching off IL12 production (180). Of special functional interest is the recovery of ubiquitinated histones and HMGB1 (182,183), which exert proinflammatory activity, contribute to systemic inflammation and organ failure, and drive autoimmune diseases (200)(201)(202). HMGB1, a chaperone for many inflammatory molecules in MDSC, promotes the development of MDSC from BM progenitors, increases IL10 production by MDSC, and contributes to downregulation of the T cell homing receptor CD62L (203,204). The conversion of monocytes into MDSC-like cells and the differentiation of bone marrow cells into M-MDSC proceed via the p38/NFκB/Erk1/2 pathway (205). In the context of chemoresistance, which in part relies on MDSC, MDSC-Exo miR-126a induces expansion of TH2, inhibits TH1 proliferation and IFNγ secretion, and supports angiogenesis. In a feedback loop, chemoresistance is transferred into the donor MDSC (193).
Therapeutic Interference with MDSC-Exo in Cancer
There are excellent reviews on the therapeutic use of Exo (206,207) as well as on attacking MDSC in cancer (208,209), including approaches with a focus on improving the efficacy of immunotherapy (6,210). So far, only a limited number of reports have been concerned with directly attacking MDSC-Exo in cancer as a therapeutic option.
Alternatively, an antibody blockade may be envisaged that prevents MDSC-Exo docking on target cells. According to the enrichment of tetraspanins, anti-CD9 was shown to prohibit breast cancer cell metastasis (221). We used anti-Tspan8 to block pancreatic cancer TEX, Tspan8 being abundantly expressed on pancreatic CSC-TEX (222). The antibody blockade sufficed to hamper angiogenesis and premetastatic niche establishment, but had a minor impact on MDSC (169). A MDSC-Exo selective antibody blockade remains to be explored.
An interesting approach is the use of proton pump inhibitors, as toxic byproducts generated by the altered metabolism of cancer cells are expelled by proton transporters (223). Since proton pump inhibitors also hamper the release of Exo by affecting the acidic milieu surrounding the tumor (224), the release of MDSC-Exo may be inhibited concomitantly, which could help facilitate recruitment of effector immune cells. Similar considerations account for a blockade of the S100A8/9 marker on MDSC-Exo (180). Alternatively, a blockade of premetastatic niche formation was achieved by a blockade of CCL2 that prevented MDSC-Exo recruitment (225). A blockade of CD47 and its ligand Tsp, and less efficiently of the signal regulatory protein α, highly expressed on MDSC-Exo, also hampered MDSC-Exo chemotaxis and migration (185).
As an alternative approach, extracorporeal hemofiltration is used for Exo elimination. Originally established as an affinity plasmapheresis for the elimination of TEX, it is being adapted to remove hepatitis C virions and is being explored to remove immunosuppressive Exo (226,227). Progress in MDSC-Exo proteomics may provide means for a selective removal. There remains the problem that MDSC-Exo in cancer are mostly located within the tumor tissue or recruited to potential targets, e.g., EC and premetastatic organ tissue, rather than circulating in the serum. Coping with Exo regeneration may also become demanding.
Last but not least, Exo or Exo surrogates can be loaded with drugs, toxins, or non-coding RNA to be delivered toward MDSC or MDSC-Exo to directly prohibit their immunosuppressive, angiogenesis-promoting, and cancer spread-promoting activities (175,178,228,229). The field is rapidly expanding, tailoring Exo or surrogates also for repair, e.g., in atherosclerosis or thrombosis (230)(231)(232). There remains the demand for selective binding, as, e.g., miRNA interfering with MDSC activities may promote tumor growth (189,193,233). Finally, great efforts are taken to replace Exo by nanoparticles that could allow for easier and homogeneous production (179). First trials attacking MDSC to improve cancer immunotherapy revealed encouraging results (234)(235)(236)(237).
Thus, there are several promising options to interfere with the immunosuppressive and tumor growth-promoting activity of "inflammatory" MDSC and MDSC-Exo. There is a need to improve target selectivity. However, as the tumor milieu/TEX contribute to the recruitment and expansion of "inflammatory" MDSC/MDSC-Exo, targeting TEX may be considered under selected conditions as an alternative. Targeting TEX would be less demanding, as TEX mostly are equipped with oncogenes or CSC markers (222,238,239) that are not as widespread as inflammatory MDSC-Exo markers.
MDSC and MDSC-Exo in Autoimmune Disease
While the abundance of MDSC/MDSC-Exo in cancer creates a milieu of therapy resistance, autoimmune disease progression is favored by the inefficacy of immune response regulation by immunosuppressive cells and factors (240). This accounts for the paucity of MDSC and Treg (240,241), where the latter may be linked to or be due to the former (242) and frequently is accompanied by an increase in TH17 (243). However, opposing findings were also reported.
Myeloid-derived suppressor cells only recently received attention in autoimmune diseases, initially in animal models such as experimental autoimmune encephalomyelitis (EAE), where a deficit in CCR2, which is required for MDSC recruitment, was accompanied by milder EAE (244). However, depending on the model and the readout system, opposing findings were also reported (245). This diversity of findings accounts for a wide range of studies on the recovery of MDSC and their suppressive activity in autoimmune diseases. There are at least two reasons for this confusion. First, the disease state is important. With progressive tissue destruction, an inflammatory milieu is generated alongside the dysregulated autoimmune effector cells, which, in fact, supports MDSC activation. Second, a failure to detect a decrease in MDSC and/or Treg in the peripheral blood or peripheral lymphoid organs in autoimmune disease may be irrelevant (246), as the frequency in the autoimmune disease-affected organ can differ significantly. To give an example, while Treg are rare in the peripheral blood, in non-lymphoid tissues the frequency of Treg ranges from 30 to 60% of the total CD4+ population (247).
At the present state of knowledge, there is an urgent need for additional information on MDSC/MDSC-Exo presence and activity in autoimmune disease-affected organs. By contrast, there is consensus that chronic infections rely on an abundance of "inflammatory" MDSC/MDSC-Exo, which prevent appropriate activation of the adaptive and the innate immune system (251,252). This knowledge, in fact, could provide a helpful guide toward MDSC/MDSC-Exo as a therapeutic option in autoimmune disease.
MDSC and MDSC-Exo as a Therapeutic Option in Autoimmune Disease
There are excellent reviews on the link between chronic infections, immune regulation, and the associated hindrance of autoimmune disease development and progression (14, 39). With MDSC/MDSC-Exo playing an important role, these inflammatory MDSC/MDSC-Exo may well provide a guide toward correcting overshooting reactions in autoimmune disease.
Thus, several reports demonstrate that parasite infections are associated with a significant decrease in the incidence or severity of immune diseases in animal models, the protective effect being due to Treg, alternatively activated Mϕ, and changes in the cytokine profile (253)(254)(255). In chronic hepatitis C virus infection, a striking increase in M-MDSC was noted; these cells expressed high levels of pSTAT3 and IL-10 and induced Treg expansion, and depletion of MDSC increased IFNγ production by CD4+ effector T cells (256). In human immunodeficiency virus-1 infections, too, MDSC promoted Treg expansion and inhibited T cell function, a hallmark of chronic infections (257). In tuberculosis, the accumulation of MDSC prevented immune effector cell-mediated bacterial clearance (258). The interference of inflammatory MDSC/MDSC-Exo with immunotherapy in cancer was already outlined in detail.
As bacteria, parasites, and viruses that cause chronic inflammation would rather pose a danger than provide a therapeutic option, chemical compounds that provoke delayed-type hypersensitivity may be better suited to induce "inflammatory" MDSC. This option is well explored in alopecia areata (AA), most efficiently treated by the contact sensitizer squaric acid dibutylester (SADBE) (259,260). SADBE treatment provokes a strong expansion of MDSC that inhibit autoreactive T cell activation and support Treg expansion. The effect is abolished by ATRA treatment (261). Notably, SADBE treatment can be replaced by the transfer of MDSC (262). In EAE, it was demonstrated that helminth products stimulate the production of TH2 cytokines and suppress TH1 and TH17 responses, the therapeutic efficacy exceeding that of corticosteroid treatment (263). Another option is statins, cholesterol-lowering drugs also described to induce immunosuppression. This was confirmed in acute and chronic dextran sodium sulfate (DSS)-induced colitis in mice, where statin-induced attenuation of colitis was due to expansion of MDSC (264).
Thus, the exploration of inflammatory MDSC has opened a path toward their therapeutic use in autoimmune disease. These studies clearly demonstrated the therapeutic efficacy of MDSC in experimental autoimmune disease models (14,265). In addition, good progress has already been achieved in replacing the infectious agents with synthetic compounds.
Autoimmune disease correction by Exo, mostly by mesenchymal stem cell (MSC)-Exo but also by DC-Exo, has repeatedly been described. To give a few examples, in diabetes-susceptible mice, islet MSC release Exo that express endogenous retroviral antigens, which induce potent T and B cell responses (266). Application of MSC-Exo in experimental autoimmune uveitis exerted a therapeutic effect that was due to inhibiting the chemoattractive effects of CCL2 and CCL21 on inflammatory cells (267). Exo from miR-146a-overexpressing DC suppress experimental myasthenia gravis by inducing an antigen-specific shift from TH1/TH17 to TH2/Treg (268). However, Exo from different donor cells or at different stages of disease may exert opposing activities. Thus, at early stages of chronic HBV infection, hepatic NK produce IFNγ in response to hepatic Mϕ. Hepatic Mϕ are stimulated by infected hepatocyte Exo, which contain viral nucleic acids, via MyD88, toll-like receptor adaptor molecule (TICAM), and mitochondrial antiviral signaling protein to express NKG2D ligand. On the other hand, immunoregulatory miR-21 becomes upregulated in infected hepatocytes and is transferred via Exo into Mϕ, suppressing IL12p35 expression, which counteracts the host innate immune response (269). For more comprehensive information, excellent reviews are recommended that outline the interplay between Exo from different donor cells and the activity of MDSC in autoimmune disease (270)(271)(272).
There is, to my knowledge, only one report explicitly describing the role of MDSC-Exo in autoimmune disease. Mice with DSS-induced colitis were treated with G-MDSC-Exo. G-MDSC-Exo sufficed for a significant decrease in disease severity and a reduction in the inflammatory cell infiltrate. TH1 cells were reduced and Tregs were augmented in the draining lymph node; in the serum IFNγ and TNFα were reduced. Inhibition studies pointed toward the impact of G-MDSC-Exo largely depending on Arg-1 (273).
Having described that the therapeutic efficacy of a chronic contact eczema in AA largely depends on the expansion of MDSC and that SADBE treatment can be replaced by MDSC application (261,274), we proceeded to control for the activity of MDSC-Exo in AA-affected mice. MDSC-Exo preferentially target in vitro and in vivo activated T cells, NK, and, most avidly, Treg. Furthermore, an mRNA analysis of spleen cells of MDSC-Exo-treated AA-affected mice showed a most striking increase in FoxP3 and Arg-1. These findings suggest that MDSC-Exo strongly promote Treg expansion and hamper innate immune reactions as well as T cell activation, directly and via Treg.
The knowledge collected in cancer and chronic infections on the power of inflammatory MDSC-Exo has opened a path for a new wave of autoimmune disease treatment. Modalities to circumvent the potential danger of naturally arising inflammatory MDSC/MDSC-Exo have been suggested and are further explored in ongoing studies.
CONCLUSION, OPEN QUESTIONS, AND OUTLOOK
I. The discovery of Exo and other extracellular vesicles has revolutionized cell biology, offering sessile cells a way to communicate over long distances (275). Though difficult to catch due to their heterogeneity (Figure 1A), where even a single cell delivers distinct Exo, great efforts are being taken to answer open questions. We still poorly understand the process of loading the Exo plasma during biogenesis, including the enrichment of nuclear proteins, proteasome subunits, and components of the splicing machinery (Figure 1B). The lack of selective markers of "inflammatory" MDSC-Exo provides a handicap. In cancer, MDSC recruitment and expansion are driven by TEX, which express cancer-related markers. Therefore, depletion of TEX instead of MDSC-Exo could provide an alternative. Concerning therapeutic MDSC-Exo substitution, "inflammatory" MDSC-Exo should preferentially be generated from synthetic compound-stimulated MDSC, which avoids unwanted support of immunosuppression in response to natural inflammatory stimuli. Irrespective of these alternatives, the high prevalence of MDSC-Exo uptake by Treg and activated T cells suggests selective targets, which should be defined.
V. This review focuses on MDSC-Exo and their activities in cancer and autoimmune disease. Nonetheless, the widespread activity particularly of SC-Exo (280) in physiology, including developmental patterning and the embryonic-maternal crosstalk (281,282), and in rejuvenation, regeneration, and repair (283) should, at least, be mentioned. SC-Exo act via signal transduction and the transfer of non-coding RNA (284,285) and are suggested to be a most potent therapeutic by maintaining stemness and inducing reparative programs (286,287). There is justified hope for their therapeutic efficacy in SC transplantation, repair, and transplant acceptance (24, 288,289).
The nature of initiating triggers, target structures, and molecular pathways of progression remains to be defined. Clarification would greatly assist "therapeutic" Exo/Exo mimetic furnishing. Personal view: recovery of selected membrane markers of MDSC-Exo would be highly desirable. Should there be no selective markers, a binding unit for the target cell could be introduced. Concerning vesicle loading during biogenesis, the abundant recovery of proteasome subunits, histones, and splicing complex components requires special attention. It is conceivable that integration of these components, rather than the small amount of transferred proteins, coding/non-coding RNA, and DNA, initiates target cell reprogramming by modulating transcription, translation, and metabolism. These activities will be well supported by MDSC-Exo binding to the T-cell and B-cell synapses, the receptor complexes and the adjacent accessory molecules being targeted by their counterparts on MDSC-Exo and being prone to internalization and initiation of signaling cascades. FcR and FcR-like molecules may cope with similar tasks in NK, granulocytes, and Mϕ. Further progress in elaborating MDSC-Exo content and its recovery in target cells will provide the answer whether it is more suitable to load MDSC-Exo with effector or initiator molecules.
Patrolling through the body to control for burglars and killers and to sound the alarm was long considered the privilege of cells of the innate immune system. For a long time, it was missed that, via Exo, these cells also control the response of the adaptive immune system that they had initiated. Taking into account that Exo are still newcomers in cell biology, and considering all the excellent work collected during a short period, of which I apologize for having cited only a fraction, I am confident that the open questions will quickly be answered. This will provide a means to correct for overshooting and vanishing responses evolving in long-lasting diseases, such as cancer, chronic infections, and autoimmune diseases. The ease of tailoring Exo (290) will fortify therapeutic efficacy. Last but not least, provided the open questions on Exo targeting and function-relevant components are answered (Figures 1D,E), Exo mimetics are expected to provide a homogeneous and reliably reproducible therapeutic agent (179, 291).
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and approved it for publication. | 8,387.4 | 2018-02-02T00:00:00.000 | [
"Biology",
"Medicine"
] |
THE USE OF OPTICALLY ACTIVE O-ALKYL ESTER HYDROCHLORIDES OF L-PHENYLALANINE AND L-TYROSINE AS CHIRAL MICELLAR MEDIA FOR THE CATALYSIS OF DIELS-ALDER REACTIONS
The effect of a range of O-alkyl ester hydrochloride surfactants derived from L-phenylalanine and L-tyrosine as catalysts of the Diels-Alder reaction between cyclopentadiene and methyl acrylate was studied. Both the chain lengths (C8-C14) and the head groups of the surfactants were found to influence the yield and selectivity of the Diels-Alder product. The C10 derivatives of both the phenylalanine and tyrosine surfactants gave the highest yields and selectivity, with the phenylalanine ester hydrochlorides showing better catalytic activity than the tyrosine derivatives. The optimum adduct yield was obtained at concentrations corresponding to the surfactants' critical micelle concentration (CMC) values. The Diels-Alder reaction was also found to be favored under acidic conditions (pH 3) as well as in the presence of lithium chloride (LiCl) as a salting-out agent.
INTRODUCTION
The Diels-Alder reaction is one of the most important carbon-carbon bond forming reactions and has featured in key steps in the preparation of intermediates that lead to the synthesis of anti-cancer and anti-viral drugs such as Taxol and Tamiflu, respectively [1,2]. Diels-Alder reactions have since formed an important part of the synthetic repertoire for making intermolecular and intramolecular cyclic compounds.
Different strategies for enhancing asymmetric Diels-Alder reactions have been reported [3,4]. In this context, the choice of catalyst to enhance the yield and selectivity of Diels-Alder reactions is an important issue that needs to be addressed [5,6]. Catalysts such as Lewis acids [7,8] are known to enhance Diels-Alder reactions, but their use has been restricted since they decompose in the presence of even small amounts of water and cannot be reused [9].
The use of surfactants to assist a variety of organic reactions is highly promising for basic and applied research [10,11]. Micelle-forming surfactants have been widely used as reaction media for many important organic reactions, since micelles form organized assemblies that affect the rates of chemical reactions and the position of chemical equilibrium [12,13]. The use of surfactants in micellar media offers possibilities for reaction control through special properties such as solubilization, pre-orientation, microviscosity, polarity, and charge effects that surfactants can confer [14]. These effects influence organic reactions by affecting the yield, regiochemistry, and stereochemistry of the products.
The idea of using micellization for the rate enhancement of Diels-Alder reactions dates back to the 1980s, when higher yields were obtained using water as the solvent compared to nonpolar solvents [15][16][17][18]. Hence, surfactants offer the possibility for organic reactions to occur in aqueous media, and from the viewpoint of green chemistry, water is safe, harmless, and environmentally benign [19].
However, there has been limited work on the use of chiral micellar media to catalyze Diels-Alder reactions. Amino acids are useful synthons from the chiral pool that provide a cost-effective way of synthesizing surfactants as chiral catalysts. Optically active surfactants derived from S-leucine and phenylalanine have been reported to be effective catalysts for the reaction between nonyl acrylate and cyclopentadiene [20].
In continuation of the search for effective chiral micellar catalysts for Diels-Alder reactions, a range of pre-synthesized ester hydrochloride surfactants derived from L-phenylalanine and L-tyrosine [21] were used as novel chiral micellar catalysts for the reaction between methyl acrylate and cyclopentadiene. The effects of the chain length and head groups of the surfactants on the reaction yields and selectivity were investigated. Reaction conditions such as surfactant concentration, temperature, time, and solvent were also varied with a view to establishing the optimum conditions under which these surfactants can act as effective catalysts for the Diels-Alder reaction.
¹H and ¹³C NMR spectra were recorded at 250 MHz and 62.9 MHz, respectively, on a Bruker NMR spectrometer using CDCl₃, D₂O, and DMSO-d₆ as solvents. GC-MS analysis was carried out on a Clarus 500 GC-Clarus 560S mass spectrometer using an SGE BPX5 capillary column (30 m × 0.32 mm × 0.5 µm) with helium as the carrier gas at a flow rate of 1.50 mL/min, an injector temperature of 240 °C, a detector temperature of 270 °C, and an oven temperature program of 100 °C (hold for 2 min) followed by a ramp at 15 °C/min to 280 °C (hold for 15 min). The cycloadduct isomers were identified by matching their mass spectra with those in the NIST library. The order of product retention times of the isomers was determined from literature data [22].
General method for the Diels-Alder reaction
Cyclopentadiene was obtained by thermal cracking of dicyclopentadiene at 160 °C. Cyclopentadiene (0.32 mL, 3.80 mmol) and methyl acrylate (0.17 mL, 1.90 mmol) were added to an aqueous solution of the surfactant, and the reaction mixture was stirred at room temperature for 72 hours. The mixture was extracted with diethyl ether (3 × 20 mL). The organic phase was dried over anhydrous sodium sulfate and filtered, and the excess solvent was removed in vacuo to yield the crude Diels-Alder adduct, which was purified by column chromatography using hexane/ethyl acetate in a ratio of 2:1. The pure product was obtained in 95% yield and was analyzed by GC/MS.
RESULTS AND DISCUSSION
The Diels-Alder reaction between methyl acrylate and cyclopentadiene gives rise to a mixture of exo (thermodynamic) and endo (kinetic) products (Figure 1) [23]. The reaction was initially carried out in the presence of the commercially available surfactant cetyltrimethylammonium bromide (CTAB) in water. The presence of CTAB afforded a higher product yield compared to the uncatalyzed reaction. At the critical micelle concentration (CMC), the concentration at which surfactant monomers aggregate to form micelles, an increase in product yield of up to 95%, together with a lower endo-exo ratio, was observed.
L-Phenylalanine and L-tyrosine O-alkyl ester hydrochloride surfactants of varying chain lengths (C8 to C14) (Figure 2), which had been previously synthesized by our group, were tested as potential chiral micellar catalysts for the Diels-Alder reaction between methyl acrylate and cyclopentadiene. The studies were carried out at the CMCs of the surfactants, and the results are summarized in Table 1 [21]. An increase in the yield of the Diels-Alder adduct was obtained as the chain length of the ester hydrochlorides was increased from C8 to C10 for both the phenylalanine (Table 1, entries 3 and 7) and tyrosine (Table 1, entries 16 and 20) series. This may be due to the increase in the hydrophobic character of the surfactant molecules, which favors the interaction with the reacting substrates. The maximum catalytic efficiency was obtained with an alkyl chain length of C10 for both surfactants. However, a further increase in the chain length from C10 to C14 resulted in a decrease in the yield (Table 1, entries 14, 15, 27, 28) (Figure 3). This can be attributed to the coiling effect of surfactants of higher chain lengths, which can alter the orientation of the substrates, resulting in reduced yields [24]. An increase in the percentage of the endo adduct was observed with increasing chain length of the phenylalanine alkyl esters, while for tyrosine the percentage of the endo adduct decreases from C8 to C10 and then increases from C12 to C14 (Figure 4). The yield of the Diels-Alder adduct was found to increase with the reaction time. At 72 h, the cyclic adduct was obtained in 98 and 70% yield using the C10 derivatives of the phenylalanine and tyrosine ester hydrochlorides, respectively (Table 1, entries 7 and 20). When the Diels-Alder reaction was carried out for longer reaction times, the endo:exo ratio was found to decrease. This is in line with what has been reported in the literature, whereby longer reaction times allow the retro reaction to occur readily, favoring the formation of the more thermodynamically stable exo product [23].
The optimum yield was obtained when the reaction was carried out at 20 °C (Table 1, entries 3, 7, 14, 15, 16, 20, 27, 28). Increasing the reaction temperature beyond 20 °C resulted in a change in the micellar structure of the catalyst, which reduced the yield of the product while favoring the formation of the kinetic endo product [25].
The formation of the Diels-Alder adduct was found to depend on the nature of the solvent. For both the tyrosine and phenylalanine hydrochloride surfactant series, increasing the polarity of the solvent from hexane to THF to water caused an increase in the yield of the reaction. This is in line with previous reports that the use of aqueous media enhances the Diels-Alder reaction [26]. Non-polar solvents reduced both the yield and the selectivity (Table 1, entry 21) of the cycloadduct, since reverse micelles are known to form in these media [27,28]. Surprisingly, when DCM was employed with either the tyrosine or phenylalanine hydrochloride, the yield obtained was negligible (Table 1, entries 9 and 22). This may be because the structural orientations of both the Phe-10 and Tyr-10 surfactants under these conditions do not interact favorably with the reacting substrates. This is in line with the work of Sousa, who demonstrated how a variation of catalysts in DCM can affect the reaction yield as a result of how they interact with the reacting substrate [29].
From Table 1, it can be deduced that, in general, the phenylalanine ester hydrochloride surfactants proved to be more effective chiral micellar catalysts than the tyrosine analogues in terms of both yield and endo/exo ratio. This may be explained by the different orientations adopted at the micellar interface by the phenylalanine and tyrosine ester hydrochlorides. When the reacting substrates are introduced into the aqueous micellar solutions, the dienes and dienophiles come into close proximity with the micellar structure, causing them to bind within the micelles and enhancing the reactivity. Therefore, the rate of surfactant-assisted Diels-Alder reactions in water depends on the nature of the surfactant, the overall influence of hydrophobic effects, electrostatic interactions, and the accompanying medium effects. In the case of the phenylalanine esters, the phenyl ring remains folded away from the aqueous medium within the micellar core together with the hydrophobic tail, allowing better π-π stacking between the aromatic ring and the reacting substrates. In the case of the tyrosine dodecyl ester hydrochloride, the OH moiety in tyrosine pulls the aromatic ring towards the aqueous layer, thus causing less π-π stacking (Figure 5) [30][31][32]. Figure 5 shows how the reacting substrates may interact with the phenylalanine and tyrosine dodecyl ester hydrochlorides, which leads to the different yields and endo/exo behaviors.
After the successful acceleration of the Diels-Alder reaction with cationic surfactants, the study was extended to examine how the yield and selectivity of the cycloadduct are affected by catalyst concentrations below, at, and above the critical micelle concentration of the catalyst. CTAB was initially used to study the effect of micellization on the yield and selectivity of the Diels-Alder reaction (Figure 6). At a CTAB concentration of 0.02 mM, which is well below its CMC, a low yield was observed. At the onset of the CMC of CTAB, an abrupt increase in the yield of the Diels-Alder adduct was observed. The yield then decreased upon further increase in concentration above the CMC of CTAB. Micellization was also found to affect the selectivity of the Diels-Alder reaction, whereby an increase in the concentration of CTAB to its CMC and above increased the ratio of the endo over the exo adduct. The effect of micellization was also studied with the tyrosine esters; the results are shown in Table 2.
Table 2. Influence of the CMC value on the yield and selectivity of the reaction between cyclopentadiene and methyl acrylate in water, 72 h, 20 °C.
All the tyrosine esters showed trends similar to that observed with CTAB. When their concentrations are below their respective CMC values, the surfactants exist in the monomeric form. At the CMC, a steady increase in the yield of the Diels-Alder adduct was observed due to the formation of micelles in the reaction medium, which helped in the solubilization and orientation of the reactants within the micellar core, hence favoring the yield and selectivity of the reaction. A further increase in the concentration above the CMC values resulted in greater selectivity of the Diels-Alder adduct towards the endo product. This might be due to a change in the shape of the micelles when the concentration of the surfactant is increased well above its CMC [33], which favors the kinetic endo product over the thermodynamic one. However, at this concentration, a drop in the yield of the product was observed, which might be explained by a change in the micellar structure that rendered the reaction unfavorable. The pH of the reaction medium is known to play a vital role in the yield and selectivity of the product [20,34]. The effect of pH was investigated using CTAB and the dodecyl tyrosine ester hydrochloride in water (Table 3). The pH was adjusted by dropwise addition of 2 M HCl (pH < 3) or 2 M NaOH (pH > 3) and was monitored using a pH meter. Under extremely acidic conditions (pH 1), the cycloadduct was obtained in only 13% yield. This can be attributed to the breakdown of the self-aggregates of the micelles. Increasing the pH to 3 was found to enhance both the yield and selectivity of the product (entry 2, Table 3). This is in line with previous studies reporting that pH 3 provides favorable polarization of the acrylate molecule in protic media, stabilizing the micellar aggregates [20]. At higher pH, the surfactants exist as the free amine and can form hydrogen bonds with the acrylate, which is no longer protonated, thus stabilizing the reactive intermediates and hence favoring the reaction as well as the preferential formation of the thermodynamically stable exo isomer.
The effect of using a salting-out agent was also investigated. LiCl was added as a salting-out agent, which helped remove the reactants from the aqueous pseudo-phase, increasing the complexation of the substrate to the micelles. Moreover, the increasing concentration of chloride counter-ions caused a shrinkage of the Stern layer (the region of the micelle containing the polar head groups and the tightly bound counter-ions that interact with the aqueous exterior), which led to greater pre-orientation, forcing the reactants closer together and resulting in an enhanced yield (Table 3, entry 6). As expected, the reaction yield of the cycloadduct increased to 84%. However, a large drop in the endo/exo ratio was observed, showing that the production of the exo isomer was favored.
CONCLUSION
We have demonstrated that our pre-synthesized surfactant compounds derived from L-tyrosine and L-phenylalanine are promising chiral micellar catalysts for Diels-Alder reactions. We also carried out, for the first time, a comparative catalytic study of these amino acid surfactants with cationic CTAB under varying conditions. The C10 derivatives of both phenylalanine and tyrosine generated the highest yields (up to 98%) and selectivities (endo ratios of up to 92%) of the Diels-Alder product. The ester hydrochloride surfactants showed optimum activity when used at a concentration corresponding to their respective CMC values, under acidic conditions (pH 3), and in the presence of LiCl as a salting-out agent.
Figure 3. Effect of the chain lengths of the L-phenylalanine and L-tyrosine surfactants on the reaction yields of the Diels-Alder adduct.
Figure 4. Effect of the chain lengths of the L-phenylalanine and L-tyrosine surfactants on reaction selectivity.
Figure 5. Postulated model showing the interactions of the L-phenylalanine and L-tyrosine catalysts with cyclopentadiene and methyl acrylate.
Figure 6. Effect of varying the concentration of CTAB on yield and selectivity.
Table 1. Reactions between cyclopentadiene and methyl acrylate carried out at the surfactants' CMC*. *The CMC values of the surfactants were determined by conductivity measurements and have been published previously [21].
Table 3. Variation of pH and salting-out agent for the reaction between cyclopentadiene and methyl acrylate in water, 72 h, 20 °C.
"Chemistry"
] |
Nuclear Modification Factor for Charged Pions and Protons at Forward Rapidity in Central Au+Au Collisions at 200 GeV
We present spectra of charged pions and protons in 0-10% central Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV at mid-rapidity ($y=0$) and forward pseudorapidity ($\eta=2.2$) measured with the BRAHMS experiment at RHIC. The spectra are compared to spectra from p+p collisions at the same energy scaled by the number of binary collisions. The resulting nuclear modification factors for central Au+Au collisions at both $y=0$ and $\eta=2.2$ exhibit suppression for charged pions but not for (anti-)protons at intermediate $p_T$. The $\bar{p}/\pi^-$ ratios have been measured up to $p_T\sim 3$ GeV/$c$ at the two rapidities and the results indicate that a significant fraction of the charged hadrons produced at intermediate $p_T$ range are (anti-)protons at both mid-rapidity and $\eta = 2.2$.
Introduction
One of the reasons for studying heavy-ion collisions at high energies is to search for the predicted Quark-Gluon Plasma (QGP), a deconfined state of quarks and gluons, and to investigate the properties of this state of matter at extremely high energy densities. High-$p_T$ hadrons, primarily produced from the fragmentation of hard-scattered partons, are considered a good probe of the QGP [1,2,3]. Due to induced gluon radiation, hard-scattered partons will suffer a larger energy loss in a hot dense medium of color charges than in color-neutral matter. This results in fewer charged hadrons produced at moderate to high $p_T$; the hadrons are said to be suppressed. Indeed, all four experiments at RHIC have observed that high-$p_T$ inclusive hadron yields in central Au+Au collisions are suppressed compared to p+p and d+Au interactions at mid-rapidity [4,5,6,7]. However, it was also discovered that the yields of protons and anti-protons at intermediate $p_T$ (1.5-4.5 GeV/c) are comparable to those of pions and are not suppressed at mid-rapidity relative to elementary nucleon-nucleon collisions [5,8]. These experimental results have motivated several suggestions on how hadrons are produced at intermediate $p_T$ [9,10,11,12], such as the possibility that boosted quarks from a collectively expanding QGP recombine to form the final-state hadrons [10,11,12]. Among the interesting results from the BRAHMS experiment is that at forward pseudorapidity $\eta = 2.2$ inclusive negatively charged hadrons are suppressed in both central Au+Au and minimum-bias d+Au collisions [4,13]. This raises the possibility that initial-state effects such as gluon saturation may also influence hadron production at intermediate $p_T$ [14,15,16].
To explore the effect of the nuclear medium on intermediate-$p_T$ particle production, we present in this paper the invariant $p_T$ spectra of charged pions and protons measured by the BRAHMS experiment at RHIC up to 3 GeV/c in central Au+Au collisions at $\sqrt{s_{NN}} = 200$ GeV at both mid-rapidity and forward pseudorapidity $\eta = 2.2$. The spectra are then compared to reference data from p+p collisions at the same energy, scaled by the number of binary collisions $N_{bin}$, by using the nuclear modification factor $$R_{AA}(p_T) = \frac{d^2N^{AA}/dp_T\,dy}{(\langle N_{bin}\rangle/\sigma^{pp}_{inel})\, d^2\sigma^{pp}/dp_T\,dy},$$ where $d^2N^{AA}/dp_T\,dy$ is the differential yield per event in the nucleus-nucleus (A+A) collision, and $\sigma^{pp}_{inel}$ and $d^2\sigma^{pp}/dp_T\,dy$ are the total and differential cross sections for inelastic p+p collisions, respectively.
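As a concrete illustration of this definition, the short Python sketch below computes $R_{AA}$ bin by bin from tabulated spectra. It is a minimal sketch only: the function name, array names, and placeholder numbers are illustrative assumptions, not BRAHMS data or code.

```python
import numpy as np

def nuclear_modification_factor(d2N_AA, d2sigma_pp, n_bin, sigma_pp_inel):
    """Compute R_AA per p_T bin.

    d2N_AA        : differential yield per event in A+A, d^2N/(dp_T dy)
    d2sigma_pp    : differential cross section in p+p, d^2sigma/(dp_T dy)
    n_bin         : mean number of binary collisions <N_bin>
    sigma_pp_inel : total inelastic p+p cross section (same units as d2sigma_pp)
    """
    # Scale the p+p cross section to a per-event yield via <N_bin>/sigma_inel,
    # then take the ratio with the A+A yield in each p_T bin.
    pp_reference = (n_bin / sigma_pp_inel) * np.asarray(d2sigma_pp)
    return np.asarray(d2N_AA) / pp_reference

# Hypothetical values for three p_T bins (not measured data); R_AA < 1 means suppression.
d2N_AA = [0.12, 0.040, 0.011]
d2sigma_pp = [9.5, 3.1, 0.85]  # mb
print(nuclear_modification_factor(d2N_AA, d2sigma_pp, n_bin=900, sigma_pp_inel=41.0))
```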
Experiment and data analysis
The BRAHMS experiment [17] consists of event characterization detectors and two independent magnetic spectrometers, the mid-rapidity spectrometer (MRS) and the forward spectrometer (FS), both of which can be rotated in the horizontal plane around the beam direction. For the present studies the MRS was positioned at 90 degrees and the FS at 12 degrees with respect to the beam direction. Collision centrality is determined from the charged-particle multiplicity measured by multiplicity detectors, as described in [18]. The trajectories of charged particles are reconstructed in the tracking devices (time projection chambers and drift chambers). The resulting straight-line track segments in two detectors located on either side of a magnet are then matched, and the particle momentum is determined from the deflection of the track in the magnetic field. The intrinsic momentum resolution of the spectrometers at the maximum magnetic field setting is $\delta p/p = 0.0077p$ for the MRS and $\delta p/p = 0.0008p$ for the FS [13], where $p$ is given in units of GeV/c. In the MRS, charged particles are identified using a time-of-flight wall (TOFW), whereas in the FS a time-of-flight wall (H2) and a ring imaging Cherenkov (RICH) detector are used for particle identification (PID). To identify charged pions and protons using the time-of-flight detectors, 2σ PID cuts in the derived $m^2$ versus momentum space are imposed for each species. With a timing resolution of $\sigma_{TOFW} \sim 80$ ps in the Au+Au runs, protons and pions can be well separated from kaons up to momenta of 3.2 GeV/c and 2.0 GeV/c, respectively. For pions above 2 GeV/c, an asymmetric PID cut is applied, i.e., the region where the pion and kaon 2σ cuts overlap is excluded from the PID, and the pion yield in that region is obtained by assuming a symmetric PID distribution about the mean pion mass-squared value. This allows the pion $p_T$ spectrum to be extended to 3 GeV/c, at which point the kaon contamination of pions is estimated to be less than 5% and is accounted for in the systematic errors. For the FS PID in the present analysis, H2 is used only for the low-momentum data. With a timing resolution of $\sigma_{H2} \sim 90$ ps, protons and pions can be identified up to 7.1 GeV/c and 4.2 GeV/c, respectively, with a 2σ separation. Above 7.1 GeV/c, an asymmetric PID cut is applied and the proton yields in the overlap region are estimated by assuming a symmetric PID distribution about the mean proton mass-squared value. Between 7.9 GeV/c and 9 GeV/c, the Cherenkov threshold for protons, the RICH detector is used to determine the kaon contamination of the proton spectrum. At 9 GeV/c the contamination of protons by kaons is estimated to be less than 6%. Above 9 GeV/c, protons are identified by using the RICH to veto pions and kaons. To identify pions, the RICH is used directly to separate pions from kaons from a momentum of 2.5 GeV/c up to 20 GeV/c. The invariant differential yields $\frac{1}{2\pi p_T}\frac{d^2N}{dp_T dy}$ (and $\frac{1}{2\pi p_T}\frac{d^2N}{dp_T d\eta}$ at forward rapidity) were constructed for each spectrometer setting. As discussed in [19], the differential yields were corrected for geometrical acceptance, tracking and PID inefficiencies, in-flight decay of pions, and the effects of absorption and multiple scattering. The pion contamination from hyperon (Λ) and neutral $K^0_S$ decays was investigated in [20] and found to be less than 5% in the MRS and 7% in the FS, respectively. The contribution to the proton spectra from Λ decays was estimated with a GEANT [21] simulation, in which an exponential distribution in $p_T$, with inverse slope taken from the PHENIX and STAR
measurements [22,23] for both (anti-)protons and (anti-)lambdas, was generated for several spectrometer settings. Taking the ratio of $\Lambda(\bar{\Lambda})$ to $p(\bar{p})$ yields of 0.89 (0.95) [22] in 0-10% central Au+Au and 0.45 (0.55) [23] in p+p collisions at $\sqrt{s_{NN}} = 200$ GeV, and assuming a constant behavior in the rapidity interval $|y| \leq 2.2$ as indicated by the HIJING model [24], the fraction of protons originating from $\Lambda(\bar{\Lambda})$ decays reaches a maximum of about 35-40% in central Au+Au and 27-30% in p+p collisions and decreases with $p_T$. In the following, a correction for feed-down from the (anti-)lambda decays has been applied, whereas the contamination of pions due to weak decays has not been corrected for but is accounted for in the systematic errors.
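To make the time-of-flight PID procedure concrete, the sketch below shows how a mass-squared value follows from momentum and velocity, $m^2 = p^2(1/\beta^2 - 1)$, and how a 2σ window selects a species. This is a minimal illustration: the function names, the $m^2$ resolution value, and the sample tracks are assumptions, not the BRAHMS calibration.

```python
import numpy as np

PION_M2 = 0.1396**2    # pion mass squared, GeV^2/c^4
PROTON_M2 = 0.9383**2  # proton mass squared, GeV^2/c^4

def mass_squared(p, beta):
    """m^2 = p^2 (1/beta^2 - 1), with p in GeV/c and beta = v/c from time of flight."""
    return p**2 * (1.0 / beta**2 - 1.0)

def select_species(p, beta, m2_mean, m2_sigma, n_sigma=2.0):
    """Return a boolean mask keeping tracks within n_sigma of the species' m^2 mean."""
    m2 = mass_squared(np.asarray(p), np.asarray(beta))
    return np.abs(m2 - m2_mean) < n_sigma * m2_sigma

# Illustrative tracks only, with an assumed m^2 resolution of 0.05 GeV^2/c^4:
p = np.array([1.2, 1.5, 2.8])
beta = np.array([0.993, 0.999, 0.948])
print(select_species(p, beta, PROTON_M2, m2_sigma=0.05))  # only the last track is proton-like
```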
Particle spectra
The top row of Figure 1 shows the $p_T$ spectra of charged pions (left panel) and protons (right panel) at mid-rapidity in 0-10% central Au+Au and p+p collisions at $\sqrt{s_{NN}} = 200$ GeV. Also shown in the left panel of the figure is the measured spectrum of $(\pi^+ + \pi^-)/2$ in p+p collisions, where pions could only be identified up to 1.5 GeV/c with the TOFW in the 2003 p+p runs. We thus constructed a reference spectrum, shown as a solid line, by dividing the neutral pion spectrum in p+p collisions measured by PHENIX [25] by the spectrum from a PYTHIA simulation [26] in the same rapidity range and then multiplying the result by the $(\pi^+ + \pi^-)/2$ spectrum from PYTHIA. The spectra of (anti-)protons in p+p collisions were measured by the BRAHMS spectrometer, but with a smaller $p_T$ coverage than in Au+Au collisions due to a worse TOF resolution in the p+p runs. The spectra have been corrected for the trigger inefficiency [13] and fitted with an exponential function, shown as solid lines, with a rapidity density and inverse slope parameter of 0.101 ± 0.004 (0.098 ± 0.004) and 0.304 ± 0.005 (0.285 ± 0.005) GeV for protons (anti-protons), respectively. The error bars are statistical only. The systematic errors in the measured spectra, which come from the uncertainties in the momentum determination, the time-of-flight measurements and ring-radius reconstruction procedures, and the uncertainties in the correction estimates, are estimated to be less than 15% for pions and 18% for (anti-)protons. The systematic error in the reconstructed reference spectrum for charged pions is estimated to be less than 19%.
The bottom row of Figure 1 shows the $p_T$ spectra for $\pi^-$ (left panel) and $\bar{p}$ (right panel) at forward rapidity $\eta = 2.2$ in 0-10% central Au+Au and p+p collisions at $\sqrt{s_{NN}} = 200$ GeV. Solid lines are fits to the $\pi^-$ and $\bar{p}$ spectra in p+p collisions. The spectra are constructed in terms of $d^2N/dp_T d\eta$ because the rapidity coverages of the FS at 12 degrees for pions and protons are different, making a comparison of anti-proton to pion yields difficult. In addition, since the Jacobian effect is largest at mid-rapidity and becomes rather small at larger rapidities in the intermediate $p_T$ range on which this paper focuses, we expect the conclusions drawn from the spectra expressed in terms of $d^2N/dp_T d\eta$ to be the same as those from $d^2N/dp_T dy$.
Nuclear modification factor
Figure 2 shows the nuclear modification factors $R_{AuAu}$ for $\pi^-$ and $\bar{p}$ at mid-rapidity and $\eta = 2.2$. Similar to the unidentified charged hadrons [4] at both mid-rapidity and forward rapidity, $R_{AuAu}$ for charged pions increases monotonically up to $\sim 1.5$ GeV/c and levels off at a value below unity above 1.5 GeV/c, indicating that charged pion yields are suppressed with respect to p+p collisions at intermediate $p_T$. Furthermore, the $\pi^-$ yields at forward rapidity show a similar or even stronger suppression, indicating that nuclear effects other than parton energy loss (jet quenching) might be contributing to the strong suppression. The suppression at mid-rapidity around $p_T \sim 2$ GeV/c is smaller (by about 30%) than the suppression that has been reported for neutral pions [27] and that is seen at forward rapidity. This difference can, to a large extent, be attributed to the construction of the reference spectrum and has been accounted for in the systematic error for $\pi^-$ at mid-rapidity. Another interesting feature shown in the figure is that the anti-proton yields at both mid-rapidity and $\eta = 2.2$ are not suppressed at $p_T > 1.5$ GeV/c. Figure 3 shows the $\bar{p}/\pi^-$ ratios at both rapidities. In these ratios, most systematic errors cancel out. The remaining systematic errors, arising from PID efficiencies, acceptance corrections, corrections for nuclear interactions with the detector, etc., are estimated to be less than 12% at both $y = 0$ and $\eta = 2.2$. Also shown in the figure are the corresponding ratios for p+p collisions at $\sqrt{s_{NN}} = 200$ GeV. There is a clear increase of the $\bar{p}/\pi^-$ ratios at intermediate $p_T$ in central Au+Au collisions relative to the level seen in p+p collisions (see also [8,28]). This enhancement is most likely due to the interplay of several final-state effects and possibly a new hadronization mechanism other than parton fragmentation. Calculations based on a parton recombination scenario [12] with collective flow at the partonic level appear to be able to qualitatively describe the data at mid-rapidity.
Summary
In summary, the BRAHMS measurements demonstrate a significant suppression of charged pions at intermediate $p_T$ at both mid-rapidity and forward rapidity for 0-10% central Au+Au collisions at $\sqrt{s_{NN}} = 200$ GeV. Such a strong suppression is believed to be caused primarily by partons losing energy when traversing the partonic (i.e., characterized by color degrees of freedom) medium created in central Au+Au collisions. The strong $\pi^-$ suppression at forward rapidity suggests that the hot dense partonic medium may also exist in the forward-rapidity region and that other nuclear effects, such as gluon saturation, might contribute to the suppression. However, the suppression is not observed for (anti-)protons at intermediate $p_T$ at either mid-rapidity or forward pseudorapidity $\eta = 2.2$. The $\bar{p}/\pi^-$ ratios in central Au+Au collisions show an enhancement of (anti-)proton production relative to p+p collisions at intermediate $p_T$. All these observations are consistent with a picture in which a dense, strongly interacting partonic matter with strong collective flow is most likely formed in central Au+Au collisions over a large rapidity range, which results in the strong suppression of charged pion yields and boosts the protons to higher transverse momentum.
Fig. 1. Top row: $p_T$ spectra of charged pions (left panel) and protons (right panel) at mid-rapidity in 0-10% central Au+Au and p+p collisions at $\sqrt{s_{NN}} = 200$ GeV. The error bars are statistical only. The systematic errors are estimated to be less than 15% for pions and 18% for (anti-)protons. For the reference spectrum the systematic error is estimated to be less than 19%. For clarity, some spectra are scaled vertically as quoted. Bottom row: $p_T$ spectra for $\pi^-$ and $\bar{p}$ at forward rapidity $\eta = 2.2$ in 0-10% central Au+Au and p+p collisions at $\sqrt{s_{NN}} = 200$ GeV. The systematic errors are estimated to be 14% for pions and 17% for anti-protons.
Fig. 2. Nuclear modification factors for $\pi^-$ and $\bar{p}$ measured for 0-10% central Au+Au collisions at $\sqrt{s_{NN}} = 200$ GeV at mid-rapidity (left panel) and $\eta = 2.2$ (right panel). Error bars represent statistical errors; the systematic errors are indicated by horizontal lines. The dotted and dashed lines indicate the expectations of participant scaling and binary scaling, respectively. The shaded bars represent the systematic errors associated with the determination of these quantities. Systematic errors other than the uncertainties in the $N_{bin}$ determination are estimated to be 20%, except for $\pi^-$ at mid-rapidity, where they are around 24%.
Fig. 3. $\bar{p}/\pi^-$ ratios at both mid-rapidity and $\eta = 2.2$ for 0-10% central Au+Au collisions at $\sqrt{s_{NN}} = 200$ GeV. The error bars show the statistical errors only. The systematic errors are estimated to be less than 12% at both $y = 0$ and $\eta = 2.2$. The corresponding ratios in p+p collisions at $\sqrt{s_{NN}} = 200$ GeV are sketched as solid and dotted lines, respectively.
"Physics"
] |
A Synchronous Prediction Model Based on Multi-Channel CNN with Moving Window for Coal and Electricity Consumption in Cement Calcination Process
The precision and reliability of the synchronous prediction of multiple energy consumption indicators, such as electricity and coal consumption, are important for the production optimization of industrial processes (e.g., in the cement industry), because the coupling relationship between the two indicators is lost when they are forecast separately. However, the time lags, coupling, and uncertainties of production variables make multi-indicator synchronous prediction difficult. In this paper, a data-driven forecasting approach combining a moving window with multi-channel convolutional neural networks (MWMC-CNN) is proposed to predict electricity and coal consumption synchronously, in which the moving window is designed to extract the time-varying delay features of the time series data to overcome their impact on energy consumption prediction, and the multi-channel structure is designed to reduce the impact of redundant parameters between weakly correlated variables on energy prediction. Experimental results obtained with actual raw data from a cement plant demonstrate that the proposed MWMC-CNN structure performs better than convolutional neural networks without the moving-window multi-channel combination.
Introduction
The cement industry is considered an energy-intensive sector [1]. The energy consumption of the entire cement manufacturing process depends largely on electricity and coal consumption [2]. Accurate and timely energy consumption prediction is of great significance for reasonable energy scheduling, energy saving, and a reduction in production costs [3,4]. As one of the most important processes of cement production, the cement calcination process calcines ground raw material into cement clinker. Traditionally, electricity and coal consumption are measured mainly by sensors and weighing machines, respectively; however, the changing trends and coupling relationship of electricity and coal consumption in cement production cannot be detected this way, which limits their usefulness in guiding energy scheduling and production optimization. Because the coupling relationship between electricity and coal consumption is unknown, the optimal solution for the optimal control of the production process may not lie in the solution space, which hinders the monitoring and optimization of the cement manufacturing process and the formulation of a process control strategy. Consequently, it is necessary to investigate a method that can realize the synchronous prediction of electricity and coal consumption.
Accurate and reliable synchronous prediction of multiple energy consumption indicators is complicated by the fact that the cement calcination process involves multiple production process variables. The relationships between cement production process variables and energy consumption indicators are characterized by time-varying delay, uncertainty, and nonlinearity. For example, as the amount of raw material increases, the electricity and coal consumption required for the calcination process increase. It takes 50-60 minutes, varying with the operating conditions, for the cement raw material to be converted into calcined clinker. As a result, the electricity and coal consumption required for the calcination of raw material in the rotary kiln cannot be calculated from the current amount of raw material, and can only be adjusted manually by the operators based on historical experience. Using the current values of the process variables to predict the energy consumption indexes will result in data misalignment and low accuracy. Therefore, the impact of the time-varying delay on the prediction of electricity and coal consumption in the cement calcination process needs to be considered. In production, a large amount of process data is recorded, which makes it possible to predict electricity and coal consumption synchronously by data-driven methods.
On the basis of the collection of abundant industrial data and the development of data-driven methods, various data-driven forecasting approaches have been proposed for prediction or diagnosis in the industrial field, for example, fault detection using statistical regression [5], electricity system load forecasting [6], tendency prediction of the blast furnace hearth thermal state using support vector machines (SVM) [7], and electricity consumption prediction [8]. Furthermore, in the cement manufacturing industry, power consumption has been predicted by a multiple non-linear regression algorithm [9] and an empirical mode decomposition based hybrid ensemble model [10].
However, the above investigations all focused on a single index prediction strategy, which is insufficient for coal and electricity synchronous prediction in the cement calcination process because the coupling relationship of electricity and coal consumption cannot be obtained.The optimal control solution of the cement calcination process obtained from single index prediction model may not be in the actual solution space, which is invalid as it cannot be used to optimize the cement calcination process.
Although many multi-task studies have been conducted in fields such as forecasting the maximum number of connections in wireless communication [11], multi-time-scale and multi-component prediction of solar radiation [12], quality monitoring of wind turbine blade icing processes [13], multi-task-oriented production layout in manufacturing factories [14], and multi-objective task scheduling in cloud computing [15], obstacles caused by time-varying delay and parameter redundancy remain. Time-varying delay between variables is considered one of the characteristics of the process industry, and it is difficult to build accurate prediction models without considering it. The operating conditions change constantly in actual cement production, which causes the time delays between the process variables and the target variables to vary over time. Thus, a fixed timing match between process variables and target predictors loses its effect due to the time-varying delay. Parameter redundancy is another feature. Synchronous prediction models have advantages that make them indispensable; however, in the simultaneous prediction of multiple energy indicators of the cement calcination process, there are weak correlations between some input process variables, which degrade the prediction accuracy when all variables are used as input data simultaneously.
According to the above analyses, a multi-indicator prediction model was established to simultaneously predict the coal and electricity consumption of the cement calcination process. On one hand, this reduces the modeling burden in the practical application of industrial systems, which contributes to the efficient use of industrial system resources. On the other hand, it provides a precise prediction of coal and electricity consumption in the cement calcination process, which contributes to reasonable production scheduling and energy planning. In addition, the multi-output prediction model can well capture the strong coupling relationship between the prediction indicators, which is consistent with the process mechanism of cement calcination. Therefore, research into this issue is of great significance for promoting the development of the cement industry.
The proposed model combines a moving window with convolutional neural networks (CNN) to reduce the negative impact of time-varying delay on the prediction of cement energy consumption. The moving window integrates the time-varying delay information hidden in the time series data into the input layer of the CNN. Owing to its powerful feature extraction capability, CNN is used not only in image processing [16] but also in the manufacturing industry [17]; here it is used to extract the data characteristics of the variables of the cement calcination process.
The multi-channel structure of the CNN was designed in the data feature extraction layers of the proposed model to reduce the negative impact of the redundant parameters of weakly correlated variables on cement energy consumption prediction: a single-index prediction strategy is first used to predict electricity and coal consumption separately, with the purpose of reducing the coupling, and the coupling relationship is then rebuilt before the final output in order to fuse the variables' characteristics. The multi-channel design in this paper not only preserves the practical advantages of multi-output models, but also reduces the negative impact of parameter redundancy on energy consumption prediction.
The combination of a moving window and a multi-channel structure with CNN contributes the two main features of this paper: (1) A method combining a moving window and multi-channel convolutional neural networks is proposed to accommodate the time-varying delay information implied in the time series data, according to the analysis of the mechanism of cement calcination. The time-varying delay is a phenomenon characteristic of process industries such as the cement industry. The variable data of the preceding period are accommodated by the moving window, which makes feature extraction by a CNN with a one-dimensional convolution kernel more efficient. The adoption of CNN effectively reduces the excessive number of parameters introduced by moving windows and avoids complex timing matching.
(2) The multi-channel structure of MWMC-CNN was designed to predict coal and electricity consumption simultaneously, which effectively addresses the parameter redundancy problem caused by correlated variables of the cement calcination process.In addition, the adverse impact on energy consumption indicators of the synchronous prediction caused by the coupling relationship of the cement production process variables is also eliminated.Finally, the coupling relationship of electricity and coal consumption was rebuilt to provide references for the optimization of the cement calcination process.
The actual raw data of the cement plant were used to train the model and the experimental results demonstrate its superiority.
The rest of this paper is organized as follows. Section 2 reviews the existing literature related to data-driven forecasting. Section 3 describes the technological process of a cement rotary kiln and the selection of prediction variables. The proposed MWMC-CNN is detailed in Section 4. In Section 5, models with different structures are simulated and the prediction results are compared with the least squares support vector machine (LSSVM), CNN, and long short-term memory (LSTM). Our conclusions and future research directions are given in Section 6.
Related Work
Due to the extensive use of sensors, a large amount of industrial data is recorded, which creates the basis for the optimization and operational control of systems by data-driven methods. There have been many data-driven prediction investigations for energy consumption based on statistical regression, SVM, tree-based methods, ANN, and LSTM.
Statistical regression is a direct data-driven method: a predictive modeling technique that studies the relationship between independent and dependent variables, widely used for short-term predictions in many industries. Bianco applied linear regression to predict urban power consumption [18] and Catalina applied it to forecast the thermal energy consumption of buildings [19]. The advantage of linear regression is its simple structure, but its capacity for nonlinearity is limited and it is easily affected by outliers. Compared with linear regression, principal component analysis and partial least squares regression are faster, handle nonlinearity better, and perform well in short-term load forecasting of the power system [20]. Although statistical regression methods are simple and easy to implement, they are susceptible to outliers and, when dealing with complex problems, prone to overfitting. In addition, statistical regression methods cannot eliminate the negative impact of time-varying delays on prediction accuracy.
SVM is a typical machine learning method with strong nonlinear capability; its computational complexity depends on the number of support vectors, so it performs well even when the sample space has a high dimension [21]. These advantages show that it can be used to conduct energy analysis on continuous time series data in industrial fields. Based on SVM, Wang achieved the prediction of hydropower consumption [22] and Assouline estimated the solar photovoltaic potential [23]. Furthermore, some investigations have been conducted to optimize SVM, in which the optimal parameters of each kernel function [24] and optimization algorithms [25] were obtained. Although SVM can deal with nonlinear regression problems and its parameters can be adjusted by corresponding adaptive algorithms, its performance is limited when the amount of training data is large [26]. The time series data with time-varying delays recorded from a production line are usually very large in scale, so support vector machines cannot perform well on these data.
Tree-based methods divide the predictor space into sub-regions and then obtain predictions based on statistical indicators; they can be used not only for classification but also for regression. Tree-based models are widely used to predict critical bus voltage and load in PV buses [27], the heat load of residential buildings [28], and the hourly performance of ground source heat pump systems [29]. The use of tree-based methods in industry has also been investigated, for example, for directed edge weight prediction in the industrial Internet of Things [30] and the analysis of industrial data in cognitive manufacturing [31]. Regression trees split the training dataset into distinct, non-overlapping regions, which is inappropriate for energy consumption forecasting in cement manufacturing because of the strong coupling in cement production data.
ANN is a data-driven artificial intelligence method with a complex structure that is used for information processing problems that cannot be solved by theoretical analysis [32]. It is applied to industrial process analysis because of its advantages of adaptive learning, fast optimization, and strong nonlinear capability [33]. ANN has been used to forecast annual electricity consumption [34,35], daily and hourly household electricity consumption [36], and day-ahead electricity prices [37]. Moreover, it has been used for the prediction of Polish gas consumption [38]. Although the above studies fully utilized the advantages of artificial neural networks, appropriately selected the variables associated with the target variables, established models for the corresponding practical situations, and achieved certain results, an ANN takes a set of handcrafted features as input, and it is difficult to obtain the features most relevant to the estimation task [39].
LSTM is good at time series prediction because of its special structure, which accounts for the time factor. Thus, LSTM has been applied to predict energy consumption, for example, gas consumption [40]. Besides, it is widely utilized for electricity consumption prediction, such as the energy consumption of housing [41] and commercial buildings [42], medium- and long-term power forecasting [43], and electricity load forecasting in the electric power system [44]. Although all of the above investigations achieved electricity consumption prediction and serve as references for electric departments or companies making decisions on power production and dispatching, they forecast only a single index, which is at odds with the purpose of optimizing the industrial production process.
Deep learning not only shows excellent performance in image analysis [45], speech recognition [46], and text understanding [47], but it is also applied in the industrial field [48], which inspires us to apply deep learning to predict electricity and coal consumption synchronously.
CNN is a popular deep learning structure, first proposed by Y. LeCun et al. (1998) [49]. The characteristics of weight sharing, local connections, and parameter reduction through pooling give it computational advantages in processing big data [50], and additional feature engineering on the data is unnecessary [51]; thus, some scholars have applied CNN to industrial analysis, such as big data classification for the Internet of Things [52] and crack defect detection in civil infrastructure [53]. Since the inputs in those two studies are two-dimensional data, which differ from one-dimensional time series data, directly applying such structures may forfeit the advantages of one-dimensional time series data [54]. In response, convolutional neural networks with one-dimensional convolution kernels have been used in industrial fields, such as for industrial machine health monitoring [55] and fault diagnosis in gearbox condition monitoring [56]. In both studies, the CNN with a one-dimensional convolution kernel was more effective than the traditional structure in extracting the characteristics of time series data, which inspired this paper, given the nonlinear and non-stationary nature of synchronous energy consumption prediction in cement production.
Process Analysis and Variable Selection of Cement Industry
The cement calcination system is the main energy-consuming subsystem in the cement production process. For this subsystem, the relevant variables reflecting energy consumption and the time range of the moving window were selected for the proposed method. This section describes the mechanism of the cement calcination process and the reasons for selecting the parameters required for the model.
The cement calcination process can be roughly divided into three parts: preheating and pre-decomposition, calcination, and cooling, during which the cement raw material is converted into calcined clinker. The cement calcination process is shown in Figure 1. First, the cement raw material is sent into the cyclone preheater for preheating. The cement raw material in the preheater is held in suspension by the action of the ID fan, which increases the contact area between the gas and the raw material. The high-temperature gas discharged from the rotary kiln and the decomposition furnace exchanges heat with the cement raw material sufficiently, and the larger contact area contributes to the decomposition of carbonate in the raw material. Second, the cement raw material is sent into the rotary kiln, which rotates at a constant speed for calcination; this contributes to the rapid decomposition of the carbonate in the cement raw material and the progress of other physical and chemical reactions in the rotary kiln. Finally, the high-temperature clinker from the kiln is cooled by the grate cooler to a temperature that can be withstood by the subsequent process. It takes about 50-60 minutes for the cement raw material to be converted into calcined clinker. Therefore, the electricity consumption per ton of cement production (ECPC) and the coal consumption per ton of cement production (CCPC) of the current batch are reflected by the production process variables of the past hour.
As shown in Figure 1, the main energy consumption of the cement calcination process consists of coal consumption and electricity consumption; the different energy consumptions are marked with different colored fonts. A large amount of coal is consumed to maintain the high temperature in the entire cement rotary kiln and in the preheater of the decomposition furnace. The cement rotary kiln tumbles a large amount of raw material and the ID fan transports the high-temperature gas, so these processes consume a great deal of electric energy. Furthermore, the decomposition furnace and the rotary kiln both consume large amounts of coal to maintain their temperatures. Therefore, some production process variables affect both electricity and coal consumption, which creates an uncertain coupling relationship between the two. In addition, the future energy consumption of the cement calcination system is related to past working conditions, meaning that the energy consumption of the past period affects that of the future.
From the above analysis, we can see that the electricity and coal consumption of the cement calcination process are affected by the amount of cement raw material and the energy consumption of the past period. Therefore, the amount of cement raw material and the ECPC and CCPC at historical moments were selected for the prediction of coal and electricity consumption. In addition, the coal consumption is affected by the temperatures in the preheater, decomposition furnace, and rotary kiln, so the process variables selected for coal consumption prediction were as follows: primary cylinder temperature, decomposition furnace coal consumption, decomposition furnace temperature, kiln temperature, secondary air temperature, and kiln head coal consumption. Electricity consumption is influenced by process variables such as kiln current and fan speeds, so the ID fan speed, EP fan speed, and kiln average current were selected for the prediction of electricity consumption.
The Establishment of the MWMC-CNN Model
According to the above analysis, time-varying delay, parameter redundancy, and coupling are the main barriers to forecasting coal and electricity consumption simultaneously. We therefore propose the MWMC-CNN structure, which employs a multi-channel CNN model and a moving window technique to solve the time-varying delay and data redundancy problems of energy consumption prediction in the cement calcination process. The moving window technique is employed to establish the input layer of the MWMC-CNN model to solve the time-varying delay problem, and the multi-channel structure is adopted in the proposed model instead of the traditional structure to solve the parameter redundancy problem. This section describes the components of the proposed MWMC-CNN model and the specific steps of the energy prediction algorithm for the cement calcination process.
The Structure of the MWMC-CNN Model
The structure of the proposed model can be divided into three parts. The first part is a time-varying delay data input layer, where the cement calcination process variables are processed as time series. The second part consists of the data feature extraction layers, in which different cement calcination process variables are convolved and pooled separately. The third part is a regression prediction layer that integrates the output data of the two independent channels and finally outputs the predicted energy consumption values. The Adam algorithm then back-propagates the errors between the output values of the MWMC-CNN and the training labels through the layers of the MWMC-CNN, modifying the weights until the errors are less than the expected values. The structure of the MWMC-CNN model is shown in Figure 2. The first step is to select the key variables that affect the energy consumption of the cement calcination process and to normalize the selected variable data $X$. According to the analysis of the energy consumption of the cement calcination process in Section 3, the amount of raw material ($X_1$), the ECPC at historical moments ($X_2$), and the CCPC at historical moments ($X_3$) affect future coal and electricity consumption. Therefore, they were selected as input variables in both channels.
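The normalization step can be illustrated with the minimal Python sketch below, which assumes common min-max scaling to [0, 1]; the choice of scaler and the function name are illustrative assumptions, and z-score standardization would be a drop-in alternative.

```python
import numpy as np

def min_max_normalize(X):
    """Scale each column (process variable) of X to the [0, 1] range.

    X: array of shape (n_samples, n_variables), raw process data.
    Assumes min-max scaling; the paper's exact normalization may differ.
    """
    X = np.asarray(X, dtype=float)
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    # Guard against constant columns to avoid division by zero.
    span = np.where(x_max > x_min, x_max - x_min, 1.0)
    return (X - x_min) / span

# Illustrative raw data only: three samples of [raw material amount, ECPC, CCPC].
X = [[210.0, 32.1, 108.5], [198.0, 33.4, 112.0], [205.0, 31.8, 110.2]]
print(min_max_normalize(X))
```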
The six variables selected for the A channel were: ID fan speed ($X^A_1$), kiln average current ($X^A_2$), EP fan speed ($X^A_3$), the amount of raw material ($X_1$), the ECPC at historical moments ($X_2$), and the CCPC at historical moments ($X_3$). For the B channel, the nine selected variables were: primary cylinder temperature ($X^B_1$), decomposition furnace coal consumption ($X^B_2$), decomposition furnace temperature ($X^B_3$), kiln temperature ($X^B_4$), secondary air temperature ($X^B_5$), kiln head coal consumption ($X^B_6$), the amount of raw material ($X_1$), the ECPC at historical moments ($X_2$), and the CCPC at historical moments ($X_3$).
Second, the time interval of the data selection window is chosen. As shown in Figure 3, the length of the data selection window is set to s, which means that time series data of the cement production process variables containing the time-varying delay characteristic are sent to the input layer. The variable data of the past s time stamps correspond to the predicted value of the energy consumption indicators p time stamps ahead; that is, the variable data in X over the period t−s to t are used to predict the energy consumption indicators at time t + p. The size of the moving window should be larger than the span of the time-varying delay, so that the time delay information contained in the time series data can be extracted by the model and its influence on prediction accuracy eliminated. Finally, the data selection window slides over the input time series data one time unit at a time to construct the time series moving window. As shown in Figure 3, different rows represent different variables of the A input channel. During model training, the cement calcination process variable data are continuously selected by the rolling data selection window, so that there are always m groups of data, arranged into a matrix in input order. For instance, the amount of raw material ($X_1$) selected by the moving window can be written as

$x_1 = [x_1(t-s+1), x_1(t-s+2), \ldots, x_1(t)] \qquad (2)$

The other variables are selected in the same form as Equation (2). The data selected for the A input channel by the time series moving window are

$x^{(s\times 6)} = [x^A_1, x^A_2, x^A_3, x_1, x_2, x_3]$

where $x^{(s\times 6)}$ is the input matrix of the A channel with s rows and 6 columns; that is, the input sequence contains 6 variables over the past s sampling times.
The data selected for the B input channel by the time series moving window are

$x^{(s\times 9)} = [x^B_1, x^B_2, x^B_3, x^B_4, x^B_5, x^B_6, x_1, x_2, x_3]$

where $x^{(s\times 9)}$ is the input matrix of the B channel with s rows and 9 columns; that is, the input sequence contains 9 variables over the past s sampling times.
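To make the window construction concrete, the following is a minimal NumPy sketch of the moving-window pairing described above. The array shapes, window length s, and horizon p are illustrative assumptions, not values from the paper.

```python
import numpy as np

def build_windows(data: np.ndarray, targets: np.ndarray, s: int, p: int):
    """Construct moving-window samples.

    data:    (T, V) array of process variables (V = 6 for the A channel,
             V = 9 for the B channel).
    targets: (T, 2) array of [ECPC, CCPC] values.
    Returns X of shape (m, s, V) and y of shape (m, 2), where each window
    of s past time stamps is paired with the targets p steps ahead.
    """
    T = data.shape[0]
    X, y = [], []
    for t in range(s, T - p):
        X.append(data[t - s:t])      # past s samples of every variable
        y.append(targets[t + p])     # energy indicators p steps ahead
    return np.asarray(X), np.asarray(y)

# Example with synthetic data: 12,500 samples, 6 A-channel variables.
data = np.random.rand(12_500, 6)
targets = np.random.rand(12_500, 2)
X, y = build_windows(data, targets, s=40, p=1)  # s and p are illustrative
print(X.shape, y.shape)  # (12459, 40, 6) (12459, 2)
```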
The Structure of the MWMC-CNN Data Feature Extraction Layer
Weak correlation exists among some of the input variables, which reduces the prediction accuracy. The multi-channel CNN structure was designed in the data feature extraction layers of the proposed model to solve the parameter redundancy problem between weakly correlated variables.
As shown in Figure 4, the variable data of the cement calcination process selected by the moving window form the input time series data. The time series data then enter the data feature extraction layer of the proposed model, which performs convolution and pooling operations on the selected time series data through two independent channels. The A channel is used to extract the time series information related to electricity consumption during the cement calcination process, and the B channel is used for coal consumption. Convolution on time series data using one-dimensional convolution kernels is efficient, so the one-dimensional convolution kernel $w_u$, rather than a square kernel, is used by the proposed model to extract the time delay information implied in the time series data of the cement calcination process. The size of a one-dimensional convolution kernel is $h \times 1$ and the convolution stride is 1, which means that a kernel of length h moves by one step at a time. For the A channel, the input time series data are convolved by n one-dimensional convolution kernels:

$a_i = \sigma_{\mathrm{relu}}\big(w_i * x^{(s\times 6)} + b_i\big), \qquad i = 1, \ldots, n,$

where $w_i$ is the i-th convolution kernel, $b_i$ is the bias of $w_i$, $x^{(s\times 6)}$ is the input time series data with dimension $s \times 6$ (s rows and 6 columns), and $a_i$ is the data after convolution and activation. $\sigma_{\mathrm{relu}}(\cdot)$ is the relu activation function, defined as

$\sigma_{\mathrm{relu}}(x) = \max(0, x).$

The input time series data are convolved by the n kernels and activated by the relu function, and n feature maps are output as the input of the pooling layer. Every feature map is equivalent to a data matrix. To compress the feature maps, one-dimensional pooling kernels are used:

$q_i^{(((s-h+1)/k)\times 6)} = \mathrm{pool}_k(a_i),$

where $a_i$ is the output of the convolution layer and $q_i^{(((s-h+1)/k)\times 6)}$ is the data after pooling by the one-dimensional pooling kernels. The proposed model adopts one-dimensional average pooling, which takes the average of every k values in a column with a pooling stride of k. In the proposed method, the dimension of the input feature maps is $(s-h+1) \times 6$ and the dimension of the output feature maps is $((s-h+1)/k) \times 6$, which means that the time series data are compressed vertically by a one-dimensional pooling kernel of size $k \times 1$ and stride k. The length of the feature map matrices is reduced while their number is unchanged after one-dimensional pooling.
The above processes constitute one convolution layer and one pooling layer of the A channel; the full channel is constructed by repeating these processes multiple times. The structure of the B channel is the same as that of the A channel except for the size of the input data matrix.
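To make the channel structure concrete, the following is a minimal PyTorch sketch (the framework choice is an assumption; the paper does not state one). It realizes the column-preserving $h \times 1$ kernels as (h, 1) 2-D convolutions and the $k \times 1$ average pooling described above; the kernel count, depth, and window length are illustrative (the Results section reports six convolution layers with 10 × 1 kernels as the tuned configuration).

```python
import torch
import torch.nn as nn

class ChannelCNN(nn.Module):
    """One feature-extraction channel: repeated (h x 1) convolution plus
    (k x 1) average pooling, keeping the variable columns separate."""
    def __init__(self, n_kernels=16, h=10, k=2, n_layers=3):
        super().__init__()
        layers, in_ch = [], 1
        for _ in range(n_layers):
            layers += [nn.Conv2d(in_ch, n_kernels, kernel_size=(h, 1)),
                       nn.ReLU(),
                       nn.AvgPool2d(kernel_size=(k, 1), stride=(k, 1))]
            in_ch = n_kernels
        self.net = nn.Sequential(*layers)

    def forward(self, x):               # x: (batch, 1, s, V)
        return self.net(x).flatten(1)   # -> (batch, T_channel)
```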
The Regression Prediction Layer of MWMC-CNN
Although the interference problem of weakly correlated variables does not exist in the regression prediction layer, it does exist in the convolution and pooling process, which is why the multi-channel structure was adopted in the proposed model. The fully connected layer integrates the time-varying delay features extracted from the two CNN channels, rebuilding the coupling relationship between coal and electricity consumption. Therefore, the time series data that have been convolved and pooled multiple times in each channel are flattened into a column as the input of the fully connected layer. As shown in Figure 2, the A channel outputs $T_1$ neurons and the B channel outputs $T_2$ neurons, so the fully connected layer receives $T_1 + T_2$ inputs. Each input of the fully connected layer is connected to the T neurons that form the output units of the fully connected layer, so every neuron of the fully connected layer has $T_1 + T_2$ weights. The outputs of the fully connected layer are

$y_z = \sigma_{\mathrm{relu}}\Big(\sum_{u=1}^{T_1+T_2} w_{z,u}\,\rho_u + b_z\Big), \qquad z = 1, \ldots, T,$

where $\rho_u$ is the input data of the fully connected layer, $w_{z,u}$ are the weights of the fully connected layer neurons, $b_z$ is the bias corresponding to neuron z, and $y_z$ is the output of neuron z of the fully connected layer.
The output data of the fully connected layer are linearly weighted and summed to obtain the outputs of the proposed model:

$y_\alpha = \sum_{z=1}^{T} w_{\alpha,z}\,y_z + b_\alpha, \qquad \alpha = 1, 2,$

where $y_\alpha$ represents the predicted values of the proposed model, $y_1$ is the ECPC of the cement calcination process, $y_2$ is the CCPC of the cement calcination process, and $y_z$ is the output of the fully connected layer neurons.
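Continuing the sketch above, the regression prediction layer can be attached as follows; the fully connected width fc_units and all shape parameters are illustrative assumptions.

```python
class MWMCCNN(nn.Module):
    """Two independent channels fused by a fully connected layer, with a
    final linear layer producing the two outputs [ECPC, CCPC]."""
    def __init__(self, s=100, v_a=6, v_b=9, fc_units=64):
        super().__init__()
        self.chan_a = ChannelCNN()
        self.chan_b = ChannelCNN()
        with torch.no_grad():           # infer T1 + T2 from dummy inputs
            t1 = self.chan_a(torch.zeros(1, 1, s, v_a)).shape[1]
            t2 = self.chan_b(torch.zeros(1, 1, s, v_b)).shape[1]
        self.fc = nn.Sequential(nn.Linear(t1 + t2, fc_units), nn.ReLU())
        self.out = nn.Linear(fc_units, 2)   # linear weighted sum -> y1, y2

    def forward(self, xa, xb):
        z = torch.cat([self.chan_a(xa), self.chan_b(xb)], dim=1)
        return self.out(self.fc(z))

model = MWMCCNN()
y = model(torch.randn(8, 1, 100, 6), torch.randn(8, 1, 100, 9))
print(y.shape)   # torch.Size([8, 2])
```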
Parameter Adjustment Algorithm of MWMC-CNN
Traditional CNNs usually use the stochastic gradient descent method for back propagation, but it is prone to overfitting and to local optima. To prevent these disadvantages, it can be replaced by the Adam algorithm [57], which iteratively updates the neural network weights based on the training data. Adam assigns independent adaptive learning rates to the parameters w and b by calculating the first and second moment estimates of the gradient. In this paper, the Adam algorithm performs backpropagation by adjusting the weights w and biases b of the proposed multi-channel convolutional neural network model. For this regression prediction problem, it is convenient to use the mean squared error as the objective function, so that the gradient increases or decreases with the error, which contributes to the convergence of the model. This paper studies the synchronous regression prediction of coal and electricity consumption in the cement calcination process, so the MSE (mean squared error) is used as the objective function of the proposed model:

$J(w, b) = \frac{1}{2}\sum_{\alpha=1}^{2}\big(\hat y_\alpha - y_\alpha(w, b; x)\big)^2,$

where $\hat y$ represents the true value of the energy consumption in the cement calcination process, $y_\alpha(w, b; x)$ represents the predicted energy consumption of the proposed model, and x represents the time series data of the cement calcination process, which are input into the A channel and B channel of the proposed model to predict the ECPC and CCPC, respectively. The following is the update process of the weights w and biases b by the Adam algorithm. The default parameters of the Adam algorithm are kept: the learning rate r (r = 0.001), the exponential decay rates of the moment estimates $\beta_1$ and $\beta_2$ ($\beta_1 = 0.9$, $\beta_2 = 0.999$), and the constant δ ($\delta = 10^{-8}$).
The first and second moment variables and the time step are initialized to zero ($m_0 = 0$, $v_0 = 0$, $t = 0$). Convergence of the error to a set value ε (ε = 0.001) is used as the stopping criterion of the proposed model. In the training process, the time step increases step by step ($t = t + 1$), and at every step L sets of data $\{x^{(1)}, \ldots, x^{(L)}\}$ are randomly selected from the cement calcination process variable training dataset, where $x^{(l)}$ represents a randomly selected set of training samples. The average gradients $g^w_t$ and $g^b_t$ of the L samples at time step t are calculated from the adopted objective function:

$g^w_t = \frac{1}{L}\sum_{l=1}^{L}\nabla_w J(w, b; x^{(l)}), \qquad g^b_t = \frac{1}{L}\sum_{l=1}^{L}\nabla_b J(w, b; x^{(l)}).$

The biased first moment estimates based on the gradient are updated as

$m^w_t = \beta_1 m^w_{t-1} + (1-\beta_1)\,g^w_t, \qquad m^b_t = \beta_1 m^b_{t-1} + (1-\beta_1)\,g^b_t.$

The biased second moment estimates based on the gradient are updated as

$v^w_t = \beta_2 v^w_{t-1} + (1-\beta_2)\,(g^w_t)^2, \qquad v^b_t = \beta_2 v^b_{t-1} + (1-\beta_2)\,(g^b_t)^2.$

The deviations of the first moment estimates are corrected as

$\hat m^w_t = m^w_t/(1-\beta_1^t), \qquad \hat m^b_t = m^b_t/(1-\beta_1^t),$

and the deviations of the second moment estimates are corrected as

$\hat v^w_t = v^w_t/(1-\beta_2^t), \qquad \hat v^b_t = v^b_t/(1-\beta_2^t),$

where $\beta_1^t$ and $\beta_2^t$ represent $\beta_1$ and $\beta_2$ raised to the power t. The correction values of the parameters, $\Delta w_t$ and $\Delta b_t$, are calculated from the above estimates as

$\Delta w_t = -r\,\hat m^w_t/\big(\sqrt{\hat v^w_t} + \delta\big), \qquad \Delta b_t = -r\,\hat m^b_t/\big(\sqrt{\hat v^b_t} + \delta\big).$

The Adam update acts elementwise, so the weight of every neuron of the proposed model is adjusted separately. During training, the parameters are continuously updated by the Adam algorithm until the objective function error is less than the convergence error ε.
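A compact NumPy rendering of one Adam step with the paper's default hyperparameters; the toy quadratic loss in the usage example is purely illustrative.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, r=0.001, b1=0.9, b2=0.999, d=1e-8):
    """One elementwise Adam update (defaults as in the paper)."""
    m = b1 * m + (1 - b1) * grad            # biased 1st moment estimate
    v = b2 * v + (1 - b2) * grad**2         # biased 2nd moment estimate
    m_hat = m / (1 - b1**t)                 # bias corrections
    v_hat = v / (1 - b2**t)
    theta = theta - r * m_hat / (np.sqrt(v_hat) + d)
    return theta, m, v

# Usage on a toy quadratic loss J(theta) = theta^2 / 2, so grad = theta:
theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, grad=theta, m=m, v=v, t=t)
print(theta)  # converges toward 0
```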
Research on MWMC-CNN Algorithm
The above analysis describes the forward propagation process of the proposed MWMC-CNN model, which can be divided into three parts: the moving-window input part, the convolution and pooling part, and the fully connected output part. In the MWMC-CNN training process, the Adam algorithm is used to fine-tune the parameters. This section gives a full description of the pseudo code of the MWMC-CNN model in Algorithms 1 and 2; the details of the formulas in the algorithms are given in Section 4.1.
Algorithm 1 Energy consumption prediction algorithm of MWMC-CNN
Input A: ID fan speed ($X^A_1$), EP fan speed ($X^A_2$), kiln average current ($X^A_3$), the amount of raw material ($X_1$), the ECPC at historical moments ($X_2$), and the CCPC at historical moments ($X_3$). Input B: primary cylinder temperature ($X^B_1$), decomposition furnace coal consumption ($X^B_2$), decomposition furnace temperature ($X^B_3$), kiln temperature ($X^B_4$), secondary air temperature ($X^B_5$), kiln head coal consumption ($X^B_6$), the amount of raw material ($X_1$), the ECPC at historical moments ($X_2$), and the CCPC at historical moments ($X_3$).
(Step 1) Moving-window input: select the variable data of the past s time stamps for each channel with the sliding data selection window (Equation (2)).
(Step 2) Convolution and pooling:
(Step 2.1) Convolution and activation: A channel: $a_i = \sigma_{\mathrm{relu}}(w_i * x^{(s\times 6)} + b_i)$; B channel: the same form with $x^{(s\times 9)}$.
(Step 2.2) Average pooling: compress every feature map with a $k \times 1$ pooling kernel of stride k. Repeat Steps 2.1 and 2.2 several times in each channel.
(Step 3) Full connection and output:
(Step 3.1) Full connection: concatenate the $T_1 + T_2$ channel outputs and apply the fully connected layer.
(Step 3.2) Output: compute the predictions $y_1$ (ECPC) and $y_2$ (CCPC) with the linear output layer.
Results
In this section, we used an MWMC-CNN model with two channels, each containing three convolutional layers and three pooling layers; a CNN model with three convolution layers and three pooling layers; an LSSVM model with p = 0.03 and g = 0.01; and an LSTM model with two LSTM layers, each containing 48 hidden units, to predict the electricity consumption and coal consumption of the cement calcination process. The experimental results were compared and analyzed to verify the superiority of the proposed model. In total, 12,500 groups of data containing the electricity and coal consumption and the cement calcination process variables stated in Section 4.1.1, sampled by sensors on the production line of a cement manufacturing enterprise in China, were selected for the experiment; every variable in every group has one value, so every variable has 12,500 values. A total of 11,500 groups were used for model training, while the remaining 1000 groups were used as the test set.
The root mean square error (RMSE), the mean relative error (MRE), and the mean absolute error (MAE) are used as the indicators of predictive performance:

$\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}(\hat y_i - y_i)^2}, \qquad \mathrm{MRE} = \frac{1}{m}\sum_{i=1}^{m}\frac{|\hat y_i - y_i|}{\hat y_i}, \qquad \mathrm{MAE} = \frac{1}{m}\sum_{i=1}^{m}|\hat y_i - y_i|,$

where $\hat y_i$ is the sample value, $y_i$ is the predicted value, and m is the size of the forecasting sample.
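The three indicators translate directly into NumPy; this sketch follows the definitions above (note the MRE convention of normalizing by the sample value).

```python
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mre(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))
```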
Parameter and Structure Adjustment Experiment of MWMC-CNN
The depth of the CNN and the size of the convolution kernel affect the performance of the proposed model. To find the parameters that make the proposed model perform best, we conducted a parameter adjustment experiment that compares the errors of MWMC-CNN with different depths and different convolution kernel sizes on the actual cement calcination process data. The experimental results are shown in Figure 5.
We performed ten trials on MWMC-CNN with each convolution kernel size. All of the comparison experiments were optimized by the same optimization algorithm with the same parameters, except for the depth and kernel size. The mean values of the prediction errors of MWMC-CNN with six layers are represented by the dark-colored rectangles, and the line segment at the top of each rectangle represents the variance of the experimental prediction error. The corresponding mean values and variances of the experimental prediction errors of MWMC-CNN with four layers are represented by the light-colored rectangles and the line segments above them, respectively.
As shown in Figure 5, the mean value and variance of the experimental prediction error of MWMC-CNN with six convolution layers and a kernel size of 10 × 1 were the smallest. These results show that MWMC-CNN with six convolution layers is more suitable for feature extraction from the time series data of the cement calcination process, and that a 10 × 1 convolution kernel benefits the prediction performance of the proposed model.
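A sketch of the ten-trial depth/kernel-size sweep behind Figure 5; train_and_eval is a stub standing in for the full training pipeline (a hypothetical helper, replaced here by a deterministic dummy so the sketch runs), and the swept values are illustrative.

```python
import itertools, random, statistics

def train_and_eval(n_layers, h, seed):
    # Stub standing in for training an MWMC-CNN variant and returning its
    # test RMSE; replace with the real training/evaluation pipeline.
    rng = random.Random(hash((n_layers, h, seed)))
    return rng.uniform(0.5, 1.5)

results = {}
for n_layers, h in itertools.product([4, 6], [5, 10, 15]):
    errs = [train_and_eval(n_layers, h, seed) for seed in range(10)]
    results[(n_layers, h)] = (statistics.mean(errs), statistics.pvariance(errs))
best = min(results, key=lambda cfg: results[cfg][0])
print(best, results[best])  # configuration with the smallest mean error
```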
Experimental Comparison of MWMC-CNN and Other Models
To present the experimental results intuitively, MWMC-CNN, CNN, LSSVM, and LSTM were trained and tested on the same datasets. As a highly competitive shallow network, LSSVM is widely used in industrial fields, so it is a reasonable choice for comparison. As a time-dependent model, LSTM is designed to extract time series information and to model long-term and short-term dependencies. The time series information extraction and hidden feature mining performance of the proposed model can therefore be verified by comparison with LSTM and CNN, respectively. The experimental results are shown in Figures 6-10. Figures 6 and 7 indicate that all the models learned the characteristics of synchronous coal and electricity consumption prediction, which was the basis of the test. Figure 8 shows the loss curves of CNN, LSTM, and MWMC-CNN; due to its learning mechanism of hyperplane mapping, no loss curve is drawn for LSSVM. Figures 9 and 10 show the forecasting results, which demonstrate the superiority of the proposed model. It can be seen that all the losses converged to around 0. The fluctuation range of LSTM was the largest; the loss of MWMC-CNN was very close to 0 and lower than that of CNN; and the convergence speed of MWMC-CNN was faster than that of CNN.
To evaluate the actual prediction abilities of the four prediction models, we tested the models on the test dataset, which is disjoint from the training dataset. The test result curves for the electricity and coal consumption of the four models are shown in Figures 9 and 10, respectively.
In Figures 9 and 10, the blue line represents the original data samples and the red line represents the test output values of the corresponding model. Below every subgraph is the error curve of the corresponding model for the prediction index. The prediction curves of both the electricity and coal consumption of LSSVM were the worst; CNN was better than LSSVM but worse than LSTM, which matched the actual electricity and coal consumption values in essence but with lower precision than MWMC-CNN. A comprehensive analysis of the synchronous electricity and coal consumption prediction error curves of the four models shows that the error curves of MWMC-CNN were the most stable. The experimental results verify the validity of the proposed model and demonstrate that the MWMC-CNN model has better generalization ability and higher precision for multi-energy-index synchronous forecasting. The test errors of the four models are shown in Table 1. As shown in Table 1, the minimum error was achieved by MWMC-CNN with six convolution layers and a 10 × 1 convolution kernel. In all of the above results, deep networks such as CNN and LSTM performed better than a shallow network such as LSSVM in the time series prediction of the cement calcination process, which indicates that deep networks are more suitable for the research in this paper. Compared with CNN and LSTM, MWMC-CNN had higher precision: its RMSE was 35.7% better than that of CNN and 23.9% better than that of LSTM; its MRE was 38.3% and 22.9% better, respectively; and its MAE was 37.3% and 21.8% better, respectively. These results indicate that the proposed model has better time series feature extraction ability.
Conclusions
In this paper, we proposed a multi-channel convolutional neural network with a moving window (MWMC-CNN) based on an analysis of the mechanism of the cement calcination process and of the coupling relationships between the cement production process variables. The proposed structure reduces the impact of time-varying delay in the cement calcination process on prediction accuracy while reducing the burden of industrial system modeling. The experimental results demonstrate the outstanding performance of the proposed model compared with the CNN model. In addition, according to different industrial mechanisms, the proposed model can be extended to more outputs or inputs to deal with more complicated time series data, providing a reference for multi-output soft measurement modeling in the process industry and for time series analysis of industrial big data. In future studies, the proposed method will be extended to predict the consumption of other kinds of energy or the energy consumption of other production processes, providing a reference for energy scheduling and intelligent optimization control in the process industry.
Figure 2. MWMC-CNN model structure.
Figure 3. Time-varying delay input layer structure of the A channel of MWMC-CNN.
Figure 5. Error comparison of MWMC-CNN with different structures. (a) RMSE comparison; (b) MRE comparison; (c) MAE comparison.
Figure 10. Coal consumption test and error curves of LSSVM, CNN, LSTM, and MWMC-CNN.

To evaluate whether LSSVM, CNN, LSTM, and MWMC-CNN learned the characteristics of the training data, we carried out prediction experiments using the training dataset. The more the two lines overlap, the more characteristics have been learned. The training result curves of the electricity and coal consumption of the four models are shown in Figures 6 and 7, respectively, in which the blue line represents the original data.
Table 1. Error comparison of different models.
"Engineering",
"Computer Science"
] |
Dispersive readout of Majorana qubits
We analyze a readout scheme for Majorana qubits based on dispersive coupling to a resonator. We consider two variants of Majorana qubits: the Majorana transmon and the Majorana box qubit. In both cases, the qubit-resonator interaction can produce sizeable dispersive shifts in the MHz range for reasonable system parameters, allowing for sub-microsecond readout with high fidelity. For Majorana transmons, the light-matter interaction used for readout manifestly conserves Majorana parity, which leads to a notion of QND readout that is stronger than for conventional charge qubits. In contrast, Majorana box qubits only recover an approximately QND readout mechanism in the dispersive limit where the resonator detuning is large. We also compare dispersive readout to longitudinal readout for the Majorana box qubit. We show that the latter gives faster and higher fidelity readout for reasonable parameters, while having the additional advantage of being fundamentally QND, and so may prove to be a better readout mechanism for these systems.
Majorana-based quantum computing requires a scheme for the measurement of MZMs [27][28][29][30][31][32][33]. Such measurements take on an especially important role in measurement-only approaches to topological quantum computation, where they replace braiding for implementing quantum logic gates [25,26,34,35]. Ideally, measurements will be fast, high-fidelity, and quantum non-demolition (QND). The QND property ensures that the measured observable is a conserved quantity, and constrains the post-measurement state to be an eigenstate of the observable, such that repeated measurements give the same outcome. This is a critical requirement for measurement-only topological quantum computation with MZMs, where the measurements determine the dynamics of the system.
Looking broadly at measurement schemes for solid state quantum computing, a standard approach has been to couple a qubit-state-dependent charge dipole to the electric field of a resonator, which is used as a measurement probe [36]. A QND readout scheme for Majorana qubits based on parametric modulation of a qubit-resonator coupling was recently introduced in Ref. [30]. However, the workhorse of measurement schemes for solid state qubits is dispersive readout, which has been very successful for superconducting [37,38], semiconducting [39-41], and hybrid semiconductor-superconductor qubits [42,43]. In these schemes the resonator is tuned far off-resonance from the qubit frequency, and acquires a qubit-state-dependent frequency shift. It is natural to ask whether such a dispersive readout scheme can offer similar advantages for Majorana qubits, and to what extent it can satisfy the stringent QND requirements that are demanded by measurement-only topological quantum computation.
In this paper, we investigate a dispersive readout scheme for two prototypical Majorana qubits: the Majorana transmon [24,44-46] and the Majorana box qubit [25,26]. These two designs are distinguished by whether the two topological wire segments that host the four MZMs form two distinct superconducting charge islands or a single island with a uniform superconducting phase, respectively. We calculate the qubit-state-dependent dispersive shift that arises when these Majorana qubits are capacitively coupled to the electric field of a readout resonator. The size of these dispersive shifts directly determines the rate at which one can perform qubit readout by driving the resonator and observing the phase shift of the reflected field [47]. It therefore also determines the clock frequency in measurement-only approaches to quantum computation with MZMs.
Majorana qubits differ from conventional superconducting charge qubits, such as the Cooper pair box and the transmon [48], in that a dispersive shift for a Majorana qubit can arise despite the fact that the interaction with the resonator does not induce (virtual) transitions between the two logical qubit states. Instead, the shifts result from (virtual) transitions to excited states outside the qubit subspace. This is especially beneficial for the Majorana transmon, where dispersive shifts arise through a qubit-resonator interaction that conserves the Majorana charge parity. For the Majorana box qubit, the Majorana parity is only approximately conserved in a limit of large frequency detuning from the resonator.

Figure 1. Topological superconductors (gray rectangles) host MZMs $\hat\gamma_i$, and qubit readout corresponds to a measurement of $i\hat\gamma_2\hat\gamma_3$. Majorana wavefunctions can be made to overlap either by (a) direct tunneling or (b) tunneling to a proximal quantum dot (gray circle). The resonator (illustrated by an LC oscillator) is capacitively coupled to the island charge. Alternative resonator-island coupling geometries are possible, including coupling to the quantum dot in (b). (a) Majorana transmon qubit: the topological superconductors form two distinct superconducting islands, shunted by a Josephson junction with energy $E_J$ (one of the islands may be grounded). The level diagram illustrates low-energy eigenstates labeled $|g,\pm\rangle$ and $|e,\pm\rangle$. The blue and red arrows indicate allowed transitions induced by the qubit-resonator interaction; these allowed transitions conserve Majorana parity. (b) Majorana box qubit: the topological superconductors are shunted by a trivial superconductor, and the device forms a single superconducting island. The level diagram labels dressed eigenstates of the coupled superconducting island-dot system. In this case, the resonator induces transitions between dressed states of different dot occupancy and Majorana parity.
For both types of Majorana qubits, we find that dispersive shifts can be comparable to those of conventional transmon qubits [38] and nanowire quantum dots [41]. Specifically, for reasonable system parameters, we predict dispersive shifts in the MHz range. Measuring the qubit necessarily requires lifting the MZM degeneracy, and the corresponding qubit frequency is in the range 1-2 GHz. Our results suggest that sub-microsecond high-fidelity QND readout is feasible for Majorana qubits.
The remainder of the paper is organized as follows. We give a high-level introduction to dispersive coupling between a Majorana qubit and a resonator in Section II. In Section III and Section IV we describe in detail the dispersive coupling for a Majorana transmon and a Majorana box qubit, respectively. For both qubit variants, we calculate the dispersive frequency shift of a readout resonator from second order Schrieffer-Wolff perturbation theory for a range of system parameters using numerical diagonalization. We also provide simple, approximate analytical expressions. We estimate the resulting measurement timescales and fidelities (in the absence of any unwanted qubit decoherence or noise in the system parameters, see e.g. [33,49,50]) in Section V. We compare the results for dispersive readout to the longitudinal readout scheme introduced in Ref. [30]. Finally, in Section VI we discuss the implications of our results for dispersive readout of Majorana qubits.
II. LIGHT-MATTER INTERACTION FOR MAJORANA QUBITS
We begin by presenting a high-level overview of the interaction between a Majorana qubit and an electromagnetic resonator, focusing on the dispersive coupling regime. Such a coupling provides the underlying physical mechanism that can be used for dispersive qubit readout [36].
The resonator-based readout schemes considered in this paper involve capacitive coupling of the charge degree of freedom of the measured system (the qubit) to the electric field of a nearby resonator. That is, we have an interaction Hamiltonian of the form

$\hat H_{\mathrm{int}} = \lambda\,\hat Q\,\hat V, \qquad (1)$

where $\hat Q = e\hat N$ is a charge operator for the qubit system, $\hat V$ is the voltage bias on the qubit due to the resonator, and λ quantifies the interaction strength.
For simplicity, we model the resonator by a single harmonic oscillator mode with annihilation (creation) operator $\hat a$ ($\hat a^\dagger$). The physics resulting from the coupling described by Eq. (1) depends on the internal level structure of the qubit system. Given that the charge number operator $\hat N$ can have off-diagonal matrix elements in the qubit eigenbasis, the absorption or emission of a resonator photon can induce a transition between eigenstates of the qubit system. The internal level structure of the qubit can lead to selection rules where only certain transitions are allowed. We will show below that different Majorana qubit designs give rise to different selection rules, and discuss the consequences of this for QND readout.
We consider two distinct types of Majorana qubits, illustrated in Fig. 1. Each topological superconductor hosts a pair of Majorana edge modes. Due to charge conservation, a minimum of two topological superconductors are required to encode a Majorana qubit, for a total of four MZMs. We identify two broad classes of Majorana qubits, depending on whether these topological superconductors form one or two distinct superconducting charge islands. The Majorana transmon qubit is representative of a configuration where the two topological superconductors form two distinct islands, and the relevant charge degree of freedom for readout is the difference in charge between these two islands. Variations of this configuration can include grounding one of the two superconducting islands, and/or introducing a Josephson tunnel coupling between the islands and ground [24]. However, it should be noted that connecting one of the islands to ground in this manner could increase the rate of quasiparticle poisoning events [50]. For the Majorana box qubit (also referred to as the Majorana loop qubit), the two topological superconductors are shunted by a trivial superconductor to form a single superconducting island. In this case, a charge dipole can be formed by tunnel coupling to a proximal quantum dot, providing a mechanism for readout.
The physics of these devices is described in more detail in the following sections; here we only give a high-level discussion of their internal level structure and selection rules. For the Majorana transmon in Fig. 1(a), energy levels can be labeled $|g,\pm\rangle, |e,\pm\rangle, \ldots$, where $g, e, \ldots$ denote a transmon-like ladder of eigenstates, and ± denotes the eigenvalue of $i\hat\gamma_2\hat\gamma_3 = \pm 1$, the Majorana parity we wish to measure. As indicated in the level diagram in Fig. 1(a), only transitions that conserve the Majorana parity are allowed. This means that the Majorana parity is conserved during readout and that the interaction is manifestly QND with respect to this quantity. This stronger-than-usual form of QND measurement stems from the fractional and non-local nature of the MZMs [25], and was dubbed topological QND (TQND) measurement in Ref. [30].
For the Majorana box qubit, MZMs are tunnel coupled to a proximal quantum dot, as illustrated in Fig. 1(b). The dot might be formed naturally between two topological superconductors due to the boundary conditions set by the superconducting/semiconducting interface [51]. In our analysis we assume that this quantum dot has well-separated energy levels and, for simplicity, only a single level that is energetically accessible.
Charge tunneling between the topological superconducting island and the dot provides a mechanism for readout. Because the superconducting island charge is no longer conserved, the eigenstates of the qubit-dot system are dressed states in which the Majorana edge modes are partially localized on the dot. These dressed states are illustrated in the level diagram in Fig. 1(b). In a readout protocol, the tunnel coupling should be turned on gradually, such that the system evolves adiabatically from the bare to the dressed eigenstates, and we label the dressed eigenstates by the states they are adiabatically connected to in the absence of tunneling. Qubit readout corresponds to distinguishing the dressed states adiabatically connected to the degenerate qubit ground space.
As indicated in the level diagram in Fig. 1(b), the relevant transitions for coupling to the resonator involve a single charge transfer from the island to the dot, or vice versa. This charge transfer also flips the (dressed) Majorana parity. In this case, an approximately QND interaction can still be achieved in the dispersive regime, where the resonator is far detuned from any internal transition, such that the resonator-induced transitions indicated in Fig. 1(b) are purely virtual. However, it is a notable difference between the Majorana transmon and the Majorana box qubit readout schemes that the former has the advantage of a manifestly QND interaction, independent of whether the system is in the dispersive regime or not.
As an aside, this last point can be contrasted with the readout scheme proposed in Ref. [30], where modulation of a system parameter is used to activate a parity-conserving qubit-resonator coupling. This coupling arises independently of the frequency detuning of the resonator from any internal qubit transition. With the scheme of Ref. [30] it is therefore possible to achieve strong qubit-resonator coupling in a regime where all parity-non-conserving processes are heavily suppressed and can be neglected. We return to a brief comparison with Ref. [30] in Section V and Appendix E.
For the purpose of qubit readout, real transitions between qubit-system eigenstates are undesirable. An effective interaction suitable for readout is recovered from Eq. (1) in the dispersive regime. This refers to a coupling regime where the resonator frequency is far off-resonance from any relevant transitions between qubit states that are allowed by the selection rules; the transitions to higher energy levels indicated in Fig. 1(a,b) are then only virtual transitions. In this situation, Eq. (1) can be treated perturbatively, leading to an effective interaction of the form (for both types of Majorana qubit)

$\hat H_{\mathrm{disp}} = \hbar\big(\omega_r + \chi_q\,\hat\sigma_z\big)\,\hat a^\dagger\hat a + \frac{\hbar\omega_q}{2}\,\hat\sigma_z. \qquad (2)$

Here $\omega_r$ is the resonator frequency, $\omega_q$ is the energy splitting between the two eigenstates used to encode a qubit, and $\hat\sigma_z$ is the corresponding logical Pauli-Z operator. In general, $\omega_{r,q}$ include Lamb shifts due to the qubit-resonator coupling. Finally, $\chi_q$ is the qubit-state-dependent dispersive frequency shift of the resonator. Under this Hamiltonian the qubit states can be distinguished by detecting a phase shift of the resonator under a coherent drive at the resonator frequency [36,37,47,48]. The speed of such a measurement is set by the magnitude of the dispersive shift $\chi_q$. We give a derivation of Eq. (2) starting from Eq. (1), for a generic multi-level system, in Appendix A. Throughout this paper we compute $\chi_q$ for three different qubit types, labeled $q \in \{t, mt, mb\}$: a conventional transmon, a Majorana transmon, and a Majorana box qubit, respectively.
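The logic of this second-order calculation can be sketched in a few lines of NumPy. The following assumes a coupling of the form $\lambda\hat N(\hat a + \hat a^\dagger)$ and uses the standard second-order perturbation-theory expression for the per-photon pull of each eigenstate; defining $\chi_q$ as half the difference between the pulls of the two logical states is a convention chosen to match Eq. (2). This is a sketch, not the paper's Appendix A formula.

```python
import numpy as np

def resonator_pull(E, N, state, lam, w_r):
    """Per-photon resonator frequency pull of eigenstate `state` from
    second-order perturbation theory, for a coupling lam * N * (a + a^dag).

    E : (d,) qubit eigenenergies (hbar = 1, same units as w_r)
    N : (d, d) charge matrix elements in the qubit eigenbasis
    """
    pull = 0.0
    for j in range(len(E)):
        if j == state:
            continue
        w_sj = E[state] - E[j]                       # transition frequency
        pull += 2 * lam**2 * abs(N[state, j])**2 * w_sj / (w_sj**2 - w_r**2)
    return pull

def chi_q(E, N, s0, s1, lam, w_r):
    """Qubit-state-dependent dispersive shift: half the difference of the
    pulls of the two logical eigenstates s0 and s1 (convention of Eq. (2))."""
    return 0.5 * (resonator_pull(E, N, s1, lam, w_r)
                  - resonator_pull(E, N, s0, lam, w_r))
```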
It is important to emphasize that although Eq. (2) is QND with respect to the logical $\hat\sigma_z$ operator, this Hamiltonian is an approximation to the underlying light-matter interaction, Eq. (1). The TQND property of the Majorana transmon refers to the fact that parity protection is manifest at the more fundamental level of Eq. (1). As discussed briefly above, and in more detail in the following, dispersive readout for the Majorana box qubit is not TQND in the same strong sense as for the Majorana transmon. Both the Majorana transmon and the Majorana box qubit, however, share the feature that no transitions are allowed between the two lowest energy eigenstates used to form a qubit; instead, (virtual) transitions out of the qubit subspace are used to realize a readout mechanism. This is in contrast to conventional superconducting charge qubits, such as the transmon and the Cooper pair box, where the light-matter interaction causes transitions between the energy eigenstates that define the qubit [48]. In that case, the readout mechanism introduces a source of error in the form of Purcell decay, wherein the qubit may relax via emission of a photon through the resonator [52]. (We note that we have restricted our notion of measurement back-action to the readout mechanism itself; additional unwanted effects that may be introduced, such as quasiparticle poisoning [50] or heating, are not treated here.) In the following sections, we describe the dispersive readout schemes for the Majorana transmon and the Majorana box qubit in detail.
III. MAJORANA TRANSMON QUBIT
A. Model for the qubit

A Majorana transmon qubit, shown in Fig. 1(a), consists of two distinct charge islands that are shunted by a Josephson junction. Each island, labeled α ∈ {L, R}, is in a topological superconducting phase and has electron number operator $\hat N_{L,R}$ and dimensionless superconducting phase operator $\hat\phi_{L,R}$, satisfying $[\hat N_\alpha, e^{i\hat\phi_\beta/2}] = \delta_{\alpha\beta}\,e^{i\hat\phi_\beta/2}$, with α, β ∈ {L, R}. The charging energy and conventional Cooper pair tunneling between the two islands are captured by the Hamiltonian

$\hat H_T = E_C\big(\hat N - n_g\big)^2 - E_J\cos\hat\phi, \qquad (3)$

where $\hat N \equiv (\hat N_L - \hat N_R)/2$ and $\hat\phi \equiv \hat\phi_L - \hat\phi_R$, $E_C$ is the charging energy due to capacitive coupling of the two islands, $n_g$ represents an offset charge, and $E_J$ is the Josephson coupling due to Cooper pair tunneling across the Josephson junction. The transmon regime is characterized by $E_J \gg E_C$ [48]. Note that we here use a convention where $\hat N$ counts the number of electrons rather than the number of Cooper pairs, such that we have the following action on charge eigenstates:

$e^{\pm i\hat\phi/2}\,|N\rangle = |N \pm 1\rangle. \qquad (4)$

We have neglected the capacitances of each superconducting island to ground, and assume the long-island limit where MZMs located on the same island are well-separated.
Variations of the Majorana transmon include grounding one of the two islands (such that we can set, e.g., $\hat\phi_R = 0$ and $\hat\phi = \hat\phi_L$), and/or introducing a Josephson coupling to a bulk superconductor in addition to the Josephson coupling between the two islands [24]. These variations are qualitatively similar, and our results extend to these cases without any significant modification.
To read out this qubit, the MZMs corresponding to $\hat\gamma_2$ and $\hat\gamma_3$ are brought together and the combined parity $i\hat\gamma_2\hat\gamma_3$ is measured. When these two MZMs are brought together (see Fig. 1), their interaction is governed by a tunneling Hamiltonian [4,24,44,46]

$\hat H_M = E_M\cos\!\Big(\frac{\hat\phi + \varphi_x}{2}\Big)\, i\hat\gamma_2\hat\gamma_3, \qquad (5)$

where $E_M$ is proportional to the wavefunction overlap of the MZMs. This expression accounts for the fact that the qubit loop might enclose an external flux $\Phi_x$, where we have defined $\varphi_x = 2\pi\Phi_x/\Phi_0$, with $\Phi_0 = h/2e$ the magnetic flux quantum. In Appendix B we compare the direct tunneling model, Eq. (5), with a model where the two islands are coupled to a common quantum dot acting as a mediator. The main outcome of this comparison is that, when the energy penalty to occupy the quantum dot becomes large, the two models are equivalent. The full Majorana transmon qubit Hamiltonian is

$\hat H_{\mathrm{MT}} = \hat H_T + \hat H_M. \qquad (6)$

Since $i\hat\gamma_2\hat\gamma_3$ commutes with $\hat H_{\mathrm{MT}}$, the eigenstates can conveniently be labeled by two quantum numbers $|j, a\rangle$, where $j = g, e, f, \ldots$ denotes a transmon-like ladder of eigenstates and $a = \pm$ denotes the eigenvalue of $i\hat\gamma_2\hat\gamma_3 = \pm 1$. The level structure is shown in Fig. 2.

A simplified Hamiltonian can be found by following the standard approach of treating the transmon degree of freedom as a Kerr nonlinear oscillator [36,48]. To keep the discussion simple, we set the offset charge and external flux to zero, $n_g = 0$, $\varphi_x = 0$, for the remainder of this section. We can introduce ladder operators via

$\hat N = i\Big(\frac{E_J}{2E_C}\Big)^{1/4}\big(\hat b^\dagger - \hat b\big), \qquad \hat\phi = \Big(\frac{2E_C}{E_J}\Big)^{1/4}\big(\hat b + \hat b^\dagger\big). \qquad (7)$

Taylor expanding the $\cos\hat\phi$ term to fourth order in $\hat\phi$, substituting the expressions above, and dropping fast-rotating terms, we obtain the standard result

$\hat H_T \simeq \hbar\omega_t\,\hat b^\dagger\hat b - \frac{E_C}{2}\,\hat b^\dagger\hat b^\dagger\hat b\,\hat b,$

where $\hbar\omega_t = \sqrt{8E_JE_C} - E_C$ is the transmon energy and $E_C$ is the anharmonicity.
Repeating these steps for the Majorana term $\hat H_M$ yields

$\hat H_M \simeq i\hat\gamma_2\hat\gamma_3\,\big(\varepsilon_0 + \varepsilon_1\,\hat b^\dagger\hat b\big),$

with coefficients $\varepsilon_0 = E_M\big(1 - \tfrac{1}{8}\sqrt{2E_C/E_J}\big)$ and $\varepsilon_1 = -\tfrac{1}{4}E_M\sqrt{2E_C/E_J}$. Under the above approximations, we see that the energy splitting of the two lowest energy levels with unequal Majorana parity, $|g,\pm\rangle$, which we will label $\omega_{mt}$, is

$\hbar\omega_{mt} = 2\varepsilon_0 \approx 2E_M. \qquad (11)$

On the other hand, the "transmon transitions" $|g,+\rangle \leftrightarrow |e,+\rangle$ and $|g,-\rangle \leftrightarrow |e,-\rangle$ (indicated in Fig. 2) have energy splittings

$\hbar\omega_\pm = \hbar\omega_t \pm \varepsilon_1, \qquad (12)$

respectively. As we will show, despite the fact that there is no charge matrix element between the two logical qubit states $|g,\pm\rangle$, the non-degeneracy of the two transition frequencies $\omega_\pm$ for $E_M > 0$ nevertheless leads to an $i\hat\gamma_2\hat\gamma_3$-dependent dispersive shift of the resonator.
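For concreteness, the spectrum discussed above can be computed by diagonalizing Eq. (6) in the charge basis, one parity sector at a time. The following is a minimal NumPy sketch with illustrative parameters (not the paper's), at $n_g = 0$ and $\varphi_x = 0$; the matrix elements follow from Eqs. (3)-(5), since $\cos\hat\phi$ couples $N \leftrightarrow N \pm 2$ and $\cos(\hat\phi/2)$ couples $N \leftrightarrow N \pm 1$.

```python
import numpy as np

# Illustrative parameters in GHz units (hbar = 1); N counts electrons.
EC, EJ, EM, ng = 0.25, 12.5, 0.5, 0.0

def mt_hamiltonian(parity, ncut=30):
    """Majorana transmon in the charge basis for one parity sector a = ±1,
    where the Majorana term acts as a * EM * cos(phi/2)."""
    Ns = np.arange(-ncut, ncut + 1)
    dim = len(Ns)
    H = np.diag(EC * (Ns - ng) ** 2).astype(float)
    for i in range(dim - 2):                  # -EJ cos(phi): N <-> N+2
        H[i, i + 2] = H[i + 2, i] = -EJ / 2
    for i in range(dim - 1):                  # a*EM cos(phi/2): N <-> N+1
        H[i, i + 1] = H[i + 1, i] = parity * EM / 2
    return H

E_plus = np.linalg.eigvalsh(mt_hamiltonian(+1))
E_minus = np.linalg.eigvalsh(mt_hamiltonian(-1))
w_mt = E_plus[0] - E_minus[0]   # qubit splitting, approximately 2*EM
w_p = E_plus[1] - E_plus[0]     # transmon transition in the + sector
w_m = E_minus[1] - E_minus[0]   # transmon transition in the - sector
print(w_mt, w_p, w_m)
```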
B. Dispersive interaction with a resonator
The Majorana transmon qubit can be read out via a resonator that is capacitively coupled to the island charge, as schematically illustrated in Fig. 1. This interaction has the form

$\hat H_{\mathrm{int}} = \lambda\,\hat N\,\big(\hat a + \hat a^\dagger\big), \qquad (13)$

while the resonator Hamiltonian is given by $\hat H_r = \hbar\omega_r\,\hat a^\dagger\hat a$. Here $\lambda \simeq 2(C_c/C_r)\,E_C\,\sqrt{R_K/4\pi Z_r}$ quantifies the capacitive coupling strength, with $C_c$ the coupling capacitance, $C_r$ the resonator capacitance, $Z_r = \sqrt{L_r/C_r}$ the resonator characteristic impedance, and $R_K = h/e^2$ the resistance quantum.
We numerically diagonalize the full Hamiltonian $\hat H_{\mathrm{MT}}$, Eq. (6), to calculate the dispersive shift $\chi_{mt}$ defined in Eq. (A5). To assist with interpreting our results, we first calculate dispersive shifts for a conventional transmon qubit, which corresponds to the limit $E_M = 0$.
Conventional transmon.-In this case, the qubit is encoded in the two eigenstates $|0\rangle \equiv |g,a\rangle$ and $|1\rangle \equiv |e,a\rangle$, where the choice of $a = \pm$ is arbitrary. The spectrum is shown in Fig. 2(a). There are two primary transitions that contribute to the conventional transmon dispersive shift $\chi_t$, defined in Eq. (A5): the qubit transition $|g,a\rangle \leftrightarrow |e,a\rangle$, with frequency $\omega_t$, and the transition $|e,a\rangle \leftrightarrow |f,a\rangle$, with frequency approximately given by $\omega_t - E_C/\hbar$. We numerically calculate $\chi_t$ as a function of the detuning parameter $\Delta_t \equiv \omega_t - \omega_r$ in Fig. 3(b). The singularities at $\Delta_t = 0$ and $\Delta_t = E_C/\hbar$ correspond to values of $\omega_r$ where the resonator is resonant with the $|g,a\rangle \leftrightarrow |e,a\rangle$ and $|e,a\rangle \leftrightarrow |f,a\rangle$ transitions, respectively. The regime between these two singularities, where the dispersive shift changes sign, is known as the straddling regime [48].

Majorana transmon.-The main contributions to $\chi_{mt}$, analogous to $\chi_t$, come from the transitions $|g,+\rangle \leftrightarrow |e,+\rangle$ with frequency $\omega_+$ and $|g,-\rangle \leftrightarrow |e,-\rangle$ with frequency $\omega_-$, as indicated in Fig. 2. The frequencies $\omega_\pm$ are close to $\omega_t$, approximately given by Eq. (12). As $2E_M \approx \hbar\omega_{mt}$ increases and becomes comparable to $\hbar\omega_t$, $\chi_{mt}$ approaches a magnitude comparable to $\chi_t$, the dispersive shift of a conventional transmon, as shown in Fig. 3(a,c). We note that the strength of the dispersive shift $\chi_{mt}$ also depends on the offset flux $\varphi_x$, as shown in Fig. 3(d). Care must be taken to ensure that $\varphi_x \neq \pi$, where $\chi_{mt}$ vanishes and changes sign.
We can also find an approximate analytical expression for the dispersive shift. To this end, we substitute Eq. (7) into $\hat H_{\mathrm{int}}$ and make a rotating wave approximation to find

$\hat H_{\mathrm{int}} \simeq \hbar g_t\,\big(\hat b^\dagger\hat a + \hat b\,\hat a^\dagger\big), \qquad (14)$

where

$\hbar g_t = \lambda\,\big(E_J/2E_C\big)^{1/4}. \qquad (15)$

This form clearly shows how energy exchange with the resonator leads to transitions between transmon levels within the same parity sector ($|g,\pm\rangle \leftrightarrow |e,\pm\rangle$, $|e,\pm\rangle \leftrightarrow |f,\pm\rangle$, etc.). From Eq. (14) we find a simple approximate expression for $\chi_{mt}$ (see Appendix C),

$\chi_{mt} \approx \frac{g_t^2}{2}\left(\frac{1}{\omega_+ - \omega_r} - \frac{1}{\omega_- - \omega_r}\right). \qquad (16)$

This is compared to the result based on exact diagonalization of the qubit Hamiltonian in Fig. 3(c). We emphasize that Eq. (16) is a somewhat crude approximation, similar in accuracy to the standard approximation used for conventional transmon qubits (Eq. (3.12) in Ref. [48]).
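As a usage example, the generic pull function from the Section II sketch can be combined with the charge-basis Hamiltonian above to estimate $\chi_{mt}$ numerically; the parameters are illustrative, and the snippet reuses numpy as np, mt_hamiltonian, and resonator_pull from the earlier sketches.

```python
# Estimate chi_mt as half the difference between the resonator pulls of
# the two parity ground states (illustrative coupling and frequency).
lam, w_r = 0.05, 4.0   # GHz units

def charge_matrix(ncut=30):
    Ns = np.arange(-ncut, ncut + 1)
    return np.diag(Ns.astype(float))   # N is diagonal in the charge basis

pulls = []
for parity in (+1, -1):
    E, U = np.linalg.eigh(mt_hamiltonian(parity))
    Nmat = U.T @ charge_matrix() @ U   # charge operator in the eigenbasis
    pulls.append(resonator_pull(E, Nmat, state=0, lam=lam, w_r=w_r))
chi_mt = 0.5 * (pulls[0] - pulls[1])
print(chi_mt)
```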
IV. MAJORANA BOX QUBIT
A. Model for the qubit

The Majorana box qubit, shown in Fig. 1(b), is an alternative design for a qubit based on MZMs that has been studied in the context of measurement-only topological quantum computing [25,26]. In this qubit design, the topological superconductors are shunted by a (trivial) superconducting bridge instead of a Josephson junction. This model can be thought of as a limiting case of the Majorana transmon, where $E_J/E_C \to \infty$. In this limit, we have $\hat\phi_R \to \hat\phi_L$, such that the previous charge and phase operators, $\hat N = (\hat N_L - \hat N_R)/2$ and $\hat\phi = \hat\phi_L - \hat\phi_R$, drop out. Instead, the relevant degree of freedom is the total charge $\hat N_{\rm tot} = \hat N_L + \hat N_R$ and the corresponding conjugate phase $\hat\phi_{\rm tot} \equiv (\hat\phi_L + \hat\phi_R)/2$. The device acts as a single island with charging energy

$\hat H_C = E_{\rm tot}\big(\hat N_{\rm tot} - n_g\big)^2, \qquad (17)$

where $E_{\rm tot}$ quantifies the charging energy due to capacitive coupling of the island to ground (and to the resonator), which we neglected for the Majorana transmon. The charge and phase operators $\hat N_{\rm tot}$, $\hat\phi_{\rm tot}$ act on charge eigenstates analogously to Eq. (4), where $\hat N_{\rm tot}$ now counts the total charge on the superconducting island consisting of the two topological superconductors. The Majorana box qubit also comes in several qualitatively similar variations. When the two topological superconductors are aligned horizontally in series as in Fig. 1(b) (formed from a single nanowire), the qubit is also referred to as a Majorana loop qubit. Alternatively, the two topological superconductors can be arranged in parallel, with the superconducting shunt perpendicular to the nanowires [25]. One can also consider additional MZMs per island, used as ancilla modes for measurement-only topological quantum computing. In this case, a Majorana box qubit with four MZMs is called a tetron, with six MZMs a hexon, and so on. Our results can be generalized to these variations.
The coupling of the two topological superconductors due to a non-zero overlap of the Majorana modes corresponding to $\hat\gamma_2$ and $\hat\gamma_3$ can be modeled using Eq. (5) with $\hat\phi \to 0$. However, to properly account for the movement of charge that leads to a coupling to the resonator, we take one step back and explicitly include coupling to bound states in the semiconducting region between the two topological superconductors. In the limit where the energy penalty to occupy these bound states is large compared to the tunnel coupling, the Hamiltonian $\hat H_M$ is recovered as an effective description. Including such bound states as intermediate degrees of freedom is, however, necessary to correctly capture the coupling to the resonator that results from charge tunneling to the semiconductor. In the proposals of Refs. [25,26], this description is moreover very natural, because a gate-defined quantum dot is explicitly introduced to mediate a tunable interaction between the nanowires.
We model the quantum dot between the two topological superconductors by a single fermionic operator $\hat d$, satisfying $\{\hat d, \hat d^\dagger\} = 1$. This degree of freedom is illustrated in Fig. 1(b). The dot is described by a Hamiltonian $\hat H_d = \varepsilon\,\hat d^\dagger\hat d$, with ε the dot occupation energy. Tunneling between the island and the dot is modeled by a Hamiltonian [53]

$\hat H_{\rm tun} = \hat d^\dagger\,e^{-i\hat\phi_{\rm tot}/2}\big(t_L\,\hat\gamma_2 + t_R\,e^{i\varphi_x}\,\hat\gamma_3\big) + \mathrm{h.c.}, \qquad (18)$

where $t_{L,R} \ge 0$ are the tunneling amplitudes between the dot and the two respective topological superconductors, and we have included the possibility of an external flux, $\varphi_x$, threading the qubit loop. The full Majorana box qubit Hamiltonian is thus

$\hat H_{\rm MB} = \hat H_C + \hat H_d + \hat H_{\rm tun}. \qquad (19)$

We show the spectrum in the uncoupled case $t_{L,R} = 0$ in Fig. 4(a). As with the Majorana transmon qubit, each state shown in Fig. 4(a) is two-fold degenerate, a degeneracy that splits when we include non-zero tunneling $|t_{L,R}| > 0$, as shown in Fig. 4(b).
The dot-island Hamiltonian $\hat H_{\rm MB}$ conserves the total charge $\hat N_{\rm tot} + \hat d^\dagger\hat d$, so the Hamiltonian can be diagonalized block by block, following Ref. [30]. After a unitary transformation, $\hat{\tilde H}_{\rm MB} = \hat U^\dagger\hat H_{\rm MB}\hat U$ takes a block-diagonal form [Eq. (20)], in which each block of fixed total charge n is characterized by the functions $\varepsilon_c(n)$, $\varepsilon_m(n)$, and $E(n)$, given in Eq. (D9). As is clear from Eq. (20), the eigenstates of $\hat H_{\rm MB}$ can be labeled by three quantum numbers $|N, n_d, a\rangle$: the island charge $N \in \mathbb{Z}$, the dot occupancy $n_d = 0, 1$, and the Majorana parity $a = \pm$. The dressed eigenstates of $\hat H_{\rm MB}$, Eq. (19), are thus related to the bare charge states of the uncoupled system (i.e., when $t_{L,R} = 0$) through

$|N, n_d, a\rangle = \hat U\,|N, n_d, a\rangle_0, \qquad (21)$

where the unitary transformation is defined in Appendix D1. The labels on the left-hand side designate hybridized degrees of freedom; in particular, when $t_{L,R} > 0$, the Majorana mode hybridizes with the dot, and $a = \pm$ refers to the corresponding "dressed" Majorana parity.
To keep the discussion simple, we from now on focus on the sector with zero total dot-island charge, $\hat N_{\rm tot} + \hat d^\dagger\hat d = 0$, and set $t_{L,R} = t$, $n_g = 0$ for the remainder of this section. Within the zero-total-charge sector, Eq. (20) reduces to Eq. (22), consisting of a charging term $\varepsilon_c\,\hat c^\dagger\hat c$ and a Majorana term proportional to $\varepsilon_m$, where we have dropped a constant term, and $\hat c$ satisfies $\{\hat c, \hat c^\dagger\} = 1$ and describes the movement of an electron from the dot to the island within the zero-total-charge sector. The coefficients $\varepsilon_c$ and $\varepsilon_m$ are given in Eq. (23), with $\delta = E_{\rm tot} + \varepsilon$ the energy penalty for moving charge from the island to the dot. For small $t/\delta$, the second term in Eq. (22) moreover reduces to Eq. (5), since to leading order $\varepsilon_m \propto t_L t_R/\delta$. In the opposite limit, if the chemical potential of the quantum dot is tuned such that $\varepsilon = -E_{\rm tot}$ and the energy penalty to occupy the dot vanishes, $\delta = 0$, then $\varepsilon_m \sim t$.
B. Dispersive interaction with a resonator
As with the Majorana transmon, we can read out the logical state by coupling the qubit to a resonator. There are essentially two options for engineering a dipole coupling: the resonator voltage can be coupled (predominantly) to the dot, or (predominantly) to the superconducting island. The key requirement for readout is that the resonator must be sensitive to the movement of charge between the superconducting island and the dot, and in that sense the two coupling schemes are equivalent (as shown, e.g., in Ref. [30]). The two choices might, however, have different practical advantages and disadvantages; in particular, stronger coupling may be possible by coupling to the island. For concreteness, we focus mainly on capacitive coupling to the superconducting island charge, but we emphasize that our results apply equally well to both schemes. The qubit-resonator interaction is thus, in analogy with Eq. (13), given by

$\hat H_{\rm int} = \lambda\,\hat N_{\rm tot}\,\big(\hat a + \hat a^\dagger\big). \qquad (25)$

We perform the same unitary transformation that led to Eq. (22), again set $t_{L,R} = t$, $n_g = 0$, and project onto the subspace with zero overall charge, to find an interaction in this subspace of the form of Eq. (26) (see Appendix D1), with coupling constants $g_c$, $g_m$, and $g_\pm$ defined in Eq. (27). We note that in the case where the quantum dot is resonant with the island and $\delta = 0$, we have $g_c = g_m = 0$ and $|g_\pm| = \lambda/2$. For the alternative choice of coupling the resonator to the dot, simply replace $\hat N_{\rm tot} \to \hat d^\dagger\hat d$ in Eq. (25); the above results still apply with a sign change $\lambda \to -\lambda$ in Eq. (27) [30]. From the second and third lines of Eq. (26) we see that, in this frame, the resonator induces a transition that involves moving an electron from the dot to the island and flipping the Majorana parity $i\hat\gamma_2\hat\gamma_3$. The energy difference corresponding to this transition is $\varepsilon_c \pm \varepsilon_m = f_\pm$, depending on the state of the Majorana degree of freedom.
Having diagonalized the Majorana box qubit Hamiltonian, it is straightforward to use the second-order Schrieffer-Wolff formula, Eq. (A4), to obtain an analytical expression for the qubit-state-dependent dispersive shift. Under a rotating wave approximation for the resonator-qubit interaction we find

$\chi_{mb} = \frac{1}{2}\left(\frac{|g_+|^2}{f_+/\hbar - \omega_r} - \frac{|g_-|^2}{f_-/\hbar - \omega_r}\right), \qquad (28)$

where we assume $\varepsilon_c > \varepsilon_m$. We compare Eq. (28) to a numerical diagonalization of the qubit Hamiltonian, following the same procedure as for the Majorana transmon qubit, to extract the qubit-dependent dispersive shift $\chi_{mb}$. In Fig. 5(a) we show $\chi_{mb}$ as a function of $\Delta/|g_+|$ for different tunneling strengths $t_{L,R} = t$, where $\Delta \equiv f_+/\hbar - \omega_r$. As the Majorana modes hybridize with the dot, the qubit splitting and the dispersive shifts grow larger, as also shown in Fig. 4(b). As with the Majorana transmon, the dispersive shift $\chi_{mb}$ depends on the offset flux $\varphi_x$, as shown in Fig. 5(c). Again, care must be taken to ensure $\varphi_x \neq \pi$. In contrast to the Majorana transmon, however, the dependence is more favourable, leading to a wider range of $\varphi_x$ over which $\chi_{mb}$ is close to its maximum possible magnitude.
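For completeness, the reconstructed RWA expression above is trivial to evaluate numerically. In this sketch the transition frequencies $f_\pm$ and couplings $g_\pm$ are taken as inputs, and all numbers are illustrative (with $|g_\pm| = \lambda/2$ corresponding to the resonant-dot case $\delta = 0$ mentioned in the text).

```python
def chi_mb(f_plus, f_minus, g_plus, g_minus, w_r):
    """Evaluate the RWA dispersive shift of Eq. (28) (reconstructed form).

    f_plus, f_minus : parity-dependent island<->dot transition frequencies
    g_plus, g_minus : the corresponding coupling strengths of Eq. (27)
    (all quantities in the same frequency units, hbar = 1)
    """
    return 0.5 * (abs(g_plus) ** 2 / (f_plus - w_r)
                  - abs(g_minus) ** 2 / (f_minus - w_r))

# Resonant-dot example (delta = 0), illustrative GHz-unit values:
lam, w_r = 0.1, 4.0
print(chi_mb(f_plus=5.2, f_minus=4.8, g_plus=lam / 2, g_minus=lam / 2,
             w_r=w_r))
```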
Our results show that the energy scale of the dispersive shifts for the Majorana box qubit is comparable to that of the Majorana transmon qubit for comparable ratios of resonator coupling strength to detuning, Δ/g. A more detailed comparison is made in Section V.
V. READOUT TIMES AND FIDELITIES
In this section we calculate estimates of the timescales and fidelities of the Majorana qubit readout schemes presented in the previous sections. The key results are presented in Fig. 6. It is important to note that these estimates have been obtained for an idealised situation where the dispersive approximation is assumed to be valid, and no noise or decoherence is included beyond the dephasing caused by the measurement itself. These results are therefore not meant to be quantitative predictions of the measurement fidelity in an experiment, but serve to compare the speed and fidelity of the different qubit types: the conventional transmon, the Majorana transmon, and the Majorana box qubit. For the Majorana box qubit, we also compare dispersive readout to longitudinal readout; see Appendix E.
Our predictions for the dispersive shifts of the Majorana transmon and Majorana box qubit as functions of the qubit frequency $\omega_q$ are compared in Fig. 6(a). To make this comparison, we fix the value of $\Delta/g = -10$, where $g \in \{g_t, g_+\}$ [see Eqs. (15) and (27c)] and $\Delta \in \{\omega_+ - \omega_r,\ f_+/\hbar - \omega_r\}$ [see Eqs. (12) and (23c)] for the Majorana transmon and the Majorana box qubit, respectively. We also include the dispersive shift of a conventional transmon, for which we use $g = g_t$ and $\Delta = \omega_t - \omega_r$. In other words, g and Δ quantify the relevant coupling strength and resonator detuning for each qubit type, respectively.
We observe that for smaller qubit frequencies (corresponding to weaker MZM interaction energies), the Majorana box qubit produces larger dispersive shifts than the Majorana transmon for the same value of Δ/g. Nevertheless, both variants may achieve dispersive shifts in the MHz regime for reasonable parameters, comparable to conventional transmon qubits [38] and to the recent demonstration of nanowire quantum dot readout in Ref. [41].
The qubit-state-dependent phase shift that arises from the dispersive coupling allows for readout of the qubit by probing the resonator at its resonant frequency. The size of the dispersive shift χ directly determines the rate at which this phase shift can be resolved to a given fidelity. We quantify this with the signal-to-noise ratio (SNR) of a heterodyne measurement of the resonator output field. To simplify the treatment, we consider an idealized situation with unit measurement efficiency and no additional noise or decoherence, such that the qubit-dependent response of the resonator is Gaussian. In this case an analytical form for the SNR can be found [36,54] [Eq. (29)], in which $|\epsilon|$ is the amplitude of the resonator drive and τ is the measurement time. The SNR will in general depend on the resonator damping rate κ, and we have set κ = 2χ to give the optimal SNR at long integration times [36,54]. The measurement fidelity can be related to the SNR through

$F = 1 - \tfrac{1}{2}\,\mathrm{erfc}\big(\mathrm{SNR}/2\big). \qquad (30)$

We emphasize that these results hold for the ideal dispersive Hamiltonian, Eq. (2); in other words, the dispersive approximation is assumed to be valid. It should be noted, however, that this approximation will break down for large photon numbers [36].
From these expressions, we calculate the expected measurement infidelities 1 − F for each qubit as a function of integration time τ at $\omega_q/2\pi = 1$ GHz in Fig. 6(b). We have chosen the resonator drive strength such that $\bar n/n_{\rm crit} = 1/5$, where $\bar n = 2(|\epsilon|/\kappa)^2$ is the resonator photon number and $n_{\rm crit} \equiv (\Delta/2g)^2$. The latter can be thought of as a rough measure of when the dispersive approximation is expected to break down [36].
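Since Eq. (29) is not reproduced here, the following sketch substitutes a common long-time Gaussian approximation, SNR ≈ √(2 Γ_meas τ) with measurement rate Γ_meas = 8χ²n̄/κ and κ = 2χ; this stand-in formula and all parameter values are assumptions for illustration, combined with the fidelity relation of Eq. (30).

```python
import numpy as np
from scipy.special import erfc

def fidelity(tau, chi, nbar):
    """Measurement fidelity vs integration time under a long-time Gaussian
    approximation: SNR = sqrt(2 * Gamma_meas * tau), with
    Gamma_meas = 8 chi^2 nbar / kappa and kappa = 2 chi (an assumed form
    standing in for the paper's Eq. (29)); fidelity from Eq. (30)."""
    kappa = 2 * chi
    gamma_meas = 8 * chi**2 * nbar / kappa
    snr = np.sqrt(2 * gamma_meas * tau)
    return 1 - 0.5 * erfc(snr / 2)

# Time to reach 99.99% fidelity for chi/2pi = 1 MHz, nbar = 5 (illustrative):
chi = 2 * np.pi * 1e6                     # rad/s
taus = np.linspace(1e-9, 2e-6, 20_000)    # seconds
F = fidelity(taus, chi, nbar=5.0)
print(taus[np.searchsorted(F, 0.9999)])   # roughly a tenth of a microsecond
```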
For comparison we also show the infidelity of a longitudinal readout scheme for the Majorana box qubit in Fig. 6 (b). We have chosen parameters such that κ and n̄ are equal between the dispersive and longitudinal cases, which for these parameters corresponds to a modulation of the longitudinal coupling strength by g̃_z/2π ≈ 10 MHz. As shown in Fig. 8 in Appendix E, such a modulation can be achieved by a very modest modulation in either the tunnel coupling or the external flux. It is noteworthy that longitudinal readout gives a much faster (and thus higher fidelity) readout for this modest value of parametric modulation. For example, doubling the modulation amplitude translates to a readout that is roughly twice as fast.
Finally, we calculate the measurement integration time required to achieve a measurement fidelity of 99.99% for the dispersive readout protocols as a function of qubit splitting ω q , shown in Fig. 6 (d). For the chosen system parameters and assumptions, both Majorana qubits may achieve high-fidelity dispersive measurements in a fraction of a microsecond. Furthermore, the Majorana box qubit, which produces a larger dispersive shift, benefits from a faster readout time at the same value of ∆/g and qubit frequency.
VI. CONCLUSIONS
Our results are very promising for dispersive readout as a means to measure Majorana qubits quickly and with high fidelity. We have calculated the qubit-dependent dispersive shifts of a readout resonator for Majorana transmons and Majorana box qubits, under a simple capacitive coupling of the resonator to the qubit. This dispersive shift can be used to read out the state of the qubit by measuring the phase shift of a resonant probe tone on the resonator. We find that the dispersive shift for Majorana qubits of both types can be in the MHz range for reasonable parameters. These results are encouraging, as they indicate that well-established and extremely successful readout techniques can be adopted from the circuit QED context [36,38].
There are some key differences in the QND nature of dispersive readout for a Majorana transmon compared to a Majorana box qubit. For the Majorana transmon, the qubit-resonator interaction manifestly preserves the Majorana parity, independent of the detuning of the readout resonator from the relevant transitions between qubit energy levels. This protection originates from the fact that both Ĥ_MT in Eq. (6) and Ĥ_int in Eq. (13) commute with iγ̂₂γ̂₃. The Majorana parity is therefore preserved independently of whether the perturbative dispersive approximation Ĥ_disp, Eq. (2), is valid. As a result, the dispersive readout of a Majorana transmon is quantum non-demolition in a stronger sense than for conventional charge qubits.
For the Majorana box qubit, the situation is different. Here, coupling to the resonator is induced by tunneling of charge from the qubit island to a nearby quantum dot. In a readout scheme, the tunnel coupling should be turned on adiabatically, such that the system evolves into dressed joint eigenstates of the qubit-dot system [see Eq. (20)]. However, the interaction with the resonator induces transitions between dressed eigenstates of different Majorana parity; see Eq. (26). A quantum non-demolition readout is therefore only approximately recovered in a limit where the relevant transition frequency for moving an electron between the island and the dot is far detuned from the resonator frequency, leading to Ĥ_disp in Eq. (2). The readout is therefore no longer QND when the dispersive approximation breaks down, which can happen, e.g., for large photon numbers. It is worth noting that the joint Majorana-dot parity is conserved by Eq. (26). If one can also perform high-fidelity, QND measurements of the dot, it may be possible to confirm the Majorana parity by using a subsequent dot measurement after decoupling the two systems [33].
These fundamental differences make a quantitative comparison of readout fidelity and speed more challenging. In Section V we compared the two qubits at equal qubit energy splitting, at a fixed value of coupling strength relative to resonator detuning, g/∆, and at a fixed value of resonator photons relative to n_crit ≡ (∆/2g)². With this choice, our results suggest that the Majorana box qubit produces larger dispersive shifts and may therefore enjoy a faster readout. However, because the breakdown of the dispersive interaction will manifest itself differently for the two qubits, this comparison might not be fair. In particular, the performance of dispersive readout for large photon numbers requires further study.
We also draw attention to the functional dependence of the dispersive shift on flux that threads the relevant loops for each respective qubit, shown in Fig. 3 (d) and Fig. 5 (c). With the likelihood that offset fluxes are present in the system, and given some distribution of these between different qubits, a challenge for the scalability of the (measurement-based) approaches is the requirement to locally tune the flux for each qubit in order to maximize readout fidelity. From this perspective we find that the Majorana box qubits are favourable, since flux tuning is likely only necessary for a limited number of qubits that have offsets very close to ϕ x = π.
Finally, we note that the longitudinal readout protocol introduced in Ref. [30] is entirely independent of the resonator detuning, and can therefore be used in a regime where the parity breaking terms for the Majorana box qubit are negligible (corresponding to a regime where the dispersive shift is negligible). Our results moreover show that longitudinal readout may lead to even faster and higher fidelity readout in practice, given that a reasonable parametric modulation is possible.
ACKNOWLEDGMENTS
We thank Andrew Doherty and Torsten Karzig for useful discussions. This research was supported by the Australian Research Council, through the Centre of Excellence for Engineered Quantum Systems (EQUS) project number CE170100009 and Discovery Early Career Research Award project number DE190100380.

[Fig. 7 caption fragment: Here ε/h = 20 GHz and t_{L,R} = t is tuned such that the energy splitting between the two lowest levels, ω_mt, is equal to the corresponding case from Fig. 3(a).]
This "indirect" model, wherein the MZMs interact via virtual occupation of the quantum dot, can be compared to a "direct" interaction, Eq. (5) [46], where φ̂ = φ̂_L − φ̂_R. The two models agree when δ = ε + E_C is large relative to the tunnel couplings t_{L,R}, where E_C is the charging energy due to capacitive coupling between the two topological superconductors, from Eq. (3). To demonstrate this, we have numerically plotted the spectrum and dispersive shifts of a Majorana transmon qubit for both interaction terms in Fig. 7. | 9,791.6 | 2020-08-31T00:00:00.000 | [
"Physics"
] |
How players across gender and age experience Pokémon Go?
The purpose of this study is to provide insights into player experiences and motivations in Pokémon Go, a relatively new phenomenon of location-based augmented reality games. With the increasing usage and adoption of various forms of digital games worldwide, investigating the motivations for playing games has become crucial not only for researchers but for game developers, designers, and policy makers. Using an online survey (N = 1190), the study explores the motivational, usage, and privacy concerns variations among age and gender groups of Pokémon Go players. Most of the players, who are likely to be casual gamers, are persuaded toward the game due to nostalgic association and word of mouth. Females play Pokémon Go to fulfill physical exploration and enjoyment gratifications. On the other hand, males seek to accomplish social interactivity, achievement, coolness, and nostalgia gratifications. Compared to females, males are more concerned about the privacy aspects associated with the game. With regard to age, younger players display strong connotation with most of the studied gratifications and the intensity drops significantly with an increase in age. With the increasing use of online and mobile games worldwide among all cohorts of society, the study sets the way for a deeper analysis of motivation factors with respect to age and gender. Understanding motivations for play can provide researchers with the analytic tools to gain insight into the preferences for and effects of game play for different kinds of users.
Introduction
One of the most notable gaming phenomena of the current decade that attracted the public toward games has been Pokémon Go by Niantic Labs. Pokémon Go is a freemium/free-to-play mobile-based augmented reality game launched initially in the USA on July 6, 2016, which has now been widely embraced across the globe. Since its launch, the game has captured exceptional attention in the media as well as in the gaming community. Pokémon Go has inspired many people toward physical and outdoor activity, which led to its recognition as the best idea for physical exercise by Suomen Latu (The Outdoor Association of Finland). Furthermore, the game has also been characterized as a highly useful channel for uniting the young as well as the older generation in outdoor activities. 1 During the first 6 months after its launch, the game also set a number of records, including the fastest game to reach the top of the charts in the App Store and Google Play. 2 Pokémon Go was also the most downloaded game (and application) during its first week on the App Store. 3 In 2016, Pokémon Go was among the top search terms in Google, further highlighting its global popularity. 4 Pokémon Go is a successor of Ingress, which was also published by Niantic Labs (a spin-off company from Google) back in 2012. By employing GPS technology, Ingress, a free-to-play location-based multiplayer game, developed a complex and robust digital narrative by combining augmented reality with geo-media [13,30]. Borrowing from a science fiction narrative, the players of the game were cast as agents who contend for control of real-world locations through the game. Although Ingress was not able to attain the same hype and attention as Pokémon Go, it managed to demonstrate the strong and bright potential of location-based augmented reality applications, as well as set a path not only for Pokémon Go but also for future augmented reality applications. Pokémon Go and Ingress are positioned as location-based augmented reality games as they employ mobile tracking technology, supplementing it with highly playful and close-to-reality interactions. In essence, Pokémon Go and other location-based augmented reality games derive the concept from geocaching, also referred to as a "GPS-enabled treasure hunt" [4,52]. Geocaching is a location-based outdoor activity in which an item is concealed at a location anywhere in the world and its longitude and latitude coordinates are published for other geocachers. The geocachers then use a GPS-enabled device to track and find the hidden treasure [4,52]. Pokémon Go (and many other location-based augmented reality games) replicates and augments the core mechanics of geocaching, as it revolves around finding Pokémon creatures (instead of hidden treasure) and battling others with the collected Pokémon. Seeing groups of people playing Pokémon Go, exploring new places, and gathering together and socially interacting with other players at certain "PokéStops" has become commonplace and is analogous to the geocaching context [4,52].
Although Pokémon Go is not the first mobile game combining the virtual with the physical, it represents one of the first real experiences of location-based gaming bundled with social interactivity features to a wider audience [14,59]. Pokémon Go represents a new breed of mobile games that moved gaming from indoors to outdoors, requiring people to walk, jog, or cycle to catch a new Pokémon and exercise to hatch Pokémon eggs among many other activities. The phenomenon associated with outdoor mobile activism through a game (e.g., Ingress and Pokémon Go) that improves one's health has already been witnessed in geocaching applications and has been endorsed by various health organizations and watchdogs 5 as well as by the academic community [2,4,52,59].
Location-based augmented reality games (more specifically Pokémon Go) are a novel gaming genre that syndicates multifaceted features within them to attract a wider gamer demographic than is typically the case with digital games. Due to their adoption and engagement across a diverse population, these games provide a number of gratifications and experiences, as well as raise concerns that are a departure from conventional digital games. For instance, features such as physical activation, outdoor exploration, social interactivity, and health benefits that accrue from those activities, as well as privacy and trust issues that arise due to location and collaboration, are hallmarks of this new gaming genre. While there is a plethora of research on various motives associated with games [24,37,58,65], negative gaming outcomes [11,12,34], demographic variations [9,20,21,57], and gamers behaviors on specific games [12,17,28,35], the novelty and rapid popularity of location-based augmented reality games (particularly Pokémon Go) across the highly diverse global audience provides an opportunity to revisit many of the core issues and findings of gaming research [1,10,23,38].
Understanding uses and motivations in digital games is highly essential and relevant not only for scholars but also for game developers and designers, as in recent years games have witnessed massive adoption by non-gamers (and casual gamers) and have become an integral vein of entertainment media. Besides informing variations of in-game attitudes and behaviors among various age and gender groups, a number of demographic-based nuances associated with the usage and adoption of technology in general and games in particular can be observed [57]. Prior research highlights that female gamers differ in playing styles, motives, level of participation, genre choices, expertise, and preferences when compared to their male counterparts [25,43,67]. Similarly, there are significant differences among usage, adoption, motivations, and expertise of young and old gamers [21,32,34].
Recent research indicates that Pokémon Go has been highly efficacious in appealing to a wide range of gamers, from new gamers to hardcore fans, as well as a large number of females and older age groups. For instance, a study carried out with 18-75-year-old Pokémon Go players in the USA illustrates that the game supported the wellbeing of the players in terms of friendship formation and strengthening as well as walking outdoors [8]. Another study by Kogan et al. [40] investigating the physical and psychological gains through Pokémon Go use by those aged 18 to over 50 years old (predominantly females) revealed that the participants spent more time with family and pets after they started playing the game. Furthermore, the players began spending more time outdoors for walking and exercising since they started to play the game [40].

4 https://www.google.com/2016. 5 http://news.heart.org/Pokémon-go-brings-video-games-outside/.
On the other hand, research has also portrayed some of the adverse effects associated with the usage of the game. A study investigating a random sample of 4000 tweets indicates that the game poses a distraction to drivers, passengers, and pedestrians, hampering public safety [3]. Another study points out the privacy issues associated with Pokémon Go and similar games, as they lack adequate protection, especially for children, such as safety reminders when contacting new users, hiding location by default, and clear processes on safeguarding concerns [49].
As Pokémon Go has also been credited for making digital games mainstream via mobile phones, it remains an intriguing research question as to whether the gratifications (and to a certain extent concerns) vary across different player groups. Additionally, one should also consider whether mobile location-based augmented reality games depict behavioral patterns similar to those indicated by gaming studies or if they differ substantially. In the current study, we investigate two key demographics (gender and age) and how they shape and influence various motivational factors in the context of location-based augmented reality gaming. Furthermore, we investigate the differences that emerge across each of the demographic variables. Thus, this study poses the following research questions:

RQ1: How do various gratifications and concern for privacy vary across gender in the context of Pokémon Go?

RQ2: How do various gratifications and concern for privacy vary across different age groups in the context of Pokémon Go?
While this study is investigating the relationship between demographic factors (age and gender) and the gratifications people derive from playing Pokémon Go and while any game such as Pokémon Go has a player base representing a certain kind of demographic make-up, the relationships between demographic factors and gratifications are generally thought to be fairly stable across games and game genres. Pokémon Go can be considered a blueprint of contemporary location-based augmented reality games. Therefore, while the results of the study first and foremost address this relationship in Pokémon Go, the results can also be cautiously generalized to the context of location-based augmented reality games and furthermore cautiously to games in general. Therefore, the present study contributes to our knowledge on the intricate relationships between demographic factors and how we experience games.
Uses and gratifications framework
Uses and Gratifications (U&G) is a theory that explores how and why people use technology to fulfill their needs and motives [55]. This framework assumes that people use technology to satisfy needs and motivations that play an important role in influencing an individual's intention to use certain technology. The U&G approach assumes that (1) audiences actively participate, (2) audiences' previous experience with technology helps them make motivated choices, and (3) audiences use technology as one way to satisfy everyday needs [55]. The U&G framework is one of the media-use theories that is commonly used by researchers and offers a broad application for understanding media usage. The U&G framework has also been characterized as a highly useful and effective media-use paradigm for diagnosing uses and gratifications of a given technology or service as well as its recurring usage [36]. More recently, the U&G framework has been considered highly suitable by studies in new media research as a tool for explaining media choices of people [23,41,[46][47][48]53].
In relation to understanding motives and experiences, researchers from various domains have opted for the U&G framework to investigate motives associated with various genres and forms of games including video games in general [37,58], online games [50,68], social games [12,28], and mobile games [64]. By employing the U&G framework as a theoretical foundation for examining gaming motivations, a number of existing gratifications for other media have been validated and some new ones being identified. Some of the notable gratifications include enjoyment, fantasy, escapism (hedonic motives), social interaction/connection and social presence (social motives), and self-presentation, self-expression, and achievement (utilitarian motives) [12, 27-29, 35, 58, 70].
More recently, the framework has also been adapted to explore and understand the gratifications and motives associated with Pokémon Go. More details on the application of the U&G framework in some of the notable studies on Pokémon Go are presented in Table 1.
Gender differences
[Table 1 fragment: Gratifications including enjoyment, network externalities, community involvement, and the need to collect motivate players' engagement and their continued intentions to play the game.]

The role of gender and issues related to various aspects of IS adoption, usage, and attitudes has been actively debated since the early 2000s, for example, in computer-mediated communication [16], online shopping [26], information seeking [6], and more recently social media [45][46][47][48]. Although digital games are more popular among and played overwhelmingly by males, the rooted notion of digital games being dominated by adolescent males and perceived generally as a stereotypical male activity [43,57] has been confronted as games have become mainstream for a wider demographic population [39,70]. Although female gamers are markedly under-represented in the digital gaming arena, the participation of female gamers has been increasing rapidly and the age span of gamers has also been widening exponentially [20,25,67,70,71]. Due to this shift, a better understanding of gamer motives, behaviors, and concerns, and of how various gamer demographics differ, is called for. It is important to understand whether changing gaming devices, such as smartphones, and novel experiences, such as gaming outside, have any effect on gender participation.
Age differences
In relation to behavioral intentions of users in computer-mediated environments, age has been one of the predominant factors exerting effects on a number of constructs such as self-efficacy, skill acquisition, trust, willingness to adopt, social/pragmatic dependency, and outcome expectations [44]. Prior research on online information seeking [56], cellular phones [18], tablet devices [44], and social networking sites [46][47][48] suggests that perceptions, usage, access, adoption, as well as diffusion of various technologies vary substantially among younger and older age groups.
Much of games research concentrates on studying adolescents and young gamers [9,11,59]. Although computer games are played and embraced by all age groups, the research literature indicates that the popularity and diffusion of digital games is much higher among adolescents and younger age groups. More recently, studies on older gamers have begun to appear, focusing on health and cognitive perspectives [15,51]. Despite the ever-increasing popularity of digital games among older age groups, comparisons of usage, motives, and attitudes between adolescents and older age groups have been highly limited. In the gaming literature, gaming has often been characterized as one of the core leisure activities for many adolescents. Consequently, research within academia has given limited attention to older groups of active gamers, as it has predominantly focused on adolescents and young gamers. Studying active older gamers and comparing their motivations, activities, attitudes, and concerns with those of younger gamers has been even more limited.
Data and participants
The data for this study were collected through a web-based survey. By employing the virtual snowball sampling method, the survey targeted participants across the globe who were current players of Pokémon Go or had recently played the game. Virtual snowball sampling is a highly popular technique that has been predominantly employed by a number of recent gaming [12,28,64] and social media studies [7,22,[46][47][48]]. The survey was initially publicized on a number of gaming research mailing lists as well as the Twitter profile of the first author. To minimize the inbuilt biases that are usually observed in traditional snowball sampling, the participants were not invited personally and the survey was marketed in different channels. The main aim of employing this technique was to reach a wider audience/geography, thus providing a better base for the generalization of the results [5]. In the brief description of the promotional text, we requested readers to forward/post the survey on relevant forums/social media channels. During the course of 1 month, the survey link was tweeted by a number of gaming professionals, academics, and research groups. Furthermore, the survey was posted on a number of Pokémon Go Facebook fan pages and groups. Three days prior to the survey closure, we requested that owners/admins of all the traceable forums post a reminder text for their audience.
All the questions in the survey were mandatory. The study procedures were consistent with the ethical principles defined by The Finnish Advisory Board on Research Integrity. Participation in the survey was voluntary, and users could withdraw at any time. During the one-month period, 1315 respondents completed the survey. After the data cleaning process, the final data set consisted of 1190 valid responses. Table 2 lists demographic information of the respondents.
The instrument used in the study was developed from the relevant previous literature, including measures for general gaming behavior, uses and gratifications, concerns for privacy, intentions, and enjoyment. The gratifications adopted in the current study were specifically chosen due to their significance in the seminal gaming literature and more recently within the context of Pokémon Go (see Table 1). With regard to privacy concerns, we opted to study their role across age and gender, as privacy has often been regarded as a salient factor impacting the use of new media [45][46][47][48][61].
Individual questions were adapted to a Pokémon Go context. As Pokémon Go is one of the first mobile games blending game elements with the actual environment based on the player's location, and one where physical exploration of the location is a central game play mechanic, we included a new self-developed construct to measure physical activity and extending virtual-world exploration and immersion concepts to the physical world. All items were measured using seven-point Likert-type scale answers, from "strongly disagree" (1) to "strongly agree" (7). Participants were also asked about the reasons for starting Pokémon Go (Table 3).
Results
The reliability of the survey instrument was established by testing each construct's reliability and content validity using Cronbach's α. All scores were above the recommended level of 0.70 [60]. Assumptions of normality were tested for all independent variables using histograms. As the first step in our analysis, an independent-samples t test was conducted to evaluate how gender affects playing behavior and perceived motivations (Table 4). An alpha level of 0.05 was used for all statistical tests.
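As an aside for readers reproducing the reliability check, Cronbach's α for a multi-item construct is straightforward to compute. The following sketch uses synthetic 7-point Likert responses, since the item-level survey data are not reproduced here; the construct name is a hypothetical example.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) array of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 7-point Likert responses for a 4-item construct (e.g., "Enjoyment")
rng = np.random.default_rng(0)
base = rng.integers(3, 8, size=(100, 1))                       # shared latent level
items = np.clip(base + rng.integers(-1, 2, size=(100, 4)), 1, 7)
print(f"alpha = {cronbach_alpha(items):.2f}")                  # should exceed 0.70 [60]
```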
Based on the results, there were statistically significant differences between males and females in all constructs except Immersion. In order to inspect the impact of age on gaming experience while controlling for the time spent playing, a one-way analysis of covariance (ANCOVA) was used. Results of the one-way ANCOVA analysis for different age groups are presented in Table 5. An alpha level of 0.05 was used for all statistical tests.
The results show that age has a statistically significant impact on the Social Interaction, Achievement, Immersion, Coolness, Privacy Concerns, and Nostalgia dimensions while controlling for time spent playing. Except for Privacy Concerns, time spent playing is a significant covariate for the dependent variables; for Privacy Concerns, it is not. To further explore how age impacts these dimensions, Tukey's HSD post-hoc comparisons of age groups were conducted. The Tukey's HSD post-hoc tests indicated that there are significant differences in the adjusted mean scores for Social Interaction, Achievement, and Immersion between age groups 1-2 and 6 (respondents below 25 years and over 40 years). For Privacy Concerns and Nostalgia, Tukey's HSD post-hoc tests revealed significant differences in the adjusted mean scores between age groups 1-2 and 4-6 (respondents below 25 years and over 30 years). For Coolness, Tukey's HSD post-hoc test did not indicate any significant differences between age groups.
The inter-correlation matrix (Table 6) and means plots for these constructs showed that for Achievement, Coolness, Social Interaction, and Nostalgia, the score decreased with age. However, for Privacy Concerns, the score increased slightly with age. For perceived Enjoyment or Physical Exploration, age seems to have no significant impact. The two-way multivariate analysis of variance (two-way MANOVA) revealed that there is no statistically significant interaction effect between gender and age on the combined dependent variables, F(35, 4932) = 1.448, p = 0.063; Wilks' Λ = 0.958.
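For illustration, the analysis pipeline reported above (independent-samples t test, one-way ANCOVA with time spent playing as covariate, Tukey's HSD post-hoc comparisons, and a two-way MANOVA) can be sketched in Python as follows. The data frame, construct scores, and group labels are hypothetical stand-ins for the survey data, not the study's actual data set.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical survey data: one row per respondent
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "gender": rng.choice(["male", "female"], n),
    "age_group": rng.choice(["18-25", "26-30", "31-35", "36-40", "40+"], n),
    "play_time": rng.uniform(0.5, 2.0, n),        # hours/day, used as covariate
    "enjoyment": rng.normal(5.4, 1.0, n),         # 7-point Likert construct means
    "nostalgia": rng.normal(4.8, 1.2, n),
})

# Independent-samples t test for gender differences (alpha = 0.05)
male = df.loc[df.gender == "male", "enjoyment"]
female = df.loc[df.gender == "female", "enjoyment"]
print(stats.ttest_ind(male, female))

# One-way ANCOVA: age effect on nostalgia, controlling for play time
model = ols("nostalgia ~ C(age_group) + play_time", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD post-hoc comparisons between age groups
print(pairwise_tukeyhsd(df["nostalgia"], df["age_group"]))

# Two-way MANOVA: gender x age on the combined dependent variables
mv = MANOVA.from_formula("enjoyment + nostalgia ~ C(gender) * C(age_group)", data=df)
print(mv.mv_test())  # reports Wilks' lambda among other statistics
```

The ANCOVA here enters the covariate additively in an OLS fit and reads off a Type II ANOVA table, mirroring the design described above.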
Discussion
The present study examined the impact of two key demographic factors, namely gender and age, on a number of gratifications in the context of a mobile location-based augmented reality game, Pokémon Go. Furthermore, the concerns associated with the privacy of personal and gaming data were addressed in the study.
Evidently, nostalgic associations with the game (childhood memories) and the gaming characters (attachment to Pokémon characters) were the key reasons that influenced the participants (reported by more than half of the respondents) to start playing the game (see Table 3). Moreover, social aspects and word of mouth (e.g., recommendations by family/friends and known people playing the game) also persuaded many participants (over one-fourth) to begin their interactions with Pokémon Go. Intriguingly, positive reviews about the game on the web as well as popularity indices and prominence in various digital distribution platforms (Google Play and App Store) were not considered a significant trigger for playing Pokémon Go. The motivations inducing Pokémon Go game play may be influenced by the structural characteristics of online games. Previous research [33] has highlighted that specific structural characteristics of online games (such as character development, socialization, and chat features) may induce playing. Therefore, it could be speculated that these characteristics may induce longer playing sessions, as users of new online games might enjoy the social aspects.
One of the most interesting findings of this study relates to the gender differences found in game experiences. Previous studies consistently suggest that males enjoy games more than females and highlight female preference for relationship- (social) and coolness-based motivations [20,25,63]. Based on our data, females (M = 5.65) perceive greater game enjoyment in Pokémon Go than males (M = 5.24). Even though enjoyment is rated as significantly more important by females, the mean scores for this gratification are the second highest among males. These results are somewhat contrary to the popular belief that, in general, males enjoy gaming more than females [20,25,63]. There can be multiple reasons behind this finding, e.g., the type of game, game features, as well as the gamer population. The sample in our study is quite broad as compared to most prior gaming studies, which focus predominantly on young male gamers [9,11,21,37,50,58]. The current study has a fairly well-balanced sample representing a number of age groups as well as good representation of gender groups. Furthermore, an overwhelming majority of the study sample had fewer than five games installed on their smartphones (62.7%), indicating that the sample consists predominantly of highly casual gamers. Similarly, almost half of the study participants played games somewhere between 30 min and 2 h every day. Considering the background of Pokémon Go, i.e., the casual nature of the game coupled with novel concepts, these factors, most prominently enjoyment, appear to have gained more traction among female players than males. It is also quite probable that the enjoyment motive (as well as other gratifications) is associated strongly with the game type and genre. For instance, many games studies have repeatedly indicated that males enjoy the violent features of games far more than females [25,57]. As Pokémon Go is a mobile casual game that attracts a large gamer demographic, it is highly likely that prior findings might not be valid for this game.
Female players also scored higher on physical exploration gratification associated with playing Pokémon Go. This is one of the other novel findings of this study, as we are unaware of any other study in the context of gaming that has explored this gratification. With respect to female players, it is highly likely that physical exploration highly correlates with and induces enjoyment gratification. Due to a highly novel gaming feature that encourages them to visit/explore new places as well as aids in improving their health and fitness, women seem to be enjoying the game much more than their male counterparts. The likelihood of being attracted by the physical exploration features of a game can be a response to the fact that women are generally more cautious and concerned about their health, and that leads them to use various technologies for seeking health-related information and support more actively than men [6]. Furthermore, they experience a higher degree of enjoyment while seeking health-related information through technology [6], which can be a possible explanation of this strong linkage.
Results also point out that males are more attracted by the nostalgia associated with the Pokémon Go game and characters. The core reason lies in the fact that in the timeframe of the Gameboy, Pokécards, and the Pokémon series (late 1990s and early 2000s), gaming was predominantly a male activity, as games were played mostly by teenage/young men. During that specific era, very few women were into gaming or watching anime/TV series, which were primarily male-centric [66]. Also, the social aspects of the Pokémon Go gaming experience, previously considered more important for females, seem to be more important to males. These findings suggest that the socializing aspects that are inherent in new types of gaming experiences appeal more to male users. In line with prior gaming studies [27,31,70], the results affirm that male players scored higher for achievement and coolness gratifications in Pokémon Go.

[Table 6 caption: Inter-item correlation for survey instruments. *Correlation is significant at the 0.05 level; **Correlation is significant at the 0.01 level.]

Another unexpected, yet intriguing finding from this study relates to the concerns associated with privacy. Most of the IS literature consistently points out that females are more concerned about various aspects of privacy, especially in the social networking domain [45,[46][47][48]]. Our results, on the contrary, indicate that males are more concerned about privacy than females. The most obvious reason might be that female gamers never imagined that another person or organization could pose a threat to them or their gaming data through game play. It is quite likely that many female gamers have never heard of or experienced firsthand privacy breaches on their mobile devices due to a game. They might also believe that the game is developed and published by a well-known and reputable company (many players consider Nintendo the publisher of Pokémon Go), hence their data is safe. As males generally play more games and are active on a number of gaming forums and communities, it is highly probable that they might have read, heard about, or experienced privacy breaches associated with digital games.
With respect to age, the results from the current study reiterate some of the commonly reported findings of prior gaming studies as well as reveal novel ones. Consistent with prior research, most of the common gaming gratifications, including social, achievement, immersion, and coolness, depict inverse linear patterns with age. Younger players of Pokémon Go are overwhelmingly attracted by these gratifications. With an increase in age, there is a consistent drop in the appeal of these gratifications. Physical exploration and enjoyment gratifications do not significantly differ among the age groups, yet interestingly, their means are the highest among all the gratifications. Another highly interesting finding relates to the nostalgia factor among different age groups. This factor has been a significant one and stands out the most for the young adult age group. The Pokémon franchise, introduced in 1995, evolved over time and reached the pinnacle of its popularity during the early 2000s, branching out through a number of anime series, TV specials, musicals, and trading cards, among other iterations. The association with the brand, characters, and the game itself seems to have revived and become more popular with the age groups that interacted with the characters or brand when they were younger. Finally, with respect to concerns about privacy, these are lowest among the youngest and the oldest age groups. Results indicate that privacy concerns increase with age and that the young/young-adult groups (26-30 and 31-35 years) are the most concerned with respect to privacy. These results are strongly in line with previous notions that teenagers and older age groups are largely less concerned about their privacy, as the older users are generally unaware, while the younger ones seem not to take it seriously or have an "it won't happen to me" attitude [46][47][48][61].
Overall findings from the current study highlight the complex nature of motivations when considering gender and age differences. The results highlight the fact that game design, demographics, motivations, and actual usage have changed over time alongside the emergence of new technologies. Two of the most important aspects of online games are the gaming experience and communication aspects. Socializing, exploration, physical activations, and enjoyment motivations seem to be popular among the participants in this study. These findings support the U&G framework that suggests that people use technology in order to satisfy a wide range of needs. Online games are extrinsically rewarding as they deliver immediate access to other individuals and mobile applications. Moreover, they offer users the opportunity to customize, manipulate, and explore the virtual environment and their surroundings. Hence, in line with the U&G framework, the features of augmented reality games, such as Pokémon Go, provide high-frequency rewards that promote regular usage and help satisfy the relevant needs of gamers hailing from multiple cohorts.
Implications for game design and social computing
In terms of implications for social computing research and game design, the results provide insights into social behavior and human social dynamics. Through their participatory nature, augmented reality games have opened a new dimension allowing users across different age cohorts, and in particular, females, to feel more empowered. This is an area for future research, which may include expanding the theoretical base for game design by drawing from more disciplines and guiding new business ventures. There is a strong need for cross-disciplinary research to bridge social and technical aspects associated with digital gaming. More generally, the present research study has implications for education, social sciences, and society at large. The results of the study add to the literature on augmented reality games by focusing on player motivations in a highly popular gaming phenomenon (i.e., Pokémon Go).
The study findings will be of interest to game developers and designers, as they signify how certain constructs play a role in playing online games. The findings also highlight the influence of structural characteristics in inducing play. The present study has shown that there are various motivations for playing online games and substantial differences prevail among gender and age groups. Designing gaming applications for niches as well as for broader demographics can provide insights on relevant motives and concerns. With the contemporary gaming arena leaning further toward mobile (and online) gaming, motivations such as physical exploration, social interaction, and nostalgia should be prioritized while crafting new games.
Digital games that make use of new technology (such as augmented reality) need to strongly emphasize and rely on some of the well-known features of human-computer interaction, namely socializing, physical pursuit, and competing against others. Game designers may consider these features to help improve augmented reality interactions, features, as well as overall gaming experiences. Interestingly, digital distribution platforms, rankings, and positive online reviews, often mentioned as key success factors for mobile games, were among the least mentioned reasons for playing Pokémon Go, which suggests that, at least for a game with strong prior brand recognition, these might not be as critical. As nostalgic association and word of mouth were regarded as highly significant reasons, future games can also target these to promote their content. These suggestions provide designers and developers with a blueprint for successful augmented reality games in the future. More specifically, the present study suggests that the focus should be on the positive aspects of social computing.
Study limitations
The present study has several limitations, namely that the sample was self-selected and might not be representative of online game users. In this sense, online game users who may be concerned about their playing behavior could have been attracted to the study in order to provide insights into their own game play. Secondly, a few methodological issues surrounding the use of self-report measures (e.g., social desirability bias, recall biases) are also present in this study; these could have been mitigated by using other data sources, such as those obtained with qualitative methods or with a case study approach. However, using alternative data collection methods in such a study may have resulted in limited geographical reach, a high dropout rate, and a considerably smaller sample size. Even though the majority of the respondents can be categorized as casual gamers, as 62.7% of them have fewer than 5 games installed on their phones, it is probable that the current sampling method might be potentially biased. It is somewhat likely that the respondents may be the "hard core Pokémon gamers" who follow game-related social media, blogs, or gaming forums, and the results might or might not be generalizable to "casual gamers" who do not follow such channels. Even if this holds true, understanding the motivations of this specific audience (hard core gamers) is also highly relevant and interesting for practitioners and researchers, since they are the ones who are more likely to invest money and spend more time in online gaming.
Conclusion
With the increasing use of online and mobile games worldwide, investigating the motivations for playing different games on various platforms offers highly relevant and essential insights to researchers, developers, designers, as well as policymakers. We present findings from a study that examines the effects of age, gender, uses, and motives in the relatively new phenomenon of location-based augmented reality games. The study has set the way for a deeper analysis of motivation factors and associated concerns among different cohorts of players. Understanding motivations for play can provide researchers with the analytic tools to gain insight into the preferences for and effects of game play for different kinds of users. Future research could make use of data mining techniques to test real-time data gathered by users to gain a deeper insight into online game use. Also, using more innovative techniques that allow researchers to gather location-specific data (via location tracker apps) would allow for comparisons across countries and cultures. This would provide further insights into how users of new technology interact with augmented reality games. | 7,856.8 | 2019-10-16T00:00:00.000 | [
"Computer Science",
"Sociology"
] |
Targeting Monoacylglycerol Lipase in Triple Negative Breast Cancer Reduced Tumor-Associated Inflammation, Tumor Growth and Tumor Colonization in the Brain
While the prevalence of breast cancer metastasis in the brain is significantly higher in triple negative breast cancers (TNBCs), there is a lack of novel and/or improved therapies for these patients. Monoacylglycerol lipase (MAGL) is a hydrolase involved in lipid metabolism that catalyzes the degradation of 2-arachidonoylglycerol (2-AG), linked to the generation of pro- and anti-inflammatory molecules. Here, we targeted MAGL in TNBCs using the selective MAGL inhibitor AM9928 (hMAGL IC50 = 9 nM, with prolonged pharmacodynamic effects of 46 hours residence time). AM9928 blocked TNBC cell adhesion and transmigration across human brain microvascular endothelial cells (HBMECs) in 3D co-cultures. In addition, AM9928 inhibited the secretion of IL-6, IL-8, and VEGF-A from TNBC cells. TNBC-derived exosomes activated HBMECs, resulting in secretion of elevated levels of IL-8 and VEGF, which were inhibited by AM9928. Using in vivo studies of syngeneic GFP-4T1-BrM5 mammary tumor cells, AM9928 inhibited tumor growth in the mammary fat pads and attenuated blood brain barrier (BBB) permeability changes, resulting in reduced TNBC colonization in brain. Together, these results support the potential clinical application of MAGL inhibitors as novel treatments for TNBC.
[Figure 4 caption fragment: Mice were treated with vehicle control or with AM9928. Murine tumor cells in brain were detected by GFP immunostaining (GFP antibodies, 1:50 dilution; Abcam) with respective controls under the same standardized conditions; brain nuclei were counterstained with DAPI (blue). n = 10 mice/treatment; representative images of over 50 images/fields from three independent experiments; scale bar = 20 μm. (d) Quantitative analysis of tumor cells in the brain: tumor areas at day 28 were detected by immunostaining with GFP antibody; *p < 0.05, **p < 0.005 vs. vehicle control, Mann-Whitney U test; magnification ×40, scale bar = 10 μm.]
Breast cancer is a common cause of brain metastases, occurring in at least 10-16% of patients and reaching 25-35% of TNBC patients. 1-5 Unfortunately, patients who are diagnosed with brain metastases often have a poor prognosis with short overall survival times. 6 Triple negative breast cancer (TNBC) more frequently affects younger patients and has a higher prevalence in African-American and Hispanic women. 6,7 TNBC tumors are larger in size and more biologically aggressive, with lymph node involvement. TNBC patients often have a higher rate of distant recurrence and a poorer prognosis than patients with other breast cancer subtypes. 8 Breast cancer metastases in brain (BCM/B) show significant morphological and genomic heterogeneity. [1][2][3][4][5][6][9][10][11][12][13] Patients with BCM/B have high mortality resulting from the brain lesions and are resistant to chemotherapy treatments. [9][10] Metastatic breast tumor cells transmigrate across the blood-brain barrier (BBB) and form colonies of tumor cells in brain. 9,10,15 Under normal conditions, the BBB is a highly selective barrier due to the existence of tight junctions (TJs) between adjacent brain microvascular endothelial cells (BMECs). However, in the process of invasion of tumor cells to the brain, inflammatory cytokines and chemokines secreted by these infiltrating tumor cells disrupt the BBB integrity. Although the BBB prevents the delivery of most therapeutic drugs into the brain, the circulating metastatic tumor cells invade the damaged BBB to form colonies in the brain. 9,10 Several genes were shown to be involved in the development of brain metastases, including cyclooxygenase COX-2, the EGFR ligand HBEGF, and α-2,6-sialyltransferase ST6GALNAC5, 9,16 all involved in facilitating cancer cell passage through the BBB. We have previously reported the roles of the proinflammatory peptide P in impairing the BBB integrity and of the angiogenic factor Angiopoietin-2 in mediating activation of brain microvascular endothelial cells and BBB impairment, resulting in infiltration of TNBCs into the brain. 17,18 Further, loss of E-cadherin was found in breast cancer metastasis, and its expression inversely correlates with tumor stage, pathologic stage, and prognosis of cancers of epithelial origin. 19 The tumor microenvironment (TME) is important in cancer progression. [10][11][12] The TME is characterized by chronic inflammation, which stimulates tumorigenesis, especially in inflammatory breast cancer, where metastasis arises at the initial stage. The TME is comprised of cells surrounding the tumor (such as macrophages, fibroblasts, and endothelial cells) and the extracellular matrix (ECM). The TME in tumor niches differs from the healthy tissue microenvironments in cell type composition and phenotype. [10][11][12] The TME promotes tumor development by complex signaling molecules that include soluble secreted molecules such as cytokines, chemokines, growth factors, proinflammatory enzymes, exosomes, and matrix remodeling proteinases. [10][11][12][13][14] Interestingly, extracellular vesicles, a heterogeneous group of cell-derived membranous structures comprising exosomes and microvesicles, were shown recently to breach the intact BBB via transcytosis. 21 Although tumor-derived exosomes are being recognized as essential mediators of intercellular communication between cancer and immune cells, it is not known whether exosomes derived from breast cancer cells can directly activate brain endothelial cells.
Cancer cells endogenously synthesize 95% of free fatty acids (FFAs) de novo 15 and incorporate and remodel exogenous palmitate into structural and oncogenic glycerophospholipids, sphingolipids, and ether lipids. 15 The fatty acid synthesis (FAS) pathway confers a survival advantage to cancer cells, especially tumor cells that are resistant to chemotherapy. 15 Fatty acid-binding protein 5 (FABP5) promoted lipolysis of lipid droplets, de novo fatty acid synthesis, and coordinated lipid signaling that promoted prostate cancer metastasis. 20 Increased FFA levels in tumors lead to enhanced tumor aggressiveness by increasing lipid synthesis mediated by lipolytic pathways. 15,[22][23][24] Thus, targeting lipid metabolism may prevent the gain of survival advantage and improve treatment response in cancer cells.
MAGL is a serine hydrolase that regulates a fatty acid network that promotes cancer pathogenesis by enriching pro-tumorigenic signaling molecules. MAGL is a key hydrolytic enzyme in the FFA tumor network reported in colorectal cancer, neuroblastoma, nasopharyngeal carcinoma, and other cancers. [21][22][23][24] MAGL primarily hydrolyzes the endocannabinoid 2-AG and other monoacylglycerides. In addition, MAGL indirectly controls the levels of free fatty acids derived from their hydrolysis and of other lipids derived from the metabolism of fatty acids with pro-inflammatory or pro-tumorigenic effects. 24 Recently, 2-AG was shown to be the source of arachidonic acid, the precursor of prostaglandins and other inflammatory mediators. 45,51,53 Accumulating studies have shown that MAGL inhibition impaired cell migration, invasion, and tumorigenicity in some types of cancers. 20,[22][23][24] Further, MAGL promoted progression of hepatocellular carcinoma via NF-κB-mediated epithelial-mesenchymal transition. 52 Taken together, although MAGL has been shown to be involved in some cancers and inflammatory diseases, its specific roles in TNBC tumor progression and the brain metastasis process are still unknown.
A new series of highly potent carbamate MAGL inhibitors was synthesized and characterized at the Center for Drug Discovery, Northeastern University. In this study, we have investigated the inhibitor AM9928, which exhibited high potency against recombinant human MAGL with an IC50 value of 8.9 nM and lacked any affinity for the cannabinoid receptors CB1 and CB2. [25][26][27] AM9928 demonstrated: (1) potency and selectivity for the target; (2) suitable physicochemical properties with low lipophilicity (ClogP 3-4); (3) good microsomal stability (>20 min) and plasma stability (>120 min); and (4) prolonged pharmacodynamic effects as determined by 1H NMR spectroscopy, assessing the time required for MAGL reactivation (residence time) following the covalent inhibitory interaction between ligand and protein.
Since MAGL is important for lipid metabolism and lipid metabolism plays various roles in tumorigenesis, we investigated MAGL's role in TNBC growth and tumor colonization in brain. MAGL is highly expressed in TNBCs, which secrete high levels of inflammatory cytokines and chemokines. We hypothesized that tumor growth in the mammary fat pads and tumor cell infiltration across the BBB are facilitated via inflammatory chemokines/cytokines secreted from TNBC cells, leading to activation of HBMECs and resulting in TNBC spreading and colonization in brain. Here, we have performed studies using several human and mouse cell lines: (1) a TNBC human cell line (MDA-MB-231 cells); (2) human and mouse brain-seeking TNBC cell lines (MDA-MB-BrM2 and GFP-4T1-BrM5, respectively); (3) human brain microvascular endothelial cells (HBMECs); and (4) the spontaneous breast metastasis mouse model (syngeneic). We found that TNBC adhesion to HBMECs and transmigration across HBMECs were inhibited by AM9928. AM9928 inhibited TNBC's secretion of inflammatory cytokines such as IL-6 and IL-8, and the angiogenic factor VEGF-A. Notably, AM9928 inhibited in vivo changes in BBB permeability and decreased TNBC colonization in brain. Taken together, these results demonstrate novel mechanisms by which MAGL mediates its effects on brain metastasis through activation of the brain microvascular endothelium and modulation of BBB permeability. These studies support the potential clinical application of MAGL inhibitors as a novel treatment of TNBC tumor growth and TNBC colonization in the brain. The GFP-4T1-BrM5 cells form metastatic brain tumors [16][17][18] and generate multifocal lesions in the cerebrum, the cerebellum, and the brainstem, with features typical of brain metastasis in cancer patients.
MAGL inhibitors: The MAGL inhibitor AM9928 was synthesized and characterized at the Center for Drug Discovery, Northeastern University. AM9928 inhibits human MAGL (hMAGL) with an IC50 value of 8.9 nM [25][26][27] and lacks any affinity for the cannabinoid receptors CB1 and CB2. 25 Isolation of exosomes: The culture supernatants of MDA-MB-231 and MDA-MB-BrM2 cells, untreated or treated with AM9928 for 24 hours at approximately 65% confluence, were harvested after 16 hours of conditioning in serum-free media. Cells and debris were cleared from the supernatants by centrifugation (500 g, 10 min) followed by filtration using 0.22-micron filters (Millipore Inc.). Exosomes were prepared from cell-free supernatants using the Exosome Isolation Kit (Kit# EIK-01, Creative Biolabs, NY). The quantitative and qualitative analyses of exosomes were performed by double-sandwich enzyme-linked immunoassay using the Total Exosome Capture & Quantification Kit (Kit# EQK-04, Creative Biolabs).
In vivo effects of MAGL on BBB integrity and TNBC colonization in brain: Female BALB/c mice (6 weeks old) were purchased from Jackson Laboratories (Bar Harbor, ME). The mice were housed at an AAALAC-accredited facility at Beth Israel Deaconess Medical Center, Boston, and were handled in accordance with the animal care policy of Harvard Medical School. At the end of the experiments, mice were euthanized humanely by CO2 inhalation following treatment, and tumor samples were harvested for further study as described below.
We selected AM9928 for our in vivo studies based on its prolonged target engagement. [25][26][27] We expected that the prolonged inhibitory effect of AM9928 would translate into a longer pharmacodynamic effect compared to other MAGL inhibitors such as AM4301. To that end, we studied the effects of AM9928 on BMEC-TJs and tumor colonization in the brain using the spontaneous breast cancer metastasis mouse model (syngeneic) of mammary tumor cells. We used GFP-4T1-BrM5 cells, which migrate to the brain [16][17][18] and form breast metastases in the brains of BALB/c mice. We administered GFP-4T1-BrM5 tumor cells (5×10^4 cells) into the mammary fat pads of BALB/c mice. More than 70% of these mice developed mammary tumors within 3 weeks, while brain microtumors developed in about 5 weeks. Here, following administration of GFP-4T1-BrM5 cells, mice were injected with AM9928 (10 mg/kg, i.v.) or the vehicle control twice a week for 3 weeks (10 mice/group/treatment). Tumor sizes in the mammary fat pads were measured using calipers, and the volume was calculated using the formula V = 0.52 × length × width². BBB integrity at the end of the experiments was analyzed by the Evans blue test in control groups and in mice treated with AM9928 as compared to vehicle control.
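For clarity, the caliper-based volume formula above can be expressed as a minimal Python sketch (ours, for illustration only; the function name and example measurements are hypothetical):

```python
def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Modified ellipsoid approximation used with caliper measurements:
    V = 0.52 * length * width^2 (length is the longer axis)."""
    return 0.52 * length_mm * width_mm ** 2

# Hypothetical example: a 10 mm x 6 mm mammary tumor
print(tumor_volume_mm3(10.0, 6.0))  # 187.2 mm^3
```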
Briefly, Evans blue (EB) dye (Sigma Chemical Co., St. Louis, MO, USA; 25% in 0.9% NaCl solution) was intravenously injected at a dose of 25 mg/kg under anesthesia. One hour after the injection, the animals were sacrificed.
Brains were weighed, clipped, and individually placed in formamide p.a. (2 mL/brain). The content of dye extracted from each brain was determined by spectrophotometer (Photometer 4010, Boehringer) at 620 nm and compared to a standard curve created by recording the optical densities of serial dilutions of EB in 0.9% NaCl solution. For in vivo imaging, spectral fluorescence images were obtained using a Maestro-based imaging system (CRI Inc., Woburn, MA, USA). Five sets of side-by-side whole-brain images of animals with tumors from the GFP-4T1BrM5 cells treated with vehicle control or with AM9928 were obtained. At the end of the experiment, mice were sacrificed, and brain and mammary fat pad tissues were collected for further analysis. Based on these observations, we selected AM9928 for our studies.
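As an aside, the conversion from a measured OD620 to EB content via such a standard curve can be sketched in Python (our illustration; the dilution series and OD readings below are hypothetical, not measured values):

```python
import numpy as np

# Hypothetical standard curve: OD620 readings of serial EB dilutions
standards_ug_ml = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # EB concentration
standards_od = np.array([0.05, 0.10, 0.21, 0.40, 0.82])  # measured OD620

# Fit a line through the standards (absorbance is ~linear in this range)
slope, intercept = np.polyfit(standards_od, standards_ug_ml, 1)

def eb_concentration(od_620: float) -> float:
    """Convert a brain-extract OD620 reading to EB concentration (ug/mL)."""
    return slope * od_620 + intercept

print(round(eb_concentration(0.30), 2))  # ~2.9 ug/mL on this synthetic curve
```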
The toxicity effects and time kinetics of treatment with AM9928: First, the toxicity of AM9928 on TNBC cells was analyzed. The TNBC cells included the highly invasive MDA-MB-231 and the brain-seeking variant MDA-MB-BrM2 cells. The following doses were examined: 250, 500 or 1,000 nM, for 72 hours. No significant toxicity of AM9928 was observed on the tested cells (Figure 2a).
AM9928 inhibited adhesion and transmigration of breast cancer cells through human brain microvascular endothelial cells (HBMECs): Next, an in vitro adhesion assay was performed using co-cultures of the brain-seeking tumor cells with HBMECs; AM9928 inhibited both tumor cell adhesion to HBMECs and transmigration across HBMEC monolayers.
Tumor growth in mammary fat pads was significantly reduced in mice treated with AM9928 as compared to mice treated with vehicle control (Figure 4b). Quantitation of GFP-4T1-BrM5 tumor cell colonization in brain was determined by ex vivo imaging of the brain and by immunohistochemistry using GFP antibody. The expression of GFP-positive tumor cells was significantly lower in the group treated with AM9928, while the presence of GFP-4T1-BrM5 tumor cells in brain was significantly higher in untreated mice administered GFP-4T1-BrM5 cells and in vehicle-treated mice (Figure 4c, 4d).
Increased BBB permeability was found in mice administered tumor cells and in vehicle-treated mice, while mice treated with AM9928 showed smaller changes in BBB permeability as compared to control mice at day 28 (Figure 5a). We then analyzed the in vivo expression of the BMEC tight junction proteins ZO-1 and claudin-5 in normal brain, as compared to TJs in tumor-bearing brains from the syngeneic GFP-4T1-BrM5 mammary tumor cell model. ZO-1 is the main scaffolding protein, and claudin-5 is known as the TJ sealer protein of the BBB. As shown in Figure 5b, both ZO-1 and claudin-5 were abundantly expressed in the normal brain vasculature. Following administration of GFP-4T1-BrM5 cells to the mammary fat pads, the GFP-tagged tumor cells migrated across the BBB after about 4 weeks. The GFP-tagged tumor cells were detected by immunostaining with either monoclonal pan-cytokeratin (Pan-CK) antibodies or GFP antibodies. No staining of normal brain sections with Pan-CK antibodies was observed (data not shown). These tumor cells were usually located around the brain capillaries, in close proximity to BMECs (BMECs were detected with CD31 antibodies) (Figure 5c).
Both ZO-1 and claudin-5 structures were damaged in the brain tumor vasculature (Figure 6a) as compared to the control brain (Figure 5b). In some images, ZO-1 and claudin-5 structures within BMECs were seen to colocalize with tumor cells, indicating the proximity of BMECs to the tumor cells in vivo. Interestingly, following treatment with AM9928, the expression of both ZO-1 and claudin-5 was less impaired as compared to mice treated with vehicle control (Figure 6a). The number of CD31+ vessels expressing the TJ proteins ZO-1 and claudin-5 was quantified, and images were analyzed using Adobe Photoshop CS2. The expression of ZO-1, as determined by the average sum of intensity, was 582,096, and following treatment with AM9928 the average sum of intensity was 610,920 (Figure 6b). The average sum of intensity of claudin-5 expression in the untreated group was 518,599, and following treatment with AM9928 it was 681,457 (Figure 6c). Although these differences were not statistically significant between the untreated and AM9928-treated mice, both ZO-1 and claudin-5 structures were observed to be less damaged in the AM9928-treated mice, as compared to the vehicle-treated mice.
In summary, the number of mice positive for mammary tumors and brain tumors was significantly higher in the vehicle-treated group as compared to AM9928-treated mice (Table 1). Importantly, the number of surviving mice at day 28 was higher among the AM9928-treated mice as compared to the vehicle control group (Table 1). Thus, AM9928 significantly inhibited changes in BBB permeability and reduced TNBC colonization in brain.
Discussion
During tumor progression, cells undergo reprogramming of metabolic pathways that regulate glycolysis and the production of lipids. 24,[38][39][40] Since MAGL is important in channeling lipid stores toward pro-tumorigenic signaling lipids in cancer cells 41,42 , we studied the mechanisms by which MAGL contributes to promoting TNBC metastases. During tumor development, the energy supply of lipids comes from de novo synthesis rather than circulating lipids, a process modulated and regulated by MAGL. 38 The TME cells are the source of proinflammatory cytokines, 10,15 which are secreted in both an autocrine and paracrine manner through activation of STAT3. 44 The strong inhibition of IL-6 by AM9928 indicates the potential of AM9928 to target inflammation associated with TNBC. Therefore, MAGL may regulate lipid quality and/or quantity to promote aggressive traits such as migration and inflammation in breast cancer cells.
VEGF-A/VEGF-R signaling functions as an important survival pathway in breast cancer cells. [48][49][50] VEGF-A promoted tumor cell self-renewal through VEGF-A/VEGF-R/STAT3 signaling, resulting in activation of Myc, Sox2 and STAT3. [48][49][50] High VEGF-A levels strongly correlated with both STAT3 and Myc expression as well as with tumor metastatic potential. Interestingly, we found significant inhibition of VEGF-A expression by AM9928 (Figure 3c), strongly suggesting the effectiveness of targeting MAGL in the inhibition of angiogenesis. The permeability of the BBB plays a crucial role in brain metastasis and is implicated in the initial stages of cancer cell extravasation. In the presence of invading tumor cells, the local BBB is altered and exhibits heterogeneous permeability. [16][17][18] The changes in permeability of the BBB-BMEC-TJs are a dynamic process dependent on the interaction between TNBCs and BMECs. Our results showed that TNBC transmigration across the BBB resulted in increased BBB permeability (Figure 5). Since AM9928 inhibited tumor growth in the mammary fat pads (Figure 4) and prevented changes in BBB integrity in vivo, resulting in reduced tumor cell colonization in brain, we conclude that MAGL plays a role in TNBC tumor growth and TNBC transmigration across the BBB, in addition to its roles in neurological and neurodegenerative diseases. 51,53 Taken together, we suggest that targeting MAGL may provide a new approach to reducing the expression of proinflammatory factors as well as inhibiting tumor growth and tumor cell colonization in brain.
The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. This study was supported by departmental grants to Hava Karsenty Avraham and Shalom Avraham.
Figure 4
In vivo effects of AM9928 on TNBC tumor growth and brain metastases: a. Schematic presentation of the injection schedule of AM9928 and GFP/4T1BrM5 cells: AM9928 (at 10 mg/kg) or vehicle control was injected i.v. twice a week for 3 weeks. b. Tumor formation in mammary fat pads following treatment with AM9928: Murine tumor formation in mammary fat pads was measured at day 28: n = 10 mice/treatment; *p < 0.05, Mann-Whitney U-test; **p < 0.005 as compared to vehicle control. c. Detection of GFP-positive cells in brain tumors: GFP-4T1BrM5 tumor cells were administered to the mammary fat pads, as compared to control mice, and mice were treated with vehicle control or with AM9928. Murine tumor cells in brain were detected by GFP immunostaining (GFP antibodies, 1:50 dilution; Abcam); their respective controls were used under the same standardized conditions. Brain nuclei were counterstained with DAPI (blue). n = 10 mice/treatment. These are representative images of over 50 images from three independent experiments. Scale bar = 20 μm. d. Quantitative analysis of tumor cells in the brain: Murine tumor cells in brain were detected by GFP immunostaining as described above. The tumor areas at day 28 were detected by immunostaining with GFP antibody: n = 10 mice/treatment; *p < 0.05, Mann-Whitney U-test; **p < 0.005 as compared to vehicle control.
Figure 5
In vivo effect of AM9928 on transmigration of GFP/4T1BrM5 cells across the BBB and on BMEC-tight junction expression. a: In vivo analysis of the BBB permeability: Following injection of Evans blue, a statistically significant increase of the dye was found in the hippocampal region in mice administered 4T1-BrM5 cells and treated with vehicle control, but not in mice treated with AM9928. The level of dye in the control WT mice is also shown for comparison between the groups. This is a representative experiment of three separate experiments. The error bars indicate standard deviations. *p < 0.05 as compared to WT mice control. b: Immunodetection of tight junctions in normal brain CD31+ microvasculature (BMECs). The normal brain microvasculature was immunoassayed with CD31 antibody (1:500 dilution) as a marker for BMECs, using FITC-conjugated secondary antibody. Brain sections were immunoassayed for ZO-1 (1:500 dilution) and claudin-5 (1:500 dilution) using fluorescent Texas Red secondary antibody to detect ZO-1 and claudin-5 expression in normal mouse brain sections. Arrows show the intact ZO-1 and claudin-5 structures. Brain nuclei were counterstained with DAPI (blue).
These are representative images of over 50 fields examined in three independent experiments. Scale bar = 20 μm. c: In vivo effects of GFP-4T1BrM5 tumor cells on the BBB integrity and on tight junction structures: Upper panel: The tumor cells were detected by pan-cytokeratin (Pan-CK) antibody (1:500 dilution) as primary antibody, and fluorescent Texas Red was used as a secondary antibody to detect the tumor cells in brain sections of mice administered GFP-4T1BrM5 cells. The brain microvasculature was immunoassayed with CD31 antibody (1:500 dilution) to detect BMECs. The arrows indicate the tumor cells or the CD31+ BMECs as indicated. These are representative images of over 50 fields examined in three independent experiments. Magnification x40; scale bar 10 μm. Middle and lower panels: Immunodetection of tight junctions in brain tumors: Expression of claudin-5 (middle panel) and ZO-1 (lower panel) in tumor brain sections was detected by immunostaining using FITC-conjugated secondary antibody as indicated. Detection of GFP-4T1BrM5 cells in vivo was performed by immunostaining with Pan-CK antibody. Co-immunostaining of the tumor cells with claudin-5 or with ZO-1 is shown in the merged figures. Brain nuclei were counterstained with DAPI (blue). These are representative images of over 50 fields examined from three independent experiments. Magnification x40; scale bar 10 μm.
Figure 6
a: Analysis of ZO-1 and claudin-5 expression in the BBB in mice treated with AM9928: Fluorescent Texas Red was used as a secondary antibody to detect ZO-1 and claudin-5 expression in mouse brain tumor sections. ZO-1 and claudin-5 structures are shown by arrows. Brain nuclei were counterstained with DAPI (blue). These are representative images of over 50 fields examined from three independent experiments. Magnification x40; scale bar 20 μm. b-c: Quantitative analysis of claudin-5 and ZO-1 protein expression in the brain in mice treated with AM9928: The expression level was quantitated and compared between 4T1-BrM5-injected mice and mice treated with vehicle control. Changes in BMEC-TJ proteins in brain section samples after AM9928 treatment as compared to BMEC-TJ proteins from control mice are shown. Data are presented as the mean ± S.D. of two experiments. Number of mice per group per treatment, n = 10. Scale bar = 20 μm | 5,839.4 | 2021-11-29T00:00:00.000 | [
"Biology",
"Medicine",
"Chemistry"
] |
PUFKEY: A High-Security and High-Throughput Hardware True Random Number Generator for Sensor Networks
Random number generators (RNG) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise on the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input for a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard National Institute of Standards and Technology (NIST) randomness tests and is resilient to a wide range of security attacks.
Introduction
In recent decades, sensor networks have become a widely-used technology [1][2][3]. As is well known, these networks are made up of a large number of small nodes that sense the environment and typically report their measurements to a base station. Because of the need to protect against unauthorized access and to ensure the trustworthiness of transmitted messages and data integrity in general, security issues are very important and cannot be neglected when developing sensor network applications.
Commonly-implemented security mechanisms rely on the availability of random numbers in order to perform their operations, as in the case of key exchange algorithms [4], which are based on randomly-generated keys, or of mutual authentication algorithms [5], which use the so-called random "nonce" to ensure the other party is trusted. Random number generators therefore play a crucial role when considering security in sensor networks.
Currently, two basic methods can be used to generate random numbers. The first is a true random number generator (TRNG). It requires a physical source that is truly random and from which bits can be derived directly. Such non-deterministic sources derive their randomness from underlying properties that exhibit unpredictable behavior. Recently, several notable TRNGs have been proposed. Pyo, Pae and Lee [6] presented a DRAM-based TRNG, which exploits the unpredictability in DRAM access time caused by refresh cycles. Wieczorek [7,8] designed a novel TRNG that exploits random behavior from a nearly-metastable operation of groups of FPGA flip-flops, as opposed to many deep-metastability-based TRNGs. Amaki, Hashimoto and Onoye [9] presented an oscillator-based TRNG that automatically adjusts the duty cycle of a fast oscillator to 50% and generates unbiased random numbers while tolerating process variation and dynamic temperature fluctuation. There are two important downsides to most of these TRNG constructions. Firstly, some require specific hardware to extract the randomness from the physical entities on the device [10]. Secondly, the throughput of TRNGs is generally relatively low [11]. These limitations are problematic when large streams of random bits are required for cryptographic applications in sensor networks.
The second approach to generating random numbers is to use true random seeds with pseudo-random number generators (PRNGs), which are deterministic algorithms. Some designs use an SRAM PUF or another kind of PUF to generate a secure random seed, then take advantage of PRNGs to generate a random bitstream [12][13][14][15]. This can produce a stream of random output bits at a high throughput rate. However, the output of such a generator only seems random to observers without prior knowledge. If an observer knows which data have been used as a seed for the PRNG, or some output of a random number, then he/she will be able to calculate all output values of the generator.
In this paper, we propose PUFKEY, a high-security and high-throughput hardware true RNG, to resolve the above problems. We use an SRAM PUF to generate a high-quality random seed, which is non-deterministic. Subsequently, this initial seed is fed into another stable PUF that generates a larger amount of random bits. The resulting random bitstream is unpredictable. At the same time, it can be generated at a high rate under tight cost, area and power constraints. More importantly, the design can resist a wide range of security attacks.
The remainder of this paper is organized as follows. Section 2 provides related work in PUF and a SRAM cell. We present the architecture of PUFKEY in Section 3. After that, we analyze and evaluate entropy in start-up values in Section 4, a conditioning algorithm in Section 5 and non-deterministic random number generator (NDRNG) in Section 6. Based on these analyses, we discuss the security of PUFKEY in Section 8 after the implementation in Section 7. Finally, we conclude the paper in Section 9.
PUF
PUFs are most commonly used for the identification or authentication of integrated circuits (ICs), either by using the inherent unique device fingerprint or by using it to derive a device-unique cryptographic key [16]. Due to deep submicron manufacturing process variations, every transistor in an IC has slightly different physical properties that lead to measurable differences in terms of its electronic properties, such as the threshold voltage, gain factor, etc. Since these process variations are uncontrollable during manufacturing, the physical properties of a device can neither be copied nor cloned. It is very hard, expensive and economically unviable to purposely create a device with a given electronic fingerprint. Therefore, one can use this inherent physical fingerprint to uniquely identify an IC.
SRAM as Sources of Entropy
Our approach to generating a seed value is based on random noise extracted from the power-up state of SRAM modules. Each bit of SRAM is a six-transistor storage cell consisting of cross-coupled inverters and access transistors. Each of the inverters drives one of the two state nodes, Q and Q'. When the circuit is unpowered, both state nodes are low (QQ' = 00). Once power is applied, this unstable state immediately transitions to one of the two stable states, either "0" (QQ' = 01) or "1" (QQ' = 10). The choice between the two stable states depends on threshold mismatch and on thermal and shot noise. Because the stabilization depends only on mismatch between local devices, the impact of common-mode process variations, such as lithography, and common-mode noise, such as substrate temperature and supply fluctuations, is minimized. The sources of randomness are shown in Figure 1. Therefore, uninitialized SRAM is normally considered to be in a logically unknown state. Some cells, however, are almost perfectly symmetric, which leads them to settle to an unpredictable value at start-up. It is the noise due to these cells that we exploit in order to generate high quality random seeds.
Architecture
In this section, we propose the architecture of PUFKEY, which is depicted in Figure 2. PUFKEY consists of two main parts: (1) An SRAM memory connected to a conditioning algorithm for deriving a truly random seed.
(2) A non-deterministic random number generator (NDRNG). Because of the design of an SRAM memory, a large part of the bit cells is skewed due to process variations and tends to start up with a certain preferred value. This results in a device-unique pattern that is present in the SRAM memory each time it is powered. Bit cells that are more symmetrical in terms of their internal electrical properties, on the other hand, tend to sometimes start up with a "1" value and sometimes with a "0" value. Hence, these bit cells show noisy behavior. This noise can be used for the generation of random bitstreams.
The conditioning algorithm is used to derive a truly random seed from the SRAM start-up values. As explained above, only part of the SRAM bits have noisy behavior when being powered. The entropy in these noisy bits needs to be condensed into a full entropy random seed. The conditioning algorithm takes care of this. Basically, the conditioning algorithm is a compression function that compresses a certain amount of input data into a smaller fixed size bit string. The amount of compression required for generating a full entropy true random output string is determined by the min-entropy of the input bits.
Finally, the generated hash is used to seed a non-deterministic random number generator, which can generate a stream of true random numbers. It is a vital part of the whole system, because it is central to achieving both high throughput and high security.
Evaluation of Entropy in SRAM Start-Up Values
For the purpose of extracting a random seed from SRAM start-up values, it is important to investigate and analyze their entropy contents. In this section, we explain the approach to quantify the entropy quality of SRAM patterns. To make sure that this seed is truly random, the SRAM patterns should be conditioned (e.g., hashed, compressed, etc.). This conditioning will be discussed in the next section.
Method of Deriving Min-Entropy
In order to extract a high quality seed from the SRAM start-up values, we have to examine their randomness properties in terms of entropy. In particular, the amount of entropy present in the noise of SRAM start-up patterns should be determined. Therefore, we calculate the min-entropy. This method is based on the NIST specification [17], which defines min-entropy as the worst-case (i.e., the greatest lower bound) measure of uncertainty for a random variable.
For a binary source, we can define the min-entropy as:

H_min = −log2(max(p0, p1))  (1)

where p0 and p1 are the probabilities of "0" and "1" occurring. Assuming that all bits from the SRAM start-up pattern are independent, each bit i can be viewed as an individual binary source. For each of these sources, we estimate the probabilities p_i,0 and p_i,1 of powering up as "0" or "1", respectively, by repeatedly measuring the power-up values of the SRAM. In case m subsequent measurements are performed, p_i,0 denotes the number of occurrences of a zero divided by m, and p_i,1 = 1 − p_i,0. For n independent sources (where n is the length of the start-up pattern), we have:

H_min,total = Σ_{i=1}^{n} −log2(max(p_i,0, p_i,1))  (2)

Hence, under the assumption that all bits are independent, we can sum the entropy of each individual bit to derive the min-entropy of the entire SRAM. In the remainder of this work, we generally denote the available min-entropy as a percentage of the total available SRAM size.
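A minimal Python sketch of this estimator (ours, for illustration; the array shapes and the synthetic per-cell skew model are assumptions, not measured data):

```python
import numpy as np

def min_entropy_bits(startup_patterns: np.ndarray) -> float:
    """Estimate the total min-entropy of an SRAM region from m repeated
    power-up measurements (shape (m, n), values in {0, 1}), assuming
    independent bit cells as in Equation (2)."""
    p1 = startup_patterns.mean(axis=0)      # per-bit estimate of P(bit = 1)
    p_max = np.maximum(p1, 1.0 - p1)        # worst-case symbol probability
    return float(np.sum(-np.log2(p_max)))   # sum of per-bit min-entropies

# Synthetic example: 1,000 power-ups of a 3,200-bit (400-byte) region,
# with a random per-cell skew standing in for real process variation.
rng = np.random.default_rng(0)
bias = rng.uniform(0.0, 1.0, size=3200)
patterns = (rng.uniform(size=(1000, 3200)) < bias).astype(np.uint8)
h = min_entropy_bits(patterns)
print(f"{h:.0f} bits total ({100 * h / 3200:.1f}% of the SRAM size)")
# A 128-bit full-entropy seed then needs ~128 / (h / 3200) bits of SRAM.
```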
Measurement of Entropy
To be able to calculate the min-entropy of the noise on SRAM memories under different environmental conditions, measurements are performed on SRAM-based FPGA. Since it is known that SRAM start-up patterns are susceptible to external influences (such as deep-submicron process variations, varying ambient temperature, voltage variations, etc.), it is important to measure the min-entropy under different circumstances.
Cortez [18] has researched the modeling and analysis of the start-up values (SUVs) of SRAM. He presents an analytical model for the SUVs of an SRAM PUF based on the static noise margin (SNM) and reports industrial measurements to validate the model. The model is further used to perform a sensitivity analysis to identify the impact of different technology and non-technology parameters. Simulation of the impact of different sensitivity parameters (such as variation in power supply, temperature and transistor geometry) has been performed. The results show that, among the technology parameters, variation in threshold voltage has the highest impact. Schrijen [19] concludes that temperature is the key influencing factor among the non-technology parameters, and that supply voltage variation does not influence the reliability of these SRAM memories when used as PUFs. Selimis [20] also shows that temperature is an important element. Therefore, we chose to perform measurements at varying ambient temperatures in our work.
For min-entropy calculations, the measurement environment for the worst case behavior should be as stable as possible. Therefore, we will be determining the minimal amount of entropy for each of the individual ambient temperature conditions.
We measured the entropy on 10 samples. The samples are from block SelectRAM (18 Kb) on the Xilinx Virtex-II Pro Platform, which contains a Xilinx XC2VP30 [21]. We calculated the min-entropy for all of these memories at −10 °C, +20 °C and +50 °C. Using these measurements, the results for the different conditions can be found in Table 1.
Discussion of Measurement Result
From the results in the previous subsection, it becomes clear that the studied memories have different amounts of randomness that can be extracted from noise. In general, we can say that the entropy of the noise is minimal at the lowest measured temperature of −10 °C.
Based on the results, we can conclude that for each of the evaluated memories, it should be possible to derive a truly random seed (with full entropy) of 128 bits from a limited amount of SRAM. For instance, if we assume a conditioning algorithm that compresses its input to 4% (each studied memory has a min-entropy higher than 4% under all circumstances), the required length of the SRAM start-up pattern is 128 bits / 0.04 = 3,200 bits = 400 bytes.
Conditioning Algorithm
As described in the prior section, a seed is required before generating true random output bits with the NDRNG. This seed is used to instantiate the NDRNG and to determine its initial internal state. According to the specifications by NIST [17], the seed should have entropy equal to or greater than the security strength of the NDRNG. In our design, the NDRNG requires an input seed of 128 bits with full entropy. To achieve this, a conditioning algorithm is used.
To extract a truly random seed from the SRAM start-up noise, we have selected a lightweight hash function called QUARK [22] to perform the conditioning. QUARK provides at least 64-bit security against all attacks (collisions, multi-collisions, distinguishers, etc.), fits in 1,379 gate-equivalents, and consumes, on average, 2.44 µW at 100 kHz in a 0.18 µm ASIC.
This hash function compresses input strings into an output string of 128 bits. In order to have full entropy, the amount of entropy at the input of the hash function should be at least 128 bits. For example, if the input string has a min-entropy of 4%, the length of this input needs to be at least 400 bytes. Then, we perform the Hamming distance test to find out whether the seed derived by the conditioning algorithm is truly random. For this test, subsets of the generated seeds are compared to each other based on the fractional Hamming distance (HD), which is defined by:

HD(x, y) = (1/n) Σ_{i=1}^{n} (x_i ⊕ y_i)  (4)

where x and y are SRAM start-up patterns with individual bits x_i and y_i, respectively. A set of fractional HDs is composed; HDs were measured 5,000 times, and the average value of the HDs is 14.327. To indicate that these seeds have been created by a random source, the set of HDs should have a Gaussian distribution with a mean value of 0.5 and a small standard deviation. Figure 3 shows the Hamming distance distribution of the 5,000 seeds generated by the conditioning algorithm. As can be seen in this figure, the distribution is close to Gaussian with a mean value µ of 0.5. The standard deviation σ of this distribution is 0.04419. The entropy of this dataset can then be estimated from the implied number of degrees of freedom:

N = µ(1 − µ)/σ² = 0.25/0.04419² ≈ 128

Based on this evaluation of 5,000 seeds, it appears that the output strings of the conditioning algorithm are truly random and contain full entropy, since the length of these strings is 128 bits.
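The Hamming distance test can be sketched as follows (our illustration; the synthetic seeds stand in for conditioned SRAM outputs):

```python
import numpy as np

def fractional_hd(x: np.ndarray, y: np.ndarray) -> float:
    """Fractional Hamming distance between equal-length bit vectors (Equation (4))."""
    return float(np.mean(x ^ y))

def seed_randomness_check(seeds: np.ndarray):
    """Pairwise fractional HDs over a batch of seeds. For a truly random source
    the HDs are ~Gaussian with mean 0.5, and the implied degrees of freedom
    mu*(1 - mu)/sigma^2 should approach the seed length (128 here)."""
    m = len(seeds)
    hds = np.array([fractional_hd(seeds[i], seeds[j])
                    for i in range(m) for j in range(i + 1, m)])
    mu, sigma = hds.mean(), hds.std()
    return mu, sigma, mu * (1.0 - mu) / sigma ** 2

rng = np.random.default_rng(1)
seeds = rng.integers(0, 2, size=(200, 128), dtype=np.uint8)
mu, sigma, n_eff = seed_randomness_check(seeds)
print(mu, sigma, n_eff)  # expect ~0.5, ~0.044, ~128 for full-entropy seeds
```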
NDRNG
When we have a truly random seed with full entropy, a stream of random numbers can be generated by the NDRNG. The architecture of the NDRNG is depicted in Figure 4. It has a k-bit input/output and n levels of LUTs. Each LUT has x inputs and one output, and it randomly chooses each input from the outputs of LUTs in the previous level. The randomness of the NDRNG is based on the randomness of the inter-stage shuffling and the content of each LUT. A k-bit random number is generated after using the seed (k bits) for initialization. The generated k-bit random number then serves as the seed for the next round. We repeat the procedure to continuously generate k parallel random bits at a time and then transform the k parallel bits into a sequential bitstream. Figure 5 shows a simple example with four inputs, four LUTs and four flip-flops. The structure is a sequential logic cluster. This design forms a cycle, which can keep repeating. In the first cycle, a random seed of four bits is used as the primary inputs. The four LUTs select their inputs from the four bits of initialization inputs after random shuffling; each LUT randomly chooses four bits as its inputs. Then, in Cycle 2, the four outputs generated by the four LUTs in Cycle 1 are stored in the flip-flops and become the cycle's inputs after random shuffling. The cycles can be repeated many times. We chose the Virtex-II Pro Platform to build the architecture of the NDRNG. After power-up, SRAM cells settle to either a stable "1" or "0"; therefore, the inter-stage shuffling and the contents of the SRAM cells in the LUTs are random. Alternatively, designers can allocate LUTs in such a way that each cell has the same probability of being either "1" or "0", so that the numbers of "0"s and "1"s are balanced.
An important question is what number of LUT levels is sufficient to achieve a good security property while maintaining low area/power consumption for an NDRNG with k outputs. Given an NDRNG of 128 primary inputs and 128 final outputs, with each level consisting of k = 128 LUTs, we tested the output Hamming distance of the avalanche effect for different numbers of levels. The output Hamming distance is defined as the number of bits changed in the output vector when one bit is changed in the input vector. Figure 6 clearly shows that the Hamming distance grows exponentially in the beginning (levels < 5) as the number of levels increases and quickly converges to the ideal case of 128 when the number of levels reaches nine. Therefore, a total of nine levels is enough to achieve a good security property. The throughput of PUFKEY mainly depends on the NDRNG, and we have tested the throughput of the NDRNG with an input/output bit width of 128. The results are shown in Section 7.
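A simplified software model of the LUT network and the avalanche measurement follows (ours; the wiring, truth-table generation and parameter names are illustrative stand-ins for the hardware-configured design, not the actual implementation):

```python
import random

class NDRNGModel:
    """Software model of the LUT network: n levels of k LUTs, each with x
    inputs tapped (after a fixed random shuffle) from the previous level,
    and random truth tables standing in for the SRAM-configured contents."""
    def __init__(self, k=128, n_levels=9, x=4, wiring_seed=42):
        cfg = random.Random(wiring_seed)
        self.k = k
        self.levels = [[([cfg.randrange(k) for _ in range(x)],       # taps
                         [cfg.randrange(2) for _ in range(1 << x)])  # LUT bits
                        for _ in range(k)] for _ in range(n_levels)]

    def step(self, state):
        """One pass through all levels; the k-bit output reseeds the next round."""
        for level in self.levels:
            state = [table[sum(state[t] << p for p, t in enumerate(taps))]
                     for taps, table in level]
        return state

def avalanche(gen, trials=1000):
    """Mean output Hamming distance when one input bit is flipped."""
    r, total = random.Random(7), 0
    for _ in range(trials):
        s = [r.randrange(2) for _ in range(gen.k)]
        s_flipped = s[:]
        s_flipped[r.randrange(gen.k)] ^= 1
        total += sum(a != b for a, b in zip(gen.step(s), gen.step(s_flipped)))
    return total / trials

print(avalanche(NDRNGModel()))  # rises with n_levels as diffusion improves
```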
Implementation
Based on the above results and analysis, we implemented the proposed RNG in this paper, which consists of SRAM, the conditioning algorithm and NDRNG. The building blocks are combined in the implementation as depicted in Figure 2.
The amount of SRAM used for the implementation is 400 bytes, based on the min-entropy estimations from Section 4; the minimal amount of observed min-entropy there is 4%. As stated in Section 5, the conditioning algorithm used in this implementation is the u-Quark hash function, the smallest version of Quark. u-Quark has a maximum parallelism degree of eight, meaning that up to eight rounds can be performed within one cycle. Aumasson [22] specifies 4b = 544 rounds. However, further studies show that 3b or even 2b rounds are enough. Therefore, we use the 3b-round permutation of u-Quark. With 3b = 408 rounds, this translates to 408/8 = 51 cycles to process eight bytes. To process 400 bytes, about 2,550 cycles are needed.
The random seed is used as the input for NDRNG, which converts the seed into a stream of random bits. The implementation of NDRNG sets m = 128 and n = 9.
The implementation targets the XUP Virtex-II Pro Development System from Digilent. A comparison with related works is summarized in Table 2. A design with multiple ring oscillators is proposed in [23].
However, the underlying assumptions behind its proof of security have been challenged. In addition, the design consumes too many FPGA resources, because it requires 110 ring oscillators. A compact TRNG with a security proof implemented on the Xilinx Spartan-3E FPGA is presented in [24], but this design achieves a throughput of only 250 kb/s. A design based on self-timed ring oscillators is presented in [25]. This design achieves 100 Mb/s throughput and is accompanied by an entropy model, but it consumes too many resources on the FPGA, since it requires a self-timed ring oscillator with 511 stages. A novel entropy extraction technique for true random number generators on FPGAs is presented in [26]. This technique relies on the carry-logic primitives for efficient sampling of the accumulated jitter. However, its highest throughput is 14.3 Mb/s. There is another TRNG composed of logic gates only [27], which can be integrated in any kind of logic large-scale integration (LSI), but its entropy source determines the maximum data rate of the generator and its sensitivity to the supply voltage. Our design achieves higher throughput than these other TRNGs. The maximum delay of the NDRNG is 159.22 ns. It is suitable for sensor networks where large streams of true random bits are required for cryptographic applications, although it costs some hardware resources.
Output Randomness
We test the output randomness by applying the NIST randomness tests. The NIST suite is a battery of standard statistical tests to detect non-randomness in binary sequences. Table 3 shows the average passing ratio of each NIST statistical test. One thousand bitstreams of 1,000,000 bits were provided to each test. Each test passes for p-value ≥ δ, where δ = 0.01. We can see that the proportion of successful tests is high enough to indicate excellent randomness in the output stream.
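As one concrete example from the battery, the NIST frequency (monobit) test can be sketched in a few lines (our illustration; a full evaluation uses the complete SP 800-22 suite):

```python
import math, random

def nist_monobit_p_value(bits) -> float:
    """NIST SP 800-22 frequency (monobit) test: map bits {0,1} to {-1,+1},
    sum them, and compute p = erfc(|S_n| / sqrt(2 n)); pass if p >= 0.01."""
    n = len(bits)
    s_n = sum(2 * b - 1 for b in bits)
    return math.erfc(abs(s_n) / math.sqrt(2 * n))

stream = [random.getrandbits(1) for _ in range(1_000_000)]
print(nist_monobit_p_value(stream) >= 0.01)  # True for a well-behaved source
```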
Security of the Truly Random Seed
What is crucial for security is to maintain the unpredictability of the data stream produced by PUFKEY. Once an attacker knows the seed, the security of the whole system may be compromised. Thus, to ensure high security, it is important to protect the seed value from access by other algorithms.
In this work, we assume an attack scenario in which an adversary has no direct physical access to the FPGA. Otherwise, it would be impossible to ensure that the power-up SRAM value remains secret, since an adversary can use a debugging interface, such as JTAG, to halt the FPGA during start-up, read out the data and then let the start-up process continue.
To limit the exposure of the initial SRAM state and prevent attacks where the seed is re-calculated from the SRAM content, all SRAM (except for the seed value) should be cleared immediately after seed generation. This can be achieved by making sure that the seeding algorithm is the very first code that runs on power-up and that the algorithm is executed atomically. Methods to ensure this, such as disabling interrupts and preventing unauthorized firmware modifications, are outside the scope of this paper.
Finally, in order to guarantee proper SRAM resets in between power cycles of the FPGA, care should be taken that the FPGA's positive supply lines are grounded when the device shuts down. If this is not done, it might power up with old and predictable data of low entropy still being present in SRAM.
Output Hamming Distance
While using the NDRNG to secure the sensor network, the resistance against malicious attack/prediction is important. Even if attackers get seeds or some outputs of random numbers, it is still difficult to calculate the bitstream without the knowledge of NDRNG. In this section, we statistically analyze the security of the NDRNG system. The attacks would be regarded as successful if the output stream can be predicted. The simulation is conducted on an NDRNG architecture with m = 128 and n = 9. The statistical results are based on 1,000,000 input-output pairs.
One type of prediction is to predict outputs from knowledge of the outputs of similar inputs. This attack is dangerous when the output vectors for similar input vectors are highly similar to each other. To test this, we summarize the Hamming distances (the number of bits that differ between two vectors) between the output vectors obtained by changing one bit of the input vector at each iteration. In the ideal case, the distribution would take the form of a binomial distribution peaked at half the number of outputs. Figure 7 shows the accumulated results of 1,000 test cases; in each case, 1,000,000 instances of the Hamming distance were tested. In the test, we individually tested the 128 output ports of the NDRNG. Each output of the NDRNG is presented on the X-axis and the relative frequency on the Y-axis. The binomial distribution proves that the NDRNG output stream cannot be predicted in this way.
Input-Output Correlation
Similar to the previously-described attack, the other type of attack attempts to predict the output bit O_i from the value of an input bit I_j of the NDRNG. If an output bit has a strong correlation with an input bit, then the attacker can deduce the output vector by knowing the input bits. This security test reveals the bitwise correlation between the inputs and the outputs of the NDRNG. The 128 output ports of the NDRNG were tested individually. We present a conditional probability map of P(O_i = 1 | I_j = 1) in Figure 8, depicting the low potential for prediction based on input-output correlation in the NDRNG.
Frequency Prediction
A third type of attack collects the output data from previous bitstreams and builds a probability distribution for each output. The attacker then tries to predict each output bit using this distribution; the goal is to predict P(o_i) = x, where x = "0" or "1". The ideal situation is that each output is "0" or "1" with probability 0.5. Figure 9 shows the mean value of the probability that each output bit is equal to "1". In the test, we individually tested the 128 output ports of the NDRNG. Each output port of the NDRNG is presented on the X-axis and the frequency on the Y-axis. The probability is close to 0.5, indicating the high unpredictability of the structure. We further apply the von Neumann correction to the output of the random bitstream, so that "0" and "1" occur with the same probability of 0.5. Figure 9. Probability that an output bit is equal to "1".
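The von Neumann correction mentioned above can be sketched as follows (our illustration):

```python
def von_neumann_correct(bits):
    """Von Neumann debiasing: scan non-overlapping pairs, emit the first bit
    of each unequal pair ((0,1) -> 0, (1,0) -> 1), discard equal pairs.
    The output is unbiased if input bits are independent, at the cost of a
    variable rate reduction (at least 4x on average)."""
    return [bits[i] for i in range(0, len(bits) - 1, 2) if bits[i] != bits[i + 1]]

print(von_neumann_correct([0, 1, 1, 1, 1, 0, 0, 0, 0, 1]))  # [0, 1, 0]
```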
Conclusions
In this work, we presented a novel hardware true random number generator, PUFKEY, for sensor networks, based on the combination of extracting a non-deterministic true random seed from the noise on the start-up pattern of SRAM memories and a non-deterministic random number generator that converts this seed into a stream of true random bits. The extraction of the physical randomness from SRAM start-up patterns is based on min-entropy calculation. We then use the lightweight hash function QUARK as a conditioning algorithm to extract truly random seeds from the SRAM noise; the results show that the seeds contain full entropy. Combining the extracted randomness with the NDRNG, PUFKEY can generate a true random bitstream at 803 Mbps. PUFKEY passes all standard NIST randomness tests and resists a wide range of security attacks. | 6,326.6 | 2015-10-01T00:00:00.000 | [
"Computer Science"
] |
Ivorian Towns of the Inland, Put to the Test of Their Environmental Degradation: the Case of Daloa (West Center of Côte d'Ivoire)
With its 258,509 inhabitants (INS, 2014, p. 27), Daloa, the third largest city in Côte d'Ivoire, has experienced an urban growth rate of 2.73% (op. cit.). But over the years, the locality has been confronted with an uncontrolled urban dynamic that has environmental repercussions on its landscape. Measures and actions are announced daily by the public authorities to eradicate this phenomenon, but change remains largely virtual. This study aims to identify the persistence of environmental degradation in the city of Daloa. The methodology for conducting this study was based on a set of technical approaches. It comprises documentary research focusing mainly on scientific work and study reports addressing the issue of uncontrolled urbanization and its environmental effects. In addition to this approach, direct observation and interviews with local resource people were used. A questionnaire survey was also conducted with 373 heads of household, using the probabilistic formula without replacement, in 1/3 of the enumeration areas, totaling 13 neighborhoods. The results show that the deterioration of the urban environment in Daloa is experienced by households in the form of pollution caused by rainwater, wastewater and sluices (47%), pollution due to household waste (24%), air pollution (15%) and noise pollution (14%). The factors are plural and reveal that 58% of the households surveyed dispose of their used water (detergents, dishes) in the street and 48% dump household waste in the streets. The impact on the urban landscape is just as diverse and unpleasant. Rainwater on unpaved roads accelerates erosion. The flow of open sewage and piles of rubbish undermine the aesthetics of the urban landscape of Daloa.
Introduction
The beginning of the 21st century marks the assertion of the South as the engine of urbanization. This very sustained urbanization is reflected both in massive arrivals of populations from the countryside and in high birth rates within urban families, which are generally young. The rural exodus combined with demographic growth produces strong pressure in terms of access to housing and infrastructure (P. NEDELEC, 2018, p. 33). The 21st century as the engine of Africa's urbanization is a reality. From 14.5% in 1950, Africa's urbanization rate reached 40% in 2010 and is projected to reach 61.60% by 2050 (P. NEDELEC, 2018, p. 33). In Côte d'Ivoire, the urbanization rate rose from 5% in 1950 to 39% and 42.5% in 1988 and 1998, respectively, reaching 50.3% in 2014 (INS, 1988, 1998 and 2014). This urban explosion is also noticeable in Daloa, a town located in the center-west of Côte d'Ivoire. With an estimated population of 258,509 inhabitants and 45,429 households (INS, 2014), it has an urban growth rate of 2.73%. The locality is faced with an uncontrolled urban dynamic which is not unrelated to environmental problems (KOUKOUGNON, 2012, p. 23). Despite the actions taken to preserve the city's environmental framework, the major trends in its degradation are irrefutable: illegal dumping of household refuse, defective sanitation networks, impassable traffic routes, etc. Therefore, the issue of environmental degradation in Daloa remains relevant.
Geographic Setting of the Study
Figure 1. Presentation of the study area
Forty-two districts divided into six enumeration zones make up the city of Daloa. Each enumeration zone includes a residential district, an evolving district and a precarious district. Each spatial entity brings together sociodemographic, socioeconomic, sociological and environmental sensitivities. This nomenclature is established by the INS as part of general population and housing census operations. As it was impossible to carry out the study on all six zones, two enumeration zones (Zones I and III) were chosen for sampling, in view of operational constraints and environmental realities in the town of Daloa. These zones were chosen to capture the heavy trend of environmental degradation in the city of Daloa because they best explain the phenomenon.
Data Collection Method
The data collection was initially focused on documentary research. On the one hand, the documentary research involved the consultation of theses, dissertations, scientific articles and study reports; on the other hand, administrative archives. The information sought concerned the forms of degradation of the urban environment, their spatial distribution, the factors of degradation of the urban environment and their impact on the living environment of the populations. Overall, the consulted documents were found in the documentation centers of the Jean Lorougnon Guédé University (UJLoG), the documentation center of Calasanz, the French Institute of Abidjan-Plateau and the Institute of Tropical Geography (IGT); in the Ministries of Environment and Sustainable Development (MEDD), Health and Sanitation (MSA), and Construction, Housing, Sanitation and Urban Planning (MCLAU); at the Ivorian Antipollution Center (CIAPOL); and at the Daloa technical town hall. The documentary approach was supplemented by webography (internet). The practical phase of data collection combined direct observation, interviews and a questionnaire. The direct observation was the opportunity to travel through the city in general, and in particular the thirteen (13) districts of the two zones, in order to understand the environmental realities in the so-called residential districts (Zone I) and the so-called working-class districts (Zone III). These zones were the subject of our study because they contain varied and heterogeneous environmental realities. The main entities observed were the landscape, equipment, infrastructure (sanitation, health, education, water, roads) and habitat (living environment). A questionnaire for heads of households focused on the forms of environmental degradation they face, the factors behind this degradation and the impact of this phenomenon on their quality of life. Interviews were also carried out with a manager from each of the following administrations: MEDD, MSA, CIAPOL, MCLAU and the Daloa technical town hall. In total, 379 people were surveyed in this study, during the months of January, February and March 2019. To determine the size of the sample of households to be surveyed within the 13 neighborhoods comprising the two zones, the probabilistic method without replacement was applied, of the kind sketched below.
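Although the study does not give its exact formula, one common way to arrive at a sample of this order for 45,429 households is a Cochran-style estimator with a finite-population correction. The Python sketch below is our illustrative assumption, not the study's documented procedure:

```python
import math

def cochran_sample_size(population: int, z: float = 1.96,
                        p: float = 0.5, e: float = 0.05) -> int:
    """Cochran's estimator with finite-population correction:
    n0 = z^2 * p * (1 - p) / e^2, then n = n0 / (1 + (n0 - 1) / N).
    z: confidence coefficient (1.96 for 95%), p: expected proportion,
    e: margin of error."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Daloa counts 45,429 households (INS, 2014); the correction barely bites:
print(cochran_sample_size(45_429))  # ~381, the same order as the 373 surveyed
```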
A Multiformity of Urban Environmental Degradation Induced by Plural Factors
Environmental degradation in Daloa takes various forms and affects various media, both natural (water, air, soil) and artificial (living environment, infrastructure). This degradation is above all due to pollution (Figure 2). Pollution caused by rainwater, wastewater and sluices is decried by 47% of households; its particularity is that it spreads from homes to alleys and vice versa. In addition, 24% pointed to pollution due to household waste, which over time ends up buried in the ground. As for atmospheric (air) pollution, 15% noted it; they complain about the dust particles released by roads, most of which are unpaved. Finally, 14% of the surveyed households also noted the noise pollution resulting from numerous noisy activities in the vicinity of dwellings. The results indicate that 58% of the surveyed households evacuate their wastewater into the street, 22% into septic tanks, 11% into soaker wells and 9% into gutters. But in the studied areas, the phenomenon is not experienced in an egalitarian way (Figure 4).
Figure 4. Distribution of the wastewater disposal method by district in 2019
Figure 4 shows that in the districts of Abattoir 1, Sud A (Garage), Sud B (Abattoir 2), Sud C (Fadiga), Sud D (Savonnerie), Labia, Lobia 2 and Tazibouo Extension, at least 50% of households evacuate their wastewater into the street. The proportions are, respectively: Abattoir 1 (81.33%), Sud A (61.90%), Sud B (65.33%), Sud C (57.14%), Sud D (65.38%), Labia (72%), Lobia 2 (63.16%) and Tazibouo Extension (57.43%). In the Evêché (75%), Kirmann (62.5%), Etat-Major (80%) and Tazibouo 1 (56.10%) districts, households evacuate their wastewater into septic tanks. Soaker wells and gutters are mainly used in the Sud C district, with 28.57% and 42.86% respectively. In Zone 3, the street remains households' preferred outlet for the evacuation of wastewater (Figure 5). This Figure shows a street in the Abattoir 1 neighbourhood, where the wastewater runoff, blackish in colour, can be seen in the middle of the street, mixed with a pile of rubbish. This leads us to believe that this street is a dump tolerated by everyone.
The Method of Emptying the Latrines
The emptying of toilets and latrines also contributes to the degradation of the urban environment. Indeed, during this activity, foul odors are released from human waste, polluting the atmosphere. The methods of emptying latrines identified during the household survey are the use of emptying trucks, soaker wells, or connection to the open gutter. Emptying by means of an adapted truck is the most common practice among households in the surveyed areas: 43% of households use this means, 41% soaker wells and 16% open-air gutters. Households in the Sud C (68%), Evêché (73%), Kirmann (61%), Piscine (62%), Lobia 2 (62%), Tazibouo 1 (74%) and Etat-Major (79%) districts mainly use trucks as the means of emptying latrines and toilets. In the Abattoir 1 and Tazibouo Extension districts, 65% of the surveyed households empty their latrines by connection to the open gutter. On the other hand, this practice concerns only 1% of households in the Tazibouo 1, Evêché and Tazibouo Etat-Major districts. The proportions of this practice in the Sud A, Sud B, Sud D, Piscine, Labia and Lobia 2 districts are respectively 12%, 10%, 12%, 18%, 11% and 5%. Households using soaker wells for emptying are mainly located in the Sud A (68%), Sud B (59%), Sud D (55%) and Labia (82%) districts. Households use the technique they find most appropriate for emptying their latrines and toilets; however, poor choices contribute to the degradation of the urban environment of Daloa.
Inappropriate Management of Household Waste in Neighborhoods
The proliferation of household waste remains the most visible form of degradation of the urban environment. Indeed, piles of rubbish can be observed in the corners of houses and in the streets of neighbourhoods. Regarding the possession of rubbish bins, 41% of the households surveyed said that they had them, compared with 59% who said that they did not. Garbage is stored indiscriminately around households and in the streets, and the odors that emerge become unbearable at times.
A heterogeneous way of disposing of household waste
The method of disposing of household waste in the town of Daloa has been identified as a factor in the degradation of the urban environment. Streets, lowlands or ravines, incineration and the service of pre-collectors constitute the main modes of household waste disposal. The results of the investigations indicate that 48% of the households surveyed evacuate their waste in the streets, 30% through the service of the pre-collectors, 13% in the lowlands and 9% by incineration. The most worrying phenomenon is the proliferation of household waste along roadsides (Figure 6). On this road in the Garage district, an informal deposit of household refuse can be seen, which has overflowed from its original site to spread onto the roadway, releasing unsanitary water. The different proportions by district are highlighted in the graph below. The Figure shows that households in the Abattoir 1 (62%), Sud A (62%), Sud B (68%), Labia (52%) and Tazibouo Extension (50%) districts dump their household waste in the street. As for those in the Evêché (75%), Kirmann (75%), Piscine, Tazibouo 1 (52%) and Etat-Major (65%) districts, they hire pre-collectors for the collection of their rubbish. In the Abattoir 1 (18%), Sud A (10%), Sud B (12%), Sud D (28%), Tazibouo 1 (12%), Labia (30%), Lobia 2 (25%) and Etat-Major (18%) districts, the lowlands serve as dumping grounds. However, no household in the Sud C, Evêché, Kirmann and Piscine neighborhoods mentioned the use of the lowlands as a place of garbage disposal. In all of the surveyed districts, the responses of households in favor of incinerating their garbage are low (9.12% overall); however, 50% of households in the Sud C district (Fadiga) incinerate their garbage. The data show that a relatively large proportion of the populations of the studied neighborhoods exhibit behaviours that are harmful to the environment. The non-use of the sites designated by the municipal authorities for the deposit of household waste shows the acuteness of the unsanitary living environment of the surveyed populations in Zone 3.
Degradation Attributable to Road Infrastructure, Atmospheric Pollution and Noise Pollution
According to the technical services of the town hall, the urban road network has a total length of 526.09 km: paved road in good condition, 48.52 km; paved road in poor condition, 23.88 km; unpaved road in poor condition, 254.6 km; undeveloped road, 199.09 km. It thus emerges that 91% of the roads are in poor condition or undeveloped; only 13.76% of the network is asphalted, of which 1/3 is in poor condition. The districts of Zone 1 concentrate most of the roads in good condition, while the districts of Zone 3 concentrate most of the undeveloped roads and/or those in very poor condition. Thus, 8% of roads are paved and in good condition in Zone 1, against 5.76% in Zone 3. Stripping, lack of maintenance and silting up are the ills of the asphalted roads, while gullying and erosion explain the poor condition of the unpaved roads (Figure 8). In the long run, not asphalting the tracks leads to their degradation and to the production of dust clouds during the dry season.
Of the 15.47% of households decrying atmospheric (air) pollution, 14% note that it is due to dust particles mainly emanating from unpaved roads and 1.47% decry the incineration of waste. Regarding the 13.65% of respondents complaining of noise pollution, 12.70% decry the anarchic installation of economic activities and 0.95% the lack of recreational space.
Populations Exposed to Socio-environmental Risks
In the face of the generalised degradation of the urban environment in Daloa, the households surveyed indicated that they had suffered several forms of harm, grouped into four categories (Figure 9). The stench and the degradation of the ambient living environment are presented by 41% of the respondents as the major consequence of pollution in Daloa. This is followed by difficulties in accessing neighborhoods (32%) owing to the impassability of most of the roads. The weak establishment of modern service activities resulting from this pollution is cited by 21% of the housewives surveyed, since the stench and the numerous dust particles undermine the attractiveness of these areas. Households also complain of the difficulty of keeping their clothes clean because of the abundant dust in the dry season and the mud in the rainy season. Dwellings built in non-aedificandi zones are the most exposed to the degradation of the living environment (Figure 10). This Figure shows a house located in a lowland in front of which rainwater stagnates, an unpleasant state of habitat: the inhabitants live constantly in dampness and mud, which puts them in a state of vulnerability and hampers movement in the yard. In general, the stagnation of rainwater is feared by 38% of households, while the effect of erosion is denounced by 62% of them. Erosion is most feared in the Sud A (73%), Abattoir 1 (54%), Sud D (63%) and Lobia 2 (54%) districts, and stagnant rainwater in the Abattoir 1 (69%), Sud A (55%), Piscine (51%) and Tazibouo Extension (63%) districts. In the other districts, these effects have been reported less often. The stench and the degradation of the surrounding living environment also undermine the aesthetics of the urban landscape (Figure 11). This Figure, taken at Lobia 2, highlights the illegal garbage dumps littering the edges of streets and giving off foul odors; the image also shows passers-by (students) visibly suffering from the stench emanating from the pile of rubbish.
Increased Health Risks
The current state of the environment in Daloa contributes to the proliferation of diseases. Indeed, the hygienic practices of the populations in such a vulnerable environment fall short of environmental standards, which results in environmentally related diseases. The survey conducted among households to identify the illnesses of which they are most often victims shows that malaria, acute respiratory infections (ARI) and diarrhea are recurrent. The results indicate that 64% of households are affected by malaria, against 27% by ARI and 9% by diarrhea. It should be noted that the preponderance of these diseases varies across the districts of the city. These spatial disparities in pathologies can be seen in Figure 12 below.
Figure 12. Cases of illnesses observed in households by district in 2019
Source: our surveys, January-June 2019. Specifically, the three main diseases (malaria, ARI, diarrhea) are observed in all neighborhoods. The prevalence of malaria is reported in most neighborhoods at no less than 60%; however, this infection is absent from the Sud C district (Fadiga). ARI is more marked in the Sud C district (78%) and weak in the other districts. Finally, diarrhea is the least cited by the households surveyed and is weak in all neighborhoods.
Theoretically, the evolution of pathogenic cases of malaria is linked to the multiplicity of stagnant water bodies combined with the non-use of impregnated mosquito nets by the neighboring populations. ARIs are caused by increased dust and smoke in the air: infection results from the inhalation of contaminated air particles. Finally, the resurgence of diarrheal diseases is the result of contact between wastewater and water resources (well water, surface water, the water table). This type of contact increases the risk of contamination when populations use these resources (drinking, swimming, food, baths, laundry, dishes, etc.). In short, all these socio-sanitary dysfunctions are among the effects that afflict the local populations. It is in the face of this dramatic situation that the local authorities of the city of Daloa will have to take action to improve the environmental framework.
Discussion
The objective of this study is to identify the problem of the persistence of environmental degradation in Daloa, a town in the interior of Côte d'Ivoire. This locality is one of the 10 most important localities of Côte d'Ivoire from a socio-economic point of view. For KOUKOUGNON (2012, p. 23), this question suggests that the town of Daloa is experiencing accelerated spatial-demographic growth. As such, one can affirm, echoing VENNETIER (1990, p. 59-64) and OUMAROU (2018, p. 130), that the increase in its population entails risks of environmental degradation while exposing it to massive dangers.
The results of the study revealed that the degradation of the urban environment takes multiple forms and is induced by multiple factors. The poor treatment of rainwater, wastewater and blackwater is decried by 47% of respondents. In their studies on Adjamé in Abidjan and N'Djamena in Chad, TUO (2010, p. 10) and SIMEU-KAMDEM (2018, p. 149) note the absence and inefficiency of wastewater and rainwater treatment systems as one of the major causes of the increase in insalubrity in these cities. This aspect is underlined by N. NEDELEC (2018, p. 33) in a more general approach to urbanization in cities of developing countries. However, Q. YAO-KOUASSI (2010, p. 8) points out that the issue of insalubrity in cities of the South is not due only to the urbanization trajectory; it also resides in the political will to propose solutions for the management of urban waste. For this author, the aim is to review the policies of Ivorian cities by resorting to a more pragmatic urban governance with regard to household waste management. Furthermore, the results of the study indicate that 43% of households use latrines and adapted trucks for emptying, 41% use waste wells and 16% use open gutters.
In addition, the poor management of household waste was decried by 24% of respondents. On this issue, SIMEU-KAMDEM (2018, p. 150), referring to the case of the city of N'Djamena, points out that only 17% of wastewater is evacuated through gutters or gullies; all the rest is dumped in the street, in the concessions or even on unoccupied plots. Cities also concentrate this waste in one place, which overloads the capacity of local ecosystems to assimilate it. Moreover, such practices have a social cost due to the nuisance produced. For ATTAHI (2006, p. 11), the problem of household waste is 'one of the big black spots in the municipal management of cities; all mayors have lost the battle over household waste'. They devote themselves to the collection of this rubbish, which is understandable given the continued unsavoury behaviour of some households. In this respect, DIA and TENDENG (2018, p. 188) explain these household practices by the difficult transition from a rural to an urban lifestyle. This study also found that 48% of households dump domestic waste in the streets. As for air and noise pollution, respectively 15% and 14% of the households surveyed present them as determining factors of environmental degradation. Overall, the results of the study agree with those of GOGOUA (2013, p. 53), who, quoting Haïle and Bruzon, holds that environmental degradation poses many problems: the priority problems are the insufficient collection of solid and liquid waste and the pollution of surface water, while the second-tier problems concern groundwater, air pollution, noise pollution and natural hazards. Regarding the socio-environmental and health impacts, 41% of respondents complained of the stench and the deterioration of the living environment, 64% of malaria, 27% of respiratory infections and 9% of diarrhea. This is why DERYCKE (1973, p. 310) wonders whether urban life does not have more disadvantages than advantages: for him, the urban space, which is home to nearly two-thirds of humanity and most production units, has become the seat of tensions, nuisances, insecurity and costs.
The health issue is all the more worrying as CAIRNCROSS et al. (2004, p. 45) affirm that environmental factors are the cause of 21% of diseases in the world, a proportion that is even greater in developing countries. According to these authors, 1.7 million young children die each year from diarrhea due to unsafe drinking water supply, inadequate sanitation and poor hygiene, and 1.4 million childhood deaths from respiratory infections are attributable to air pollution. WHO-UN Habitat (2010, p. 98) confirms this, indicating that environmental factors are responsible for more than 21% of the global burden of disease.
Conclusion
This study has shown that the environmental landscape of Daloa is subject to various forms of environmental degradation resulting from multiple factors. The main causes identified are the problems associated with the poor management of rainwater, wastewater and blackwater. The proliferation of household waste and its poor management are also a formidable factor. To this is added the anarchic installation of economic activities causing noise pollution in certain neighborhoods. Finally, the many unpaved roads produce significant amounts of dust, a form of air pollution. The adverse effects of this degradation are noticeable both on the populations and on the urban space: populations are infected and the urban landscape has lost its appeal. The phenomenon is growing exponentially while remedial initiatives are increasingly scarce. No one should wait for the worst before taking appropriate large-scale measures. | 5,611.2 | 2021-02-16T00:00:00.000 | [
"Economics"
] |
Variability in the Spatial Structure of the Central Loop in Cobra Cytotoxins Revealed by X-ray Analysis and Molecular Modeling
Cobra cytotoxins (CTs) belong to the three-fingered protein family and possess membrane activity. Here, we studied cytotoxin 13 from Naja naja cobra venom (CT13Nn). For the first time, spatial models of CT13Nn with both "water" and "membrane" conformations of the central loop (loop-2) were determined by X-ray crystallography. The "water" conformation of the loop was frequently observed; it was similar to the structure of loop-2 of numerous CTs determined either by NMR spectroscopy in aqueous solution or by the X-ray method. The "membrane" conformation is a rare one and, to date, has only been observed by NMR for a single cytotoxin, cytotoxin 1 from N. oxiana (CT1No), in a detergent micelle. Both CT13Nn and CT1No are S-type CTs. Membrane binding of these CTs probably involves an additional step: the conformational transformation of loop-2. To confirm this suggestion, we conducted molecular dynamics simulations of both CT1No and CT13Nn in the Highly Mimetic Membrane Model of palmitoyloleoylphosphatidylglycerol, starting from their "water" NMR models. We found that both toxins transform their "water" conformation of loop-2 into the "membrane" one during the insertion process. This supports the hypothesis that S-type CTs, unlike their P-type counterparts, require conformational adaptation of loop-2 during interaction with lipid membranes.
Introduction
Three-finger toxins (TFTs) are disulfide-rich proteins from the venoms of cobras [1][2][3][4], kraits [5], coral snakes [6], and some other snakes. Their fold features three distinct β-structural "fingers" emerging from a globular core stapled by four conserved disulfide bridges [7][8][9][10]. Here, we determined the spatial organization of the CT13Nn toxin (CT2Nk, or the L48V49 variant of CT3Nn from N. naja) by X-ray crystallography. We then studied the interaction of this toxin with anionic membranes of palmitoyloleoylphosphatidylglycerol (POPG), using the Highly Mimetic Membrane Model (HMMM) [38]. Previously, this membrane model had not been used for studies of CT/lipid interactions. Its main feature is the presence of an organic solvent layer representing the hydrophobic core of the membrane, while short-tailed phospholipids constitute the headgroup region. These lipid molecules exhibit up to two orders of magnitude enhancement in lateral diffusion, leaving the membrane atomic density profile of the headgroup region essentially identical to that of membrane models composed of full-length lipid molecules. Use of the HMMM looks promising because substantial acceleration (from microseconds to a few hundred nanoseconds) compared to conventional all-atom MD simulations of CTs can be achieved. In addition, we performed similar calculations for cytotoxin 1 from N. oxiana (CT1No) (Table 1). For this toxin, the spatial structures in aqueous solution and in dodecylphosphocholine (DPC) micelles had been determined earlier [22]. Thus, we were able to demonstrate that the spatial organization of the central loop of a CT molecule in the crystal state can be similar to the one adopted by this molecule in the lipid membrane. This information is important for establishing structure-activity relationships in CTs.
X-ray Crystallography
Here, we used X-ray crystallography to determine the spatial organization of CT13Nn. Structures of the hexagonal (with three molecules in the asymmetric unit, denoted A, B, and C; Figure 1a) and orthorhombic (with six molecules in the asymmetric unit, denoted A, B, C, D, E, and F; Figure 1b) crystal forms were solved at 2.3 and 2.6 Å resolution, respectively.
Figure 1 caption (fragment): ... and X-ray (CT13Nn of the hexagonal (c) and orthorhombic form (d)) structures. Protein is shown in cartoon representation. 7O2K is colored brown; the color scheme of subunits of the hexagonal and orthorhombic forms is the same in all panels.
It was impossible to accurately determine the amino acid sequence of the 47-50 region solely by mass spectrometry. Analysis of the 2Fo-Fc electron density map of the 2.3 Å structure revealed that the correct variant is the LLVK sequence (Figure S1).
The average value of the B-factor of the protein atoms amounts to 38.9 Å² in the hexagonal form and 98.5 Å² in the orthorhombic form, indicating that the hexagonal form is more ordered. The molecules in the asymmetric part of each structure feature different levels of disorder. The electron density for molecules D, E, and F of the orthorhombic structure is considerably poorer than for the three other molecules, which is also reflected in their higher B-factors (Table S1). The most ordered molecules in the hexagonal and the orthorhombic structures are B and A, respectively. Subunit B of the hexagonal form was therefore used for conformational analysis and comparison with other NMR or X-ray structures. The root-mean-square deviation (RMSD) between the coordinates of the Cα atoms of molecule B of the hexagonal crystal form superimposed on the other two molecules ranges from 0.26 to 0.34 Å. The RMSD for the Cα atoms of molecule A of the orthorhombic structure superimposed on the other five molecules ranges from 0.39 to 1.45 Å. The significant difference between subunit E of the orthorhombic structure and most other subunits can be explained by an unusual conformation of loop-2 in all subunits except E, which had not previously been observed in crystal structures of homologues. This conformation was also found in all subunits of the hexagonal structure (Figure 2a).
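The RMSD values quoted here come from rigid-body superposition of Cα coordinate sets. As an illustration (not the authors' actual analysis code), a minimal NumPy implementation of the standard Kabsch superposition and RMSD, assuming two equally sized (N, 3) coordinate arrays, is:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD (in the units of the inputs) between two (N, 3) coordinate
    arrays after optimal rigid-body superposition (Kabsch algorithm)."""
    P = P - P.mean(axis=0)          # remove centroids
    Q = Q - Q.mean(axis=0)
    C = P.T @ Q                     # 3x3 covariance matrix
    V, S, Wt = np.linalg.svd(C)
    # correct for a possible reflection (improper rotation)
    d = np.sign(np.linalg.det(V) * np.linalg.det(Wt))
    D = np.diag([1.0, 1.0, d])
    R = V @ D @ Wt                  # optimal rotation, applied as P @ R
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))
```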
Comparison of subunit E with the NMR structure of CT2Nk (in aqueous solution) showed that the conformations of the loops were very similar (Figure 3a,b). The unusual conformation of loop-2 in all subunits of the hexagonal and most subunits of the orthorhombic form was similar to that found by NMR for cytotoxin 1 from N. oxiana (5NQ4) in a DPC micelle [22]. Superimposition of the models (except E) over residues 22-36 resulted in RMSD values in the range of 0.8 to 1.1 Å; for model E, the respective RMSD value was 1.8 Å. Superimposition of subunits E and A of CT13Nn demonstrated that the tip of loop-2 was repulsed from the hydrophobic amino acid residues 8-9 of loop-1 of the neighboring subunit B due to crystal packing (Figure 2). We found that a tightly bound water molecule was present within loop-2 of CT13Nn, similarly to that in the CT2Nk structure (pdb 7O2K).
Due to the resolution limit, this water molecule was found in all subunits of the hexagonal crystal form and only in two subunits of the orthorhombic crystal form. It was H-bonded to the following atoms: N of Met26, O of Val32, and OG of Thr31 (Figure 3a). In contrast, in the NMR structure 7O2K this water molecule formed three hydrogen bonds, with the N atom of Met 26 and the carbonyls of Val 32 and Asn 29 (Figure 3a). Apparently, the difference in the coordination of the water molecule is caused by crystal packing effects. In the crystal structure of CT13Nn, the Thr31 residue cannot be turned toward the solvent (as in the NMR structure) due to the proximity of residues of the symmetrically related molecule (Figure 3a,b). Comparison of the structure of CT2Nk with the crystal structures of S-type toxins (pdb codes: 4OM4, 4OM5, and 1UG4) and P-type toxins (pdb code 1H0J) supports this conclusion.
MD Study of CT13Nn and CT1No/HMMM POPG Interactions
Study of the interaction of polycationic peptides with anionic lipid membranes using conventional force fields is highly time-demanding. To accelerate the simulations, we used the Highly Mimetic Membrane Model [38] and considered an anionic POPG membrane. To compare the effects of HMMM membranes on the spatial organization of CTs, we studied CT1No as a reference molecule (Table 1). Thus, CT13Nn (CT2Nk) and CT1No were studied side-by-side in POPG bilayers.
First, we simulated the interaction of CT2Nk (model 7O2K) with the HMMM POPG bilayer ( Figure 4).
The toxin molecule was placed outside the bilayer, as shown in Figure 4b. The time dependence of the deepening of Cα atoms of the toxin molecule is shown in Figure 4a. As can be seen from this map, the toxin molecule interacted with the bilayer, inserting consecutively loop-1, then loop-2, and finally, loop-3 (Figure 4c-e). Interestingly, insertion of loop-2 was accompanied by structural changes ( Figure 5).
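The "deepening" maps referenced here (e.g., Figure 4a) reduce, for each frame, to the signed distance of every Cα atom below the mean phosphate plane of the toxin-facing leaflet. A minimal NumPy sketch of this bookkeeping, with hypothetical array names and assuming the membrane normal lies along z with the toxin approaching from above, is:

```python
import numpy as np

def deepening_map(ca_xyz, p_z):
    """Per-residue insertion depth of CA atoms, frame by frame.

    ca_xyz : (n_frames, n_residues, 3) CA coordinates in Angstroms,
             membrane normal assumed along z
    p_z    : (n_frames,) mean z of the phosphate (PO4) groups of the
             toxin-facing leaflet
    Returns an (n_frames, n_residues) array; positive entries mean the
    CA atom lies below the phosphate plane, i.e., inside the membrane.
    """
    return p_z[:, None] - ca_xyz[:, :, 2]

# Illustrative shapes: 2000 frames, 60 residues
depths = deepening_map(np.zeros((2000, 60, 3)), np.full(2000, 20.0))
```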
The time dependence of these changes (Figures 4f and 5a,b) suggests that they start when the toxin molecule embeds its loop-2 into the membrane, i.e., at the beginning of the period marked with the letter "d" in Figure 4a. These structural changes leave intact the cavity within loop-2 for the tightly-bound water molecule (Figure 4g). We note that the final conformation of loop-2 in the HMMM POPG membrane is very similar to that of the respective fragment of CT1No in a DPC micelle (Figure 5c,d). Thus, the question arises as to whether interaction of CT1No with the HMMM POPG bilayer would result in structural transformations similar to those observed during its embedding in a DPC micelle, which has recently been described [22]. To answer this question, we performed an MD simulation of CT1No in the presence of the HMMM POPG bilayer. The starting conformation of the molecule was one of the major forms of CT1No determined by NMR in aqueous solution (pdb code 1RL5, Table 3). Its position and orientation toward the bilayer were identical to those of CT2Nk (Figure 4b). In this case, we obtained a map of penetration that was nearly identical to that observed for CT2Nk (Figure S3a). Within ~50 ns, CT1No embedded all three of its loops into the membrane. Close inspection of the MD trajectory revealed that the insertion was accompanied by structural changes, which were more pronounced in the loop-2 region, as in the case of CT13Nn (Figure 4f). The equilibrium conformation of this loop was similar to that obtained in a DPC micelle (pdb code 5NQ4) (Figure S4). Thus, embedding of CT1No in the HMMM POPG bilayer resulted in structural changes similar to those observed by NMR spectroscopy for this toxin in the detergent micelle. The variation of loop-2 of CT1No (Figure S4) was similar to that shown for CT2Nk in Figure 4f. In addition, we performed MD simulations of either CT1No or CT2Nk in the presence of the HMMM POPG bilayer, starting from the micelle-embedded conformation for the former toxin (pdb code 5NQ4), or from one adapted to the HMMM POPG bilayer for the latter. We found that the molecule did not change its conformation in the course of the MD simulation. Embedding of loops 2 and 3 occurred quickly and simultaneously. This contrasts with the situation when the simulation was started from the "water" conformation, where loop-3 embedded after loop-2 with a delay (Figures 4a and S3). Interestingly, the hydrogen-bonding pattern of the tightly-bound water molecule did not change in the course of the MD run (Figure S5). Thus, membrane adaptation of the loop-2 conformation accelerated attainment of the binding mode with all three loops of the CTs embedded in the membrane.
Taking the data for CT1No into account, we may suppose that the relatively fast embedding of CTs into the HMMM POPG bilayer is accompanied by structural changes within the loop-2 region, similar to those observed upon embedding of this molecule in a detergent micelle.
Discussion
Among the spatial structures of CTs, the quality of the NMR-determined models is usually inferior to that of the X-ray ones [7]. One of the reasons for this is the slow conformational equilibrium between the cis- and trans-isomers of the X-Pro8 peptide bond, which occurs in aqueous solution but not in the crystal [39,40]. This equilibrium results in an increase in the number of cross-peaks in NMR spectra of the toxins in aqueous solution. Normal mode analysis [7], an MD study [39], and an NMR investigation [41] of the dynamics of CTs revealed that the residues involved in beta-sheet formation are more rigid than those not involved in backbone hydrogen bonding. The latter are localized in the tips of the loops (residues 7-11, 16-18, 28-31, and 46-48). Three of these moieties are involved in binding to lipid membranes: loop-1 (residues 7-11), loop-2 (residues 28-31), and loop-3 (residues 46-48) [22,26,42]. The conformation of these fragments differs in aqueous and membrane environments [21,22]. However, the most significant changes are observed in the tip of loop-2 [22]. This fragment adopts the configuration of an omega loop, a widespread structural motif of globular proteins' secondary structure [43,44].
In this work, a structure similar to the membrane-bound conformation of the functionally important loop-2 of an S-type CT was observed for the first time, in two crystal forms. Apparently, these conformational changes are caused by the repulsion of loop-2 from its neighboring molecules in the crystal cell. One can assume that the hydrophobic residues of the neighbor "mimic" in some way the membrane surface and sterically disturb the tip of loop-2. Earlier, related effects were observed and interpreted in a similar way in the X-ray structure of the cardiotoxin-like basic protein A5 from Taiwan cobra venom [45]. Polymorphism of protein crystals is not a rare event; polymorphic crystals formed under similar conditions make it possible to identify crystal structures with potentially similar free energies of crystallization [46].
Despite some amino acid variability of loop-2 in different CTs, only two conformations (aqueous and membrane-bound) could be found. This indicates the structural role of the conserved tightly bound water molecule found in the NMR and X-ray structures. The similar coordination of this water molecule within the hexagonal X-ray structure and the NMR structure of CT1 N. oxiana in the presence of DPC (5NQ4) is in good agreement with this suggestion. In the structure 5NQ4, the side chain of Thr 31 is turned toward this water molecule in a manner similar to our crystal structure. Therefore, an H-bond of this residue with the tightly bound water in the "membrane" form cannot be excluded. Of note, the amino acid sequences of loop-2 of these toxins differ slightly. This results in minor differences in the loop conformation and the coordination of the water molecule within it (Figure S2).
In this work, we used the HMMM model to investigate the interaction of CT2Nk with a POPG membrane. For comparison, we studied the interaction of CT1No with the membrane, because the spatial structures of this toxin have been determined by NMR spectroscopy both in aqueous solution and in detergent micelles (Table 2). We found that the interaction of both the CT2Nk and CT1No toxins with this membrane proceeds via steps similar to those detected in the all-atom study of cytotoxin 2 from N. oxiana in a POPC membrane [31]. First, all these toxins insert loop-1 into the membrane, then loop-2, and finally loop-3, in accordance with the hydrophobicity gradient between these loops [47]. In the POPG membrane, incorporation of loop-2 was accompanied by its structural remodeling. Interestingly, the final conformational state of loop-2 in the POPG membrane was similar to that of the corresponding fragment of CT1No in a DPC micelle (Figure S3b). When the starting conformation of CT1No or CT2Nk was the one adopted in either a DPC micelle or the HMMM POPG membrane, no changes of loop-2 occurred after embedding of the toxin molecule in the HMMM POPG bilayer (Figure S3b). Also, in this case the hydrogen-bonding pattern of the tightly bound water molecule within loop-2 did not change in the course of the MD simulation (Figure S5). This is clearly different from the case of embedding of CT2Nk in the HMMM POPG membrane starting from its solution conformation (Figure 4a), where the hydrogen-bonding pattern of the tightly bound water molecule in loop-2 changed (Figure 4g) because the tip of loop-2 reorganized in the membrane (Figure 5a,b). Both CT1No and CT2Nk are S-type CTs [20]. Thus, the membrane effect on loop-2 of CTs can be reproduced in silico using the HMMM approximation of anionic membranes and the CHARMM36m force field. This force field has been shown to correctly reproduce protein-lipid interactions at the membrane interface [48]. At the same time, it introduces some flexibility in the membrane-interacting loops of CTs. Indeed, the dihedral angle PHI of some residues from loop-1 and loop-2 of CT2Nk in the HMMM POPG bilayer took positive values, unlike those in the aqueous phase (Figure 5d). However, this flexibility seems to be important for correctly reproducing the adaptation of the CT loops to the membrane environment. The question arises as to whether the membrane adaptation of loop-2 depends on the lipid composition of the membrane. Our previous analysis showed that the modes of interaction of CTs with zwitterionic and anionic lipid membranes are similar [27]. A previous study of the interaction of P- and S-type CTs with liposomes formed of PG lipids showed that the classification into P- and S-types can be expanded from zwitterionic to anionic lipid membranes [27]. Namely, the binding modes of these toxins to either zwitterionic or anionic membranes do not differ: in both cases the three-loop mode is realized, although the membrane binding constants differ. Most probably, the molecular basis for this is that the tip of loop-2 of S-type CTs is formed of more hydrophilic residues compared to those of their P-type counterparts [20]. As a result, the conformation of the tip of this loop of S-type CTs changes in the membrane, while in P-type ones it remains intact upon incorporation into a micelle [21] or lipid membrane [31].
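For readers tracking the PHI values mentioned above: the backbone phi dihedral of residue i is the torsion about its N-CA bond, computed from the four atoms C(i-1), N(i), CA(i), C(i). A standard NumPy implementation (a sketch, not the authors' analysis code) is:

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Signed torsion angle (degrees) for four points; the backbone phi of
    residue i uses the atoms C(i-1), N(i), CA(i), C(i)."""
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # components of b0 and b2 perpendicular to the central bond b1
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

# Planar zig-zag (trans) test case: expect 180 degrees
pts = [np.array(p, float) for p in [(0, 1, 0), (0, 0, 0), (1, 0, 0), (1, -1, 0)]]
print(dihedral(*pts))  # 180.0
```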
As we see in the current work, the changes in the conformation of loop-2 of the S-type CT1No observed upon its transfer from aqueous solution to a detergent micelle or the HMMM POPG membrane are very similar. Thus, simulation of such toxins with these membrane models can in many respects substitute for their study by NMR spectroscopy in detergent micelles.
There are numerous reports on studies of the cytotoxic activity of cobra CTs, including their action on cancer cell lines (e.g., [49,50]). For some of the latter, P-type CTs possess higher activity than their S-type counterparts [19]. We may assume that this is due to the different efficiency of the interaction of these CTs with either the plasma membrane or the membranes of intracellular organelles of the cancer cells. S-type CTs require an additional step during their interaction with membranes: our data indicate that in the process of membrane binding, the structure of an S-type CT molecule undergoes rearrangement, or adaptation to the membrane environment, whereas P-type CTs do not require rearrangement of their loop-2 [21,31]. We believe that in cases where the membrane activity of CTs underlies their cytotoxic mechanism, P-type CTs possess higher activity than S-type ones. This rule should remain true when considering the toxicity of CTs in animal tissues. Recently, it was demonstrated that CTs of either P- or S-type influenced the function of blood vessels and heart muscle [51]. This finding again indicates that the membrane-affected structure of CTs is an important factor in their toxic effects. We believe that the observed conformation of CT loops in the membrane-bound molecule should be taken into account in the design of cytotoxic drugs based on CTs. At the same time, we cannot exclude that not only cell membranes but also some membrane proteins are affected by CTs. In this case, both the hydrophobic and electrostatic properties of CT molecules should be accurately taken into account [49,52].
Conclusions
For the first time, in the crystal state of a CT molecule, the membrane-bound conformation of its central loop has been observed. Polymorphism of protein crystals together with MD studies and analysis of NMR structures of homologues assists in elucidating functional features of CTs. The HMMM bilayer can be used as a membrane environment, where adaptation of peripheral polypeptides to the lipid-water interface can be studied in atomistic detail via molecular dynamics simulations.
Cytotoxin Purification
CT13Nn was isolated from N. naja venom and purified essentially as described in [53]. The purification was performed in three steps, including gel-filtration, ion-exchange and reverse-phase high-performance liquid chromatography. Briefly, gel filtration was performed on a Superdex 75 column (10 × 300 mm, Cytiva, Marlborough, MA, USA), equilibrated with 0.1 M ammonium acetate (pH 6.2), at a flow rate of 0.5 mL/min. The main toxic fraction containing cytotoxins was separated by ion-exchange chromatography on a HEMA BIO 1000 CM column (8 × 250 mm, Tessek, Prague, Czech Republic) with a gradient of 5-700 mM ammonium acetate (pH 7.5) for 140 min, at a flow rate of 0.5 mL/min. Fractions containing cytotoxins were further separated by reversed-phase chromatography on a Bio Wide Pore C18 column (10 × 250 mm, Merck KGaA, Darmstadt, Germany) in a 20-50% gradient of acetonitrile for 60 min in the presence of 0.1% trifluoroacetic acid, at a flow rate of 2.0 mL/min. The amino acid sequence of the isolated toxin was analyzed by mass spectrometry, as described [54].
Crystallization, Data Collection, and Processing
The initial conditions for CT13Nn (5 mg/mL) crystallization were screened at room temperature by the vapor-diffusion method using the screening kits Crystal Screen HR2-110 and Crystal Screen HR2-112 (Hampton Research). The final optimized crystals were grown by vapor-diffusion or counter-diffusion methods from 1.7-1.9 M sodium chloride, 0.1 M sodium dihydrogen phosphate, 0.1 M MES, pH 5.5.
For data collection, the crystals were soaked in a cryoprotectant solution consisting of reservoir solution with 20% glycerol added. The crystals were then flash-cooled in liquid nitrogen at 100 K. Diffraction data for the orthorhombic crystal form were collected at beamline P14, Deutsches Elektronen-Synchrotron (DESY), Petra III, Hamburg, Germany, and for the hexagonal crystal form at beamline BL41XU at SPring-8, Japan. Data were processed by XDS [55] or HKL2000 [56]. Data collection statistics, along with unit cell dimensions, are summarized in Table 2.
Structure Determination and Refinement
Structures of the hexagonal and orthorhombic crystal forms were solved at 2.3 and 2.6 Å, respectively. The structures were determined by molecular replacement with Phaser [57], using the structure of CT1No (PDB code 1RL5) as the initial model. The hexagonal and orthorhombic forms contain three and six subunits in the asymmetric part, respectively. Model refinement was performed with Refmac5 [58], combined with manual model rebuilding using Coot [59]. The models of the hexagonal and orthorhombic crystal forms were refined to a crystallographic R-factor of 19.7% (Rfree = 24.2%) and an R-factor of 20.49% (Rfree = 24.8%), respectively. Model validation was performed with PROCHECK [60].
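For reference, the crystallographic R-factor quoted above measures the normalized disagreement between observed and calculated structure-factor amplitudes, R = Σ|F_obs - k·F_calc| / Σ F_obs; Rfree is the same quantity computed on reflections withheld from refinement. A minimal illustration (Python, with a crude single scale factor k in place of the resolution-dependent scaling that refinement programs apply, and synthetic amplitudes) is:

```python
import numpy as np

def r_factor(f_obs, f_calc):
    """R = sum(|F_obs - k*F_calc|) / sum(F_obs), with a simple linear scale k."""
    f_obs, f_calc = np.abs(f_obs), np.abs(f_calc)
    k = f_obs.sum() / f_calc.sum()        # crude overall scale factor
    return np.abs(f_obs - k * f_calc).sum() / f_obs.sum()

# Usage on hypothetical amplitude arrays:
rng = np.random.default_rng(0)
f_obs = rng.uniform(10, 100, size=5000)
f_calc = f_obs * (1 + rng.normal(0, 0.2, size=5000))
print(f"R = {100 * r_factor(f_obs, f_calc):.1f}%")
```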
Refinement statistics are summarized in Table 2. Structures were deposited with the Protein Data Bank under ID of 7QHI and 7QFC for hexagonal and orthorhombic forms, respectively.
Molecular Dynamics
The cytotoxin/membrane systems were assembled with the use of the CHARMM-GUI Membrane Builder input generator (http://charmm-gui.org/?doc=input/membrane.bilayer, accessed on 16 February 2022) [61]. The CT molecule was placed outside the bilayer, composed of 128 lipid molecules, in an orientation in which the tip of loop-1 was directed toward the membrane surface (see further details below). The starting conformations of the toxins were determined by either X-ray or NMR methods, or derived from MD simulations (Table 3). To check the reproducibility of the simulations, three independent runs were performed for each toxin/lipid system listed in Table 3. In these runs, the orientations of the toxin molecules with respect to the membrane surface were identical, but the distance from the center of mass of the toxin molecule to the center of the lipid bilayer was varied in the range 37-45 Å. Notes to Table 3: (a) see Table 1 for the amino acid sequences of the toxins; (b) solution conformations of the major form of CT1No [22,39] or CT2Nk [37]; (c) conformation of CT1No in detergent micelles [22].
The parameters of the CT13Nn (CT1No)/lipid assemblies are given in the Supplementary Materials (Table S2). The per-lipid area ratio (R_SA, the ratio of the average area per lipid molecule to that in the all-atom model of the membrane) was chosen to be the default value of 1.1. The terminal acyl carbon number was fixed at the 6th carbon atom. This means that the lipid molecules were truncated below the 6th carbon atom and the free membrane volume was replaced with a box of organic solvent, 1,1-dichloroethane (DCLE). In all calculations, the TIP3P water model was employed [62]. To keep the system electrically neutral, counterions were added (see Table S3). The CHARMM36m force field was used to perform the MD simulations. The following simulation parameters were selected: the NPT ensemble, i.e., a constant number of molecules in the simulation box, constant pressure and constant temperature. Van der Waals interactions were smoothly switched off at 10-12 Å by a force-based switching function [63]. The particle-mesh Ewald algorithm was used to evaluate electrostatic interactions [64]. A time step of 2 fs was used in all simulations. The temperature was kept at 303.15 K using a Nosé-Hoover thermostat. The Nosé-Hoover Langevin-piston method [65,66] was used to maintain a constant pressure of 1 bar. The standard CHARMM-GUI six-step protocol was used to equilibrate the systems. The length of the MD trajectories was 200 ns. Other details of the MD simulations and data analysis were similar to those used previously [67].
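For orientation, the settings above translate almost line-for-line into any modern MD engine. The sketch below is a hypothetical OpenMM script with the same headline parameters (PME, 10-12 Å switching, 2 fs step, 303.15 K, 1 bar, 200 ns); it is not the authors' pipeline (they used CHARMM-GUI-generated inputs with Nosé-Hoover temperature and pressure coupling, for which the Langevin thermostat and Monte Carlo barostat here are stand-ins), and all file names and box dimensions are placeholders.

```python
import openmm as mm
from openmm import app, unit

# Hypothetical CHARMM-format inputs for a CT/HMMM POPG system
psf = app.CharmmPsfFile("ct_hmmm_popg.psf")
pdb = app.PDBFile("ct_hmmm_popg.pdb")
params = app.CharmmParameterSet("par_all36m_prot.prm", "top_all36_prot.rtf",
                                "par_all36_lipid.prm", "toppar_water_ions.str")

psf.setBox(70*unit.angstrom, 70*unit.angstrom, 90*unit.angstrom)  # placeholder box
system = psf.createSystem(params,
                          nonbondedMethod=app.PME,          # particle-mesh Ewald
                          nonbondedCutoff=12*unit.angstrom,
                          switchDistance=10*unit.angstrom,  # 10-12 A switching
                          constraints=app.HBonds)           # allows the 2 fs step
system.addForce(mm.MonteCarloBarostat(1*unit.bar, 303.15*unit.kelvin))

integrator = mm.LangevinMiddleIntegrator(303.15*unit.kelvin,
                                         1/unit.picosecond,
                                         0.002*unit.picoseconds)
sim = app.Simulation(psf.topology, system, integrator)
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy()
sim.step(100_000_000)  # 200 ns at 2 fs/step (equilibration steps omitted)
```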
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/toxins14020149/s1. Table S1: Overall B-factors of the protein chains of the L48V49 CT3Nn crystal forms. Table S2: Typical parameters of the simulation box and system size in the CT/HMMM POPG systems. Figure S1: The 2Fo-Fc density map at the 1.2σ level of the 47-52 fragment of subunit A of 7QFC. Figure S2: Conservative water molecule in the loop-2 of cobra toxins; superposition of subunit B of XXX3 (cyan) and 5NQ4 (pink); the water molecule is shown by a red sphere in 7QFC and a dark red sphere in 5NQ4. Figure S3: Interaction of CT1No in different starting conformations with the HMMM POPG bilayer: (a) solution conformation, pdb code 1RL5; (b) conformation adopted in detergent micelle, 5NQ4. Model #1 from the respective NMR ensembles is shown to the right of the panels (blue color). Superimposed over the backbone atoms of residues 1-60 are shown, in addition, the solution conformation of CT2Nk (pdb code 7O2K) (panel a) and the equilibrium conformation of the CT2Nk molecule in the course of MD simulation in the HMMM POPG bilayer (see Figure 4, panel a for details). The CT2Nk molecules are shown in red. The loops are marked in the upper panel. The orientation of the superimposed models is the same in both panels. The penetration depth of the CA atoms of CT1No relative to the average position of the PO4 moieties of the POPG molecules is shown in color, according to the scale given below panel (b). Note that loop-3 of the toxin molecule embeds in the bilayer simultaneously with loop-2 in panel (b), while in panel (a) loop-3 embeds in the bilayer with a delay relative to loop-2. Figure S4: Interaction of CT1No with the POPG bilayer probed by the time dependence of the variation of backbone RMSD for residues 22-36. The starting MD state of the molecule corresponds to its "water" conformation (1RL5), and the black curve is calculated relative to it; the red curve is calculated relative to the "membrane" conformation of the molecule (5NQ4). First, the molecules from the trajectory and the template were superimposed over all backbone atoms; then the RMSD over residues 22-36 was calculated. Figure S5: Exchange of water molecules in the cavity of loop-2 of CT1No in the course of the MD simulation in the HMMM POPG bilayer. The starting conformation of the molecule was the one determined by NMR in detergent micelle (pdb code 5NQ4); the respective deepening map of CA atoms is shown in Figure S3b. The horizontal bars correspond to hydrogen bonding to the acceptor backbone atoms indicated in the insert in the top part of the graph. Note that these hydrogen bonds exist during practically the whole duration of the trajectory, unlike for CT2Nk started from its solution conformation (Figure 4g).
Conflicts of Interest:
The authors declare no conflict of interest. | 7,672 | 2022-02-01T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Tunable colloid trajectories in nematic liquid crystals near wavy walls
The ability to dictate the motion of microscopic objects is an important challenge in fields ranging from materials science to biology. Field-directed assembly drives microparticles along paths defined by energy gradients. Nematic liquid crystals, consisting of rod-like molecules, provide new opportunities in this domain. Deviations of nematic liquid crystal molecules from uniform orientation cost elastic energy, and such deviations can be molded by bounding vessel shape. Here, by placing a wavy wall in a nematic liquid crystal, we impose alternating splay and bend distortions, and define a smoothly varying elastic energy field. A microparticle in this field displays a rich set of behaviors, as this system has multiple stable states, repulsive and attractive loci, and interaction strengths that can be tuned to allow reconfigurable states. Microparticles can transition between defect configurations, move along distinct paths, and select sites for preferred docking. Such tailored landscapes have promise in reconfigurable systems and in microrobotics applications.
Ever since Brown discovered the motion of inanimate pollen grains, material scientists have been fascinated by the vivid, life-like motion of colloidal particles. Indeed, the study of colloidal interactions has led to the discovery of new physics and has fueled the design of functional materials [1][2][3]. External applied fields provide important additional degrees of freedom, and allow microparticles to be moved along energy gradients with exquisite control. In this context, nematic liquid crystals (NLCs) provide unique opportunities 4. Within these fluids, rod-like molecules coorient, defining the nematic director field 5. Gradients in the director field are energetically costly; by deliberately imposing such gradients, elastic energy fields can be defined to control colloid motion. Since NLCs are sensitive to the anchoring conditions on bounding surfaces 6,7, reorient in electro-magnetic fields 5,8, have temperature-dependent elastic constants 5 and can be reoriented under illumination using optically active dopants 9,10, such energy landscapes can be imposed and reconfigured by a number of routes.
Geometry, topology, confinement, and surface anchoring provide versatile means to craft elastic energy landscapes and dictate colloid interactions [11][12][13][14] . This well-known behavior 4,15 implies that strategies to dictate colloidal physics developed in these systems are robust and broadly applicable to any material with similar surface anchoring and shape. Furthermore, the ability to control the types of topological defects that accompany colloidal particles provides access to significantly different equilibrium states in the same system. Thus, the structure of the colloid and its companion defect dictate the range and form of their interactions.
By tailoring bounding vessel shape and NLC orientation at surfaces, one can define elastic fields to direct colloid assembly 4. This was shown for NLC controlled by patterned substrates 16,17, optically manipulated in a thin cell 18, or in micropost arrays 19,20, grooves [21][22][23], and near wavy walls 24,25. In prior work, the energy fields near wavy walls have been exploited to demonstrate lock-and-key interactions, in which a colloid (the key) was attracted to a particular location (the lock) along the wavy wall to minimize distortion in the nematic director field. However, the elastic energy landscapes obtainable with a wavy wall are far richer, and provide important opportunities to direct colloidal motion that go far beyond near-wall lock-and-key interaction.
In this system, elastic energy gradients are defined in a nonsingular director field by the wavelength and amplitude of the wavy structure, allowing long ranged wall-colloid interactions. Colloids can be placed at equilibrium sites far from the wall that can be tuned by varying wall curvature. Unstable loci, embedded in the elastic energy landscape, can repel colloids and drive them along multiple paths. In this work, we develop and exploit aspects of this energy landscape to control colloid motion by designing the appropriate boundary conditions. For example, we exploit metastable equilibria of colloids to induce gentle transformations of the colloids' companion topological defects driven by the elastic fields. Since topological defects are sites for accumulation of nanoparticles and molecules, such transformations will enable manipulation of hierarchical structures. We also create unstable loci to direct particle trajectories and to produce multistable systems, with broad potential implications for reconfigurable systems and microrobotics. Finally, we combine the effects of the NLC elastic energy field and of an external field (gravity) to demonstrate fine-tuning of the particles' sensitivity to the size of their docking sites.
Results
Molding the energy landscape. To mold the elastic energy landscape near a curved boundary with geometrical parameters defined in Fig. 1a, we fabricate long, epoxy resin strips using standard lithographic techniques to form wavy structures (Fig. 1b). These structures are placed between two parallel glass slides, separated by distance T, with planar anchoring oriented perpendicular to the strip (see Methods for details and parameters) to form a cell within which the NLC is contained. This cell is filled by capillarity with a suspension of colloids in the NLC 4-cyano-4'-pentylbiphenyl (5CB) in the isotropic phase, and subsequently quenched into the nematic phase (T NI = 34.9°C). The alignment of a colloid-free cell is examined under crossed polarizers (Fig. 1c, d), which shows that the bulk liquid crystal is defect-free. The much brighter texture at 45°-135° (Fig. 1c) compared to the 0°-90° (Fig. 1d) also shows good planar alignment along the y direction. The defects visible in Fig. 1c, d are only in the thin NLC film squeezed between the top of the wavy wall and the confining glass, a region which is not accessible to the colloids.
Colloid migration in the cells is observed with an optical microscope from a bird's-eye view. For the larger colloids, as expected, strong confinement between the glass slides stabilizes the Saturn ring configuration 26 , with a disclination line encircling the colloid. Smaller colloids, which experience weaker confinement, adopt the dipolar structure where a colloid is accompanied by a topological point-like defect often called a hedgehog. Particles are equally repelled by elastic interactions with the top and bottom glass slides, whose strength dominates over the particles' weight, so gravity plays a negligible role in our system 27 when the z axis of our experimental cell is vertical. When observed through the microscope, this configuration forms a quasi-2D system in the (x,y) plane, where y is the distance from the base of a well in the direction perpendicular to the wall. Unless otherwise specified, when reporting colloid position, y denotes the location of the colloid's center of mass (COM).
The wavy wall forms a series of hills and wells, with amplitude 2A measured from the base of the well to the highest point on a hill. Because of strong homeotropic anchoring at the wavy wall, these features impose zones of splay and bend in this domain. In particular, the valleys are sites of converging splay, the hills are sites of diverging splay, and the inflection points are sites of maximum bend. The wavelength of the structure λ can be expressed in terms of the radius of curvature R and the amplitude A (Fig. 1a); therefore, λ and R are not independent for fixed A. Different aspects of the colloid-wall interaction are best described in terms of one or the other. For example, the range of the distortion is discussed in terms of λ, and the splay field near the well is described in terms of R. Throughout this study, unless specified otherwise, 2A = 10 μm. The gentle undulations of this wall deform the surrounding director field, but do not seed defect structures into the NLC. We demonstrate control over colloidal motion within the energy landscape near this wall. In addition, we use Landau-de Gennes (LdG) simulation of the liquid crystal orientation to guide our thinking. Details of the simulation approach can be found in the Methods section.
Fig. 1 Schematic of experiment. a Schematic of the wall shape with relevant parameters: radius of curvature R, amplitude A, and wavelength λ. b Schematic of the experimental setup (N denotes rubbing direction, T denotes thickness of the cell). c, d Cross-polarized images of liquid crystal near the wavy wall with the long axis either (c) at a 45° angle to the polarizer or (d) perpendicular to the polarizer. The scale bars are 20 μm.
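To make the coupling between λ, R, and A concrete: for a sinusoidal approximation of the wall profile, y_w(x) = A cos(2πx/λ) (an illustration only; the experimental walls are built from circular arcs), the radius of curvature at a well bottom or hill top is R = λ²/(4π²A), so fixing A and choosing λ also sets R. A short Python check:

```python
import numpy as np

def radius_at_extremum(lam_um, A_um):
    """R = lam^2 / (4*pi^2*A) for y_w(x) = A*cos(2*pi*x/lam) at an extremum."""
    return lam_um**2 / (4.0 * np.pi**2 * A_um)

A = 5.0  # um, since 2A = 10 um in the experiments
for lam in (20.0, 40.0, 60.0):  # um
    print(f"lambda = {lam:4.0f} um  ->  R = {radius_at_extremum(lam, A):5.1f} um")
```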
Attraction to the wall. To determine the range of interaction of a colloid with undulated walls of differing λ, a magnetic field is used to move a ferromagnetic colloid (radius a = 4.5 μm) to a position y far from the wall and x corresponding to the center of the well. The magnet is rapidly withdrawn, and the colloid is observed for a period of 2 min. If the colloid fails to approach the wall by distances comparable to the particle radius within this time, the colloid is moved closer to the wall in increments of roughly a particle radius until it begins to approach the wall. We define the range of interaction H* as the maximum distance from the base of the well at which the colloid starts moving under the influence of the wall (Fig. 2). In these experiments, the Saturn ring defect was sometimes pinned to the rough surface of the ferromagnetic particles. To eliminate this effect, these experiments were repeated with homeotropic magnetic droplets with a smooth interface whose fabrication is described in the Methods section. The results did not change. A typical trajectory is shown in Fig. 2a in equal time step images (Δt = 125 s). For small λ (i.e., λ ⪅ 40 μm), H* increases roughly linearly with λ. However, at larger λ, the range of interaction increases only weakly. A simple calculation for the director field near a wavy wall in an unbounded medium in the one elastic constant approximation and assuming small slopes predicts that the distortions from the wall decay over distances comparable to λ 24 . However, for λ much greater than the thickness of the cell T, confinement by the top and bottom slides truncates this range (see Supplementary Note 1 and Supplementary Figure 1), giving rise to the two regimes reported in Fig. 2b: one that complies with the linear trend and one that deviates from it. A similar shielding effect of confinement in a thin cell was reported in the measurements of interparticle potential for colloids in a sandwich cell 28 .
The colloid moves toward the wall along a deterministic trajectory. Furthermore, it moves faster as it nears the wall (Fig. 2c), indicating steep local changes in the elastic energy landscape. This motion occurs in creeping flow (Reynolds number Re = ρva/η ≈ 1.15 × 10⁻⁸, where ρ and η are the density and viscosity of 5CB, respectively, and v is the magnitude of the velocity of the colloid). The energy U dissipated to viscous drag along a trajectory can be used to infer the total elastic energy change; we perform this integration and find U ~ 5000 k_B T. In this calculation, we correct the drag coefficient for proximity to the wavy wall according to Ref. 29 and for confinement between parallel plates according to Ref. 30 (see Ref. 24 for more details). The dissipation calculation shows that gradients are weak far from the wall and steeper in the vicinity of the wall. The elastic energy profile found from LdG simulation as a function of particle distance from the base of the well is consistent with these observations (Supplementary Figure 2). The particle finds an equilibrium position in the well; at larger distances from the wall, the energy first increases steeply and then levels off (Supplementary Figure 2). For wide wells (λ > 15a), the energy gradient in x near the wall is weak, and the drag is large. In this setting, the colloid can find various trapped positions, introducing error into the energy calculation. Therefore, the trajectory is truncated at around y = 15 μm from contact with the wall.
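As a sanity check on the numbers above, the dissipation integral is just U = ∫ F_drag·v dt with F_drag = 6πηa·v for uncorrected Stokes drag. The sketch below (Python, with an illustrative synthetic trajectory and an assumed effective 5CB viscosity; the wall and confinement corrections of Refs. 29 and 30 are omitted) reproduces the order of magnitude of both Re and U/k_B T:

```python
import numpy as np

# Illustrative approach trajectory y(t); real data would come from tracking.
t = np.linspace(0.0, 250.0, 251)        # s
y = 15e-6 * np.exp(-t / 80.0)           # m, synthetic approach curve

eta = 0.065        # Pa*s, assumed effective viscosity of 5CB
a   = 4.5e-6       # m, colloid radius
rho = 1.02e3       # kg/m^3, approximate density of 5CB

v = np.gradient(y, t)                   # velocity along the path
F = 6.0 * np.pi * eta * a * np.abs(v)   # uncorrected Stokes drag
P = F * np.abs(v)                       # dissipated power

U  = np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(t))  # trapezoidal integral, J
kT = 1.380649e-23 * 298.0

print(f"Re ~ {rho * np.abs(v).max() * a / eta:.1e}")  # ~1e-8, creeping flow
print(f"U  ~ {U / kT:.0f} kT")                        # thousands of kT
```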
Equilibrium position. The wall shape also determines the colloid's equilibrium position y_eq, i.e., the distance between the colloid's center of mass (COM) and the bottom of the well. In fact, we show that the particles do not always dock very close to the wall. Rather, they find stable equilibrium positions at well-defined distances from contact with the hills and wells. We probe this phenomenon by varying colloid radius a and well radius of curvature R (Fig. 3a). At equilibrium, y_eq is equal to R. That is, the colloid is located at the center of curvature of the well (Fig. 3b, c). In this location, the splay of the NLC director field from the colloid matches smoothly to the splay sourced by the circular arc that defines the well. As R increases, this splay-matching requirement moves the equilibrium position of the colloid progressively away from the wall.
However, for wide wells with R ≫ 2a, the elastic energy from the wall distorts the Saturn ring, displacing it away from the wall (Fig. 3d, e). When this occurs, the equilibrium position of the colloid is closer to the wall. For all such colloids, the height of the Saturn rings (Fig. 3a crosses: y = y_def) and that of the COM of the particles (Fig. 3a open circles: y = y_eq) do not coincide. Specifically, the particle moves closer to the wall, and the disclination line becomes distorted, i.e., the Saturn ring moves upward from the equator of the particle so that the particle-defect pairs become more dipole-like (Fig. 3g, h). For comparison, we plot the COM of particles with point defects sitting near the wall (Fig. 3f). We observe that, when the colloid radius is similar to the radius of the wall (R/a ≈ 2), there is a similar "splay-matching" zone for the dipoles; however, as we increase R/a, the behavior changes. In this regime, the dipole remains suspended with its hedgehog defect at a distance of roughly y_def/a = 3 from the base of the well for wells of all sizes.
[Fig. 2 caption, continued: c The position of the particle y with respect to time t. Inset: energy U dissipated to viscous drag along a particle trajectory as a function of the particle position y; the cross marks where we truncate the trajectory for the path integration used to infer the dissipation. Scale bar: 10 μm.]
A colloid positioned directly above a well moves down the steepest energy gradient, which corresponds to a straight path toward the wall. The energy minimum is found when the particle is at a height determined by R/a, consistent with our experiments (Fig. 3b). We also note that at R/a = 7, we find y_COM/a = 3.5, which corresponds to the equilibrium distance of colloids repelled from a flat wall. However, even at these wide radii, the elastic energy landscape above the undulated wall differs significantly from the repulsive potential above a planar boundary, which decays monotonically with distance from the wall 31. For colloids above the wide wells, energy gradients in the y direction are small, but gradients in the x direction are not. As a result, particles migrate laterally and position themselves above the center of the wells. We have postulated and confirmed the splay-matching mechanism to be the driving force of the colloid docking. We expect that by using a liquid crystal with different elastic constants, we can enhance or suppress this effect. For example, for an LC with K_11 > K_33, the colloids will preferentially sit closer to the wall to favor bend distortion over splay.
Quadrupole to dipole transition. For micron-sized colloids in an unbounded medium, the dipole is typically the lowest energy state 32 ; electrical fields 33 , magnetic fields 34 or spatial confinement 26 can stabilize the Saturn ring configuration. In prior research, we showed that a colloid with a Saturn ring defect, stabilized by confinement far from the wavy wall, became unstable and transformed into a dipolar structure near the wavy wall 24 . However, in those experiments, the transformation occurred very near the wall, where the dynamics of the colloid and surrounding liquid crystal were strongly influenced by the details of wall-particle hydrodynamic interactions and near-wall artifacts in the director field. Here, to avoid these artifacts, we use wells with a smooth boundary where R > a and amplitude A > a (specifically, A = R = 15 μm and λ = 60 μm, or A = R = 25 μm and λ = 100 μm). These wells are deeper and are best described as semicircular arcs with rounded corners.
We exploit these wider wells to position a colloid with a companion Saturn ring several radii above the wall. The elastic energy field distorts the Saturn ring and drives a gentle transition to a dipolar defect configuration, as shown in Fig. 4a in time-lapse images. The location of the colloid y and the evolution of the polar angle of maximum deflection θ are tracked and reported in Fig. 4b. This transition is not driven by hydrodynamics; the Ericksen number in this system is Er = 8 × 10^−4, a value two orders of magnitude lower than the critical value Er = 0.25 at which a flow-driven transition from quadrupole to dipole occurs 35.
The confinement from the top and bottom glass stabilizes the Saturn ring. The wavy wall, however, exerts an asymmetrical elastic energy gradient on the Saturn ring, displaces it away from that wall, and ultimately destabilizes this configuration. Once the transition to dipole has taken place, re-positioning the particle away from the wall with a magnetic field does not restore the Saturn ring (Supplementary Movie 2).
Previously, Loudet and collaborators 36 studied the transition of a colloid with a Saturn ring defect to a dipolar configuration in an unbounded medium, prompted by the fast removal of the stabilizing electric field. Although these two sets of experiments take place in very different physical systems (confined vs. unconfined, withdrawal of an electric field vs. an applied stress field via boundary curvature), the slow initial dynamics and the total time of transition are common features shared by both (Fig. 4c, d).
The dynamics of the transition are reproducible across particles of different sizes (Fig. 4e) and across additional runs with different sized walls (Supplementary Figure 4), even when debris is collected by the topological defects along the way. However, Loudet et al. observed a propulsive motion, attributed to backflow from reorientation of the director field, in the direction opposite to the defect motion. In our system, the motion is smooth and continuous as the colloid passes through the spatially varying director field. Furthermore, the velocity of the droplet decreases right after the transition; we attribute this, in part, to the change in the drag environment (Fig. 4b and Supplementary Figure 4b). There are cases in which the transition does not occur; rather, the Saturn ring remains distorted. In such cases the polar angle ranges from θ = 103° to 130°. For polar angles larger than 130°, the transition always occurs, indicating that this is the critical angle for the transition. This value, however, differs from that measured in Ref. 37. This difference may be attributed to the differing confinement of the cell; differences in anchoring and elastic constants may also play a role.
Quadrupoles and dipoles in simulation. In deeper wells (A > a), the polar angle increases as the colloid migrates into the well. LdG simulation reveals that, in the dipolar configuration, there is less distortion in the director field near the colloid owing to bend and splay matching, and that it is indeed more favorable for a colloid with a dipolar defect to be located deep within the well (Fig. 5a-d).
In simulation, we compute the energy of a colloid both far from (state 1: y = 5a, reference state) and near the wavy wall (Fig. 5a-d) to locate the equilibrium site for both the Saturn ring and dipolar configurations (state 2: y = 1.8a and y = 1.5a, respectively). Details of this calculation are given in Methods. Using the same geometrical parameters and anchoring strength for the LdG numerics, we stabilize a dipolar configuration by initializing the director field with the dipolar far-field ansatz 38. While colloids in both configurations decrease their energy by moving from state 1 to state 2, the decrease in energy is 2.9 times greater for the dipolar case (Fig. 5c, d). This change is determined by differences in the gradient free energy, corresponding to reduced distortion in the nematic director field. Stark 39 argues that the stabilization of a Saturn ring under confinement occurs when the region of distortion becomes comparable to or smaller than that of a dipole, assuming the same defect energy and energy density. Yet this argument does not apply here because the presence of the wavy wall strongly alters the energy density at various regions in the domain (Fig. 5a-d).
[Fig. 5 caption: Simulated energy density (in k_B T) for dipole and quadrupole near a wavy boundary. By exploring the energy for colloids in dipole (DP) and Saturn ring (QP) configurations at various positions above the well for fixed colloid size and wavy wall geometry, the equilibrium heights for the Saturn ring are found. a A Saturn ring located at the reference state far from the wall (state 1, y = 5a). b A Saturn ring located at its equilibrium location (state 2, y = 1.8a), a decrease of 203.5 k_B T from state 1. c A dipole located at the reference state far from the wall (state 1, y = 5a). d A dipole located at its equilibrium location (state 2, y = 1.5a), with an energy decrease of 585.01 k_B T from state 1. e Schematic representation of the total energy of the system E vs. the reaction coordinate θ for several distances y from the well, changing from far from the well to close to the well (i through iv) as E decreases. The presence of the well shifts the angle of the energy barrier's maximum to the right (increasing θ) and decreases the energy barrier until it is eliminated as the particle moves closer to the wall.]
[Fig. 4 caption, continued: b The y location of the colloid's center of mass (COM) and the evolution of the polar angle θ during the transition. Initially, the colloids assume the θ = 90° (Saturn ring) configuration, which gradually evolves to θ = 180° as the COM continuously moves towards the wall. After the transition to a dipolar configuration, the particle approaches the wall. c, d Reduced ring size and velocity from our system reveal dynamics of transition similar to those shown in Fig. 2 of Ref. 36. The solid line serves as a guide to the eye. e The θ vs. t_c − t plot shows three experimental runs of the transition in similar geometry. In b-d, t_c is the time at which θ = 90°.]
Since this reorganization occurs in creeping flow and at negligible Ericksen number, it occurs in quasi-equilibrium along the reaction coordinate. In principle, this suggests that insight can be gained into the transition energy between the two states by simulating the equilibrium value of θ and the corresponding system energy E for a colloid in the Saturn ring configuration at various fixed heights above the wall.
We can consider the polar angle θ and the director field as our "reaction coordinate" to characterize the transition between the Saturn ring state (θ = 90°) and the dipolar state (θ = 180°). As shown schematically in Fig. 5e, an energy barrier exists between these two states far from the wall. The experiment indicates that this barrier is eliminated by the elastic energy field of the wall as the colloid approaches the well for certain geometries. Unfortunately, we are limited in how thoroughly we can explore this concept in simulation. The particle radii in our experiments are too large to be accurately reproduced, and must be re-scaled with caution, owing to the correlation length, which does not scale with system size.
In particular, our simulations are limited to particle radii for which the dipole is more costly than the Saturn ring everywhere in the domain, i.e., far from the wall and in its vicinity. Our experiments, recall, are performed with particle radii for which the dipole is the stable state and the Saturn ring is metastable. Thus, direct calculations cannot yet capture the manner in which the energy landscape near the wall eliminates the energy barrier between the Saturn ring and dipole configurations, driving the transformation. Rather, direct calculations of system energy E vs. θ for small colloids with stable Saturn rings simply show an energy minimum and an equilibrium ring displacement at their equilibrium height above the well (Supplementary Figure 5).
We can compare the system energy for quadrupolar and dipolar configurations by computing ΔE = E_dipole − E_Saturn ring (Fig. 5f, Supplementary Figure 6). This quantity is always positive for colloidal radii accessible in simulation. By moving closer to the wall, however, ΔE decreases (Fig. 5a-d, f). To explore how ΔE scales with colloid radius, we calculate ΔE in systems of similar geometries in which all length scales are increased proportionally with a for a range of values (colloid radius a = 90, 135, 180, 225, 270 nm) (Fig. 5f, Supplementary Figure 6). The total energy consists of two parts: the phase free energy, which captures the defect energy, and the gradient free energy, which captures the distortion of the field. The hedgehog defect does not grow with the system size, while the Saturn ring grows with the linear dimension of the system. Thus, the difference in the phase free energy ΔE_phase between dipole and quadrupole is always linear in a (Supplementary Figure 6a). However, the gradient free energy ΔE_gradient has more complex scaling, with a part that scales linearly in a and a part that scales as a log(a) 38. Simulated values for ΔE_gradient are fitted to such a form, k_1 a + k_2 a log(a) + k_3 (Supplementary Figure 6b).
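The fit of ΔE_gradient to the form k_1 a + k_2 a log(a) + k_3 can be done with a standard least-squares routine. In the sketch below the input energies are synthetic stand-ins generated from arbitrary coefficients, not the simulated values of Supplementary Figure 6b; only the fitting procedure itself is illustrated.

```python
import numpy as np
from scipy.optimize import curve_fit

def dE_gradient(a, k1, k2, k3):
    """Scaling form k1*a + k2*a*log(a) + k3 for the gradient free-energy
    difference between the dipolar and Saturn ring configurations."""
    return k1 * a + k2 * a * np.log(a) + k3

# Colloid radii used in the simulations (nm); the energies below are
# synthetic stand-ins generated from arbitrary coefficients, NOT the
# simulated values of Supplementary Figure 6b.
a_nm = np.array([90.0, 135.0, 180.0, 225.0, 270.0])
dE_synth = dE_gradient(a_nm, 2.0, -0.3, 50.0)

(k1, k2, k3), _ = curve_fit(dE_gradient, a_nm, dE_synth, p0=(1.0, -0.1, 0.0))

# Extrapolating to micron-sized colloids; dE < 0 would indicate that the
# dipole is the globally stable configuration, as observed in experiment.
print(dE_gradient(4500.0, k1, k2, k3))
```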
The sum of these two (ΔE = ΔE_phase + ΔE_gradient) for different y values is presented in Fig. 5f (circles: simulated results; solid line: fit; dotted lines: extrapolations to micron-sized particles). Note that for large values of a, comparable to those in experiment, the linear-logarithmic form fitted to ΔE_gradient is approximately linear in a. Extrapolation of ΔE according to the scaling arguments presented above suggests that ΔE becomes negative for large enough a. In this limit, the dipole becomes the stable configuration everywhere in the domain, in agreement with experiment. Furthermore, this suggests that, as a particle moves closer to the wall, the dipolar configuration is increasingly favored.
These results show that the distortion field exerted by the wavy boundary can be considered as an external field, in some ways analogous to external electrical, magnetic or flow fields. However, the spatial variations in the elastic energy landscape and its dependence on boundary geometry allow gentle manipulations of colloids and their defects that are not typically afforded by those other fields.
Multiple paths diverging from unstable points. The elastic energy field in the vicinity of the wall was simulated by placing the COM of a colloid in a Saturn ring configuration at different locations (x, y). The reference energy is evaluated at (λ/2, λ), where, recall, λ is the wavelength of the periodic structure of the wall (Fig. 6a). The energy in the color bar is given in k_B T for a colloid 54 nm in radius. The vectors in this figure show local elastic forces on the particle, obtained by taking the negative gradient of the elastic energy field. The solid curves indicate a few predicted trajectories for colloids placed at different initial positions in the energy landscape. (Further details of how this energy landscape is generated can be found in Supplementary Note 3 and Supplementary Figure 7.) In the preceding discussions, we have focused on attractive particle-wall interactions and associated stable or metastable equilibria, which correspond to the energy minima (blue) above the well. However, the location directly above a hill is an unstable point. When colloids are placed near this location using an external magnetic field, they can follow multiple diverging paths upon removal of the magnetic field. The particular paths followed by the colloid depend on small perturbations from the unstable point. Trajectories are computed by taking a fixed step size in the direction of the local force as defined by the local energy gradient (Fig. 6a).
[Fig. 6 caption, continued: b-e Particle paths are illustrated by points that indicate particle COM position over time; time step Δt = 5 s between neighboring points. The colored dots denote: b three representative trajectories (out of 28) of a colloid with a Saturn ring defect. c Four representative trajectories (out of 12) of an upward-oriented dipole. d Two representative trajectories (out of 11) of a downward-oriented dipole. e Two representative trajectories (out of 14) of a planar-anchoring colloid with two boojums released between two neighboring wells. Insets: schematics of colloids with respective defect types. Scale bars: 10 μm. f The range of interaction H* as a function of λ is similar for homeotropic (H) and planar (P) anchoring, for hedgehog (DP) and Saturn ring (QP) defects, and for solid colloids and droplets.]
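The fixed-step trajectory construction just described can be traced numerically with a steepest-descent walk on the simulated energy landscape. A minimal sketch follows, assuming E is sampled on a regular grid indexed as (y, x) in units of grid spacing; a nearest-node gradient lookup is used in place of a proper interpolation.

```python
import numpy as np

def steepest_descent_path(E, x0, y0, step=0.1, n_steps=1000):
    """Trace a predicted colloid trajectory by taking fixed-size steps
    along the local force, i.e. the negative gradient of the elastic
    energy field E sampled on a regular grid indexed as (y, x)."""
    Fy, Fx = np.gradient(-E)                 # force components on the grid
    x, y = float(x0), float(y0)
    path = [(x, y)]
    for _ in range(n_steps):
        i = min(max(int(round(y)), 0), E.shape[0] - 1)
        j = min(max(int(round(x)), 0), E.shape[1] - 1)
        fx, fy = Fx[i, j], Fy[i, j]
        norm = np.hypot(fx, fy)
        if norm < 1e-12:                     # stationary point: stop
            break
        x, y = x + step * fx / norm, y + step * fy / norm
        path.append((x, y))
    return np.array(path)
```

Releasing this tracer from points slightly to either side of a hill reproduces the extreme sensitivity to initial position described above.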
In our experiments, amongst 28 trials using an isolated homeotropic colloid with a Saturn ring, the colloid moved along a curvilinear path to the well on its left 11 times, to the well on its right ten times and was repelled away from the peak until it was approximately one wavelength away from the wall seven times. Three sample trajectories are shown in Supplementary Movies 3-5. These trajectories are also consistent with the heat map in Fig. 6a. The numerically calculated trajectories (Fig. 6a) and their extreme sensitivity to initial position are in qualitative agreement with our experimental results (Fig. 6b). Thus, small perturbations in colloid location can be used to select among the multiple paths.
So far we have primarily discussed colloids with Saturn ring defects, but we can also tailor unstable points and attractors for dipolar colloids, and find important differences between the behavior of colloids attracted to wells and those attracted to hills. For example, a dipole pointing away from the wall (Fig. 6c) behaves like a colloid with companion Saturn rings in several ways. Both are attracted over a long range to equilibrate in wells, and both have unstable points above hills. Also, when released from this unstable point, both defect structures can travel in three distinct directions (left, right, and away from the wall, Fig. 6c). On the other hand, dipoles pointing toward the wall (Fig. 6d) behave differently. They are attracted to stable equilibria near hills, and are unstable near wells. Interestingly, when released from a point near a well, these colloids can travel only toward one of the adjacent hills. That is, there is no trajectory above the well that drives them in straight paths away from the wall.
Finally, we observed the behavior of colloids with planar molecular anchoring, which form two topologically required "boojums", surface defects at opposing poles 40 . They behave similarly to downward-orienting dipoles (Fig. 6e); they equilibrate near the hills, in accordance with the simulations of Ref. 41 , and they follow only two sets of possible paths when released from unstable points above a well. The ability to drive particle motion with a gently undulating wall is thus not limited to colloids with companion Saturn rings; the wall also directs the paths of dipolar colloids with homeotropic anchoring and colloids with planar anchoring, decorated with boojums.
These results indicate that the range of repulsion differs for hills and wells. This is likely related to the differences in the nematic director field near these boundaries. While converging splay field lines are sourced from the well, divergent splay field lines emanate from the hill. Both fields must merge with the oriented planar anchoring far from the wall. As a result, hills screen wells better than wells screen hills. The ranges of interaction for various colloid-defect configurations are summarized in Fig. 6f; while colloids with each defect structure have distinct equilibrium distances from a flat wall ( Supplementary Figure 8), the range of interaction between colloids and wavy walls follows a similar trend independent of the topological defects on the colloid (Fig. 6f).
Extending the range of interaction by placing wavy walls across from each other. Thus far, we have discussed instances of colloids of different defect structures diverging along multiple paths from unstable points near wavy walls. These features can be used to launch the colloid from one location to another, propelled by the elastic energy field. To demonstrate this concept, we arranged two wavy walls parallel to each other with the periodic structures in phase, i.e., the hills on one wall faced valleys on the other (Fig. 7a). For wall-to-wall separations more than 2λ, colloids with Saturn rings docked, as expected (Fig. 7b). For wall-to-wall separations less than 2λ, a colloid, placed with a magnetic field above the peak on one wall, was guided by the NLC elastic energy to dock in the valley on the opposite wall (Fig. 7c), thus effectively extending its range of interaction with the second wall (Supplementary Movie 6). In the context of micro-robotics, such embedded force fields could be exploited to plan paths for particles to move from one configuration to another, guided by a combination of external magnetic fields and NLC-director field gradients.
We can also exploit wall-dipole interactions to shuttle the colloid between parallel walls. For walls positioned with their wavy patterns out-of-phase (Fig. 7d, Supplementary Movie 7), dipoles with point defect oriented upwards are repelled from initial positions above hills on the lower wall and dock on the hill on the opposite wall. However, for walls with their patterns in phase, dipoles with defects oriented downwards released from an initial position above a well dock either at an adjacent hill on the same wall (Fig. 7e, Supplementary Movie 8), or in an attractive well on the opposite wall (Fig. 7f, Supplementary Movie 9).
"Goldilocks" or well-selection for colloids in motion. Particles in motion can select preferred places to rest along the wavy wall. Wells with different wavelengths create energy gradients that decay at different, well-defined distances from the wall. Placing wells of different sizes adjacent to each other offers additional opportunities for path planning. In one setting that we explore, a colloid can sample multiple wells of varying sizes under a background flow in the x direction. We followed a colloid moving under the effect of gravity. The sample was mounted on a custom-made holder that can be tilted by an angle α (Fig. 8a, b) within a range between 10°and 20°so that the colloid experiences a body force in the x direction. We have verified in independent experiments that, without the wall, the particle moves at a constant velocity due to balance of drag and gravity. In the presence of the wavy wall, the particle's trajectory is influenced by the energy landscape there. We first describe the particle paths over a series of periodic wells, and then describe motion for wells of decreasing wavelengths.
Docking or continued motion in the cell is determined by a balance between the body force that drives x-directed motion and viscous forces that resist it, the range and magnitude of attractive and repulsive elastic interactions with the wall, and viscous drag near the wall. If the particle moves past the well in the x direction faster than it can move toward the wall, it will fail to dock. However, if interaction with the well is sufficiently pronounced to attract the particle before it flows past, the particle will dock.
For a tilted sample with a wavy wall of uniform wavelength (λ = 70 μm), colloids initially close enough to the wall dock into the nearest well (Fig. 8c, V x = 0.01 μm s −1 , Supplementary Movie 10). Far from the wall, the colloids do not dock. However, the influence of the wall is evident by the fact that the colloids do not remain at a fixed distance from the wall. Rather, the distance from the wall varies periodically, and this periodic motion has the same wavelength as the wall itself (Fig. 8d, V x = 0.06 μm s −1 , Supplementary Movie 11).
To simulate the forces on the particle, a particle is placed at different locations near a wall, and the energy of the system is calculated (as detailed in Supplementary Note 3). Gradients in this energy capture the forces on the colloid owing to the distortions of the director field at each location. A uniform body force in the x direction is then added on the colloid to find the trajectories. We simulated the trajectories for various initial loci. We find two outcomes: for strong x-directed force and/or far from the wall, the particle follows a wavy path (Fig. 8e, yellow trajectory); for weak x-directed force and near the wall, the particle docks (Fig. 8e, red and green trajectories). A particle slows down right before the hill and speeds up as it approaches the next well. This velocity modulation can be attributed to the interaction with the splay-bend region, similar to particles moving within an array of pillars 11 . Our experiments and simulations are in good agreement, showing both behaviors.
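The tilted-cell trajectories can be sketched in the same way by superimposing a uniform body force along x on the elastic force field. As with the earlier tracer, this is an illustrative implementation under stated assumptions, with f_body expressed in the same grid-energy units as the gradients of E.

```python
import numpy as np

def path_with_body_force(E, x0, y0, f_body, step=0.1, n_steps=2000):
    """Fixed-step tracer with a uniform body force f_body added along +x
    (in the same grid-energy units as the gradients of E) to mimic the
    tilted-cell experiments; E is sampled on a regular (y, x) grid."""
    Fy, Fx = np.gradient(-E)
    x, y = float(x0), float(y0)
    path = [(x, y)]
    for _ in range(n_steps):
        i = min(max(int(round(y)), 0), E.shape[0] - 1)
        j = min(max(int(round(x)), 0), E.shape[1] - 1)
        fx, fy = Fx[i, j] + f_body, Fy[i, j]
        norm = np.hypot(fx, fy)
        if norm < 1e-12:
            break
        x, y = x + step * fx / norm, y + step * fy / norm
        path.append((x, y))
    return np.array(path)
```

Varying f_body and the release point reproduces the two outcomes described above: wavy paths for strong driving or distant release, docking for weak driving near the wall.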
However, a different behavior is observed when we modulate the wavelength of the wavy wall by placing wells with different wavelengths adjacent to each other, as defined in Fig. 1a. As a particle travels past successive wells of decreasing wavelengths (λ = 70, 60, 50, 40 μm, Supplementary Movie 12), the particle moves in the y direction, closer to the wells, until it eventually is entrained by a steep enough attraction that it docks (Fig. 8f, V_x = 0.09 μm s^−1). This particle, like Goldilocks, protagonist of a beloved children's story, finds the well that is "just right". Simulation of two wells with different wavelengths and a superimposed force confirms these results: we can achieve an additional state not possible with the uniform well, i.e., a wavy trajectory that descends and docks (Fig. 8g, yellow trajectory). The slight energy difference between wells of different wavelengths underlies the "Goldilocks" phenomenon. Since the energy landscape defines zones of strong bend and splay, the ratio between the elastic constants K_11 and K_33 is important in determining the particle paths. Such interactions open interesting avenues for future studies, in which the rates of motion owing to elastic forces and those owing to applied flows are tuned, and the trapping energies of the docking sites are tailored, e.g., for colloidal capture and release.
Discussion
The development of robust methods to drive microscopic objects along well-defined trajectories will pave new routes for materials assembly, path planning in microrobotics, and other reconfigurable micro-systems. Strategies developed within NLCs are one means to address these needs. Since the strategies developed in liquid crystals depend on topology, confinement, and surface anchoring, which can be manipulated by changing surface chemistry or texture on colloids with very different material properties, they are broadly applicable across materials platforms. We have developed controllable elastic energy fields in NLCs near wavy walls as a tool to manipulate the ranges of attraction and to define stable equilibria. We have also exploited elastic energy fields to drive transitions in topological defect configurations. The near-field interaction between the colloid and the wall rearranges the defect structure, driving a transition from the metastable Saturn ring configuration to the globally stable dipolar configuration for homeotropic colloids.
We account for this transformation by means of an analogy between confinement and an external applied field. However, the gentle elastic energy field allows us to access metastable states. As these defect sites are of interest for molecular and nanomaterials assembly, the ability to control their size and displacement will provide an important tool to improve understanding of their physico-chemical behavior, and potentially to harvest hierarchical structures formed within them.
Furthermore, we have developed the concept of repulsion from unstable points as a means to dictate paths for colloids immersed within the NLCs. We have identified unstable sites from which multiple trajectories can emerge, and have used these trajectories to propel particles, demonstrating the multistability made possible by the wavy wall geometry. Finally, we have demonstrated the Goldilocks concept, i.e., that wells of different wavelengths can be used to guide docking of particles moving in a superimposed flow or via an external force. These concepts lend themselves to actuation and path planning in reconfigurable systems.
Methods
Assembly of the cell. We have developed a wavy wall confined between two parallel plates as a tool to direct colloid assembly. The wavy wall is configured as a bounding edge to the planar cell. The NLC cell and the walls were fabricated following the procedure in Ref. 24, briefly outlined here. The wavy walls are made with standard lithographic methods from SU-8 epoxy resin (MicroChem Corp.). The wells have wavelengths λ ranging from 27-80 μm and consist of smoothly connected circular arcs of radius R between 7-40 μm. These strips, of thickness between T = 20-28 μm, are coated with silica using silicon tetrachloride via chemical vapor deposition, then treated with DMOAP (dimethyloctadecyl[3-(trimethoxysilyl)propyl]ammonium chloride). The wavy wall is sandwiched between two antiparallel glass cover slips, treated with 1% PVA (poly(vinyl alcohol)), annealed at 80°C for 1 h and rubbed to give uniform planar anchoring. Once assembled, the long axis of the wall is perpendicular to the oriented planar anchoring on the bounding surfaces. We observed that in some LC cells the actual thickness was larger than expected, which we attribute to a gap above the strip. In those cases we noticed that some small colloids could remain trapped between the wavy strip and the top glass surface, so the effective thickness could be as large as 35-40 μm.
Particle treatment and solution preparation. We use the NLC 5CB (4-cyano-4′-pentylbiphenyl, Kingston Chemicals) as purchased. We disperse three types of colloids in the 5CB. The size and polydispersity of the colloids are characterized by measuring a number of colloids using the program FIJI. (1) a = 7.6 ± 0.8 μm silica particles (Corpuscular Inc.), treated with DMOAP to have homeotropic anchoring.
(2) a = 4.3 ± 0.4 μm ferromagnetic particles with a polystyrene core coated with chrome dioxide (Spherotech, Inc.), treated with DMOAP, an amphiphile that imposes homeotropic anchoring, or with PVA for planar anchoring. (3) a = 4.3-8 μm custom-made emulsion droplets in which the water phase was loaded with magnetic nanoparticles and crosslinked. The oil phase consisted of 5CB mixed with 2 wt% Span 80. The water phase consisted of a 50:50 mixture of water loaded with iron oxide nanoparticles and a pre-mixed crosslinking mixture. The magnetic nanopowder iron (II, III) oxide (50-100 nm) was first treated with citric acid to make it hydrophilic. The crosslinking mixture was pre-mixed from HEMA (2-hydroxyethyl methacrylate), PEG-DA (poly(ethylene glycol) diacrylate) and HMP (2-hydroxy-2-methylpropiophenone) in a 5:4:1 ratio. Water and oil phases were emulsified with a Vortex mixer to reach the desired colloid size range. The two were combined in a vial treated with OTS (trichloro(octadecyl)silane) to minimize wetting of the wall by the water phase during the crosslinking process. All chemicals were purchased from Sigma Aldrich unless otherwise specified. The emulsion was crosslinked by a handheld UV lamp (UVP, LLC) at 270 nm at roughly power P = 1 mW cm^−2 for 3 h. The emulsion was stored in a refrigerator for stability. Span 80 ensured that the liquid crystal-water interface would have homeotropic anchoring. The magnetic droplets are highly polydisperse due to the emulsification process; however, when comparing their behavior with the silica and ferromagnetic colloids, we only compare colloids and droplets of similar sizes.
Imaging. The cells form a quasi-2D system that is viewed from above. In this view, the wavy wall is in the plane of observation. The homeotropic colloids dispersed in the NLC are located between the top and bottom coverslips. These colloids are levitated away from both top and bottom surfaces by elastic repulsion 27 . The cell was imaged with an upright microscope (Zeiss AxioImager M1m) under magnification ranging from 20× to 50×. The dynamics of the colloid near the wavy wall are recorded in real time using optical microscopy. Additional information regarding the director field configuration is also gleaned using polarized optical microscopy.
Application of a magnetic field. The magnetic field was applied using a series of 8 NdFeB magnets (K&J Magnetics, Inc.) attached to the end of a stick. The magnets were placed roughly 0.5 cm from the sample; the applied field is estimated to be roughly 40-60 mT, far below the strength required to reorient the NLC molecules, but sufficiently strong to overcome the drag and move magnetic droplets and particles in arbitrary directions.
Numerical modeling by Landau-de Gennes (LdG) simulation. Numerical modeling provides insight into the NLC-director field in our confining geometries. We use the standard numerical Landau-de Gennes (Q-tensor) approach 42 with a finite difference scheme on a regular cubic mesh. This approach is widely used to compute regions of order and disorder in bounded geometries through a global free energy minimization. The Q-tensor is a second-rank, traceless, symmetric tensor whose largest eigenvalue is the order parameter S in the NLC. Using the Landau-de Gennes approach, at equilibrium, the 3-D director field and the locations of defect structures for a given geometry are predicted. The nematic director field, a headless vector field (i.e., −n = n), represents the average direction of an ensemble of molecules of size comparable to the correlation length at any point in the system. Defects are defined as the regions where the order parameter S is significantly less than the bulk value. The mesh size in our simulation is related to the correlation length in the NLC and corresponds to 4.5 nm. Due to the difference in scale, the exact final configurations of numerics and experiment must be compared with caution; nevertheless, the simulation is an invaluable tool to corroborate and elucidate experimental findings.
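The defect-detection criterion just described can be made concrete with the standard uniaxial construction of the Q-tensor. This is a generic textbook relation sketched in a few lines, not an excerpt from the production LdG code.

```python
import numpy as np

def uniaxial_q(n, S):
    """Uniaxial Q-tensor Q = (3S/2) * (n n^T - I/3) for a unit director n;
    by construction its largest eigenvalue is the order parameter S."""
    n = np.asarray(n, dtype=float)
    return 1.5 * S * (np.outer(n, n) - np.eye(3) / 3.0)

def scalar_order_parameter(Q):
    """Recover S as the largest eigenvalue of Q; defects are the regions
    where this value drops well below the bulk value."""
    return np.linalg.eigvalsh(Q)[-1]

# e.g. scalar_order_parameter(uniaxial_q([0.0, 0.0, 1.0], 0.53)) -> 0.53
```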
Simulation geometry and parameters. The geometry of the system, the boundary conditions, and the elastic constants of the NLC are inputs to the numerical relaxation procedure. The one-constant approximation is used. Since we have a quasi-2D system, with the director field expected to lie in the plane of the wavy wall, the effect of changing the twist constant is expected to be weak in comparison to changing the splay and bend elastic constants. Specifically, the particle is bounded by walls with oriented planar anchoring separated by thickness T = 4a, unless otherwise specified. The effect of confinement with different T values has been explored in detail in Supplementary Note 1 and Supplementary Figure 1. The anchoring at the boundary opposite the wavy wall is set to zero, and that of the flat plates sandwiching the colloid and the wavy wall is set to oriented planar. The Nobili-Durand anchoring potential is used 43. Because the simulated system is much smaller than the experimental one, much stronger anchoring is applied; for most of our results, infinite anchoring strength is applied unless otherwise specified. To verify that this assumption is valid, we simulate the particle placed at various distances from the wavy wall, centered above the well, while the anchoring strength is systematically varied. Under realistic anchoring strength (10^−3-10^−2 J m^−2), the energy of moving a colloid from near to far does not deviate much from that in the case of infinite anchoring (Supplementary Figure 9). As we decrease the anchoring further, the particle interacts with the well from a decreased range, and more weakly.
Simulation of the dipoles. To simulate dipoles, we vary the material constants B and C so that the core energy of the defect is 2.6× higher, to compensate for the small system size (details can be found in Supplementary Materials). In addition, we also use an initial condition with a dipolar configuration about the colloid: n(r) = î + P R_c^2 (r − r_c)/|r − r_c|^3, where R_c is the colloid radius, r_c is the location of the colloid center, P = 3.08 is the dipole moment, and î is the far-field director 38. This expression is applied only in a sphere of radius 2R_c around r_c.
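For concreteness, the dipolar initialization can be written as follows. The far-field director î along x is an assumption made here for illustration, and the expression is evaluated only away from the colloid center, where it would diverge.

```python
import numpy as np

def dipolar_ansatz(r, r_c, R_c, P=3.08, far_field=(1.0, 0.0, 0.0)):
    """Dipolar initial condition n(r) = i_hat + P R_c^2 (r - r_c)/|r - r_c|^3,
    normalized to unit length and applied only within a sphere of radius
    2*R_c around the colloid center, as in the text."""
    i_hat = np.asarray(far_field, dtype=float)
    d = np.asarray(r, dtype=float) - np.asarray(r_c, dtype=float)
    dist = np.linalg.norm(d)
    if dist < 2.0 * R_c:
        n = i_hat + P * R_c**2 * d / dist**3
    else:
        n = i_hat.copy()
    return n / np.linalg.norm(n)
```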
Numerical modeling by COMSOL. To describe some aspects of the director field in the domain, we employ the common simplification in NLC modeling known as the one-constant approximation: K_1 = K_2 = K_3 ≡ K. If there is no bulk topological defect, then the director field is a solution to Laplace's equation ∇²n = 0, which can be solved in COMSOL separately for the two components n_x and n_z, from which n_y is obtained by the unit-length restriction on n. In COMSOL, this is most easily implemented with the Electrostatics Module. The model solves the equivalent electrostatic problem ∇²V = 0, which gives us n_x and n_z. Customized geometry, such as the wavy wall, can be built with the geometry builder. We mesh the space with a triangular mesh and calculate the director field components; the results are then exported in grid form and post-processed in MATLAB.
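The electrostatic analogy can also be prototyped outside COMSOL with a simple finite-difference relaxation. The sketch below is a generic Jacobi solver for one director component on a rectangular grid with Dirichlet nodes marked by a mask; it illustrates the method and is not the COMSOL model itself.

```python
import numpy as np

def relax_laplace(component, mask, n_iter=5000):
    """Jacobi relaxation of Laplace's equation for one director component
    on a regular grid, mimicking the electrostatic analogy above.

    component : 2-D array holding boundary values where mask is True
    mask      : True on Dirichlet nodes (wavy wall, far field), False in bulk
    Note: np.roll wraps at the array edges, so the mask should also cover
    the outer boundary of the domain.
    """
    f = component.copy()
    for _ in range(n_iter):
        f_new = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                        np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f_new[mask] = component[mask]   # re-impose boundary values
        f = f_new
    return f

# n_x and n_z are relaxed independently; n_y then follows from |n| = 1:
# n_y = np.sqrt(np.clip(1.0 - n_x**2 - n_z**2, 0.0, 1.0))
```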
A review on laboratory liver function tests
Laboratory liver tests are broadly defined as tests useful in the evaluation and treatment of patients with hepatic dysfunction. The liver carries out the metabolism of carbohydrates, proteins and fats. Some of the enzymes and end products of these metabolic pathways, which are very sensitive to abnormality, may be considered biochemical markers of liver dysfunction. Biochemical markers such as serum bilirubin, alanine aminotransferase, aspartate aminotransferase, the ratio of aminotransferases, alkaline phosphatase, gamma glutamyl transferase, 5′-nucleotidase, ceruloplasmin and α-fetoprotein are considered in this article. An isolated or combined alteration of biochemical markers of liver damage can challenge clinicians during the diagnosis of diseases related to the liver directly or in combination with other organs. The term "liver chemistry tests" is a frequently used but poorly defined phrase that encompasses the numerous serum chemistries that can be assayed to assess hepatic function and/or injury.
Laboratory Liver Tests
Serum Bilirubin
Bilirubin is the catabolic product of haemoglobin produced within the reticuloendothelial system, released in unconjugated form, which enters the liver and is converted to the conjugated forms bilirubin mono- and diglucuronide by the enzyme UDP-glucuronyltransferase [1]. Normal serum total bilirubin varies from 2 to 21 µmol/L. The indirect (unconjugated) bilirubin level is less than 12 µmol/L and direct (conjugated) bilirubin less than 8 µmol/L [2]. Serum bilirubin levels of more than 17 μmol/L suggest liver disease, and levels above 24 μmol/L indicate abnormal laboratory liver tests [3,4]. Jaundice occurs when bilirubin becomes visible within the sclera, skin, and mucous membranes at a blood concentration of around 40 µmol/L [5]. Unconjugated hyperbilirubinemia occurs due to overproduction of bilirubin, decreased hepatic uptake or conjugation, or both. It is observed in genetic defects of UDP-glucuronyltransferase causing Gilbert's syndrome and Crigler-Najjar syndrome, and in reabsorption of large hematomas and ineffective erythropoiesis [6,7]. In viral hepatitis, hepatocellular damage, and toxic or ischemic liver injury, higher levels of serum conjugated bilirubin are seen. Hyperbilirubinemia in acute viral hepatitis is directly proportional to the degree of histological injury of hepatocytes and the longer course of the disease [3]. It has been observed that conjugated serum bilirubin decreases in a bimodal fashion when biliary obstruction is resolved [8]. Parenchymal liver diseases or incomplete extrahepatic obstruction of the biliary canaliculi give lower serum bilirubin values than those occurring with malignant obstruction of the common bile duct, but the level remains normal in infiltrative diseases like tumours and granuloma [9]. Raised serum bilirubin, from 20.52 µmol/L to 143.64 µmol/L, has been observed in acute inflammation of the appendix [10]. In normal asymptomatic pregnant women, total and free bilirubin concentrations were significantly lower during all three trimesters, and decreased conjugated bilirubin was observed in the second and third trimesters [11]. A recent study has shown that a high serum total bilirubin level may protect against neurologic damage due to stroke [12].
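For illustration, the thresholds quoted above can be arranged into a coarse triage function. This is a hypothetical helper for exposition only, not clinical guidance; note that the quoted normal range (up to 21 µmol/L) overlaps the 17 µmol/L cut-off, so the bands below simply follow the text.

```python
def classify_total_bilirubin(tb_umol_per_l):
    """Coarse banding of serum total bilirubin using the cut-offs quoted
    in the text (17, 24 and ~40 umol/L). Illustrative only."""
    if tb_umol_per_l <= 17:
        return "within commonly quoted limits"
    if tb_umol_per_l <= 24:
        return "suggestive of liver disease"
    if tb_umol_per_l < 40:
        return "abnormal laboratory liver tests"
    return "level at which jaundice becomes clinically visible"

# e.g. classify_total_bilirubin(30) -> "abnormal laboratory liver tests"
```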
Alanine aminotransferase (ALT)
ALT is found in kidney, heart, and muscle, and in greater concentration in the liver compared with other tissues of the body. ALT is purely cytoplasmic, catalysing the transamination reaction [1]. Normal serum ALT is 7-56 U/L [2]. Any type of liver cell injury can increase ALT levels. Elevated values up to 300 U/L are considered nonspecific. Marked elevations of ALT levels, greater than 500 U/L, are observed most often in persons with diseases that affect primarily hepatocytes, such as viral hepatitis, ischemic liver injury (shock liver) and toxin-induced liver damage. Despite the association between greatly elevated ALT levels and its specificity for hepatocellular diseases, the absolute peak of the ALT elevation does not correlate with the extent of liver cell damage [13]. Viral hepatitis A, B, C, D and E may be responsible for a marked increase in aminotransferase levels. The increase in ALT associated with hepatitis C infection tends to be greater than that associated with hepatitis A or B [14].
Moreover, in patients with acute hepatitis C, serum ALT is measured periodically for about 1 to 2 years [1]. Persistence of elevated ALT for more than six months after an occurrence of acute hepatitis is used in the diagnosis of chronic hepatitis. Elevations in ALT levels are greater in persons with nonalcoholic steatohepatitis than in those with uncomplicated hepatic steatosis [15]. A recent study showed that hepatic fat accumulation in childhood obesity and nonalcoholic fatty liver disease causes serum ALT elevation. Moreover, increased ALT levels were associated with reduced insulin sensitivity, adiponectin and glucose tolerance, as well as increased free fatty acids and triglycerides [16].
The presence of bright liver together with an elevated plasma ALT level was independently associated with increased risk of the metabolic syndrome in adults [17]. ALT is normally elevated during the second trimester in asymptomatic normal pregnancy [11]. In one study, serum ALT levels in symptomatic pregnant patients were 103.5 U/L in hyperemesis gravidarum, 115 U/L in pre-eclampsia, and 149 U/L in patients with haemolysis and low platelet count. In the same study, ALT rapidly dropped by more than 50% of the elevated values within 3 days, indicating improvement during the postpartum period [4]. A recent study has shown that coffee and caffeine consumption reduces the risk of elevated serum ALT activity in excessive alcohol consumption, viral hepatitis, iron overload, overweight, and impaired glucose metabolism [18].
Aspartate aminotransferase (AST)
AST catalyses the transamination reaction. AST exists in two genetically distinct isoenzyme forms, mitochondrial and cytoplasmic. AST is found in highest concentration in the heart compared with other tissues of the body such as liver, skeletal muscle and kidney [1]. Normal serum AST is 0 to 35 U/L [2]. Elevated mitochondrial AST is seen in extensive tissue necrosis during myocardial infarction and also in chronic liver diseases involving liver tissue degeneration and necrosis [3]. About 80% of the AST activity of the liver is contributed by the mitochondrial isoenzyme, whereas most of the circulating AST activity in normal people is derived from the cytosolic isoenzyme [3]. The ratio of mitochondrial AST to total AST activity has diagnostic importance in identifying liver cell necrotic conditions and alcoholic hepatitis [19]. AST elevations often predominate in patients with cirrhosis, even in liver diseases that typically show an increased ALT [20]. AST levels in symptomatic pregnant patients were 73 U/L in hyperemesis gravidarum, 66 U/L in pre-eclampsia, and 81 U/L in haemolysis with low platelet count and elevated liver enzymes [4].
AST/ALT ratio
The ratio of AST to ALT has more clinical utility than assessing individual elevated levels. A deficiency of the coenzyme pyridoxal-5′-phosphate may depress serum ALT activity and consequently increase the AST/ALT ratio [21,22]. The ratio increases in progressive liver functional impairment and was found to have 81.3% sensitivity and 55.3% specificity in identifying cirrhotic patients [23]. Mean ratios of 1.45 and 1.3 were found in alcoholic liver disease and postnecrotic cirrhosis, respectively [24]. A ratio greater than 1.17 was found to predict one-year survival among patients with cirrhosis of viral cause with 87% sensitivity and 52% specificity [25]. An elevated ratio greater than 1 indicates advanced liver fibrosis in chronic hepatitis C infection [26]. However, an AST/ALT ratio greater than 2 is characteristically present in alcoholic hepatitis. A recent study differentiated nonalcoholic steatohepatitis (NASH) from alcoholic liver disease, showing an AST/ALT ratio of 0.9 in NASH and 2.6 in patients with alcoholic liver disease. A mean ratio of 1.4 was found in patients with cirrhosis related to NASH [27]. Wilson's disease can cause the ratio to exceed 4.5, and a similarly altered ratio is found even in hyperthyroidism [28,29].
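The diagnostic patterns above can be summarized in a small helper that computes and interprets the ratio. The bands are taken directly from the figures quoted in this section and are for illustration only, not clinical decision-making.

```python
def interpret_ast_alt_ratio(ast_u_per_l, alt_u_per_l):
    """De Ritis (AST/ALT) ratio with the heuristic bands quoted in the
    text. Returns the ratio and an illustrative interpretation."""
    ratio = ast_u_per_l / alt_u_per_l
    if ratio > 4.5:
        note = "consider Wilson's disease (similar ratios in hyperthyroidism)"
    elif ratio > 2:
        note = "pattern characteristic of alcoholic hepatitis"
    elif ratio > 1:
        note = "suggests advanced fibrosis/cirrhosis"
    else:
        note = "hepatocellular pattern (e.g., ratio ~0.9 reported in NASH)"
    return ratio, note

# e.g. interpret_ast_alt_ratio(260, 100)
# -> (2.6, "pattern characteristic of alcoholic hepatitis")
```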
Alkaline phosphatase (ALP)
ALP is present in the mucosal epithelium of the small intestine, the proximal convoluted tubule of the kidney, bone, liver and placenta. It performs lipid transportation in the intestine and calcification in bone. Serum ALP activity is mainly from the liver, with 50% contributed by bone [1]. Normal serum ALP is 41 to 133 U/L [2]. In acute viral hepatitis, ALP usually remains normal or is moderately increased. Elevation of ALP with prolonged itching is associated with hepatitis A presenting with cholestasis. Tumours can secrete ALP into plasma, and there are tumour-specific isoenzymes such as Regan, Nagao and Kasahara [30].
Hepatic and bony metastases can also cause elevated levels of ALP. Other diseases like infiltrative liver diseases, abscesses, granulomatous liver disease and amyloidosis may cause a rise in ALP. Mildly elevated levels of ALP may be seen in cirrhosis, hepatitis and congestive cardiac failure [30]. Low levels of ALP occur in hypothyroidism, pernicious anaemia, zinc deficiency and congenital hypophosphatasia [31]. ALP activity was significantly higher in the third trimester of asymptomatic normal pregnancy, reflecting extra production from placental tissue [11]. During symptomatic pregnancy, ALP levels were 21.5 U/L in hyperemesis gravidarum, 14 U/L in pre-eclampsia, and 15 U/L in haemolysis with low platelet count [4]. Transient hyperphosphataemia in infancy is a benign condition characterized by severalfold elevated ALP levels without evidence of liver or bone disease, which return to normal by 4 months [32]. ALP has been found elevated in peripheral arterial disease, independent of other traditional cardiovascular risk factors [33]. Clinicians are often uncertain whether elevated ALP levels reflect liver disease or a bony disorder; in such situations, measurement of gamma glutamyl transferase assists, as it is raised only in cholestatic disorders and not in bone diseases [30].
Gamma Glutamyl Transferase (GGT)
GGT is a microsomal enzyme present in hepatocytes and biliary epithelial cells, renal tubules, pancreas and intestine. It is also present in the cell membrane, transporting peptides into the cell across the cell membrane, and is involved in glutathione metabolism. Serum GGT activity is mainly attributed to the hepatobiliary system, even though GGT is found in greater concentration in renal tissue [1].
The normal level of GGT is 9 to 85 U/L [2]. In acute viral hepatitis the levels of GGT will reach the peak in the second or third week of illness and in some patients remain elevated for 6 weeks [30].
Increased levels are seen in about 30% of patients with chronic hepatitis C infection [34]. Other conditions like uncomplicated diabetes mellitus, acute pancreatitis, myocardial infarction, anorexia nervosa, Guillain-Barré syndrome, hyperthyroidism, obesity and dystrophia myotonica cause elevated levels of GGT [30]. Serum GGT elevated to more than 10 times the upper reference value is observed in alcoholism, partly related to structural liver damage, hepatic microsomal enzyme induction or alcoholic pancreatic damage [35]. GGT can also be an early marker of oxidative stress, since the serum antioxidant carotenoids lycopene, α-carotene, β-carotene, and β-cryptoxanthin are inversely associated with the alcohol-induced increase of serum GGT found in moderate and heavy drinkers [36].
GGT levels may be 2-3 times greater than the upper reference value in more than 50% of patients with nonalcoholic fatty liver disease [37]. There is a significant positive correlation between serum GGT and triglyceride levels in diabetes, and the level decreases with treatment, especially with insulin; however, serum GGT does not correlate with hepatomegaly in diabetes mellitus [38]. Serum GGT activity was significantly lower in the second and third trimesters of normal asymptomatic pregnancy [11]. During symptomatic pregnancy, GGT levels were 45 U/L in hyperemesis gravidarum, 17 U/L in pre-eclampsia, and 35 U/L in haemolysis with low platelet count and elevated liver enzymes [4]. The primary usefulness of GGT is limited to ruling out bone disease, as GGT is not found in bone [30].
5′-Nucleotidase (NTP)
NTP is a glycoprotein widely distributed throughout the tissues of the body, localised in the cytoplasmic membrane, which catalyses the release of inorganic phosphate from nucleoside-5′-phosphates. The established normal range is 0 to 15 U/L [1]. Raised NTP activity has been found in patients with obstructive jaundice, parenchymal liver disease, hepatic metastases and bone disease [9]. NTP is a precise marker of early primary or secondary hepatic tumours. ALP levels also increase in conjunction with NTP, indicating intra- or extrahepatic obstruction due to malignancy [39]. Elevation of NTP is found in acute infective hepatitis and also in chronic hepatitis [40]. In acute hepatitis the elevation of NTP activity is greater than in chronic hepatitis, attributed to shedding of plasma membrane with ecto-NTP activity due to cell damage, or leakage of bile containing high NTP activity [41]. Serum NTP activity was slightly but significantly higher in the second and third trimesters of pregnancy [11].
Ceruloplasmin
Ceruloplasmin is synthesized in the liver and is an acute phase protein. It binds copper and serves as the major carrier for copper in the blood [1]. The normal plasma level of ceruloplasmin is 200 to 600 mg/L [2]. The level is elevated in infections, rheumatoid arthritis, pregnancy, non-Wilson liver disease and obstructive jaundice. Low levels may be seen in neonates, Menkes disease, kwashiorkor, marasmus, protein-losing enteropathy, copper deficiency and aceruloplasminemia [3]. In Wilson's disease the ceruloplasmin level is depressed. A decreased rate of synthesis of ceruloplasmin is responsible for copper accumulation in the liver because of a copper transport defect in the Golgi apparatus, since ATP7B is affected [30]. Serum ceruloplasmin levels were elevated in chronic active liver disease (CALD) but lowered in Wilson's disease (WD); hence it is the most reliable routine chemical screening test to differentiate between CALD and WD [42].
α-fetoprotein (AFP)
The AFP gene is highly activated in the foetal liver but is significantly repressed shortly after birth. The mechanisms that trigger AFP transcriptional repression in the postnatal liver are not properly understood. AFP is the major serum protein in the developing mammalian foetus, produced at high levels by the foetal liver and the visceral endoderm of the yolk sac, and at low levels by the foetal gut and kidney. AFP is required for female fertility during embryonic development, protecting the developing female brain from prenatal exposure to estrogen [43]. Re-expression of AFP in response to liver injury and during the early stages of chemical hepatocarcinogenesis has led to the conclusion that maturation arrest of liver-determined tissue stem cells gives rise to hepatocellular carcinomas [44]. The normal level of AFP is 0 to 15 µg/L [2]. An AFP value above 400-500 µg/L has been considered diagnostic for hepatocellular carcinoma (HCC) in patients with cirrhosis. A high AFP concentration (≥ 400 µg/L) in HCC patients is associated with greater tumour size, bilobar involvement, portal vein invasion and a lower median survival rate [45]. Higher serum AFP levels independently predict a lower sustained virological response (SVR) rate among patients with chronic hepatitis C [46]. There are three different AFP variants, differing in their sugar chains (AFP-L1, AFP-L2, AFP-L3). AFP-L1, the non-Lens culinaris agglutinin (LCA)-bound fraction, is the main glycoform of AFP in the serum of patients with nonmalignant chronic liver disease. In contrast, Lens culinaris-reactive AFP, also known as AFP-L3, is the main glycoform of AFP in the serum of HCC patients, and it can be detected in approximately one third of patients with small HCC (< 3 cm) when cut-off values of 10% to 15% are used [47]. AFP-L3 acts as a marker for clearance of HCC after treatment. It has been reported that an AFP-L3 level of 15% or more is correlated with HCC-associated portal vein invasion [48]. Estimating the AFP-L3/AFP ratio is helpful in the diagnosis and prognosis of HCC [49]. There is a direct association between second-trimester maternal serum alpha-fetoprotein levels and the risk of sudden infant death syndrome (SIDS), which may be mediated in part through impaired foetal growth and preterm birth [50].
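The AFP-L3 fraction used in these studies is a simple percentage of total AFP. A sketch follows, with the 10-15% detection cut-off and the 15% portal-vein-invasion threshold from the text wired in as illustrative constants; it is not a diagnostic tool.

```python
def afp_l3_percent(afp_l3_ug_per_l, afp_total_ug_per_l):
    """AFP-L3 fraction expressed as a percentage of total AFP."""
    return 100.0 * afp_l3_ug_per_l / afp_total_ug_per_l

def flag_afp_l3(fraction_percent, cutoff=10.0):
    """Flag fractions above the 10-15% cut-offs quoted in the text;
    >= 15% has been correlated with portal vein invasion. Illustrative."""
    if fraction_percent >= 15.0:
        return "elevated; correlated with portal vein invasion"
    if fraction_percent >= cutoff:
        return "elevated (HCC-associated glycoform predominates)"
    return "below cut-off"
```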
Conclusion
Laboratory liver tests help to elucidate alterations of markers which reflect liver disease. The assessment of enzyme abnormalities (the predominant pattern of enzyme alteration, the magnitude of alteration in the case of the aminotransferases, whether the elevation is isolated or occurs in conjunction with other parameters, the rate of change, and the course of the alteration over a follow-up of 6 months to 1-2 years) helps in the diagnosis of the disease. But a single laboratory liver test is of little value in screening for liver disease, as many serious liver diseases may be associated with normal levels and abnormal levels may be found in asymptomatic healthy individuals. The pattern of enzyme abnormality, interpreted in the context of the patient's symptoms, can aid in directing the subsequent diagnosis.
Strong decays of the lowest bottomonium hybrid within an extended Born–Oppenheimer framework
We analyze the decays of the theoretically predicted lowest bottomonium hybrid H(1P) to open bottom two-meson states. We do it by embedding a quark pair creation model into the Born–Oppenheimer framework, which allows for a unified, QCD-motivated description of bottomonium hybrids as well as bottomonium. A new $^{1}P_{1}$ decay model for H(1P) comes out. The same analysis applied to bottomonium leads naturally to the well-known $^{3}P_{0}$ decay model. We show that H(1P) and the theoretically predicted bottomonium state $\Upsilon(5S)$, whose calculated masses are close to each other, have very different widths for such decays. A comparison with data from $\Upsilon(10860)$, an experimental resonance whose mass is similar to that of $\Upsilon(5S)$ and H(1P), is carried out. Neither a $\Upsilon(5S)$ nor a H(1P) assignment can explain the measured decay widths. However, a $\Upsilon(5S)$-H(1P) mixing may give account of them, supporting previous analyses of dipion decays of $\Upsilon(10860)$ and suggesting possible experimental evidence of H(1P).
Introduction
There is nowadays compelling theoretical evidence, from quenched (without light quarks) lattice QCD calculations, of the existence of quarkonium hybrids ($Q\bar{Q}g$ bound states, where $Q$ is a heavy quark, $b$ or $c$, $\bar{Q}$ its antiquark, and $g$ stands for a gluon) [1]. In contrast, there is as yet no convincing experimental evidence of the existence of hybrids, mostly due to the difficulty of identifying unambiguous distinctive signatures for them. As for any other predicted unstable system, these signatures have to be looked for in the decay products. Hence a thorough theoretical analysis of the dominant and/or exclusive decays of hybrids can be instrumental in unveiling their presence from data. In this regard the lowest bottomonium hybrid ($b\bar{b}g$ bound state), which we shall henceforth call H(1P) (the reason for this notation will be explained later on), may be an ideal system, in spite of its possible mixing with bottomonium ($b\bar{b}$ bound state), for trying to disentangle these signatures, for several reasons.
First, the mass of the b quark, $M_b$, is much larger than the QCD scale, $\Lambda_{QCD}$. This, added to the assumption of no significant string-breaking effects, supports the Born-Oppenheimer (B-O) approximation used for the description of H(1P) in QCD. In this approach, its mass corresponds to the lowest energy level of a Schrödinger equation for $b\bar{b}$ in an excited flavor-singlet B-O potential. This (deepest) hybrid potential, defined by the energy of an excited state of the gluon field in the presence of static $b$ and $\bar{b}$ sources, has been calculated in quenched lattice QCD.
Second, the validity of the B-O approximation implies approximate heavy quark spin symmetry (HQSS) [2] which limits its possible hadronic transitions.
Third, being the lowest hybrid state it cannot decay to other hybrids. Moreover, as the deepest hybrid B-O potential is smaller than the sum of the ground-state flavor singlet B-O potential and the mass of a glueball (bound state of gg) with the appropriate quantum numbers, decay to a bottomonium state, which is an energy level of the ground-state flavor singlet B-O potential, plus a glueball is not expected. Hence, its strong decays are constrained to final states not involving hybrids or glueballs.
In this article we follow these reasonings to analyze the dominant decays of H(1P) to open-flavor meson-meson states. As this hybrid and the $\Upsilon(5S)$ bottomonium state, both with the same quantum numbers $J^{PC}=1^{--}$, are predicted to have about the same mass in the B-O approximation, we study for comparison similar decays from $\Upsilon(5S)$. In order to be consistent (and to make the comparison meaningful) we describe the decays, through pair creation, within an extended B-O framework. For the bottomonium states this description gives rise to the well-known $^3P_0$ decay model. For H(1P) a $^1P_1$ decay model comes out consistently. It is worth emphasizing that this $^1P_1$ decay model is essentially different from the decay models built from constituent-glue or flux-tube hybrid models, see [3,4] and references therein. In brief, in these hybrid models pair creation is spin triplet, whereas in the $^1P_1$ model it is spin singlet. This difference, directly related to the different hybrid descriptions, translates into very distinct selection rules for the forbidden and allowed decays from H(1P) to open bottom two-meson states.
These selection rules, and the ones that can be analogously derived for other hybrids whose disentanglement from data may be even more difficult, could be of great help for the analysis of upcoming data, in particular from JLab (GlueX) and FAIR (PANDA), or for directing new experimental searches.
The decay width ratios for $\Upsilon(5S)$ and H(1P), calculated from the $^3P_0$ and $^1P_1$ decay models respectively, are very different. Neither set of predicted ratios can explain on its own the measured ones for the only experimental candidate with a similar mass, the $1^{--}$ resonance $\Upsilon(10860)$ [5]. In contrast, we show that a $\Upsilon(5S)$-H(1P) mixed state could give reasonable account of the data. This provides additional support to the mixing scenario proposed in [6] for $\Upsilon(10860)$ from the study of its leptonic and dipion decays. It suggests that, altogether, the measured decay widths of $\Upsilon(10860)$ might provide the first (indirect) experimental evidence of the existence of H(1P).
The contents of this article are organized as follows. In Sect. 2 the B-O framework for the description of bottomonium and bottomonium hybrids is briefly revisited. In Sect. 3 their dominant strong decays to open bottom two-meson states are described within an extended B-O framework. Next, in Sect. 4 we apply this description to the calculation of the widths of the lowest bottomonium hybrid H (1P) and the Υ (5S) bottomonium state. The results obtained are compared with data from Υ (10860) in Sect. 5. This comparison suggests that Υ (10860) could be a mixing of these two components. Finally, in Sect. 6, our main conclusions are summarized.
Born-Oppenheimer approximation
The B-O approximation, initially developed for the description of molecules from electromagnetic interactions [7], has been shown to be also well suited for the description of heavy-quark meson bound states from strong interactions [1,2]. The reason is that it allows for the implementation of the strong interaction theory (QCD) dynamics through quenched (with quarks and gluons but no light quarks) lattice QCD calculations. (For a connection of the B-O approximation with effective field theory, see [8].)

To understand how the B-O approximation works, let us briefly recall the main steps in its construction. For this purpose let us consider a meson system containing a heavy quark-antiquark pair ($Q\bar{Q}$) interacting with light fields (gluons), with Hamiltonian

$$H = K_{Q\bar{Q}} + H^{\mathrm{lf}}_{Q\bar{Q}},$$

where $K_{Q\bar{Q}}$ is the $Q\bar{Q}$ kinetic energy operator and $H^{\mathrm{lf}}_{Q\bar{Q}}$ includes the light field energy operator and the $Q\bar{Q}$-light-field interaction. A bound state $|\psi\rangle$ is a solution of the characteristic equation

$$H|\psi\rangle = E|\psi\rangle,$$

where $E$ is the energy of the state. Notice that $|\psi\rangle$ contains information on both the $Q\bar{Q}$ and the light fields.

The first step in building the B-O approximation consists in solving the dynamics of the light fields by neglecting the $Q\bar{Q}$ motion, i.e., setting the kinetic energy term $K_{Q\bar{Q}}$ equal to zero (static limit). This corresponds to the limit where $Q$ and $\bar{Q}$ are infinitely massive, which is justified because the $Q$ and $\bar{Q}$ masses, $m_Q$ and $m_{\bar{Q}}$, are much bigger than the QCD scale $\Lambda_{QCD}$, the energy scale associated with the light fields. In this limit the quark-antiquark relative position $\mathbf{r}$ is fixed, ceasing to be a dynamical variable. Then, in the center-of-mass reference frame, $H^{\mathrm{lf}}_{Q\bar{Q}}$ depends operationally only on the light fields, and parametrically on $\mathbf{r}$; we indicate this by renaming it $H^{\mathrm{lf}}_{\mathrm{static}}(\mathbf{r})$. The dynamics of the light fields for any fixed value of $\mathbf{r}$ can then be solved from

$$H^{\mathrm{lf}}_{\mathrm{static}}(\mathbf{r})\,|\alpha;(\mathbf{r})\rangle = V_\alpha(r)\,|\alpha;(\mathbf{r})\rangle,$$

where the $|\alpha;(\mathbf{r})\rangle$ are the light field eigenstates that depend parametrically on $\mathbf{r}$ and are characterized by the quantum numbers $\alpha \equiv (\Lambda, \eta, \epsilon)$. These quantum numbers have been detailed elsewhere, see for example [2]. Thus, $\Lambda$ stands for the modulus of the eigenvalue of $\hat{\mathbf{r}}\cdot\mathbf{J}_{\mathrm{lf}}$, with $\mathbf{J}_{\mathrm{lf}}$ the total angular momentum of the light fields; $\eta$ for the eigenvalue of $C_{\mathrm{lf}}P_{\mathrm{lf}}$, with $C_{\mathrm{lf}}$ the light field charge-conjugation operator and $P_{\mathrm{lf}}$ the parity operator spatially inverting the light fields through the midpoint between $Q$ and $\bar{Q}$; and $\epsilon$ for the eigenvalue of the operator reflecting the light fields through a plane containing $Q$ and $\bar{Q}$.
As for the eigenvalues $V_\alpha(r)$, depending parametrically on $r$, they correspond to the energies of stationary states of the light fields in the presence of static $Q$ and $\bar{Q}$ sources placed at distance $r$. These eigenvalues have been calculated ab initio in quenched lattice QCD [1].
So, the ground state of the light fields is usually denoted by $\Sigma_g^+$, where $\Sigma$ stands for $\Lambda = 0$, the subscript $g$ for $\eta = +1$ and the superscript $+$ for $\epsilon = +1$, and the corresponding eigenvalue reads $V_{\Sigma_g^+}(r)$. Up to spin-dependent terms, which we shall not consider, this eigenvalue mimics the form of the phenomenological Cornell (central) potential, see for example [9]. As for the first excited state of the light fields, it is denoted by $\Pi_u^+$, where $\Pi$ stands for $\Lambda = 1$ and the subscript $u$ for $\eta = -1$, and its corresponding eigenvalue by $V_{\Pi_u^+}(r)$ which, up to spin-dependent terms, is also a central potential.
The second step in building the B-O approximation consists in reintroducing the $Q\bar{Q}$ motion and assuming that the light fields respond almost instantaneously to the motion of the quark and antiquark. This is the adiabatic approximation, in which non-adiabatic coupling terms in the kinetic energy are neglected. Then the bound state equation factorizes into a set of decoupled single-channel Schrödinger-like equations for $Q\bar{Q}$, one per light field eigenstate $|\alpha;(\mathbf{r})\rangle$, where the potential is given by the corresponding eigenvalue $V_\alpha(r)$. The bound state solutions $|\psi\rangle$ can be characterized as [2]

$$|\psi\rangle = |E, L, m_L, s_{Q\bar{Q}}, m_{s_{Q\bar{Q}}}; \alpha\rangle,$$

where $L$ and $m_L$ indicate that they are eigenstates of $\mathbf{L}^2$ and $L_z$, with $\mathbf{L}$ an angular momentum of the system defined as $\mathbf{L} = \mathbf{l}_{Q\bar{Q}} + \mathbf{J}_\alpha$, where $\mathbf{l}_{Q\bar{Q}}$ is the orbital angular momentum of $Q\bar{Q}$ and $\mathbf{J}_\alpha$ is the total angular momentum of the light fields. $s_{Q\bar{Q}}$ and $m_{s_{Q\bar{Q}}}$ stand for the spin of $Q\bar{Q}$ and its third component, and $R_{nL}(r)Y_{Lm_L}(\hat{r})$ for the wave function at $\mathbf{r}$.
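To make the single-channel picture concrete, the following Python sketch solves the radial Schrödinger equation for $b\bar{b}$ in a Cornell-type $\Sigma_g^+$ potential by finite differences. All parameter values ($b$-quark mass, Coulomb strength, string tension) are illustrative placeholders, not the inputs of this paper; the same routine applies to a hybrid channel by swapping in a parametrization of $V_{\Pi_u^+}(r)$ and the $L = 1$ centrifugal term.

```python
# Minimal sketch: bound states of b-bbar in a single B-O channel.
# Units: GeV for energies/masses, GeV^-1 for distances (hbar = c = 1).
# The potential parameters below are illustrative, not the paper's.
import numpy as np
from scipy.linalg import eigh_tridiagonal

m_b = 4.8            # illustrative b-quark mass (GeV)
mu = m_b / 2.0       # reduced mass of the b-bbar pair
alpha = 0.4          # illustrative Coulomb strength
sigma = 0.21         # illustrative string tension (GeV^2)

def V_cornell(r):
    """Central part of the ground-state (Sigma_g^+) B-O potential."""
    return -4.0 * alpha / (3.0 * r) + sigma * r

def radial_levels(V, L=0, n_levels=3, r_max=30.0, n_pts=3000):
    """Diagonalize -u''/(2 mu) + [V(r) + L(L+1)/(2 mu r^2)] u = E u
    on a uniform grid with u(0) = u(r_max) = 0."""
    r = np.linspace(r_max / n_pts, r_max, n_pts)
    h = r[1] - r[0]
    diag = 1.0 / (mu * h**2) + V(r) + L * (L + 1) / (2.0 * mu * r**2)
    off = -1.0 / (2.0 * mu * h**2) * np.ones(n_pts - 1)
    E, _ = eigh_tridiagonal(diag, off, select='i',
                            select_range=(0, n_levels - 1))
    return E

# Energies (relative to 2 m_b) of the lowest S-wave b-bbar levels:
for n, E in enumerate(radial_levels(V_cornell, L=0), start=1):
    print(f"nS level {n}: mass ~ {2 * m_b + E:.3f} GeV")
```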
Notice that the energy of the state $E$ depends on the quantum numbers $n$ and $L$. These solutions are also eigenstates of parity, with eigenvalue

$$P = (-1)^{\Lambda + L + 1}, \qquad (5)$$

and of charge conjugation, with eigenvalue

$$C = \eta\,(-1)^{\Lambda + L + s_{Q\bar{Q}}}.$$

From them one can easily build physical states $|E, L, s_{Q\bar{Q}}, J, m_J; \alpha\rangle$ characterized by quantum numbers $J^{PC}$, where $\mathbf{J} = \mathbf{L} + \mathbf{s}_{Q\bar{Q}}$ is the total angular momentum of the system. For $Q = b$ and the ground state of the light fields, these $J^{PC}$ states correspond to bottomonium $b\bar{b}$. As $J_{\Sigma_g^+} = 0$ one has $L = l_{b\bar{b}}$ and $J = j_{b\bar{b}}$, where $j_{b\bar{b}}$ is the total angular momentum of $b\bar{b}$. In particular, for $J^{PC} = 1^{--}$ one has $s_{b\bar{b}} = 1$ and $L = l_{b\bar{b}} = 0, 2$.
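As a quick consistency check, the short sketch below evaluates $J^{PC}$ for the two cases discussed in the text; note that the charge-conjugation formula is the standard B-O one assumed here, since it is reconstructed from the surrounding prose.

```python
# P and C of a B-O level from (Lambda, eta, L, s_QQbar), per Eq. (5)
# and the (assumed standard) charge-conjugation formula above.
def parity(Lam, L):
    return (-1) ** (Lam + L + 1)

def charge_conj(Lam, eta, L, s):
    return eta * (-1) ** (Lam + L + s)

def jpc(J, Lam, eta, L, s):
    sgn = lambda x: '+' if x > 0 else '-'
    return f"{J}^{sgn(parity(Lam, L))}{sgn(charge_conj(Lam, eta, L, s))}"

# Upsilon-like bottomonium: Sigma_g^+ (Lambda=0, eta=+1), L=0, s=1, J=1
print(jpc(1, 0, +1, 0, 1))   # -> 1^--
# Lowest hybrid H(1P): Pi_u (Lambda=1, eta=-1), L=1, s=0, J=1
print(jpc(1, 1, -1, 1, 0))   # -> 1^--
```

Both states come out $1^{--}$, which is why the hybrid can mix with $\Upsilon$ states despite its very different internal structure.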
For $Q = b$ and the first excited state of the light fields, with $J_{\Pi_u^+} = 1$, the $J^{PC}$ states correspond to bottomonium hybrids $b\bar{b}g$. The lowest energy state has $J^{PC} = 1^{--}$, $s_{b\bar{b}} = 0$ and $L = 1$. This is why we use H(1P) to denote this lowest bottomonium hybrid (H for hybrid and P for $L = 1$).
Strong decays to open bottom two-meson states
If allowed, the dominant strong decays of bottomonium and bottomonium hybrids are to open bottom two-meson states. Although the B-O description of bottomonium and bottomonium hybrids incorporates QCD dynamics through quenched lattice results, the QCD treatment of these decays requires unquenched (with light quarks) lattice QCD inputs, from which the mixing potentials with open bottom two-meson states can be derived. These mixing potentials are related to the non-adiabatic coupling terms which are neglected in the B-O approximation. Noticeable progress in the incorporation of the non-adiabatic coupling terms within a framework that goes beyond the B-O approximation has been recently reported [10,11]. However, a calculation involving the non-adiabatic coupling terms requires more unquenched lattice data than are currently at our disposal [12,13]. In particular, no lattice data for the mixing of bottomonium hybrids are available. This makes such an ab initio treatment unaffordable at present.
Instead, as the underlying physical mechanism is light-quark pair creation and string breaking, we shall assume that the decay proceeds first through light quark-antiquark ($q\bar{q}$) pair creation in a transition from an initial light field (gluon) B-O configuration to a light field (gluon and light quark) configuration given by the direct product of a B-O one and a $q\bar{q}$ state. We call this product an extended B-O configuration.
The application of general conservation arguments to this transition tells us the possible quantum numbers of the emitted pair. Then, the recombination of $q\bar{q}$ with $b\bar{b}$ gives rise to an open bottom two-meson state (string breaking). Although this two-step process is only an approximation to the QCD mixing, it seems reasonable to think that, as this mixing takes place via pair creation, the quantum numbers of the pair will be the same as those we derive from general conservation laws. These quantum numbers define the decay model, as we show in what follows.
Bottomonium
For bottomonium it is natural to assume, because of its $b\bar{b}$ content, that a flavor and color singlet $q\bar{q}$ pair is emitted within the hadronic medium. In the extended B-O framework this emission corresponds to the transition

$$|(l_{b\bar{b}}, s_{b\bar{b}});\, \Sigma_g^+\rangle \;\longrightarrow\; |(l'_{b\bar{b}}, s_{b\bar{b}});\, (l_{q\bar{q}}, s_{q\bar{q}}, j_{q\bar{q}}),\, \Sigma_g^+\rangle,$$

where we have used HQSS, so that $s_{b\bar{b}}$ is conserved.
Conservation of parity implies

$$(-1)^{l_{b\bar{b}}} = (-1)^{l'_{b\bar{b}} + l_{q\bar{q}} + 1}, \qquad (10)$$

and conservation of charge conjugation

$$(-1)^{l_{b\bar{b}} + s_{b\bar{b}}} = (-1)^{l'_{b\bar{b}} + s_{b\bar{b}} + l_{q\bar{q}} + s_{q\bar{q}}}. \qquad (11)$$

By substituting (10) in (11) we get $(-1)^{s_{q\bar{q}}} = -1$, i.e., $s_{q\bar{q}}$ must be odd; for a single quark-antiquark pair this means $s_{q\bar{q}} = 1$. If we reasonably assume that the most favored emission is for $j_{q\bar{q}}$ having its minimal value, $j_{q\bar{q}} = 0$, then $l_{q\bar{q}} = 1$ (and $l'_{b\bar{b}} = l_{b\bar{b}}$), so that the emitted $q\bar{q}$ pair is in a $^3P_0$, or $0^{++}$, state. Then the recombination of the color singlet $q\bar{q}$ with the color singlet $b\bar{b}$ gives rise to the final $b\bar{q}$ and $\bar{b}q$ mesons.
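This conclusion can be checked by brute force. The sketch below enumerates low-lying $(l_{q\bar{q}}, s_{q\bar{q}}, j_{q\bar{q}})$ assignments, keeps those with $s_{q\bar{q}}$ odd (the constraint just derived) and $l_{q\bar{q}}$ odd (which follows from Eq. (10) for $l'_{b\bar{b}} = l_{b\bar{b}}$), and picks the minimal $j_{q\bar{q}}$:

```python
# Enumerate allowed quantum numbers of the created q-qbar pair for
# bottomonium decay: s odd (from Eqs. (10)-(11)) and, taking
# l'_bb = l_bb, l odd as well.
def allowed_pairs(s_parity, l_parity, l_max=3):
    pairs = []
    for l in range(l_max + 1):
        for s in (0, 1):                      # one quark-antiquark pair
            if s % 2 != s_parity or l % 2 != l_parity:
                continue
            for j in range(abs(l - s), l + s + 1):
                pairs.append((l, s, j))
    return sorted(pairs, key=lambda p: (p[2], p[0]))  # minimal j first

# Bottomonium case: s odd, l odd -> the j-minimal entry is
# (l, s, j) = (1, 1, 0), i.e. a 3P0 (0++) pair.
print(allowed_pairs(s_parity=1, l_parity=1)[0])
```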
This two-step process defines the decay model for bottomonium within the extended B-O framework. Actually it corresponds to the so-called $^3P_0$ decay model proposed a long time ago [14,15] within a constituent quark model framework. The $^3P_0$ model, detailed for bottomonium decays in [16], has been extensively applied to quarkonium (bottomonium and charmonium) decays to open-flavor two-meson states.
This correspondence with the $^3P_0$ decay model is directly related to the equivalence of the B-O and the constituent quark model (with a Cornell potential) frameworks for the description of bottomonium. The nice feature of the extended B-O framework is that the possible quantum numbers for the $q\bar{q}$ pair are derived from general conservation arguments, which also makes us confident in their validity despite our approximate treatment.
Lowest bottomonium hybrid
For the lowest bottomonium hybrid H(1P) it is natural to assume, because of its gluon content, that a color octet light quark-antiquark pair is emitted within the hadronic medium. In the extended B-O framework this emission corresponds to the transition

$$|(L = 1, s_{b\bar{b}});\, \Pi_u^+\rangle \;\longrightarrow\; |(l_{b\bar{b}}, s_{b\bar{b}});\, (l_{q\bar{q}}, s_{q\bar{q}}, j_{q\bar{q}}),\, \Sigma_g^+\rangle.$$
Conservation of parity implies

$$(-1)^{l_{b\bar{b}} + l_{q\bar{q}}} = -1, \qquad (14)$$

so that $l_{b\bar{b}} + l_{q\bar{q}}$ is odd, and conservation of charge conjugation

$$(-1)^{l_{b\bar{b}} + l_{q\bar{q}} + s_{q\bar{q}}} = -1, \qquad (15)$$

so that $l_{b\bar{b}} + l_{q\bar{q}} + s_{q\bar{q}}$ is odd, and hence $s_{q\bar{q}}$ is even; for a single pair this means $s_{q\bar{q}} = 0$. Then, using $J_{\Sigma_g^+} = 0$, the total angular momentum conservation reads $\mathbf{J} = \mathbf{l}_{b\bar{b}} + \mathbf{l}_{q\bar{q}}$. As $\hat{\mathbf{r}}\cdot\mathbf{l}_{b\bar{b}} = 0$, because the orbital angular momentum of $b\bar{b}$ is orthogonal to the separation vector of $b$ and $\bar{b}$, we have $\hat{\mathbf{r}}\cdot\mathbf{J} = \hat{\mathbf{r}}\cdot\mathbf{l}_{q\bar{q}}$. On the other hand $\mathbf{J} = \mathbf{L} + \mathbf{s}_{b\bar{b}} = \mathbf{L} = \mathbf{l}_{b\bar{b}} + \mathbf{J}_{\Pi_u^+}$, so that $\hat{\mathbf{r}}\cdot\mathbf{J} = \hat{\mathbf{r}}\cdot\mathbf{J}_{\Pi_u^+}$. Therefore $\hat{\mathbf{r}}\cdot\mathbf{J}_{\Pi_u^+} = \hat{\mathbf{r}}\cdot\mathbf{l}_{q\bar{q}}$. Recalling that the modulus of the eigenvalue of $\hat{\mathbf{r}}\cdot\mathbf{J}_{\Pi_u^+}$ is $\Lambda_{\Pi_u^+} = 1$, we conclude that $j_{q\bar{q}} = l_{q\bar{q}} \geq \Lambda_{\Pi_u^+} = 1$. If we reasonably assume that the most favored emission is for $j_{q\bar{q}}$ having its minimal value, then $j_{q\bar{q}} = 1$, $s_{q\bar{q}} = 0$ and $l_{q\bar{q}} = 1$, so that the emitted color octet $q\bar{q}$ pair is in a $^1P_1$, or $1^{+-}$, state. Then $l_{b\bar{b}}$ is even. For the lowest hybrid transition it is quite natural to assign $j_{b\bar{b}} = l_{b\bar{b}} = 0$, so that the color octet $b\bar{b}$ pair is in a $0^{-+}$ state.
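The same brute-force enumeration as before, now with the hybrid constraints ($s_{q\bar{q}}$ even, $l_{q\bar{q}}$ odd for the $l_{b\bar{b}} = 0$ assignment, and $j_{q\bar{q}} \geq 1$), singles out the $^1P_1$ pair; a minimal sketch:

```python
# Hybrid case: s_qq even, l_qq odd (taking l_bb = 0 in Eq. (14)),
# and j_qq >= 1 from the r.J argument in the text.
def allowed_hybrid_pairs(l_max=3):
    pairs = []
    for l in range(1, l_max + 1, 2):          # l odd
        for s in (0,):                        # s even -> s = 0 for one pair
            for j in range(abs(l - s), l + s + 1):
                if j >= 1:
                    pairs.append((l, s, j))
    return sorted(pairs, key=lambda p: (p[2], p[0]))

# The j-minimal entry is (l, s, j) = (1, 0, 1): a 1P1 (1+-) pair.
print(allowed_hybrid_pairs()[0])
```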
Notice that the quantum numbers of the emitted pair, $1^{+-}$, are the same as those of the ground state gluelump, which is the limit of the gluon configuration $\Pi_u^+$ when $r \to 0$ [2]. The second step is the recombination of the color octet $q\bar{q}$ with the color octet $b\bar{b}$, giving rise to $b\bar{q}$ and $\bar{b}q$ mesons.
This two-step process defines the $^1P_1$ model for the decay of H(1P) into open bottom two-meson states within the extended B-O framework. Let us remark that the quantum numbers of the emitted pair, $1^{+-}$, are different from the $1^{--}$ used in decay models of hybrids based on constituent glue, see [3,4] and references therein. Otherwise said, the bottomonium hybrid descriptions provided by the B-O approximation in QCD and by constituent gluon models are not equivalent. This difference is crucial to establish the forbidden and allowed decays of H(1P) to open bottom two-meson states, as we show next.
Decay widths
Let us consider the decay H(1P) → C + F, where C is a $b\bar{q}$ meson state $B^{(*)}_{(s)}$ and F is a $\bar{b}q$ meson state $\bar{B}^{(*)}_{(s)}$. In parallel with the $^3P_0$ decay model for bottomonium, we shall characterize the $q\bar{q}$ emission by a real constant probability amplitude: $\sqrt{2}\gamma_1$ for $u\bar{u}$ or $d\bar{d}$ and $\sqrt{2}\gamma'_1$ for $s\bar{s}$, where the $\sqrt{2}$ is a color normalization factor. Notice that the color matrix element in the recombination of the emitted $1^{+-}$ color octet $q\bar{q}$ with the $0^{-+}$ color octet $b\bar{b}$ is $1/\sqrt{2}$, so that the total (emission + recombination) color factor is $\sqrt{2}\cdot\frac{1}{\sqrt{2}} = 1$, as it corresponds to the decay of an initial color singlet into final color singlet states. To complete the calculation of the amplitude for the recombination process (of $q\bar{q}$ with the color octet $b\bar{b}$) we need the radial wave function of the color octet $b\bar{b}$. We shall approximate it by that of the hybrid, $R_{n=1,L=1}(r)$. This is justified in the limit $r \to 0$, where the $r$-dependent interaction potential between $b\bar{b}$ and the gluon field becomes negligible against the centrifugal barrier. As the orbital angular momentum of $b\bar{b}$ is zero, the orbital part of the wave function is that of the gluon field and the radial part of the wave function is effectively that of the color octet $b\bar{b}$ configuration. This wave function approximation holds as long as the $\Pi_u^+$ configuration remains close to the gluelump, which may occur up to a distance of around 0.5 fm [2].
The calculation of the width follows exactly the same procedure used in the $^3P_0$ model, detailed in [14][15][16]. In the rest frame of H(1P), and for the emission of a $u\bar{u}$ or $d\bar{d}$ pair (for the emission of $s\bar{s}$ one should substitute $\gamma_1$ by $\gamma'_1$), the width (17) is expressed in terms of $M_H$, the mass of the hybrid, of $E_C$, the energy of the C meson, given by $E_C = \sqrt{M_C^2 + k^2}$ with $k$ the modulus of the three-momentum of C (or F), and of the amplitude $\mathcal{M}$ given in (18), where $I$ and $m_I$ stand for isospin and its third component, $s$ for spin, $J$ for the total angular momentum of the initial state, and $l_{b\bar{b}} = 0$, $s_{b\bar{b}} = 0$, $j_{b\bar{b}} = 0$, $s_{q\bar{q}} = 0$, $l_{q\bar{q}} = 1$, $j_{q\bar{q}} = 1$. The square brackets in (18) are related to $9j$ symbols, with $\hat{j} \equiv 2j + 1$. The spatial integral $\mathcal{J}^+$ in (20) is built from $u$, the Fourier transform of the radial wave function.
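A small kinematics helper makes the phase-space ingredients of the width explicit: $k$ follows from the Källén function of the three masses, and $E_C = \sqrt{M_C^2 + k^2}$. The masses below are illustrative round numbers (the hybrid mass quoted in the text and PDG-like B-meson masses), not the precise inputs of the calculation.

```python
# Two-body decay kinematics for H(1P) -> C + F in the H rest frame.
# Masses in MeV; the values below are illustrative.
from math import sqrt

def breakup_momentum(M, m1, m2):
    """k = sqrt(lambda(M^2, m1^2, m2^2)) / (2 M), Kallen function lambda."""
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    if lam < 0:
        return 0.0            # channel kinematically closed
    return sqrt(lam) / (2 * M)

M_H = 10888.0                  # hybrid mass used in the text (MeV)
m_B, m_Bstar = 5279.0, 5325.0  # illustrative B and B* masses (MeV)

for name, (m1, m2) in {"B Bbar": (m_B, m_B),
                       "B* Bbar*": (m_Bstar, m_Bstar)}.items():
    k = breakup_momentum(M_H, m1, m2)
    E_C = sqrt(m1**2 + k**2)
    print(f"{name}: k = {k:.0f} MeV, E_C = {E_C:.0f} MeV")
```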
For the sake of simplicity we shall henceforth use a shorthand notation for the widths, $\Gamma_{B\bar{B}} \equiv \Gamma_{H(1P)\to B\bar{B}}$ and so on. From (18), and taking into account that the three elements in the same column of a $9j$ symbol have to satisfy the triangular rule for the symbol not to vanish, we immediately infer from $s_{q\bar{q}} = 0$ (and $s_{b\bar{b}} = 0$) that H(1P) cannot decay to final states with mesons of different spin, so that $\Gamma_{B\bar{B}^*} = \Gamma_{B_s\bar{B}_s^*} = 0$ (Eq. (23)). As for the calculation of the widths for the other kinematically allowed decays, to $B\bar{B}$, $B^*\bar{B}^*$ and $B_s\bar{B}_s$, $B^*_s\bar{B}^*_s$, we shall use for H(1P) the mass 10888 MeV and the radial wave function calculated in reference [6]. Let us recall that the excited string potential used in [6] differs from the lattice QCD parametrization of the hybrid potential [2] only at short distances, where both are dominated by the centrifugal barrier given by $L = 1$. Therefore, the difference between these two parametrizations has no appreciable effect on the wave function of the lowest bottomonium hybrid. For the final mesons, we use their experimental masses and, for simplicity, as usual, Gaussian radial wave functions with an average rms radius of 0.45 fm, as obtained from a Cornell potential model.
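The spin-recoupling origin of this selection rule can be checked numerically. Assuming the standard recoupling of the $(b\bar{b})(q\bar{q})$ spins into the final meson spins $(s_C, s_F)$, the relevant $9j$ symbol has $s_{b\bar{b}} = s_{q\bar{q}} = 0$ in its first two rows; the triangular rules then force $S = 0$ and $s_C = s_F$. A sketch with sympy (row/column ordering conventions may differ between references, but the vanishing pattern does not):

```python
# Check: with s_bb = s_qq = 0 the spin-recoupling 9j symbol vanishes
# unless the final meson spins are equal (s_C = s_F) and S = 0.
from sympy import Rational
from sympy.physics.wigner import wigner_9j

half = Rational(1, 2)

def recoupling(s_C, s_F, S):
    try:
        return wigner_9j(half, half, 0,   # (b, bbar)  -> s_bb = 0
                         half, half, 0,   # (q, qbar)  -> s_qq = 0
                         s_C,  s_F,  S)   # (C, F)     -> total S
    except ValueError:                    # non-triangular input
        return 0

for s_C in (0, 1):
    for s_F in (0, 1):
        for S in range(abs(s_C - s_F), s_C + s_F + 1):
            print(s_C, s_F, S, recoupling(s_C, s_F, S))
# Only (0,0,0) and (1,1,0) survive: B Bbar* (s_C != s_F) is forbidden.
```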
With these ingredients the widths of the allowed channels and their ratios follow (Eq. (24)). Let us emphasize again that the hybrid decay pattern resulting from Eqs. (23) and (24) is very different from the one predicted by constituent-glue or flux-tube models. In these models, as a consequence of the assumption of a spin-triplet light quark pair, $s_{q\bar{q}} = 1$, our selection rule (23) does not appear. Instead, a different selection rule, establishing the suppression of the coupling of the $1^{--}$ hybrid to two S-wave mesons, e.g., $B^{(*)}\bar{B}^{(*)}$, comes out. More concretely, in [17] it has been proved that when $s_{q\bar{q}} = 1$ the amplitude $\mathcal{M}$ in (17) for the decay into two S-wave mesons vanishes. As in our case $s_{q\bar{q}} = 0$, this selection rule does not apply. On the other hand, in [18] the authors consider the same decay into two S-wave mesons and reach the same conclusion from their characterization of the hybrid as a bound state of a $0^{-+}$ $c\bar{c}$ and a gluon in P-wave, which is referred to as a $1^{+-}$ magnetic gluon. This magnetic gluon then decays into a color octet spin-one S-wave light $q\bar{q}$ pair. The fact that the orbital angular momentum of the gluon in the hybrid is $l_g = 1$, determining the symmetry properties of the hybrid wave function entering the calculation of the amplitude, is instrumental for the derivation of the selection rule. A direct comparison of our B-O two-body description of the hybrid with the constituent-model three-body description of that reference is not straightforward, but in this context we can observe that if instead one considered the hybrid as being made of a $0^{-+}$ $c\bar{c}$ and a $1^{+-}$ gluelump in an S-wave, with this gluelump decaying into a color octet spin-zero P-wave light $q\bar{q}$ pair, as in our case, then the orbital angular momentum of the gluelump would be zero and the selection rule would not appear.
It is very illustrative to compare these results with the corresponding decay widths for $\Upsilon(5S)$, with a calculated mass of 10865 MeV, quite close to the hybrid one [6]. In this case, using the $^3P_0$ decay model and the same kind of self-explanatory simplified notation, we obtain the widths (25), (26), from which the ratios (27) follow. The comparison of Eqs. (23), (25) with Eq. (27) makes clear the very different decay patterns of H(1P) and $\Upsilon(5S)$. Next we analyze whether these patterns could provide an explanation for current data.
Comparison with data
The only experimental data available to possibly check our results come from $\Upsilon(10860)$ [5], a $1^{--}$ bottomonium-like resonance produced in $e^+e^-$ annihilation at a c.o.m. energy of $10889.9^{+3.2}_{-2.6}$ MeV, pretty close to the calculated masses of the lowest bottomonium hybrid H(1P) and the $\Upsilon(5S)$ bottomonium state, and not far from the $B^*_s\bar{B}^*_s$ threshold. Although the measured leptonic width $\Gamma_{\Upsilon(10860)\to e^+e^-} = 0.31 \pm 0.07$ keV is compatible within errors with an assignment of this resonance to a pure $\Upsilon(5S)$ state, this option is ruled out because $\Upsilon(10860)$ has dipion decays to $\pi^+\pi^- h_b((1,2)P)$ and $\pi^+\pi^-\Upsilon((1,2,3)S)$ with similar production rates. As $s_{h_b} = 0$ and $s_\Upsilon = 1$, HQSS implies that $\Upsilon(10860)$ must have both $s_{b\bar{b}} = 0$ and $s_{b\bar{b}} = 1$ components.
As an alternative, in reference [6] it has been proposed that $\Upsilon(10860)$ could be a mixing of H(1P) and $\Upsilon(5S)$, $|\Upsilon(10860)\rangle = \cos\theta\,|\Upsilon(5S)\rangle + \sin\theta\,|H(1P)\rangle$ (for mixing in nonrelativistic effective field theories see [21,22]). From the measured widths we obtain the experimental ratios (28)-(30), where the quoted errors come from data uncertainty only. If we use this information, together with the calculated widths (24) and (26), then compatibility with the experimental data, Eqs. (28) and (29), translates into two independent conditions to be satisfied in terms of $x \equiv \frac{\gamma_1}{\gamma_0}\tan\theta$. Each of these conditions admits two solutions. Full consistency with existing data requires that there exist a pair of overlapping solutions, so that there is a value of $x$ that reproduces both experimental ratios within the error bars. The fact that the two closest solutions,

$$x = 0.32 \pm 0.06, \qquad x = -0.02 \pm 0.10, \qquad (38)$$

(errors from data uncertainty only) do not overlap shows some tension (around $2\sigma$) with the data. This may be attributed to the approximations we have followed, among them having neglected the possible momentum dependence of $\gamma_0$ and $\gamma_1$. Following a Bayesian approach, we work under the hypothesis that these two solutions are nevertheless two independent measurements of the same quantity, so we identify the best fit of $x$ with their weighted average, where the weighted-average error has been doubled in order to account for the aforementioned uncertainties.
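The inverse-variance weighted average of the two solutions in (38) is easy to reproduce; a minimal sketch (the doubling of the final error is the prescription stated above, and the printed value is what this sketch computes from the quoted solutions, not a number copied from elsewhere):

```python
# Inverse-variance weighted average of the two solutions of Eq. (38),
# with the resulting error doubled as discussed in the text.
from math import sqrt

solutions = [(0.32, 0.06), (-0.02, 0.10)]   # (value, error)

weights = [1.0 / err**2 for _, err in solutions]
x_avg = sum(w * val for (val, _), w in zip(solutions, weights)) / sum(weights)
x_err = 2.0 / sqrt(sum(weights))            # doubled weighted-average error

print(f"x = {x_avg:.2f} +/- {x_err:.2f}")   # -> x = 0.23 +/- 0.10
```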
As for the decays to bottom-strange mesons, we use the selection rule $\Gamma_{H(1P)\to B_s\bar{B}_s^*} = 0$ together with the calculated widths to obtain the corresponding mixed-state ratio. Then, from the experimental ratio, Eq. (30), we extract the strange pair creation strength; as before, we have doubled the error bar to account for the uncertainty coming from our approximations. Then, for example, for the maximum value of $\sin^2\theta = 0.1$ we would have for bottomonium $\gamma_0 = 2.0 \pm 0.2$, in good accord with the value commonly used in the literature, see for instance [16], and $\gamma'_0 = 0.75 \pm 0.15$, so that $\gamma'_0 < \gamma_0$, as expected from the more probable emission of a $u\bar{u}$ or $d\bar{d}$ pair than of an $s\bar{s}$ pair. These results show that a fully consistent explanation of $\Upsilon(10860)$ as mainly being a mixing of $\Upsilon(5S)$ and the lowest bottomonium hybrid is feasible. It can be easily inferred that this mixing would also be necessary to explain the data if an additional $\Upsilon(5S)$-$\Upsilon(4D)$ mixing were implemented, since the decays to $B^*\bar{B}^*$ and $B^*_s\bar{B}^*_s$ from $\Upsilon(4D)$ are even more suppressed than from $\Upsilon(5S)$.
Therefore, a $\Upsilon(5S)$-H(1P) mixing may give reasonable account of the observed leptonic, dipion and open-bottom two-meson decays of $\Upsilon(10860)$. We may tentatively interpret this as indirect experimental evidence of the existence of the lowest bottomonium hybrid H(1P).
A remaining question is whether a direct detection of the hybrid-dominated orthogonal combination, which we shall call H(10860),

$$|H(10860)\rangle = \cos\theta\,|H(1P)\rangle - \sin\theta\,|\Upsilon(5S)\rangle,$$

is feasible or not. From $0 < \sin^2\theta \lesssim 0.1$ and our previous results we can easily evaluate a lower limit on the total width of H(10860). This width, of at least some hundreds of MeV, together with an expected larger width for the 2P hybrid state with a calculated mass around 11080 MeV, makes a clean experimental signature of H(10860) very unlikely in the foreseeable future.
Summary
A thorough study of the decays of the lowest bottomonium hybrid, which we have called H(1P), to open bottom two-meson channels has been carried out by implementing light-quark pair creation within an extended B-O framework. The application of conservation laws for strong interactions fixes the possible quantum numbers of the pair. Thus, a $^1P_1$ decay model has been built. From it, selection rules for the forbidden and allowed decays and quantitative ratios of decay widths have been predicted. These predictions differ greatly from the ones obtained from the $^3P_0$ decay model resulting for bottomonium decays within the same framework. In particular, the calculated ratios for H(1P) and for the bottomonium state $\Upsilon(5S)$, with the same quantum numbers $J^{PC} = 1^{--}$ and predicted masses in the B-O approximation close to each other, have been compared with each other and with the measured widths of $\Upsilon(10860)$, an experimental $1^{--}$ resonance with a similar mass. This comparison indicates that $\Upsilon(10860)$ should not be assigned to a pure $\Upsilon(5S)$ state, in accord with the indications from heavy quark spin symmetry when applied to its observed dipion decays. The need for a heavy-quark spin zero component, apart from the $\Upsilon(5S)$ spin one, has led to several proposals about the nature of $\Upsilon(10860)$ in the literature. We have centered on a $\Upsilon(5S)$-H(1P) mixing scenario that gives reasonable quantitative account of the dipion decay widths. We have shown that such a mixing could also explain the observed decays of $\Upsilon(10860)$ to open bottom two-meson channels. It is worth emphasizing that many of our results, namely the selection rules and the impossibility of describing $\Upsilon(10860)$ as a pure bottomonium or bottomonium hybrid state, are not affected by any change in the hybrid wave function. These results make us tentatively conclude that current data on $\Upsilon(10860)$ may be showing the presence of the lowest bottomonium hybrid. | 7,421.4 | 2021-01-01T00:00:00.000 | [
"Materials Science"
] |