Board Gender Diversity and Dividend Policy in Chinese Listed Firms
This study investigates the relationship between gender diversity on the board and dividend payouts in China, using a large sample over the period 2003–2017. Our results provide strong and robust evidence that gender diversity on the board is positively associated with cash dividend payments. The empirical outcomes confirm that gender diversity on the board facilitates corporate governance and subsequently promotes dividend payouts. We demonstrate that gender diversity on the board has the greatest effect when it reaches critical mass participation (three or more female directors) compared with only token participation. However, independent female directors increase dividend payouts, while female executive directors do not have a significant impact. Furthermore, we extend the literature on the relationship between dividend payments and government ownership by providing evidence that gender diversity has a higher impact on dividend payouts for state-owned enterprises than for non-state-owned enterprises. After controlling for endogeneity problems, our findings remain reliable and robust. JEL classifications: G30, G35
Introduction
Recent studies reveal an upward trend in women's participation on the boards of U.S. public companies over the last decade (Catalyst, 2016). Conversely, the involvement of women in the boardroom remains low worldwide (Catalyst, 2017; Institutional Shareholder Services, 2017). Within the context of best practices in corporate governance (CG), some countries (such as Australia, the United Kingdom, and Canada) have recommended women's inclusion on corporate boards, whereas some countries in Europe (such as France, Belgium, Spain, Norway, and Italy) have specified quotas for the proportion of female directors on corporate boards (Usman et al., 2018). In this regard, observers and regulators are still working toward greater participation of females in the boardroom (Adams & Mehran, 2012). Based on 2016 statistics for globally listed companies, approximately 11.7% of board directors and 4.6% of CEOs are female. 1 Recently, legislators in California passed a bill on 29 August 2018 requiring all significant listed companies to have at least one female director on the corporate board by the end of 2019; any corporation failing to adopt this quota may face severe financial penalties (Ye et al., 2019). Given these proposals and policies, it is vital to examine the benefits linked with gender diversity in corporate leadership.
A body of theoretical literature connects dividend policy with the uncertainty of cash flow, investment opportunities, and ownership structure (Chay & Suh, 2009; Denis & Osobov, 2008; Fama & French, 2001; La Porta et al., 2000). Jensen et al. (1992) and La Porta et al. (2000) asserted that governance quality may affect dividend payout policy because dividend payouts are partly influenced by conflicts of interest between firm insiders and external shareholders. DeAngelo et al. (2006) stated that dividend payments reduce the free cash flow available for investments; this requires managers to acquire capital from external sources, imposes stricter scrutiny of management by external investors, and reduces agency problems. Improving dividend payment ratios may reduce agency problems when financing restrictions are not strict (Chae et al., 2009). This demonstrates that an efficient governance mechanism will push for higher dividend payouts to overcome agency problems.
Most of the finance literature has examined the participation of female directors on the board as a business case (Adams & Ferreira, 2009; Adams & Mehran, 2012; Cumming et al., 2015; Liu, 2018; Post & Byron, 2015; Rose, 2007). Some studies have proposed that heterogeneous boards perform better than non-diverse boards (Carter et al., 2003; Erhardt et al., 2003; Joecks et al., 2013; Liu et al., 2014), exhibit better governance (Adams & Ferreira, 2009), have higher market valuation (Campbell & Minguez-Vera, 2007), and reduce agency problems (Ain et al., 2020). Gender socialization theory suggests that females are more caring, socially concerned, and more expressive (Carlson, 1972; Eagly & Crowley, 1986; Gilligan, 1977). McGuinness et al. (2017) stated that these qualities help female directors and executives manage relationships with stakeholders. Besides this, female directors provide diverse opinions in the boardroom; this diversity of discussion improves board dynamics and enhances decision making (Erhardt et al., 2003; Gul et al., 2011; Miller & Triana, 2009; Zahra & Pearce, 1989). The findings of the above studies suggest that gender diversity on boards helps in making reasonable decisions and increases the tendency to promote the interests of shareholders by addressing agency problems. As a result, there is an increased likelihood of dividend initiation and higher dividend payout ratios.
On the basis of the above arguments, some studies have examined the influence of female directors' participation on agency problems: for example, Jurkus et al. (2011), Chen et al. (2017), and Byoun et al. (2016) in the U.S. setting; Ain et al. (2020) and McGuinness et al. (2015) using Chinese data 2; Pucheta-Martinez and Bel-Oms (2016) using data for Spanish firms; Saeed and Sameer (2017) studying emerging economies 3; and Trinh et al. (2020) using U.K. data. These studies suggest that female participation on the board has an impact on dividend payouts but provide mixed results. Although they offer useful insights, their focus is on developed countries, with little attention paid to developing countries. In general, developing countries have different regulatory environments from developed countries (Beck & Levine, 2004; La Porta et al., 2000). The debate on agency problems in developed countries is also relevant in developing countries. Some institutional settings of emerging economies, however, such as the monitoring of shareholders' rights and government ownership, differ from developed markets. Therefore, these distinct market characteristics may affect decisions regarding dividend payouts differently.
Using 29,104 observations of Chinese listed firms from 2003 to 2017, we analyze the influence of board gender diversity on dividend payout policy. Our results provide strong and robust evidence that the participation of female directors in the boardroom positively affects decisions regarding dividend payouts, and that independent female directors have a greater effect than female executive directors. Our results also suggest that a critical mass of female directors plays an important role on corporate boards, having more impact on dividend decisions when there are three or more female directors on the board. Furthermore, our study reports a stronger positive relationship between the participation of female directors and dividend payouts in state-owned firms. All of our findings are in line with the view that female participation on the board promotes CG and that dividend payment policy is used as a tool to reduce agency costs.
This study provides several contributions to the literature. First, going beyond studies of developed countries, our research provides evidence from the largest emerging economy in the world, China. Our research adds value to the literature on gender diversity, which mainly focuses on the business case for gender heterogeneity, by examining the influence of gender diversity on firms' dividend payouts. This article provides evidence of a strong positive association between female directors on the board and dividend payouts. Second, compared with the institutional settings of developed countries on which previous studies have mainly focused (Byoun et al., 2016; Chen et al., 2017; Pucheta-Martinez & Bel-Oms, 2016), China has a different institutional setting. Therefore, it is important to study how a board diversified with female directors affects dividend payments in a different business environment. We consider the unique institutional factors of China while examining the governance role of female directors and dividend payouts. Our study extends the prior literature by investigating how institutional variations within the country affect the governance role of female directors in decisions regarding dividend payouts. The distinctive nature of Chinese ownership structure affects its institutional environment, shareholder protection, and CG effectiveness (Shleifer & Vishny, 1997). Thus, we contribute to institutional theory by providing novel evidence from China that gender diversity has a greater impact on the dividend payouts of state-owned enterprises (SOEs) than of non-state-owned enterprises (non-SOEs) and that dividend payments increase with state ownership.
Third, critical mass theory proposes that female directors have more effect on the board when there are three or more (Kanter, 1977; Kristie, 2011). Liu et al. (2014) and Gyapong et al. (2016) also reported that a critical mass of females has the most significant impact on firm performance. Our study provides evidence that increasing the number of female directors from one to two, and from two to three, results in higher dividend payouts. These results suggest that when three or more female directors sit on the board, they have greater ability to increase dividend payouts and free corporate resources from the control of insiders. Finally, we study whether independent and executive female directors play the same governance role in increasing dividend payouts. Our results show that only independent female directors significantly increase dividend payouts, while executive female directors have an insignificant impact on dividends. This outcome contributes to the literature by providing new insights into the governance role of females; some previous studies have overlooked the question of whether executive and non-executive female directors have an equal effect on effective monitoring (Adams & Ferreira, 2009; Gul et al., 2011; Nielsen & Huse, 2010).
The rest of the article is organized as follows: The next section defines the institutional background of China, followed by the "Theoretical Background" section. Details regarding hypothesis development are provided in the "Hypothesis Development" section, followed by the details of the empirical research design. Then the empirical results are reported, followed by details concerning endogeneity problems in "Endogeneity" section and robustness tests in the following section. Conclusions are provided in the last section.
Institutional Background of China
China opened stock markets in Shenzhen in 1990 and in Shanghai in 1991, mainly to raise capital for the state and for state-controlled enterprises (SOEs), and the state continues to dominate much of the corporate economy (Areddy et al., 2008; Wang et al., 2011). The "split-share structure" is well known in Chinese companies, in which two stock classes are traded publicly and two classes are not. The publicly listed stocks include "A" and "B" shares: A shares are denominated in renminbi, while B shares are denominated in Hong Kong dollars (HKD) or U.S. dollars. About two thirds of listed companies' shares are not tradable. Among the non-tradable shares, about half are "legal person" shares, held by other Chinese companies, SOEs, or non-bank financial entities. The remaining non-tradable shares are state-owned shares, held directly by central or local government departments or by SOEs.
The majority of listed firms in Latin American, Western, and Asian countries (except China) are controlled and owned by financial institutions, individuals, and wealthy families (Morck & Steier, 2005). In these countries, the state generally plays no role in corporate ownership, but the situation is different in China. The Chinese economy is the second largest in the world, but compared with Anglo-Saxon countries (e.g., the United States, Australia, and the United Kingdom), China is viewed as a new, transitional economy with distinctive features. Several studies have used the Anglo-Saxon environment to frame their concepts and conduct their research (Gulzar et al., 2019). Li and Zhang (2010) provided evidence that the state serves as the sole controller and controls over 63.5% of entities' ownership in China. Firth et al. (2016) reported that most firms in China leave little room for individual investors to influence corporate decisions. Based on state directives, the government (the fundamental controlling shareholder of most firms) favors cash dividend distributions to reduce firms' free cash flows and to control managers' preferences. 4 Alternatively, privately owned firms face less political pressure but more capital constraints in obtaining external equity and long-term debt than SOEs. Privately controlled firms depend on internal financing and distribute lower cash dividends. According to these arguments, dividend payments increase with state ownership (Bradford et al., 2013; Wang et al., 2011); these findings are consistent with agency theory regarding dividend policy, in that dividend payments allow the state to obtain disproportionate benefits from firms. 5 In the past two decades, the Chinese stock market has undergone some dramatic changes: the percentage of dividend-paying firms increased from 50.92% in 2003 to 80.73% in 2017, as shown in Figure 1. The considerable percentage of government ownership in SOEs differentiates them from non-SOEs. Hence, it is important to study the cash dividend policies of Chinese listed firms.
Theoretical Background
In CG research, gender diversity has received growing attention (Pucheta-Martinez & Bel-Oms, 2016). Some previous studies (Jensen, 1986; Rozeff, 1982) have emphasized agency theory in analyzing the impact of dividend policy when there is a conflict of interest within the organization. Agency costs may be caused by asymmetric information between managers and shareholders, which emerges as an agency problem; this situation makes investors more conscious and suspicious of future cash flows (Jensen & Meckling, 1976). Grossman and Hart (1980) analyzed the way in which dividend payments mitigate agency problems by reducing free cash flows in companies. Dividend policy has been asserted to be a device for CG and, more precisely, a way of mitigating Jensen's (1986) free cash flow problem, thus reducing agency costs (Rozeff, 1982). Dividend payouts, debt financing, and managerial equity ownership are noted as effective mechanisms to overcome agency problems in firms (Bathala & Rao, 1995). Rozeff (1982) stated that dividend payouts are considered a mechanism for monitoring firms. Carter et al. (2010) asserted that the participation of female directors on boards leads to powerful control mechanisms because their presence increases board independence and they tend to ask more questions. Kandel and Lazear (1992) showed that gender diversity, functioning as a "watchdog" for shareholders, might also improve joint monitoring. Hence, female directors on the board decrease agency problems concerning free cash flows, as they are inclined to pay more dividends, leading to better alignment of incentives between managers and shareholders. Kandel and Lazear (1992) supported these views, stating that board gender diversity is associated with lower cash holdings and higher dividend payments. Byoun et al. (2013) demonstrated that women's participation on boards reduces free cash flow problems and increases dividend payouts.
Board Gender Diversity and Dividend Payouts
A significant function of CG is to monitor and resolve the agency problems that originate from conflicts of interest between shareholders and managers (Fama, 1980; Fama & Jensen, 1983; Jensen & Meckling, 1976). Agency problems connected with conflicts of interest may comprise extra salaries and perquisite consumption by insiders (Jensen & Meckling, 1976); the pursuit of firm growth through projects that provide no benefit to external shareholders (Grossman & Hart, 1986); and the propensity to invest earnings in suboptimal investments (Jensen, 1986). La Porta et al. (2000) reported that, in line with the conflicts mentioned above, shareholders prefer firms to disgorge cash flows through dividend payments, because outside shareholders think that corporate insiders may use the extra cash for their own benefit or invest it inefficiently. Dividend payment policies can thus be used as an essential mechanism for solving agency problems (Brav et al., 2005; Easterbrook, 1984). The academic literature also indicates that board heterogeneity gives rise to more efficient and operative governance mechanisms (Adams & Ferreira, 2009; Hillman et al., 2007), and that firms with a high proportion of female non-executive directors are more inclined to pay dividends (Chen et al., 2017). Levit and Malenko (2016) also reported that gender heterogeneity may promote dividend payments by protecting the interests of shareholders and improving governance mechanisms.
Furthermore, women directors are considered good monitors who strengthen the rights of shareholders; in this situation, shareholders force managers to pay more dividends. Previous literature from advanced countries supports the monitoring role of female directors. For instance, Byoun et al. (2016), using U.S. corporations, and Pucheta-Martinez and Bel-Oms (2016), using Spanish firms, found that dividend payouts and gender heterogeneity are positively related. Bae et al. (2012) asserted that a strong CG mechanism is related to higher dividend payments and mitigates agency problems. Trinh et al. (2020) also demonstrated a positive relationship between gender diversity and dividend payments. Finally, McGuinness et al. (2015) provided positive but insignificant results in this context, while Mustafa et al. (2020) showed a negative relationship between gender diversity and dividend announcements.
Diversity affects board efficiency at different levels, for example, the individual level and the team level. At the individual level, women directors are more focused on monitoring (Adams & Ferreira, 2009), are more sensitive to difficult issues (Gul et al., 2011), and are more likely than men to follow rules and regulations (Bernardi & Arnold, 1997). Collectively, these studies suggest that female directors are more likely to follow the rules, are more sensitive to corporate issues, are more focused on solving agency problems by increasing the efficiency of the governance mechanism, and are more motivated to promote shareholders' interests. Liu (2018) indicated that greater participation by female directors on the board leads to lower lawsuit risk, which partly favors dividend payouts. Hence, we suppose that female directors are more likely to pay dividends at the individual level.
Previous literature shows that the participation of females at the team level is more likely to help solve complicated problems. Niederle and Vesterlund (2007) and Agarwal et al. (2016) reported that female directors adopt a leadership style built on collaboration and trust, and this trust helps the exchange of productive information between board directors and the corporation. Gul et al. (2011) stated that board gender diversity provides different points of view and improves the board's decision-making style. All the above studies suggest that gender heterogeneity provides a wide range of views by incorporating different perspectives, which ultimately results in better decisions, potentially including decisions taken to protect shareholder interests and solve agency problems. Thus, from a team-level perspective, we suppose that more diversity on boards is positively connected with more dividend payouts. We hypothesize the following: Hypothesis 1 (H1): Board gender diversity positively affects dividend payouts.
The board's monitoring tendency depends on its independence (Osma, 2008). Bugeja et al. (2016) and Adams and Ferreira (2009) reported that director independence can be calculated as the percentage of non-executive directors on the board. Independent directors on the board engage more in the monitoring function than executive directors. Chen et al. (2017) asserted that, due to this effective monitoring, dividend payments also increase. Fama and Jensen (1983) argued, from an agency-theory standpoint, that independent non-executive directors resolve agency conflicts between the agent and the principal. Empirical literature proposes that non-executive directors enhance the transparency of the firm (Knyazeva et al., 2013) and help to reduce managerial misappropriation (Setia-Atmaja, 2010). These results show that independent directors are more likely than executive directors to free up firm resources from the hands of insiders by increasing dividend payments (Pucheta-Martinez & Bel-Oms, 2016).
Previous literature provides mixed outcomes on the relationship of executive versus independent directors with dividend payouts: for example, Saeed and Sameer (2017) found a negative relationship, Chen et al. (2017) found a positive relationship, while Pucheta-Martinez and Bel-Oms (2016) found no effect. On the other hand, Nekhili et al. (2020) and Lai et al. (2017) reported that women directors exhibit less opportunistic behavior and are more likely to increase dividend payouts (Hunter & Sah, 2014). Related to this point, if female directors are less opportunistic, they will tend to increase dividend payouts. From the above discussion, we suppose that both female independent directors and female executive directors may affect dividend payouts positively. However, we expect this relationship to be stronger for female independent directors because of their independence. Based on the above studies, we hypothesize the following: Hypothesis 2 (H2): The relationship between board gender diversity and dividend payouts is stronger for female independent directors than for female executive directors.
The critical mass of female directors may also have a significant effect on the firm-level performance of corporations (Glazer & Kristol, 1976). Kanter (1977) reported that, on male-dominated boards, the achievements of female directors were downplayed, leading to their low participation. Spangler et al. (1978) asserted that, due to the minority status of women directors on boards, performance pressure, communication gaps, and role entrapment, the achievements of females on the board are diminished. Following critical mass theory, Kramer et al. (2007) proposed that female directors become more powerful when there are three or more on a board. Kristie (2011) suggested that one female director on a board is a token, two are a presence, and three or more become a voice. Gyapong et al. (2016) and Liu et al. (2014) provided evidence that when the participation of female directors on the board is high, they can significantly affect firm performance and value. Based on the above, we hypothesize the following:
Hypothesis 3 (H3): The relationship between gender diversity and dividend payouts is stronger for female directors' critical mass participation than for token participation.
In most listed firms in emerging markets (such as China), the government retains the rights of controlling shareholders, unlike in Western countries (Saeed & Sameer, 2017). Based on an OECD report, Buge et al. (2013) revealed that the percentage of government ownership is extremely high in BRIC countries compared with other developed and developing countries. Some studies have shown that SOEs pay higher dividends than private firms (Bradford et al., 2013; Firth et al., 2016; Wang & Shailer, 2012). Blanchard and Shleifer (2000) reported that state-owned firms obtain advantages in terms of governance and unique commercial treatment. Due to this homogeneity in state ownership, firms owned by the state are considered to have an unusual relationship with government banks. Accordingly, we argue that it is easier for state-owned firms to obtain finance, which mitigates financing unpredictability for Chinese listed firms.
Agency theory also supports a positive association between gender diversity and dividend payouts in the SOEs of developing countries. Ben-Nasr (2015) and Bradford et al. (2013) asserted that SOEs are more likely to suffer agency problems than non-SOEs. They identified two sources of agency problems: the first is disagreements between controlling shareholders and managers, and the second is the difference in objectives between politicians and firm owners (Menozzi et al., 2012). Jonge (2014) noted that, according to some empirical studies, SOEs are under pressure to increase the participation of females in boardrooms in order to improve their governance structures. SOEs may signal to society the administrative efficiency and better governance of state-run firms and are likely to serve as "model enterprises" (Saeed et al., 2016). To overcome agency problems, dividend payments are considered a crucial mechanism that gender-diverse boards can adopt. Therefore, we hypothesize that gender-diverse boards are more likely to choose high dividend payouts in Chinese firms with government ownership, which are characterized by severe agency problems. Based on the above arguments, we hypothesize the following: Hypothesis 4 (H4): Board gender diversity has a higher impact on the dividend payments of SOEs than of non-SOEs.
Study Sample
We acquired all the data from the China Stock Market and Accounting Research (CSMAR) database for all A-share listed companies on the Shanghai and Shenzhen stock exchanges for the sample period 2003–2017. We chose China as our sample for several reasons. According to the World Trade Organization (2015), China's rapid growth rate makes it the best representative of emerging markets. China is a key player in the global economy because of its human resources, rapid growth, and share in world trade. Finally, state ownership is the major form of corporate ownership in China; the country therefore provides a good platform to study the influence of government ownership on firms' financial decisions (e.g., dividend policy). We excluded all financial companies from our sample because their unique features may bias our results (Liu et al., 2014; Sila et al., 2016). We also excluded all firms whose net profits exceeded their sales. These steps left a final sample of 29,104 firm-year observations from 2003 to 2017. The complete sample is detailed in Table A1 across years and industries. The number of observations increases over time, which is consistent with China's market growth and with Du (2014) and He and Luo (2018).
Variable Measurement
Our study examines the influence of gender diversity on the dividend payouts of Chinese listed firms. Following Adams and Ferreira (2009), Chen et al. (2017), and Liu et al. (2014), FEMALE is calculated using the percentage of female directors (FDP), a dummy for female directors (FDD), the percentage of female independent directors (FID), the percentage of female executive directors (FED), and female dummies FD1 for having one female, FD2 for having two females, and FD3 for having three or more females on the board. We also use other measures of gender diversity in robustness tests.
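As a concrete illustration, these measures can be constructed from raw board counts along the following lines. This is a minimal sketch: the input file and the count columns (n_female, n_directors, and so on) are hypothetical names, not CSMAR fields, and board size is assumed as the denominator for FID and FED, which the text leaves unspecified.

```python
import pandas as pd

df = pd.read_csv("csmar_panel.csv")  # hypothetical firm-year panel

# Percentage and dummy measures of board gender diversity
df["FDP"] = df["n_female"] / df["n_directors"]        # % of female directors
df["FDD"] = (df["n_female"] > 0).astype(int)          # 1 if any female director
df["FID"] = df["n_female_indep"] / df["n_directors"]  # % female independent directors
df["FED"] = df["n_female_exec"] / df["n_directors"]   # % female executive directors

# Critical-mass dummies: token (1), presence (2), voice (3+)
df["FD1"] = (df["n_female"] == 1).astype(int)
df["FD2"] = (df["n_female"] == 2).astype(int)
df["FD3"] = (df["n_female"] >= 3).astype(int)
```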
DIVIDEND is used as the dependent variable. Our study uses the dividend payout ratio, measured as total dividends to net income (DIVPR), as the first dependent variable, in line with Bradford et al. (2013) and Saeed and Sameer (2017). Our second measure is the total amount of dividends to total assets of the firm (DTA) (Chen et al., 2017). In the robustness analysis, we adopted two other measures of dividend payouts: a dividend dummy (DD) and dividends over sales (DSAL).
On the basis of the literature, we included different control variables. BDI is board independence, measured as the proportion of independent directors. BDS is the total number of directors on the board; Pucheta-Martinez and Bel-Oms (2016) reported that board size and dividend payments are positively associated. Firm performance is also controlled for (TOBINQ and ROA), measured as market value over total assets and net income over total assets, respectively; Byoun et al. (2016), Chay and Suh (2009), and Lintner (1956) reported a positive association between firm performance and dividend payments. FSIZE, the natural log of total assets, measures firm size; previous studies have shown that firm size is positively related to dividend payments (Denis & Osobov, 2008; Fama & French, 2001). We also use LEV as a control variable, calculated as total debt over total assets; we expect a negative relationship between leverage and dividend payments, as Jensen et al. (1992) asserted that highly leveraged firms are likely to pay lower dividends because they need internal cash to repay creditors. RETA measures the ratio of retained earnings over total assets (Byoun et al., 2016; DeAngelo et al., 2006). We also include FCF, the firm's free cash flow position, which has a positive association with dividend payouts (Adjaoud & Ben-Amar, 2010). Finally, dividend payouts may differ across industries and years; to control for this, we include year and industry dummies in all regressions. A detailed description of all the variables is presented in Table 1.
Estimation
To analyze the relationship between gender diversity and dividend payouts, we ran least squares regressions. Numerous studies on gender diversity and dividend payments (Chen et al., 2017; Dezhu et al., 2019; McGuinness et al., 2017) have employed the same method. The model specification is as follows:

$$\mathrm{DIVIDEND}_{it} = \beta_0 + \beta_1\,\mathrm{FEMALE}_{it} + \sum_k \beta_k\,\mathrm{CONTROLS}_{k,it} + \sum \mathrm{industry}_i + \sum \mathrm{year}_t + \varepsilon_{it},$$

where the dependent variable is DIVIDEND_it (measured by DIVPR and DTA), Σindustry_i is the industry fixed effect, Σyear_t is the year fixed effect, and ε is the error term. We also compared the characteristics of firms with female directors (diversified boards) and those without female directors (non-diversified boards) in unreported results. 6 In a nutshell, firms with female directors on their boards pay significantly higher dividends than firms that do not. Diversified firms tend to have a higher proportion of independent directors and are more likely to have CEO duality; they are also more likely to have larger board size, firm size, and firm growth, but shorter CEO tenure. Table 3 presents the distribution of female directors in relation to whether the firm pays a dividend. The difference in FDP between dividend-paying and non-dividend-paying firms is 0.019, which is positive and significant at the 1% level. These results are consistent with our main findings and demonstrate a positive association between female directors and dividend payouts.
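A minimal sketch of this baseline specification using the statsmodels formula interface is shown below; the variable names follow Table 1, while the data file and the year and industry identifier columns are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("csmar_panel.csv")  # hypothetical firm-year panel

model = smf.ols(
    "DIVPR ~ FDP + BDI + BDS + TOBINQ + ROA + FSIZE + LEV + RETA + FCF"
    " + C(year) + C(industry)",  # year and industry dummies
    data=df,
)
result = model.fit()
print(result.summary())
```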
Descriptive Statistics
To check multicollinearity, we performed a Pearson correlation test among all the variables of the main regression; the outcomes are reported in Table 4. According to the literature, a correlation of 0.8 or more indicates a multicollinearity problem (Field, 2005); similarly, Liu et al. (2014) treat a value of 0.7 as a sign of multicollinearity. The findings in Table 4 reveal some high correlations (highlighted). However, these high correlations are among the dependent variables and the gender-diversity variables, which are used alternatively in separate regressions. Furthermore, we also performed a variance inflation factor (VIF) test to recheck this issue. 7 Based on these outcomes, we conclude that our study has no multicollinearity issues. Table 5 presents the results of the main regression. We used least squares regression, following Chen et al. (2017). The results in Table 5, Columns 1 and 3, where the explained variable is the dividend payout ratio, show that the board gender diversity measures increase dividend payouts, with a positive association (FDP = 0.045, t-stat = 4.167 and FDD = 0.009, t-stat = 3.335). These results match those in Columns 2 and 4 (FDP = 0.004, t-stat = 3.402 and FDD = 0.001, t-stat = 2.736), where the dependent variable is dividends over total assets. These values are significant at the 1% level and support H1. The participation of female directors on the board improves its monitoring in two ways (Adams & Ferreira, 2009; Gul et al., 2008). First, through distinctive cognitive and psychological features, female directors may improve the monitoring of boards; these characteristics differentiate their decision-making capabilities from those of their male counterparts (Man & Wong, 2013; Pucheta-Martinez & Bel-Oms, 2016). Second, female directors create intra-director monitoring, improving male directors' monitoring behavior (Adams & Ferreira, 2009; Kandel & Lazear, 1992) and efficiency. As a result, dividend payouts increase and agency conflicts are reduced (Byoun et al., 2016; Chen et al., 2017). Jensen and Meckling (1976) postulated that efficacy in board monitoring may reduce managerial rent-seeking behavior, and this outcome is consistent with agency theory.
Note. Variables are as defined in Table 1. VIF = variance inflation factor.
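The VIF check described above can be reproduced along the following lines (a minimal sketch; data file and column names are hypothetical, as before).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("csmar_panel.csv")  # hypothetical firm-year panel

X = sm.add_constant(
    df[["FDP", "BDI", "BDS", "TOBINQ", "ROA", "FSIZE", "LEV", "RETA", "FCF"]].dropna()
)
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif.round(3))  # values well below 10 indicate no multicollinearity concern
```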
Debt and a high payout policy are both ways to mitigate Jensen's (1986) free cash flow problem, so LEV's negative sign is understandable (see Benito & Young, 2003). TOBINQ has a significant positive effect on dividend payouts (Ye et al., 2019). CEOT also shows positive significant results. FSIZE has a positive effect on dividend payouts (Denis & Osobov, 2008), which demonstrates that large companies have more net income, providing the conditions for paying dividends; they are also more likely to reduce agency costs and thus prefer dividend distributions as a way of reducing agency problems (Ye et al., 2019). Next, to test H2, we re-estimated the relationship between female directors and dividend payouts by dividing female directors into two groups, (a) female independent directors (FID) and (b) female executive directors (FED), and re-ran the regression. The findings are presented in Table 5 (Columns 5 and 6) and indicate that, using DIVPR, independent female directors have positive significant results (FID = 0.058, t-stat = 3.432), while FED is positive but insignificant (FED = 0.011, t-stat = 0.870). These findings are consistent when we use DTA as the dependent variable (FID = 0.004, t-stat = 2.522; FED = 0.001, t-stat = 1.210). They support H2 and demonstrate that gender diversity on the board increases dividend payouts, and that this effect becomes more pronounced with women independent directors on the board. The findings also suggest that female independent directors have more incentives to decrease confiscation by freeing up resources from insiders' hands (Easterbrook, 1984). The results in Table 5 provide relatively strong evidence of women independent directors' positive impact and insignificant evidence of women executive directors' positive impact on dividend payouts. Generally, our findings indicate that the positive impact of female directors on dividend payouts mainly comes from the monitoring effect of female independent directors rather than the executive effect of female executive directors. Our findings are compatible with Chen et al. (2017), who also found positive results for independent female directors and dividend payouts, but inconsistent with Pucheta-Martinez and Bel-Oms (2016); the main reason for this difference is the different institutional settings of the two countries.
Next, we examined the critical mass theory postulation that having more female directors on the board increases dividend payouts. For this purpose, we replaced the board gender diversity measures with FD1, FD2, and FD3. The outcomes are reported in Table 5 (Columns 7 and 8) using both ratios. The DIVPR results suggest that female directors and dividend payouts are positively associated when the board has only one female director (FD1 = 0.005, t-stat = 1.660). When the participation of females on the board increases to two, dividend payouts increase further (FD2 = 0.009, t-stat = 2.530). The strongest influence occurs when the board has three or more female directors (FD3 = 0.029, t-stat = 5.616). The results are consistent when we use DTA as the dependent variable for one (FD1 = 0.001, t-stat = 1.513), two (FD2 = 0.003, t-stat = 2.219), and three or more female directors (FD3 = 0.005, t-stat = 4.122), showing a significant positive association with dividend payouts. 8 The above findings support H3: a subsequent increase in the number of female directors also increases dividend payouts. Theoretically, our findings are compatible with Kramer et al. (2007) and Kristie (2011), who suggested that having three or more females on the board provides the greatest impact. Empirically, our results also support previous studies (Ahmed & Ali, 2017; Liu et al., 2014).
Furthermore, to test whether board gender diversity leads to higher dividends in SOEs, we divided the sample into SOEs and non-SOEs. If the firm was owned by the state or government, we considered it an SOE; otherwise, it was treated as a non-SOE. The results for SOEs are presented in Table 5 (Columns 9 and 10), revealing a stronger significant positive relationship between gender diversity and dividend payments. The results for DIVPR (FDP = .066, t-stat = 3.194) and DTA (FDP = .021, t-stat = 2.019) confirm that board gender diversity has a higher impact on the dividend payments of SOEs than of non-SOEs (Columns 11 and 12), given the values for DIVPR (FDP = .019, t-stat = 1.754) and DTA (FDP = .002, t-stat = 1.670). These findings support H4 and also partly support Lam et al. (2012) and Bradford et al. (2013), who showed that state-owned firms in China are associated with larger cash dividend payments.
Endogeneity
As is well known, CG researchers, especially those who study board structure, face potential endogeneity problems. It can be argued that females tend to join boards in discrete groups rather than randomly, which may result in biased coefficient estimates. Bilimoria and Piderit (1994) reported that it is the woman's choice that determines which firm's board she wants to join. It is also possible that firms pay more dividends because of other factors, for example, board structure, ownership structure, or other firm economic variables, rather than board gender diversity; thus, our findings could be coincidental. To address potential endogeneity, we adopted various methods. To address omitted variables, we use the instrumental variable (IV) approach; to address reverse causality, we use lags of the explanatory variable; and to address selection bias, we use the propensity score matching (PSM) approach. Gul et al. (2011) and An and Zhang (2013) used lags of gender diversity measures in their studies. To control for reverse causality, we used lags of t−1, t−2, and t−3 years (Dittmann et al., 2010; Joecks et al., 2013), as female directors may require some time to understand the functions of the board and perform their monitoring roles successfully. Table 6 reports that board gender diversity and dividend payouts are positively associated, which provides further support for our results.
Two-Stage Least Squares
The results of our study are compatible with the findings of Chen et al. (2017) but still face potential endogeneity. One endogeneity problem is omitted variables. Although our study utilizes year and industry dummies to control for potential determinants of dividend payouts, the results could still be affected by unobservable variables. We therefore need an appropriate IV that has no direct relation with dividend payout policy but directly influences gender diversity. We used the lag of FDP (LFDP), then the industry average of FDP (IAFDP), and finally both the lagged values and IAFDP as instrumental variables (Lin et al., 2013). Numerous studies have utilized industry averages to build instrumental variables (Chen, 2015; Huang & Mazouz, 2018; Kim et al., 2017; Liu et al., 2014). Based on these studies, we consider these IVs suitable for our study.
The findings of the first-stage regressions are reported in Panel A of Table 7, using FDP and FDD as dependent variables after including the above instruments as independent variables, along with all the control variables used in the main regression. For the sake of brevity, we report only the coefficients of the main variables. FDP and FDD are significantly positively correlated with IAFDP and LFDP, consistent with the rationale of the instruments and demonstrating that our instruments are valid (Ye et al., 2019). In Panel B, we ran the second-stage regressions using DIVPR and DTA as dependent variables and the gender diversity measures FDP and FDD as independent variables. The findings in Table 7 verify that the results are consistent with our main hypothesis.
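A two-stage least squares estimation of this kind can be sketched with the linearmodels package as follows; the bracketed term marks FDP as endogenous with LFDP and IAFDP as instruments, and the data file and identifier columns are hypothetical.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("csmar_panel.csv").dropna()  # hypothetical firm-year panel

iv = IV2SLS.from_formula(
    "DIVPR ~ 1 + BDI + BDS + TOBINQ + ROA + FSIZE + LEV + RETA + FCF"
    " + C(year) + C(industry) + [FDP ~ LFDP + IAFDP]",  # FDP instrumented
    data=df,
).fit(cov_type="robust")
print(iv.first_stage)  # instrument relevance diagnostics
print(iv.summary)
```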
Propensity Score Matching (PSM)
To address selection bias, we matched firms in the treatment group (i.e., companies with female directors) with comparable firms without female directors. 9 The control group was considered to have no distinct characteristics other than gender diversity. The PSM results are consistent with our main findings.
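A minimal sketch of such a matching procedure is given below: a logit model estimates the propensity score, and each treated firm is matched to its nearest-score control without replacement. The matching covariates and file name are hypothetical, chosen only for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("csmar_panel.csv").dropna()  # hypothetical firm-year panel

X = df[["BDS", "BDI", "FSIZE", "LEV", "ROA"]].to_numpy()
treated = df["FDD"].to_numpy().astype(bool)  # firms with female directors

score = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbour matching on the score, without replacement
available = set(np.flatnonzero(~treated))
pairs = []
for i in np.flatnonzero(treated):
    if not available:
        break
    j = min(available, key=lambda k: abs(score[k] - score[i]))
    pairs.append((i, j))
    available.remove(j)  # each control firm is used at most once
```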
Fixed-Effect Method (FEM)
A critical issue with the ordinary least squares methodology is endogeneity. One might argue that the association between female directors and dividend payouts may be driven by unobservable firm-level characteristics. Hence, we re-estimated the model after including firm fixed effects and year fixed effects. The findings are reported in Table 8 (Panel B); they confirm our main results and demonstrate that board gender diversity increases dividend payouts. To control for heteroskedasticity and auto-correlation, we also ran a fixed-effect regression with robust standard errors. The results again support our previous findings. 10
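A sketch of this fixed-effects estimation with linearmodels is shown below; the firm identifier column and data file are hypothetical.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("csmar_panel.csv")  # hypothetical firm-year panel
panel = df.set_index(["firm_id", "year"])  # hypothetical entity/time identifiers

fe = PanelOLS.from_formula(
    "DIVPR ~ FDP + BDI + BDS + TOBINQ + ROA + FSIZE + LEV + RETA + FCF"
    " + EntityEffects + TimeEffects",
    data=panel,
).fit(cov_type="robust")
print(fe.summary)
```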
Gender Diversity, Financial Crisis, and Dividend Payouts
Our sample period was from 2003 to 2017, so it includes the global financial crisis that started in 2007. Although the exact beginning of the crisis is still debated in academia (González, 2016), there is no doubt that it influenced most companies (Fahlenbrach & Stulz, 2011). Here, our main focus is on whether gender diversity on the board still has a positive impact on dividend payouts after controlling for the impact of the financial crisis. For this, we created a dummy variable (CRISISD) for the financial crisis, equal to 1 for the years 2007 and 2008 and 0 otherwise (An & Zhang, 2013). In Table 9 (Panel A), our results are similar to those of Floyd et al. (2015): although firms still paid dividends, the dividend payout ratio declined gradually during the financial crisis. We also find that FDP and FDD still had a significant positive relationship with dividend payouts. Therefore, our conclusions are not affected by the financial crisis.
Note. Variables are as defined in Table 1. t-statistics are reported in parentheses. *, **, and *** indicate significance at the 10%, 5%, and 1% levels, respectively.
Blau Index and Shannon Index
Based on the recent literature, we used two comprehensive (alternative) proxies for gender diversity, that is, the Blau index (FBI) (Blau, 1977) and the Shannon index (FSI) (Shannon, 1948). The outcomes are reported in Table 9 (Panel B), which also shows the positive relationship of FBI and FSI with dividend payouts. Besides these comprehensive measures, we also used other measures of gender diversity such as the number of female directors (FNUM), the lag of numbers of female directors (LFNUM), female CEOs (FCEO), and the lag of female CEOs (LFCEO). The results remain robust. For the sake of brevity, we do not report the results.
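For reference, both indices reduce to simple functions of the female share for a two-category (male/female) board; a minimal sketch follows.

```python
import numpy as np

def blau_index(p: float) -> float:
    """Blau (1977) index for two categories; maximal (0.5) at p = 0.5."""
    return 1.0 - (p**2 + (1.0 - p) ** 2)

def shannon_index(p: float) -> float:
    """Shannon (1948) entropy; maximal (ln 2) at p = 0.5."""
    return -sum(x * np.log(x) for x in (p, 1.0 - p) if x > 0.0)

print(blau_index(0.30), shannon_index(0.30))  # e.g. a board with 30% female directors
```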
Further Tests
Alternative estimation model. In our baseline regression analysis, we used least squares. Hence, to rule out bias from the estimation method, we also used a logit regression model with the dividend dummy (DD) as the dependent variable. Changing the estimation model does not alter our key conclusions. For the sake of brevity, we do not report the findings.
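A minimal sketch of this logit robustness check (hypothetical data file and identifier columns, as before):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("csmar_panel.csv")  # hypothetical firm-year panel

logit = smf.logit(
    "DD ~ FDP + BDI + BDS + TOBINQ + ROA + FSIZE + LEV + RETA + FCF"
    " + C(year) + C(industry)",
    data=df,
).fit(disp=0)
print(logit.summary())
```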
Alternative dependent variable. We also used DSAL (the ratio of dividends over sales) as the dependent variable in the least squares method. The outcomes remain robust; the corresponding results are not shown for brevity.
Summary and Conclusions
Our study provides novel insight and reliable evidence that the involvement of female directors on the board has a positive influence on dividend payments, controlling for CG and other firm-related variables such as firm size, board size, leverage, and cash flow. The outcomes of our study are in accordance with Adams and Ferreira (2009), who suggested that female participation may mitigate agency problems by enhancing the monitoring ability of boards. The results also support critical mass theory, which suggests that three or more females on the board have more influence on dividend payments than mere token participation (Kanter, 1977; Kristie, 2011). The results further show that independent female directors increase the dividend payments of firms, whereas female executive directors have no impact on dividend payouts, indicating that female directors are not a uniform group. Furthermore, we show that gender diversity has a greater influence on the dividend payouts of SOEs than of non-SOEs (Wang et al., 2011).

The findings of our study have several implications for policy, practice, and theory. The results support the argument that board effectiveness is improved by female participation. The policy implications of our study have two aspects. The first concerns the gender diversity of the board. Adams and Kirchmaier (2016) reported that European countries frequently announce laws to implement gender quotas in listed companies. Our research indicates that female directors may add value to CG, which is relevant to policymakers. The second aspect concerns the career development of females. To resolve agency problems in CG, gender diversity on boards is likely to offer a wide range of views on such issues. Therefore, to encourage career development, policymakers should initiate professional training to improve skills and construct a rational competitive environment for females. The practical implications of our article concern its support for the belief that gender diversity is a vital issue for CG. The findings suggest that female participation on the board may improve firm-level governance in emerging economies, where investor protection tends to be weak. Our study highlights boardroom diversity practice in China and also guides Chinese regulatory bodies on this issue. Given this growing concern, a better understanding of the role of female directors in improving CG will help academics, policymakers, and regulators in decisions about the value of female directors. Regarding theoretical implications, our study extends agency theory by showing that female directors on the board improve its effectiveness. Furthermore, our results contribute new insights into how the governance role of female directors is affected by country-specific institutional factors and provide support for the recommendations of regulatory bodies around the world on boardroom gender diversity. More precisely, gender diversity on boards can strengthen China's weak governance system.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Natural Science Foundation of China (Grants 11631013, 11971372, and 11801433).

Notes
2. McGuinness et al. (2015) focused on other impacts of salient demographics and characteristics on corporate dividend policy (e.g., CEO age, CEO tenure, CEO duality). However, that study provided no contributions regarding (a) the significant contributions of females in reducing agency problems (by providing robust evidence); (b) the effect of independent versus executive female directors; (c) the critical mass of female directors; and (d) the ownership structure.
3. Saeed and Sameer (2017) suggested that women directors on boards use conservative policies and reported significant negative results using data from emerging economies.
4. Chinese financial regulators have adopted different strategies to urge listed companies to increase cash dividend distributions with the objective of reducing agency problems since the 1990s. These regulators have more control over state-controlled firms.
5. Wang et al. (2011) reported that a part of the dividends is paid by investors to the state in the form of taxes.
6. These results are available upon request.
7. The results are also reported in Table 4 and show that none of the VIF values exceed 2.443, which is well below the accepted level of 10 (Gujarati, 2003).
8. We have also checked the results by adding FD1, FD2, and FD3 in separate regressions and found the same results as before. These results are available upon request.
9. Specifically, we use this method without any replacement. Matching without replacement means that a firm with one or more female directors can be matched with only one all-male-board company.
10. A fixed-effect regression with robust standard errors deals with possible heteroskedasticity and auto-correlation. Our conclusions are robust; for the sake of brevity, the corresponding results are not shown.
"year": 2021,
"sha1": "1065843563ed86f9eedd85887ebc2cf39b98f95a",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2158244021997807",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "1065843563ed86f9eedd85887ebc2cf39b98f95a",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": []
} |
Picosecond acoustic dynamics in stimulated Brillouin scattering
Recent experiments demonstrating storage of optical pulses in acoustic phonons based on stimulated Brillouin scattering raise a number of questions about the spectral and temporal capacities of such protocols and the limitations of the theoretical frameworks routinely used to describe them. In this work, we consider the dynamics of photon-phonon scattering induced by optical pulses with temporal widths comparable to the period of acoustic oscillations. We revisit the widely adopted classical formalism of coupled modes and demonstrate its breakdown. We propose a simple extension to generalise the formulation and find potentially measurable consequences in the dynamics of Brillouin experiments involving ultra-short pulses.
Stimulated Brillouin Scattering (SBS), the coherent scattering of light waves and acoustic vibrations, has in recent years been reinvigorated [1] as a powerful tool for shaping optical waves. Confined in optoacoustic waveguides, it can be used for a wide range of optical processing applications [2]: narrow-linewidth filtering [3,4], controlling coherent cross-talk between waveguides [5], signal integration and derivation [6], optoacoustic delay [7,8] and storage [9-12], as well as optical signal demodulation [13]. Many of these experiments require harnessing Brillouin effects for short optical pulses, that is, for pulses with durations shorter than the acoustic lifetime in the material, typically 1-10 ns. Short pulses are used either to broaden the Brillouin gain spectrum for broadband signal processing or to ensure that the pulses fit entirely into a waveguide of finite length. The application of SBS at further extremes, where the optical pulses are in the picosecond regime, although difficult due to the short interaction time [14], has nevertheless been demonstrated in chalcogenide waveguides [15]. However, the conventional formalism of Brillouin scattering in waveguides is based on the assumption that the pulses remain long, and it is currently unclear where this formalism remains applicable and where it breaks down. An analysis capable of accurately describing SBS in waveguides in the picosecond regime is needed for the full understanding of short-pulse Brillouin experiments.
In this paper, we theoretically explore SBS in the regime where the optical pulses are shorter than the acoustic period. The usual tool for describing Brillouin scattering is the coupled mode formalism [16,17] (for clarity referred to here as the conventional formalism). We examine the assumptions and limitations of this model for short pulses, and show where it remains applicable. Revisiting the underlying approximations -in particular the Slowly-Varying-Envelope Approximation (SVEA) and the Rotating-Wave Approximation (RWA) [18,19] -we show that it is necessary to retain the second order derivative in time of the acoustic field, while neglecting the second order derivative in space remains appropriate. We apply the resulting model to the protocol of acoustic memory storage. We find that the abandoning of the underlying approximations leads not only to a more accurate model, but predicts physical effects that dominate the dynamics of SBS in the shortpulse regime, observable for any Brillouin system once pulses are sufficiently short. We note that related issues have been explored for opto-elastic interactions in liquids where the retention of a higher order derivative has also been found appropriate [20,21].
The widely adopted classical formalism for SBS in waveguides [16,17] describes the spatial and temporal evolution of three coupled complex envelope functions associated with the amplitudes of the pump (a_p), signal (a_s) and acoustic (b) waves:

$$\partial_t a_p + v_p\,\partial_z a_p = -\,\frac{i\omega_p v_p Q_p}{P_p}\, a_s b, \qquad (1)$$

$$\partial_t a_s - v_s\,\partial_z a_s = -\,\frac{i\omega_s v_s Q_s}{P_s}\, a_p b^{*}, \qquad (2)$$

$$\partial_t b + v_a\,\partial_z b + \alpha v_a\, b = -\,\frac{i\Omega v_a Q_a}{P_a}\, a_p a_s^{*}. \qquad (3)$$

These envelopes, represented schematically in Fig. 1(a), are related to the electric and elastic displacement fields of the optical and acoustic modes through their respective transverse mode profiles (marked as tilded vector fields). Temporal and spatial evolution of the pump/signal/acoustic waves is governed by frequencies ω_p/ω_s/Ω and wavenumbers k_p/k_s/q respectively (k_s < 0), which are connected by the group velocities v_p/v_s/v_a (all positive, consistent with Eqs. (1)-(3)). The quantities P_{p/s/a} and Q_{p/s/a} denote the mode powers used for normalisation and the coupling constants, respectively, and α is the linear loss of the acoustic wave. On backward SBS resonance, these parameters are related by the phase- and energy-matching relations k_p − k_s = q and ω_p − ω_s = Ω (sketched in Fig. 1(b)), and conservation of energy also requires [17] Q_a = (Q_p + Q_s)/2.
The above formulation relies on two key physical approximations: the slowly-varying envelope approximation (SVEA) and the rotating wave approximation (RWA), each applied in the temporal and the spatial dimension. The limits of these approximations are set by the dynamics of the acoustic wave at short time scales. We therefore review the derivation of (3) in a limit where the frequency and wavenumber spectra of the optical and acoustic waves are as shown in Fig. 1(b), i.e. they are well-defined in q, but their frequency spectra have widths comparable to Ω. This is intuitively justified by considering the physical parameters of a picosecond Brillouin setup. Counter-propagating pump and signal pulses of width τ will interact within domains spanning (v_p + v_s)τ/2 and τ/2 in space and time, respectively, which define the widths of the wavenumber ∆q ∼ [(v_p + v_s)τ]^{-1} and frequency ∆Ω ∼ τ^{-1} spectra of the optical force that induces acoustic excitations. The spatial RWA and SVEA are valid if the wavenumber spectrum is narrow: ∆q ≪ q. Substituting the above approximate expression for ∆q, and using q ≈ Ω/v_a, we find that this condition breaks down only for pulse durations τ ∼ v_a [(v_p + v_s)Ω]^{-1}, i.e. on femtosecond scales or below for typical optical group velocities and Ω/2π = 10 GHz. On the other hand, the limits of the temporal SVEA are reached when ∆Ω ∼ Ω, or for τ ∼ Ω^{-1} ∼ 10^{-10} s pulses. Therefore, Fig. 1(b) accurately represents the spectrum of the dispersion of the optical force induced by picosecond optical pulses. It also emphasises the possibility of coupling to the backward-propagating acoustic modes, which forces us to embrace a more complete picture of the acoustic pulse dynamics.
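The scalings above are easy to check numerically. The following sketch evaluates ∆q/q and ∆Ω/Ω for a few pulse widths; the material parameters (optical and acoustic velocities) are illustrative assumptions for a silica-like waveguide, not values taken from the text.

```python
import numpy as np

v_p = v_s = 2.0e8          # optical group velocities [m/s] (assumed)
v_a = 6.0e3                # acoustic velocity [m/s] (assumed)
Omega = 2 * np.pi * 10e9   # acoustic angular frequency [rad/s]
q = Omega / v_a            # acoustic wavenumber on resonance

for tau in (50e-12, 1e-12, 100e-15):
    dq = 1.0 / ((v_p + v_s) * tau)   # wavenumber spread of the optical force
    dOmega = 1.0 / tau               # frequency spread of the optical force
    print(f"tau={tau:.0e} s  dq/q={dq / q:.2e}  dOmega/Omega={dOmega / Omega:.2e}")

# dq/q stays tiny even for femtosecond pulses (spatial SVEA/RWA hold),
# while dOmega/Omega approaches unity already for ~10 ps pulses
# (temporal SVEA fails), consistent with the estimates in the text.
```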
Acoustic waves are described by the elastic displacement field U_i and three material dependent parameters, the density ρ, the elasticity tensor c_ijkl and the viscosity tensor η_ijkl, and satisfy the damped acoustic wave equation of Ref. [22]. In the context of SBS, the driving force F_i results from the electrostriction and radiation pressure [16,17] induced by the interfering optical waves. By using the modal description of the pump and signal modes, and the appropriate ansatz given earlier, we find that the force F_i (blue arrows in Fig. 1(b)) from signal and pump pulses has a spectrum (blue shaded region) centred at ±(Ω, q) (in Fig. 1(b) we only denote the positive wavenumber components). For resonant SBS, (Ω, q) matches the dispersion of a particular acoustic mode, driving the field U(r, t). For optical forces with a narrow wavenumber spectrum, as shown in Fig. 1(b), we can separate the positive and negative wavenumber components of the optical forces and acoustic fields; this is equivalent to a spatial RWA. However, if the optical force has a wide frequency spectrum, it may excite acoustic waves propagating in the opposite direction, centred at ±(−Ω, q), which are also modes of the waveguide. Physically, this corresponds to a case where the optical fields interact in a region extended enough in space to precisely define the acoustic wavenumber, but too quickly to properly define the velocity of the acoustic field driven by a wide spectrum of frequencies. In this picture, the acoustic ansatz should be rewritten to account for the negative-velocity waves, with two envelopes b_±. However, the choice of the envelopes b_± is arbitrary, and dictated solely by convenience of the expansion around a particular point in frequency-wavenumber space. In particular, we can readily define the entire acoustic field by setting b_− ≡ 0, and allowing b_+ to support the entire acoustic spectrum shown in Fig. 1(b). This however means that the envelope b_+ must evolve rapidly in time, as it needs to account for terms oscillating at −Ω. Noting that the positive and negative wavenumber components of the optical force and the acoustic field are decoupled from each other, we can remove the c.c. terms from both sides of the equation, thereby applying a spatial RWA. Furthermore, to retrieve a formulation reminiscent of (3), we can follow the derivation from Ref. [17], by projecting the above equation onto the mode profile u, and identifying the average power flow over the waveguide's cross-section and the loss term as the inverse dissipation length, as found in equations (16), (18) and (45) of Ref. [17], respectively. As in that work, we identify terms proportional to ∂_z² b_+ + 2iq ∂_z b_+, which for a slowly spatially varying envelope b_+ satisfy |∂_z² b_+| ≪ |q ∂_z b_+|, allowing us to drop the second derivative, and thus embrace the spatial SVEA.
We also find terms ∂_t² b_+ − 2iΩ ∂_t b_+ which, following our previous observation that the envelope b_+ includes components oscillating with frequency −2Ω, cannot be simplified through a temporal SVEA. Therefore, we finally arrive at the equation for short acoustic pulses driven by SBS,

∂_t² b − 2(iΩ + α v_a) ∂_t b − 2iΩ v_a (∂_z b − α b) = −2Ω² v_a Q_a a_p a_s* / P_o,    (5)

where we drop the index and write b_+ = b for convenience. Comparing it to the long-pulse formulation given in Eq. (3), we identify the unchanged coupling term, loss term and first order derivatives. Our considerations on short pulse Brillouin interactions add a second order time derivative of the envelope and a modified loss term proportional to the first order derivative in time.
Eqs. (1)-(3) and (5) are nonlinear coupled partial differential equations with no analytical solution. Earlier numerical studies relied upon undepleted pump approximations, stationary or soliton solutions [23,24]. Here we obtain numerical solutions resolved in space and time for arbitrary initial conditions, coupling parameters and combinations of wavelengths by employing a split-step method [25,26]. The linear time evolution is computed by a Crank-Nicolson method with symmetric differencing in time and space, and the nonlinear part by the 4th-order Runge-Kutta algorithm. From the initial complex field vectors a_p, a_s, b, B at (t = 0, z) we calculate the next time step by alternating between linear and nonlinear evolution. We introduce an auxiliary field B = ∂_t b and can therefore write

∂_t B = 2(iΩ + α v_a) B + 2iΩ v_a ∂_z b − 2iΩ v_a α b − 2Ω² v_a Q_a a_p a_s* / P_o

to obtain first order differential equations from Eq. (5), giving a system of four first order equations together with Eqs. (1) and (2).

The effects of the full modified acoustic Eq. (5) describing short acoustic pulses can be observed in the system of SBS-based optical data storage. Here, a strong write (pump) pulse interacts with a weaker signal pulse and transfers some of its energy into an acoustic pulse of the same shape. A second read (pump) pulse inverts this process and partially restores the optical signal (see Fig. 2(a)). All inputs are Gaussian pulses. We trace the resulting optical signal in Fig. 2(b) for the input of two τ = 50 ps optical pulses and an acoustic mode with Ω/2π = 10 GHz. The absolute values of five readout signal envelopes (a_s) are marked by the orange shaded areas, where each is shifted in time by the delay time (60, 150, 240, 330 and 420 ps) between the write and read pump pulses. One example of the readout resulting from the conventional Eq. (3) for 60 ps delay is underlaid in grey as a reference. We calculate 500 more readouts with the modified Eq. (5) and the conventional Eq. (3), spanning delays from 60 to 420 ps, and draw their enveloping lines in orange and grey, respectively. We omit results for delay times shorter than the pulse widths, as write and read pulses significantly overlap in this case. Both the conventional and short pulse solutions show a slow decay due to the finite acoustic lifetime, chosen as 10 ns. As τ < 2π/Ω we are in the short pulse regime, and the solution of the short pulse equation deviates significantly from the conventional one, exhibiting 20 GHz oscillations. These are a surprising feature of SBS storage with short pulses, as usually the storage efficiency decays over time in accord with the acoustic lifetime. Here, reading out at a later time may result in more peak signal recovered when coinciding with a maximum of the enveloping line. Additionally, the readout signal's shape is modified from the conventional Gaussian pulse, with a modulating two-peak structure visible during the transition between the maxima.
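A minimal Python sketch of the structure of such a split-step solver for the first-order system is given below. It is not the authors' implementation: the linear advection step is done here with an exact spectral phase shift instead of Crank-Nicolson differencing, and the optical coupling strengths gp, gs as well as all parameter values are schematic placeholders; only the acoustic lines follow the auxiliary equation quoted above.

import numpy as np

Nz, L = 2048, 0.2                      # grid points, domain length (m); assumed values
z = np.linspace(0.0, L, Nz, endpoint=False)
dz = z[1] - z[0]
k = 2 * np.pi * np.fft.fftfreq(Nz, d=dz)

vp, vs, va = 2.0e8, 2.0e8, 6.0e3       # group velocities (m/s); signal counter-propagates
Omega = 2 * np.pi * 10e9               # acoustic angular frequency (rad/s)
alpha = 1.0 / (va * 10e-9)             # linear acoustic loss giving a 10 ns lifetime
gp, gs, Qa_Po = 1e3, 1e3, 1e-3         # schematic coupling strengths (placeholders)
dt = 0.25 * dz / max(vp, vs)           # CFL-limited time step
tau = 50e-12                           # optical pulse width

ap = np.exp(-(((z - 0.25 * L) / (vp * tau)) ** 2)).astype(complex)        # write pulse
as_ = 0.1 * np.exp(-(((z - 0.75 * L) / (vs * tau)) ** 2)).astype(complex)  # signal pulse
b = np.zeros(Nz, dtype=complex)
B = np.zeros(Nz, dtype=complex)        # auxiliary field B = db/dt

def advect(f, v):
    # Exact constant-velocity advection over one dt via a spectral phase shift.
    return np.fft.ifft(np.exp(-1j * k * v * dt) * np.fft.fft(f))

def rhs(ap, as_, b, B):
    # Local (non-advective) part. The acoustic lines follow the quoted first-order
    # form of Eq. (5); the optical couplings are schematic stand-ins.
    dzb = np.fft.ifft(1j * k * np.fft.fft(b))
    dap = 1j * gp * b * as_
    das = 1j * gs * np.conj(b) * ap
    dB = (2 * (1j * Omega + alpha * va) * B
          + 2j * Omega * va * (dzb - alpha * b)
          - 2 * Omega**2 * va * Qa_Po * ap * np.conj(as_))
    return dap, das, B, dB

def rk4(fields):
    # One 4th-order Runge-Kutta step for the nonlinear/local part.
    k1 = rhs(*fields)
    k2 = rhs(*[y + 0.5 * dt * dy for y, dy in zip(fields, k1)])
    k3 = rhs(*[y + 0.5 * dt * dy for y, dy in zip(fields, k2)])
    k4 = rhs(*[y + dt * dy for y, dy in zip(fields, k3)])
    return [y + dt / 6.0 * (d1 + 2 * d2 + 2 * d3 + d4)
            for y, d1, d2, d3, d4 in zip(fields, k1, k2, k3, k4)]

for step in range(2000):
    ap, as_ = advect(ap, vp), advect(as_, -vs)   # linear half of the split step
    ap, as_, b, B = rk4([ap, as_, b, B])         # nonlinear half of the split step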
To develop an intuition for the effect of short optical pulses inducing the generation of an acoustic wave, we can simplify the modified acoustic Eq. (5) by neglecting the spatial propagation terms (on the basis of the very low acoustic group velocity relative to the optical fields) and solving it formally as in Eq. (6), where Q̃(t) = −2Ω² v_a Q_a a_p(t) a_s*(t)/P_o describes the optical driving term. Considering the low velocity of the acoustic wave, it is reasonable to regard this as the physical system of a local (z = z_0) acoustic buildup from two counter-propagating optical beams with envelopes a_{p/s}(t, z = z_0). Note that counter-propagating pulses of width τ will result in an optical driving term Q̃ with half the width, τ/2. For high-mechanical-quality acoustic waves, α v_a ≪ Ω, we find that the kernel of the second integral oscillates quickly with frequency 2Ω. Therefore, unless the optical force Q̃ varies at timescales comparable to (2Ω)^{-1}, this integral averages to 0. This observation formalises the initial discussion of the applicability of the RWA for the acoustic dynamics. We note that for low quality acoustic waves, α v_a > Ω, the acoustic wave will quickly decay in overdamped oscillations even for long pulse interactions τ > 2π/Ω. These systems are however unlikely to be experimentally viable.
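A minimal sketch of the formal solution referred to as Eq. (6), assuming for brevity the lossless limit α v_a → 0: the local form of Eq. (5) then reduces to ∂_t² b − 2iΩ ∂_t b = Q̃(t), and with b(0) = ∂_t b(0) = 0 integration by the Green's function yields

b(t) = (i/2Ω) ∫_0^t Q̃(t') dt' − (i/2Ω) ∫_0^t e^{2iΩ(t−t')} Q̃(t') dt',

where the first term reproduces the conventional slow buildup and the second term carries the kernel oscillating at 2Ω discussed in the text.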
We observe the behaviour of the high quality case in Fig. 3(a-d), which shows the buildup of the acoustic wave (blue lines) due to the optical driving force Q̃(t) (red shape) with decreasing pulse widths. The amplitudes of the Gaussian envelopes are scaled up to keep the total pulse energy constant. As we reduce the width towards the acoustic period 2π/Ω (~100 ps for Ω/2π = 10 GHz), we find oscillations in b(t). We can compare this behaviour to that described by the first order formulation ((3) with the first spatial derivative removed), yielding b^(1)(t), shown with grey lines, which diverges from (6) for short pulse lengths. More quantitatively, we can define the interferometric visibility of the oscillations by the peak difference [max_{t>τ} |b(t)| − min_{t>τ} |b(t)|] / [max_{t>τ} |b(t)| + min_{t>τ} |b(t)|], calculated within the first few oscillations. The visibility is shown in Fig. 3(e), with points marking the cases of Fig. 3(a-d) and the short pulse regime (τ/2 < 50 ps) shaded. Finally, in Fig. 3(d) we consider the response of the acoustic field to a Gaussian optical driving force of 25 ps width (50 ps optical pulses), and find a significant deviation between the predictions of the second (b) and first (b^(1)) order formulations. As this uses the same parameters as Fig. 2(b), we can ascribe the oscillations in the optical readout of short pulsed SBS storage to the oscillations of the intermediate acoustic wave.
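A direct transcription of this visibility measure; the windowing over "the first few oscillations" is our assumption, here taken as three acoustic periods:

import numpy as np

def visibility(t, b, tau, Omega=2 * np.pi * 10e9, n_periods=3):
    # Peak-difference visibility of |b(t)| evaluated after the drive (t > tau),
    # within a window of a few acoustic oscillations.
    window = (t > tau) & (t < tau + n_periods * 2 * np.pi / Omega)
    m = np.abs(np.asarray(b)[window])
    return (m.max() - m.min()) / (m.max() + m.min())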
In conclusion, we show that the applicability of current descriptions of SBS in waveguides is limited to pulse widths larger than the acoustic period (τ > 2π/Ω ≈ 100 ps), as they rely on approximating the interacting fields as slowly varying envelopes, and on assuming that the direction of the generated acoustic waves is precisely defined. We extend the theoretical description by reevaluating the spatial and temporal RWA and SVEA, and obtain a more suitable equation for the dynamics of the acoustic field. Our approach remains valid until spatial oscillations and the optical periods become relevant, at around 1 fs. The addition of second order terms gives rise to damped oscillations of the acoustic field when excited by short optical pulses. Our results offer a reference for any Brillouin system operating in the short pulse regime, with the relevant time scales depending on its particular acoustic frequency. | 2021-03-11T06:29:55.749Z | 2021-03-09T00:00:00.000 | {
"year": 2021,
"sha1": "196a979c1924bf641204dc85a3b1ade807febe8c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2103.05732",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c8866bad4c0ee934c18d665f43bcf5e042821d98",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
252369332 | pes2o/s2orc | v3-fos-license | Fair Must Testing for I/O Automata
The concept of must testing is naturally parametrised with a chosen completeness criterion or fairness assumption. When taking weak fairness as used in I/O automata, I show that it characterises exactly the fair preorder on I/O automata as defined by Lynch & Tuttle.
Introduction
May- and must-testing was proposed by De Nicola & Hennessy in [2]. It yields semantic equivalences where two processes or automata are distinguished if and only if they react differently to certain tests. The tests are processes that additionally feature success states. Such a test T is applied to a process A by taking the CCS parallel composition T|A, and implicitly applying a CCS restriction operator to it that removes the remnants of uncompleted communication attempts. The outcome of applying T to A is deemed successful if and only if this composition yields a process that may, respectively must, reach a success state. It is trivial to recast this definition of may- and must-testing equivalence using the CSP parallel composition ‖ [8] instead of the one from CCS.
I/O automata [9] are a model of concurrency that distinguishes output actions, which are under the control of a given automaton, from input actions, which are stimuli from the environment to which an automaton might react. The parallel composition ‖ of I/O automata, exactly like the one of CSP, imposes synchronisation on actions the composed automata have in common. However, it allows forming the composition A‖B only when A and B have no output actions in common. This makes it impossible to synchronise on actions c where both A and B have the option not to allow c in certain states.
Must testing equivalence for CCS and CSP partially discerns branching time, in the sense that it distinguishes the processes τ.(a+b) and τ.a+τ.b displayed in Figure 1. This is not the case for I/O automata, as the synchronisations between test and tested automaton that are necessary to make such distinctions are ruled out by the restriction described above.
It is not a priori clear how a given process or automaton must reach a success state. For all we know it might stay in its initial state and never take any transition leading to this success state. To this end one must employ an assumption saying that under appropriate circumstances certain enabled transitions will indeed be taken. Such an assumption is called a completeness criterion [5]. The theory of testing from [2] implicitly employs a default completeness criterion that in [7] is called progress. However, one can parameterise the notion of must testing by the choice of any completeness criterion, such as the many notions of fairness classified in [7].
Lynch & Tuttle [9] defined a trace and a fair preorder on I/O automata, which were meant to reason about safety and liveness properties, respectively, just like the may- and must-testing preorders of [2]. Unsurprisingly, as formally shown in Section 5 of this paper, the trace preorder on I/O automata is characterised exactly by may testing. Segala [12] has studied must-testing on I/O automata, employing the default completeness criterion, and found that on a large class of I/O automata it characterises the quiescent trace preorder of Vaandrager [13]. It does not exactly characterise the fair preorder, however.
In my analysis this is due to the choice of progress as the completeness criterion employed for must testing, whereas the fair preorder of I/O automata is based on a form of weak fairness. In this work I study must testing on I/O automata based on the same form of weak fairness, and find that it characterises the fair preorder exactly.
Although I refer to must-testing with fairness as the chosen completeness criterion as fair must testing, it should not be confused with the notion of fair testing employed in [1,10]. The latter is also known as should testing. It incorporates a concept of fairness that is much stronger than the notion of fairness from I/O automata, called full fairness in [7].
In [6] another mode of testing was proposed, called reward testing. Reward-testing equivalence combines the distinguishing power of may as well as must testing, and additionally makes some useful distinctions between processes that are missed by both may and must testing [6]. As for must testing, its definition is naturally parametrised by a completeness criterion. When applied to I/O automata, using as completeness criterion the form of fairness that is native to I/O automata, it turns out that reward testing is not stronger than must testing, and also characterises the fair preorder.
I/O automata
An I/O automaton is a labelled transition system equipped with a nonempty set of start states, with each action that may appear as transition label classified as an input, an output or an internal action. Input actions are under the control of the environment of the automaton, whereas output and internal actions, together called locally-controlled actions, are under the control of the automaton itself. I/O automata are input enabled, meaning that in each state each input action of the automaton can be performed. This indicates that the environment may perform such actions regardless of the state of the automaton; an input transition merely indicates how the automaton reacts on such an event. To model that certain input actions have no effect in certain states, one uses self-loops. I/O automata employ a partition of the locally-controlled actions into tasks to indicate which sequences of transitions denote fair runs. A run is fair unless it has a suffix on which some task is enabled in every state, yet never taken.
Definition 1 An input/output automaton (or I/O automaton) A is a tuple (acts(A), states(A), start(A), steps(A), part(A)) with
- acts(A) a set of actions, partitioned into three sets in(A), out(A) and int(A) of input actions, output actions and internal actions, respectively,
- states(A) a set of states,
- start(A) ⊆ states(A) a nonempty set of start states,
- steps(A) ⊆ states(A) × acts(A) × states(A) a transition relation with the property that ∀s ∈ states(A). ∀a ∈ in(A). ∃(s, a, s') ∈ steps(A), and
- part(A) ⊆ P(local(A)) a partition of the set local(A) := out(A) ∪ int(A) of locally-controlled actions of A into tasks.
Let ext(A) := in(A) ∪ out(A) be the set of external actions of A.
An action a ∈ acts(A) is enabled in a state s ∈ states(A) if ∃(s, a, s') ∈ steps(A). A task T ∈ part(A) is enabled in s if some action a ∈ T is enabled in s.
Definition 2 An execution of an I/O automaton A is an alternating sequence α = s_0, a_1, s_1, a_2, . . . of states and actions, either being infinite or ending with a state, such that s_0 ∈ start(A) and (s_i, a_{i+1}, s_{i+1}) ∈ steps(A) for all i < length(α).
Here length(α) ∈ ℕ ∪ {∞} denotes the number of action occurrences in α. The sequence a_1, a_2, . . . obtained by dropping all states from α is called sched(α). An execution α of A is fair if, for each suffix α' = s_k, a_{k+1}, s_{k+1}, a_{k+2}, . . . of α (with k ∈ ℕ ∧ k ≤ length(α)) and each task T ∈ part(A), if T is enabled in each state of α', then α' contains an action from T.
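Definitions 1 and 2 admit a direct executable rendering for finite automata. In the sketch below (all names are ours, not the paper's), an infinite execution is represented as a "lasso", a finite prefix followed by a cycle repeated forever, for which the weak fairness condition becomes decidable:

from dataclasses import dataclass

@dataclass(frozen=True)
class IOAutomaton:
    inputs: frozenset
    outputs: frozenset
    internals: frozenset
    states: frozenset
    start: frozenset
    steps: frozenset     # set of (s, a, s1) triples
    tasks: tuple         # partition of outputs | internals into frozensets

    def enabled(self, s):
        # Actions enabled in state s (Definition 1).
        return {a for (s0, a, s1) in self.steps if s0 == s}

    def task_enabled(self, T, s):
        return bool(T & self.enabled(s))

def fair_lasso(A, prefix_states, cycle):
    # Weak fairness (Definition 2) for the execution prefix.(cycle)^omega:
    # 'cycle' is a list of (action, state) pairs, and an empty cycle encodes a
    # finite execution ending in the last prefix state. A task that is enabled
    # in every state of the repeating part must occur among its actions.
    states_rep = [s for (_, s) in cycle] or prefix_states[-1:]
    actions_rep = {a for (a, _) in cycle}
    for T in A.tasks:
        if all(A.task_enabled(T, s) for s in states_rep) and not (T & actions_rep):
            return False
    return True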
In [9] two semantic preorders are defined on I/O automata, here called ⊑_T and ⊑_F, the trace and the fair preorder. In [9], S ⊑_T I and S ⊑_F I are denoted "I implements S" and "I solves S", respectively. Here S is an I/O automaton that is (a step closer to) the specification of a problem, and I one that is (a step closer to) its implementation. The preorder ⊑_T is meant to reason about safety properties: if S ⊑_T I then I has any safety property that S has. In the same way, ⊑_F is for reasoning about liveness properties. In [12] and much subsequent work S ⊑_F I is written as I ⊑_F S. Here I put I on the right, so as to orient the refinement symbol ⊑ in the way used in CSP [8], and in the theory of testing [2]. I/O automata are a typed model of concurrency, in the sense that two automata will be compared only when they have the same input and output actions.
Definition 3 Let trace(α) be the finite or infinite sequence of external actions resulting from dropping all internal actions in sched(α), and let fintraces(A) be the set {trace(α) | α is a finite execution of A}. Likewise fairtraces(A) := {trace(α) | α is a fair execution of A}. Now A ⊑_T B iff in(A) = in(B) ∧ out(A) = out(B) and fintraces(B) ⊆ fintraces(A), and A ⊑_F B iff in(A) = in(B) ∧ out(A) = out(B) and fairtraces(B) ⊆ fairtraces(A). One writes A =_T B if A ⊑_T B ∧ B ⊑_T A, and similarly for =_F. By [7, Thm. 6.1] each finite execution can be extended into a fair execution. As a consequence, ⊑_F is included in ⊑_T. The parallel composition of I/O automata [9] is similar to the one of CSP [8]: participating automata A_i and A_j synchronise on actions in acts(A_i) ∩ acts(A_j), while for the rest allowing arbitrary interleaving. However, it is defined only when the participating automata have no output actions in common.
Definition 4 A countable collection {A_i}_{i∈I} of I/O automata is strongly compatible if int(A_i) ∩ acts(A_j) = ∅ and out(A_i) ∩ out(A_j) = ∅ for all i ≠ j ∈ I, and no action is contained in infinitely many sets acts(A_i).
The composition A = ∏_{i∈I} A_i of a countable collection {A_i}_{i∈I} of strongly compatible I/O automata is defined by
- in(A) := ⋃_{i∈I} in(A_i) \ ⋃_{i∈I} out(A_i), out(A) := ⋃_{i∈I} out(A_i) and int(A) := ⋃_{i∈I} int(A_i),
- states(A) := ∏_{i∈I} states(A_i) and start(A) := ∏_{i∈I} start(A_i),
- steps(A) is the set of triples (s_1, a, s_2) such that, for all i ∈ I, if a ∈ acts(A_i) then (s_1[i], a, s_2[i]) ∈ steps(A_i), and if a ∉ acts(A_i) then s_1[i] = s_2[i], and
- part(A) := ⋃_{i∈I} part(A_i).
Clearly, composition of I/O automata is associative: when writing A_1‖A_2 for ∏_{i∈{1,2}} A_i then (A‖B)‖C ≅ A‖(B‖C), for some notion of isomorphism ≅ included in =_T and =_F. Moreover, as shown in [9], composition is monotone for ⊑_T and ⊑_F, or in other words, ⊑_T and ⊑_F are precongruences for composition: if A ⊑_T B then A‖C ⊑_T B‖C, and likewise for ⊑_F. The first condition of strong compatibility is not a limitation of generality. Each I/O automaton is =_T- and =_F-equivalent to the result of bijectively renaming its internal actions. Hence, prior to composing a collection of automata, one could rename their internal actions to ensure that this condition is met. Up to =_T and =_F the composition would be independent of the choice of these renamings.
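Continuing the executable sketch above, the binary case of this composition can be written as follows (again a sketch, with names of our choosing):

from itertools import product

def compose(A, B):
    # Binary composition A || B of strongly compatible finite I/O automata
    # (Definition 4, binary case; uses the IOAutomaton class sketched above).
    acts_A = A.inputs | A.outputs | A.internals
    acts_B = B.inputs | B.outputs | B.internals
    assert not (A.outputs & B.outputs), "no shared output actions"
    assert not (A.internals & acts_B) and not (B.internals & acts_A)
    steps = set()
    for (s, t) in product(A.states, B.states):
        for a in acts_A | acts_B:
            # A component with the action in its signature must take a step;
            # a component without it keeps its state.
            succ_A = [s1 for (s0, x, s1) in A.steps if s0 == s and x == a] if a in acts_A else [s]
            succ_B = [t1 for (t0, x, t1) in B.steps if t0 == t and x == a] if a in acts_B else [t]
            steps |= {((s, t), a, (s1, t1)) for s1 in succ_A for t1 in succ_B}
    return IOAutomaton(
        inputs=(A.inputs | B.inputs) - (A.outputs | B.outputs),
        outputs=A.outputs | B.outputs,
        internals=A.internals | B.internals,
        states=frozenset(product(A.states, B.states)),
        start=frozenset(product(A.start, B.start)),
        steps=frozenset(steps),
        tasks=A.tasks + B.tasks,
    )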
Testing preorders
Testing preorders [2] are defined between automata A, defined as in Def. 1, but without the partition part(A) and without the distinction between input and output actions, and therefore also without the input enabling requirement from Item 4. The parallel composition of automata is as in Def. 4, but without the requirement that the participating automata have no output actions in common.
Definition 5 An automaton A is a tuple (acts(A), states(A), start(A), steps(A)) with
- acts(A) a set of actions, partitioned into two sets ext(A) and int(A) of external actions and internal actions, respectively,
- states(A) a set of states,
- start(A) ⊆ states(A) a nonempty set of start states, and
- steps(A) ⊆ states(A) × acts(A) × states(A) a transition relation.
A countable collection {A_i}_{i∈I} of automata is compatible if int(A_i) ∩ acts(A_j) = ∅ for all i ≠ j ∈ I, and no action is contained in infinitely many sets acts(A_i). The composition A = ∏_{i∈I} A_i of a countable collection of compatible automata is defined by states(A) := ∏_{i∈I} states(A_i), start(A) := ∏_{i∈I} start(A_i), and steps(A) the set of triples (s_1, a, s_2) such that, for all i ∈ I, if a ∈ acts(A_i) then (s_1[i], a, s_2[i]) ∈ steps(A_i), and if a ∉ acts(A_i) then s_1[i] = s_2[i].
A test is such an automaton, but featuring a special external action w, not used elsewhere. This action is used to mark success states: those in which w is enabled. The parallel composition T‖A of a test T and an automaton A, if it exists, is itself a test, and [T‖A] denotes the result of reclassifying all its non-w actions as internal. An execution of [T‖A] is successful iff it contains a success state.
Definition 6 An automaton A may pass a test T, notation A may T, if [T‖A] has a successful execution. It must pass T, notation A must T, if each complete execution¹ of [T‖A] is successful. It should pass T, notation A should T, if each finite execution of [T‖A] can be extended into a successful execution.
Write A ⊑_may B if ext(A) = ext(B) and A may T implies B may T for each test T that is compatible with A and B. The preorders ⊑_must and ⊑_should are defined similarly.
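For finite automata, A may T reduces to plain reachability of a success state in [T‖A]; a minimal sketch, building on the classes above (names ours, and the reclassification of actions in [T‖A] is immaterial for reachability):

from collections import deque

def may_pass(TA):
    # A may T: some state of [T||A] enabling the success action 'w' is
    # reachable from a start state (Definition 6, finite case).
    success = {s for s in TA.states if 'w' in TA.enabled(s)}
    seen = set(TA.start)
    frontier = deque(TA.start)
    while frontier:
        s = frontier.popleft()
        if s in success:
            return True
        for (s0, a, s1) in TA.steps:
            if s0 == s and s1 not in seen:
                seen.add(s1)
                frontier.append(s1)
    return False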
The may- and must-testing preorders stem from [2], whereas should-testing was added independently in [1] and [10]. I have added the condition ext(A) = ext(B) to obtain preorders that respect the types of automata. A fourth mode of testing, called reward testing, was contributed in [6]. It has no notion of success state, and no action w; instead, each transition of a test T is tagged with a real number, the reward of taking that transition. A negative reward can be seen as a penalty. Each transition (s, a, s') of [T‖A] with a ∈ acts(T) inherits its reward from the unique transition of T it projects to; in case a ∉ acts(T) it has reward 0. The reward reward(α) of an execution α is the sum of the rewards of the actions in α.² Now A ⊑_reward B if ext(A) = ext(B) and for each test T that is compatible with A and B and for each complete execution β of [T‖B] there exists a complete execution α of [T‖A] such that reward(α) ≤ reward(β).
In the original work on testing [2,6] the CCS parallel composition T|A was used instead of the CSP parallel composition T‖A; moreover, only those executions consisting solely of internal actions mattered for the definitions of passing a test. The present approach is equivalent, in the sense that it trivially gives rise to the same testing preorders.
The may-testing preorder can be regarded as pointing in the opposite direction to the others. Using CCS notation, one has τ.P ⊏_may τ.P + τ.Q, yet τ.P + τ.Q ⊏_must τ.P, τ.P + τ.Q ⊏_should τ.P and τ.P + τ.Q ⊏_reward τ.P. The inverse of the may-testing preorder can be characterised as survival testing. Here a state in which w is enabled is seen as a failure state rather than a success state, and automaton A survives test T, notation A surv T, if no execution of [T‖A] passes through a failure state.
The only implications between reward, must and may/survival testing are that ⊑_reward is included in both ⊑_must and ⊑_surv. Namely, any must test T witnessing A ⋢_must B can be coded as a reward test by assigning a reward +1 to all transitions of T leading to a success state (and 0 to all other transitions). Likewise any survival test T witnessing A ⋢_surv B can be coded as a reward test by assigning a reward −1 to all transitions of T leading to a failure state. The notions of may- and should-testing are unambiguously defined above, whereas the notions of must- and reward-testing depend on the definition of a complete execution. In [5] I posed that transition systems or automata constitute a good model of distributed systems only in combination with a completeness criterion: a selection of a subset of all executions as complete executions, modelling complete runs of the represented system.
The default completeness criterion, employed in [2,6] for the definition of must- and reward-testing, deems an execution complete if it either is infinite, or ends in deadlock, a state without outgoing transitions. Other completeness criteria either classify certain finite executions that do not end in deadlock as complete, or certain infinite executions as incomplete.
The first possibility was explored in [5,7] by considering a set B of actions that might be blocked by the environment in which an automaton is running. Now a finite execution can be deemed complete if all transitions enabled in its last state have labels from B. The system might stop at such a state if indeed the environment blocks all those actions. Since in the application to must- and reward-testing, all non-w transitions in [T‖A] are labelled with internal actions, which cannot be blocked by the environment, the above possibility of increasing the set of finite complete executions does not apply.
The second possibility was extensively explored in [7], where a multitude of completeness criteria was defined. Most of those can be used as a parameter in the definition of must- and reward-testing. So far, the resulting testing preorders have not been explored.³
Testing preorders for I/O automata
Since I/O automata can be seen as special cases of the automata from Section 3, the definitions of Section 3 also apply to I/O automata. The condition ext(A) = ext(B) should then be read as in(A) = in(B) ∧ out(A) = out(B). The only place where it makes an essential difference whether one works with I/O automata or general automata is in judging compatibility between automata and tests. Given two I/O automata A and B, let A ⊑^LTS_must B be defined by first seeing A and B as general automata (by dropping the partitions part(A) and part(B)), and then applying the definitions of Section 3, using the default completeness criterion. In contrast, let A ⊑^Pr_must B be defined as in Section 3, but only allowing tests that are themselves I/O automata (seeing the special action w as an output action), and that are strongly compatible with A and B. The superscript Pr stands for "progress", the name given in [7] to the default completeness criterion. The difference between ⊑^LTS_must and ⊑^Pr_must is illustrated in Figure 1. Here A and B are automata with acts(A) = acts(B) = {τ, a, b}, and T is a test with acts(T) = {a, b, w}. The short arrows point to start states. Test T witnesses that A ⋢^LTS_must B, for A must T, yet ¬(B must T). Here it is crucial that a ∈ acts(T), even though this action labels no transition of T, for otherwise the a-transition of A would return in [T‖A] and one would not obtain A must T. To see A, B and T as I/O automata, one needs to take in(A) = in(B) = in(T) = ∅, and thus a, b ∈ out(A) ∩ out(B) ∩ out(T). However, this violates the strong compatibility of T with A and B, so that T is disqualified as an appropriate test. There is no variant of T that is strongly compatible with A and B and yields the same result; in fact A =^Pr_must B.
May testing
For may-testing on I/O automata there is no difference between ⊑^LTS_may (allowing any test that is compatible with A and B) and ⊑_may (allowing only tests that are strongly compatible with A and B). These preorders both coincide with the trace preorder ⊒_T.
Proof. Suppose B ⊑_T A, i.e., in(A) = in(B) ∧ out(A) = out(B) and fintraces(A) ⊆ fintraces(B), and let T be any test compatible with A and B. The automaton T need not be an I/O automaton, and even if it is, it need not be strongly compatible with A and B. It is well-known that ⊑_T is a precongruence for composition [8], so fintraces(T‖A) ⊆ fintraces(T‖B). Since C may T (for any C) iff w occurs in a trace σ ∈ fintraces(T‖C), it follows that A may T implies B may T. Thus A ⊑^LTS_may B. That A ⊑^LTS_may B implies A ⊑_may B is trivial. Now suppose A ⊑_may B. Then in(A) = in(B) ∧ out(A) = out(B). Let σ = a_1 a_2 . . . a_n ∈ fintraces(A). Let T be the test automaton depicted in the original paper (diagram omitted here): a chain of states 1, . . . , n that accepts the actions a_1, . . . , a_n in sequence and then reaches a state E in which w is enabled, with out(T) := in(A) ⊎ {w}, in(T) := out(A) and int(T) := ∅. To make sure that T is an I/O automaton, the dashed arrows are labelled with all input actions of T, except for a_i (if a_i ∈ in(T)) for the dashed arrow departing from state i. By construction, T is strongly compatible with A and B. Now C may T (for any C) iff σ ∈ fintraces(C). Hence A may T, and thus B may T, and therefore σ ∈ fintraces(B).
□

Must testing based on progress
Definition 7 An I/O automaton T is complementary to I/O automaton A if out(T) = in(A) ⊎ {w}, in(T) = out(A) and int(T) ∩ int(A) = ∅.
In this case T and A are also strongly compatible, so that T‖A is defined, and in(T‖A) = ∅. I now show that for the definition of ⊑^Pr_must it makes no difference whether one restricts the tests T that may be used to compare two I/O automata A and B to ones that are complementary to A and B.
For use in the following proof, define the relation ≡ between I/O automata by C ≡ D iff states(C) = states(D) ∧ start(C) = start(D) ∧ steps(C) = steps(D). Note that T‖A ≡ T'‖A implies that A must T iff A must T'.
Proposition 1 A ⊑^Pr_must B iff in(A) = in(B) ∧ out(A) = out(B) and A must T implies B must T for each test T that is complementary to A and B.
Proof. Suppose A ⊑^Pr_must B. Then in(A) = in(B) ∧ out(A) = out(B) and A must T implies B must T for each test T that is strongly compatible with A and B, and thus certainly for each test T that is complementary to A and B. Now suppose in(A) = in(B) ∧ out(A) = out(B) but A ⋢^Pr_must B. Then there is a test T, strongly compatible with A and B, such that A must T, yet ¬(B must T). It suffices to find a test T'' with the same properties that is moreover complementary to A and B.
First modify T into T' by adding ext(A)\ext(T) to in(T'), while adding a loop (s, a, s) to steps(T') for each state s ∈ states(T') and each a ∈ ext(A)\ext(T). Now T‖A ≡ T'‖A and T‖B ≡ T'‖B, and thus A must T', yet ¬(B must T'). Moreover, ext(A) = ext(B) ⊆ ext(T').
Modify T' further into T'' by reclassifying any action a ∈ in(T') ∩ in(A) as an output action of T'' and any a ∈ ext(T')\(ext(A) ⊎ {w}) as an internal action of T''. How part(T'') is defined is immaterial. Then T'‖A ≡ T''‖A and T'‖B ≡ T''‖B, and thus A must T'', yet ¬(B must T''). Now out(T'') = in(A) ⊎ {w}, in(T'') = out(A), int(T'') ∩ int(A) = ∅ and int(T'') ∩ int(B) = ∅.
□ Using the characterisation of Prop. 1 as definition, the preorder ⊑^Pr_must on I/O automata has been studied by Segala [12, Section 7]. There it was related to the quiescent trace preorder ⊑_Q defined by Vaandrager [13]. Similarly to the preorders of Section 2, I write S ⊑_Q I for what was denoted I ⊑_Q S in [12], and I ⊑_qT S in [13]. Note that an execution is quiescent iff it is fair and finite. By [12, Thm. 5.7], if A is strongly convergent then A ⊑_F B implies A ⊑_Q B. (For let A ⊑_F B. If σ ∈ qtraces(B), then σ ∈ fairtraces(B) ⊆ fairtraces(A), so A has a fair execution α with trace(α) = σ. As A is strongly convergent, α is finite. Hence σ ∈ qtraces(A).) This does not hold when dropping the side condition of strong convergence. Take for A the automaton with a single state and no transitions, and for B the one with a single state and a τ-loop, with acts(A) = ∅ and acts(B) = int(B) = {τ}.
Must testing based on fairness
As explained in Section 3, the notion of must testing is naturally parametrised by the choice of a completeness criterion. As I/O automata are already equipped with a completeness criterion, namely the notion of fairness from Def. 2, the most appropriate form of must testing for I/O automata takes this concept of fairness as its parameter, rather than the default completeness criterion used in Section 6.
A problem in properly defining a must-testing preorder ⊑^F_must involves the definition of the operator [ ] employed in Def. 6. In the context of standard automata, this operator reclassifies all external actions, except for the success action w, as internal. When applied to an I/O automaton A, it is not a priori clear how to define part([A]), for this is a partition of the set of locally-controlled actions into tasks, and when changing an input action into a locally-controlled action, one lacks guidance on which task to allocate it to. This was not a problem in Section 6, as there the must-testing preorder ⊑^Pr_must depends in no way on part.
Below I inventorise various solutions to this problem, which gives rise to three possible definitions of ⊑^F_must. Then I show in Section 9 that all three resulting preorders coincide, so that it does not matter on which of the definitions one settles. Moreover, these preorders all turn out to coincide with the fair preorder ⊑_F that comes with I/O automata.
My first (and default) solution is to simply drop the operator [ ] from Def. 6:

Definition 8 Say A must_F T if each fair execution of T‖A is successful.
Definition 9 Write A ⊑^F_must B if in(A) = in(B) ∧ out(A) = out(B) and A must_F T implies B must_F T for each test T that is strongly compatible with A and B.

This is a plausible approach, as none of the testing preorders discussed in Sections 3-6 would change at all were the operator [ ] dropped from Def. 6. This is the case because the set of executions, successful executions and complete executions of an automaton A is independent of the status (input, output or internal) of the actions of A.
The above begs the question why I bothered to employ the operator [ ] in Def. 6 in the first place. The main reason is that the theory of testing [2] was developed in the context of CCS, where each synchronisation of an action from a test with one from a tested process yields an internal action τ. Def. 6 recreates this theory using the operator ‖ from CSP [8] and I/O automata [9], but as here synchronised actions are not internal, they have to be made internal to obtain the same effect. A second reason concerns the argument used towards the end of Section 3 for not parametrising notions of testing with a set B of actions that can be blocked; this argument hinges on all relevant actions being internal.
My second solution is to restrict the set of allowed tests T for comparing I/O automata A and B to those for which in(T‖A) = in(T‖B) = ∅. This is the case iff in(T) ⊆ out(A) and in(A) ⊆ out(T). In that case [T‖A] and [T‖B] are trivial to define, as the set of locally-controlled actions stays the same. Moreover, it makes no difference whether this operator is included in the definition of must or not, as the set of fair executions of a process is not affected by a reclassification of output actions as internal actions.
Definition 10 Write A ⊑^F'_must B if in(A) = in(B) ∧ out(A) = out(B) and moreover A must_F T implies B must_F T for each test T that is strongly compatible with A and B, and for which in(T‖A) = in(T‖B) = ∅.
A small variation of this idea restricts the set of allowed tests even further, namely to the ones that are complementary to A and B, as defined in Def. 7. This yields a fair version of the must-testing preorder employed in [12].
Definition 11 Write A ⊑^F''_must B if in(A) = in(B) ∧ out(A) = out(B) and A must_F T implies B must_F T for each T that is complementary to A and B.
As a last solution I consider tests T that are not restricted as in Defs. 10 or 11, while looking for elegant ways to define [T‖A] and [T‖B]. First of all, note that no generality is lost when restricting to tests T such that ext(A) (= ext(B)) ⊆ ext(T), regardless of how the operator [ ] is defined. Namely, employing the first conversion from the proof of Prop. 1, any test T that is strongly compatible with I/O automata A and B can be converted into a test T' satisfying this requirement, and such that T‖A ≡ T'‖A and T‖B ≡ T'‖B.
An application of [ ] to T‖A consists of reclassifying external actions of T‖A as internal actions. However, since for the definition of the testing preorders it makes no difference whether an action in T‖A is an internal or an output action, one can just as well use an operator [ ]' that merely reclassifies input actions of T‖A as output actions. Note that in(T‖A) ⊆ in(T), using that ext(A) ⊆ ext(T). Let T* be a result of adapting the test T by reclassifying the actions in in(T‖A) from input actions of T into output actions of T; the test T* is not uniquely defined, as there are various ways to fill in part(T*).
Observation 1 Apart from the problematic definition of part([T‖A]'), the I/O automaton [T‖A]' is the very same as T*‖A.
In other words, the reclassification of input into output actions can just as well be done on the test, instead of on the composition of test and tested automaton. The advantage of this approach is that the problematic definition of part([T‖A]') is moved to the test as well. Now one can use T*‖A instead of [T‖A]' in the definition of must testing for any desired definition of part(T*). This amounts to choosing any test T* with in(T*‖A) = ∅. It makes this solution equivalent to the one of Def. 10.
Action-based must testing
The theory of testing from [2] employs the success action w merely to mark success states; an execution is successful iff it contains a state in which w is enabled. In [3] this is dubbed state-based testing. Segala [11] (in a setting with probabilistic automata) uses another mode of testing, called action-based in [3], in which an execution is defined to be successful iff it contains the action w.
Although the state-based and action-based may-testing preorders obviously coincide, the state-based and action-based must-testing preorders do not, at least when employing the default completeness criterion. An example showing the difference is given in [3]. It involves two automata A and B, which can in fact be seen as I/O automata, such that A ⋢^Pr_must B, yet A =^Pr_must,ab B. Here =^Pr_must,ab is the action-based version of =^Pr_must. So far I have considered only state-based testing preorders on I/O automata. Let ⊑^F_must,ab be the action-based version of ⊑^F_must. It is defined as in Def. 9, but using must_F,ab instead of must_F. Here A must_F,ab T holds iff each fair trace of T‖A contains the action w. Below I will show that when taking the notion of fairness from [9] as completeness criterion, state-based and action-based must testing yields the same result, i.e., ⊑^F_must,ab equals ⊑^F_must. In fact, I need this result in my proof that ⊑^F_must coincides with ⊑_F.

Fair must testing agrees with the fair traces preorder

The following theorem states that the must-testing preorder on I/O automata based on the completeness criterion of fairness that is native to I/O automata, in each of the four forms discussed in Sections 7 and 8, coincides with the standard preorder of I/O automata based on reverse inclusion of fair traces.

Theorem 3 The preorders ⊑^F_must, ⊑^F_must,ab, ⊑^F'_must and ⊑^F''_must all coincide with ⊑_F.
Proof. Suppose A ⊑_F B, i.e., in(A) = in(B) ∧ out(A) = out(B) and fairtraces(B) ⊆ fairtraces(A), and let T be any test that is strongly compatible with A and B.
Since ⊑_F is a precongruence for composition (cf. Section 2), fairtraces(T‖B) ⊆ fairtraces(T‖A). Since for action-based must testing C must_F,ab T (for any C) iff w occurs in each fair trace σ ∈ fairtraces(T‖C), it follows that A must_F,ab T implies B must_F,ab T, and hence that A ⊑^F_must,ab B.

In order to show that A ⊑^F_must B, suppose that A must_F T, where T is a test that is strongly compatible with A and B. Let the test T* be obtained from T by (i) dropping all transitions (s, a, s') ∈ steps(T) for s a success state and a ≠ w, and (ii) adding a loop (s, a, s) for each success state s and a ∈ in(T). Since for state-based must testing it is irrelevant what happens after encountering a success state, one has

C must_F T iff C must_F T*    (1)

for each I/O automaton C. Moreover, I claim that for each C one has

C must_F T* iff C must_F,ab T*.    (2)

Here "if" is trivial. For "only if", let α be a fair execution of T*‖C, and suppose, towards a contradiction, that α contains a success state (s, r), with s a success state of T* and r a state of C, but does not contain the success action w. Let α' be the suffix of α starting with the first occurrence of (s, r). Then all states of α' have the form (s, r'), and the action w is enabled in each of these states. Let T ∈ part(T*‖C) be the task containing w. Since w is a locally controlled action of T*, by Def. 4 all members of T must be locally controlled actions of T*. No such action can occur in α'. This contradicts the assumption that α is fair (cf. Def. 2), and thereby concludes the proof of (2). From the assumption A must_F T one obtains A must_F,ab T* by (1) and (2), and B must_F,ab T* by the assumption that A ⊑^F_must,ab B. Hence B must_F T by (2) and (1).
Now suppose A ⊑^F''_must B. Then in(A) = in(B) ∧ out(A) = out(B). Let σ = a_1 a_2 . . . a_n ∈ fairtraces(B). Let T be the test automaton depicted in the original paper (diagram omitted here): a chain of states 1, 2, . . . , n, S, in which state i can perform a_i, moving on towards S, but can also escape through an internal action τ to a state in which w is enabled, with out(T) := in(A) ⊎ {w}, in(T) := out(A) and int(T) := {τ}. The dashed arrows are labelled with all input actions of T, except for a_i (if a_i ∈ in(T)) for the dashed arrow departing from state i. By construction, T is complementary to A and B. Now C must_F T (for any C) iff σ ∉ fairtraces(C). Hence ¬(B must_F T), and thus ¬(A must_F T), and therefore σ ∈ fairtraces(A).
The case that σ = a_1 a_2 . . . ∈ fairtraces(B) is infinite goes likewise, but without the state S in T. Hence A ⊑_F B.
Reward testing
The reward testing preorder taking the notion of fairness from Def. 2 as underlying completeness criterion can be defined on I/O automata by analogy with Definitions 9, 10 or 11. Here I take the one that follows Def. 9, as it is clearly the strongest, i.e., with its kernel making the most distinctions.
Definition 12 Write A ⊑^F_reward B if in(A) = in(B) ∧ out(A) = out(B) and for each reward test T that is strongly compatible with A and B and for each fair execution β of T‖B there is a fair execution α of T‖A with reward(α) ≤ reward(β).
When taking progress as underlying completeness criterion, reward testing is stronger than must testing; the opening page of [6] shows an example where reward testing makes useful distinctions that are missed by may as well as must testing. When moving to fairness as the underlying completeness criterion, must testing no longer misses that example, and in fact must testing becomes as strong as reward testing. In order to show this, I will use the following notation.
Definition 13 Let A_1 and A_2 be two strongly compatible I/O automata. A state s of A_1‖A_2 is a pair (s[1], s[2]) with s[k] ∈ states(A_k) for k = 1, 2. Let α = s_0, a_1, s_1, a_2, . . . be an execution of A_1‖A_2. The projection α[k] of α to the k-th component A_k, for k = 1, 2, is obtained from α by deleting ", a_i, s_i" whenever a_i ∉ acts(A_k), and replacing the remaining states s_i by s_i[k].
Moreover, if σ is a sequence of external actions of A_1‖A_2, then σ↾A_k is what is left of σ after removing all actions outside acts(A_k).
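A small executable rendering of Definition 13 (a sketch with 0-indexed components; names ours):

def project_execution(execution, acts_k, k):
    # alpha[k]: drop ', a_i, s_i' whenever a_i is not an action of A_k,
    # and replace the remaining composite states by their k-th component.
    # 'execution' alternates states and actions, [s0, a1, s1, ...], with
    # composite states given as pairs.
    out = [execution[0][k]]
    for i in range(1, len(execution), 2):
        a, s = execution[i], execution[i + 1]
        if a in acts_k:
            out += [a, s[k]]
    return out

def restrict_trace(sigma, acts_k):
    # sigma restricted to A_k: remove all actions outside acts(A_k).
    return [a for a in sigma if a in acts_k]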
Note that if σ = trace(α), for α an execution of A_1‖A_2, then σ↾A_k = trace(α[k]). Moreover, if α is an execution of T‖A, where T is a test and A a tested automaton, then all rewards of the actions in α are inherited from the ones in α[1], so that

reward(α) = reward(α[1]).    (3)

Theorem 4 A ⊑^F_reward B iff A ⊑^F_must B iff A ⊑_F B.

Proof. That A ⊑^F_reward B implies A ⊑^F_must B has been shown in [6, Thm. 7] and is also justified in Section 3.
That A ⊑^F_must B implies A ⊑_F B has been demonstrated by Thm. 3. Suppose A ⊑_F B, i.e., in(A) = in(B) ∧ out(A) = out(B) and fairtraces(B) ⊆ fairtraces(A), and let T be any reward test that is strongly compatible with A and B. Let β be a fair execution of T‖B. By [9, Prop. 4], β[1] is a fair execution of T, and β[2] is a fair execution of B. Since A ⊑_F B, automaton A has a fair execution γ with trace(γ) = trace(β[2]). Let σ := trace(β). Then σ is a sequence of external actions of T‖A such that σ↾T = trace(β[1]) and σ↾A = σ↾B = trace(β[2]) = trace(γ). By [9, Prop. 5], there exists a fair execution α of T‖A such that trace(α) = σ, α[1] = β[1] and α[2] = γ. By (3) one has reward(α) = reward(α[1]) = reward(β[1]) = reward(β). Thus A ⊑^F_reward B.
Conclusion
When adapting the concept of a complete execution, which plays a central rôle in the definition of must testing, to the weakly fair executions of I/O automata, must testing turns out to characterise exactly the fair preorder on I/O automata. Moreover, reward testing, which under the default notion of a complete execution is much more discriminating than must testing, in this setting has the same distinguishing power. Interesting avenues for future investigation include extending these connections to timed and probabilistic settings. | 2022-09-20T13:11:38.841Z | 2022-12-21T00:00:00.000 | {
"year": 2022,
"sha1": "94185f1269f3a249350d5c1254db8ad7a291fa96",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d0624ff346aa134e9713da26bb2281d44523e8ab",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
17275478 | pes2o/s2orc | v3-fos-license | Oxytocin-induced antinociception in the spinal cord is mediated by a subpopulation of glutamatergic neurons in lamina I-II which amplify GABAergic inhibition
Background Recent evidence suggests that oxytocin (OT), secreted in the superficial spinal cord dorsal horn by descending axons of paraventricular hypothalamic nucleus (PVN) neurons, produces antinociception and analgesia. The spinal mechanism of OT is, however, still unclear and requires further investigation. We have used patch clamp recording of lamina II neurons in spinal cord slices and immunocytochemistry in order to identify PVN-activated neurons in the superficial layers of the spinal cord and attempted to determine how this neuronal population may lead to OT-mediated antinociception. Results We show that OT released during PVN stimulation specifically activates a subpopulation of lamina II glutamatergic interneurons which are localized in the most superficial layers of the dorsal horn of the spinal cord (lamina I-II). This OT-specific stimulation of glutamatergic neurons allows the recruitment of all GABAergic interneurons in lamina II which produces a generalized elevation of local inhibition, a phenomenon which might explain the reduction of incoming Aδ and C primary afferent-mediated sensory messages. Conclusion Our results obtained in lamina II of the spinal cord provide the first clear evidence of a specific local neuronal network that is activated by OT release to induce antinociception. This OT-specific pathway might represent a novel and interesting therapeutic target for the management of neuropathic and inflammatory pain.
Background
Oxytocin (OT) is a nonapeptide synthesized in magnocellular neurons of the paraventricular and supraoptic nuclei of the hypothalamus and acts as a neurohormone during parturition and milk ejection reflex [1]. It is also synthesized in parvocellular neurons of the paraventricular nucleus (PVN) which project to various areas of the central nervous system including the spinal cord. In the spinal cord, oxytocinergic projection sites [2][3][4][5] match well OT binding sites [6,7] in the superficial layers of the dorsal horn and in the autonomic regions (intermediolateral columns, intermediomedial grey matter, lamina X and sacral parasympathetic nucleus). Using electron microscopy, OT-positive synaptic terminals were found to form mainly axodendritic synapses on lamina I-II neurons [8].
Although some conflicting results have been reported in the literature [9,10], analgesic effects of OT have been reported in most studies after systemic and/or central administration of OT in naive rodents [11] or during the development of inflammatory [12][13][14] or neuropathic pain [15]. An OT-sensitive antinociception can also be induced in rats by massage-like stimulation [16,17], swim stress [18] and electrical stimulation of the PVN in naïve [19,20] and neuropathic rats [21,22], suggesting the involvement of an endogenous OT receptor-dependent analgesic system. Little is known about the possible mechanisms and neuronal networks mediating these antinociceptive effects of OT in the spinal cord. PVN stimulation or OT infusion reduced the incoming peripheral Aδ- and C-fiber activation in dorsal horn spinal neurons [21,23]. It should be noted here that most of these in vivo results were obtained from neurons receiving convergent sensory information (from Aβ, Aδ and C fibers) and recorded in deep layers (laminae III-V) of the dorsal horn. In some rare cases, neurons were also shown to be excited after application of OT [23] or after PVN stimulation [24]. However, in spinal cord slices of rats and mice, OT reduced electrically-evoked glutamatergic synaptic currents between primary afferents and lamina II neurons [18], whereas in primary cultures of neonatal laminae I-III dorsal horn neurons, OT produced a facilitation of glutamatergic synaptic transmission via a presynaptic mechanism of action [25]. Finally, OT was shown to inhibit glutamate-induced excitation of spinal neurons in vivo [23], and, in a recent study, the antinociceptive action of spinal OT is proposed to be mediated by a GABAergic mechanism [24].
In order to clarify the effects of OT in spinal antinociception, we determined its effects on excitatory and inhibitory synaptic transmission in lamina II, a layer containing mainly local interneurons which are thought to play an important role in spinal pain processing by controlling the local balance between excitation and inhibition. Using electrophysiological and morphological tools, we characterized the effects of OT on lamina II neurons, identified a subpopulation of OT-sensitive postsynaptic target neurons, and specified their neurochemical identity and their role in recruiting a local interneuronal network leading to antinociception.
Characterization of fast spontaneous synaptic transmission in lamina II interneurons
In all lamina II neurons recorded in the whole-cell configuration of the patch-clamp technique (n = 39), spontaneous inhibitory and excitatory postsynaptic currents (sIPSCs and sEPSCs, respectively) were observed (Figure 1). Under our experimental conditions, the chloride equilibrium potential was set at -60 mV, and IPSCs were recorded as transient outward currents when lamina II neurons were held at 0 mV (Figure 1A). Both glycine- and GABA_A-receptor-mediated sIPSCs (GABAA-R sIPSCs) were present and were blocked by 1 μM strychnine (not shown) and 10 μM bicuculline, respectively (Figure 1A). We found that pharmacologically-isolated GABAA-R sIPSCs occurred at a mean frequency of 0.12 ± 0.02 Hz and exhibited a mean amplitude of 11.1 ± 1.5 pA (n = 6). For comparison, fast glutamatergic sEPSCs were detected as transient inward currents at a holding potential of -60 mV (Figure 1B). They occurred at a mean frequency of 1.26 ± 0.47 Hz and had a mean amplitude of -16.3 ± 0.8 pA (n = 16). These currents were blocked by 6-cyano-7-nitroquinoxaline-2,3-dione (CNQX, 10 μM) (Figure 1B), indicating that they were mediated by the AMPA subtype of glutamate receptors. The detailed properties of the kinetics of each type of current are given in Table 1 for spontaneous and miniature (i.e. recorded in the presence of 0.5 μM TTX) synaptic events.
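The rise and decay time constants reported here and in Table 1 come from monoexponential fits to averaged currents; a minimal sketch of such a fit, with synthetic data standing in for recordings (the amplitude and τ_D values are taken from the sEPSC numbers above), could look as follows:

import numpy as np
from scipy.optimize import curve_fit

def mono_decay(t, amp, tau):
    # Monoexponential decay, I(t) = amp * exp(-t / tau).
    return amp * np.exp(-t / tau)

t = np.linspace(0.0, 30e-3, 300)                     # 30 ms analysis window (s)
current = mono_decay(t, -16.3e-12, 4.1e-3)           # synthetic averaged sEPSC decay
current += np.random.normal(0.0, 0.5e-12, t.size)    # recording noise

popt, _ = curve_fit(mono_decay, t, current, p0=(-10e-12, 5e-3))
print(f"fitted amplitude = {popt[0] * 1e12:.1f} pA, tau_D = {popt[1] * 1e3:.2f} ms")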
Consequences of oxytocin agonist application (TGOT) on spontaneous synaptic transmission recorded in lamina II neurons
Spontaneous AMPA-receptor-mediated EPSCs (AMPA-R sEPSCs) were recorded in the presence of 1 μM strychnine and 10 μM bicuculline (Figure 2A). Bath application of the selective oxytocin receptor agonist TGOT (1 μM) increased the frequency of occurrence (but not the amplitude) of spontaneous AMPA-R sEPSCs (Figure 2A, histogram) in 5 out of 14 lamina II neurons (35.7%). This frequency of sEPSCs was 1.04 ± 0.38 Hz under control conditions and increased to 1.79 ± 0.55 Hz in the presence of TGOT (n = 5). This change was reversible, was found to be statistically significant (p < 0.05) and represented an average increase of 103 ± 27%. By contrast to its relatively modest effect on sEPSCs, TGOT (1 μM) induced a significant increase in the amplitude (Control: 14.4 ± 1.8 pA; TGOT: 25.5 ± 2.6 pA, n = 6; p < 0.05) and frequency of GABAA-R sIPSCs (Control: 0.12 ± 0.02 Hz; TGOT: 1.69 ± 0.61 Hz, n = 6; p < 0.05) in all lamina II neurons from which we recorded (n = 6 out of 6). The effect of TGOT was fully reversible after 10-15 minutes of washout and was never observed in the presence of OTA.

Figure 1. Characterization of fast spontaneous inhibitory and excitatory synaptic currents in lamina II. Glycine-receptor-mediated sIPSCs were blocked in these experiments by adding 1 μM strychnine to the general bath perfusion. A: GABA_A-receptor-mediated sIPSCs (GABAA-R sIPSCs) were identified as outward currents (left panel) at a holding potential of 0 mV. These sIPSCs were reversibly blocked in the presence of 10 μM bicuculline, a selective antagonist of GABAA-Rs. The panel on the right illustrates an averaged current obtained from 46 individual traces. GABAA-R sIPSCs were best described by a monoexponential time constant in both the activation phase (τ_R = 1.3 ms) and deactivation phase (τ_D = 45.2 ms). B: At a holding potential of -60 mV (i.e. at the equilibrium potential for Cl⁻ ions), only fast inward currents were detected (left panel). These events were AMPA-type glutamate-receptor-mediated sEPSCs (AMPA-R sEPSCs), since they were reversibly inhibited in the presence of CNQX (10 μM), a specific blocker of the AMPA subtype of glutamate receptor. The trace on the right was obtained by averaging 168 individual AMPA-R sEPSCs. These sEPSCs were characterized by a τ_R of 0.9 ms and a τ_D of 4.1 ms.
Presynaptic OT receptors on glutamatergic synaptic terminals
In the presence of TTX (0.5 μM), TGOT also stimulated the frequency of AMPA-R mEPSCs (Figure 2A, left panel and histogram) in a subpopulation (50%, 6 out of 12) of lamina II neurons. The mean frequency of mEPSCs increased from 1.27 ± 0.30 Hz under control conditions to 2.31 ± 0.65 Hz in the presence of TGOT (1 μM). This change in frequency was statistically significant (p < 0.05) and represented an average increase of +96.8 ± 31.0% (n = 6; Figure 2A, right histogram), which was similar to that observed for sEPSCs. In sharp contrast with this result, TGOT failed to change the frequency of occurrence of GABAA-R mIPSCs (Figure 2B, right histogram) since, in all lamina II neurons recorded, the mean frequency remained stable (Control: 0.13 ± 0.03 Hz; TGOT: 0.12 ± 0.04 Hz, n = 6; p > 0.05; average change: -1.2 ± 9.1%, n = 6). TGOT had no apparent effect on the frequency of glycine receptor-mediated spontaneous and miniature synaptic transmission (not shown).
TGOT reveals a novel population of AMPA-R mEPSCs having slow rise and decay kinetics
Detailed analysis of mEPSC kinetics revealed that in the presence of TGOT a subpopulation of mEPSCs remained unchanged whereas an additional population of mEPSCs seemed to appear. As illustrated in Figure 3B-C, this "novel" population of miniature AMPA-R EPSCs had larger time-to-peak values (τ_R: +20.9 ± 4.6%, n = 6, p < 0.01), slower deactivation time constants (τ_D: +52.7 ± 7.9%; n = 6, p < 0.001) and smaller mean amplitudes (Amp.: -20.9 ± 2.0%; n = 6, p < 0.001). The effect of TGOT became particularly clear when the distribution of the monoexponential decay time constants of AMPA-R mEPSCs was represented in the form of a binned histogram (Figure 3A). Under control conditions (Figure 3A, top histogram), the distribution of the decay time constants could be adjusted by a Gaussian distribution centered at 3.21 ms, while in the presence of TGOT this distribution was strongly asymmetric because of the presence of the "newly" detected mEPSCs with slow decay time constants (Figure 3A, bottom histogram). These TGOT-sensitive mEPSCs were fully blocked by CNQX (10 μM) and disappeared upon washout of TGOT.
These results suggest that presynaptic OT receptors facilitate glutamatergic synaptic transmission by revealing a novel or previously silent or undetected population of EPSCs.
Immunohistochemical identification of PVN-activated (OT-sensitive) dorsal horn neurons
In order to localize and to tentatively identify the phenotype of the lamina II neurons activated upon OT release, we used an immunohistochemical approach based on c-Fos nuclear expression following PVN electrical stimulation in anaesthetized rats. As shown in Figure 4A and 4C, the c-Fos positive nuclei of PVN-activated cells in the spinal cord were rare and systematically localized in the most superficial layers of the dorsal horn (lamina I and outer part of lamina II: I/IIo) as well as in the sympathetic intermedio-lateral column (not shown). Lamina I/IIo cells immunopositive for c-Fos were mainly found in the ipsilateral dorsal horn, with a slight preference for the medial half of the superficial laminae versus the lateral half. This labeling matched the distribution of OT-positive fibers and OT-binding sites (see additional file 1). In good agreement, OT-containing fibers (brown staining in Figure 4B, arrowheads) could be found close to the c-Fos positive nuclei (blue staining in Figure 4B, arrows). PVN-activated cells were identified as neurons because c-Fos-positive nuclei were always surrounded by a thin (perinuclear) cytoplasmic compartment which was immunopositive for the neuronal marker MAP2 and negative for the astroglial marker GFAP (not shown). In a double immunofluorescence protocol against c-Fos and glutamic acid decarboxylase (GAD), the enzyme synthesizing GABA, c-Fos positive nuclei of PVN-activated cells were found in laminae with poor GAD-immunoreactivity (I/IIo), while lamina IIi showed strong GAD-immunoreactivity (Figure 4C). Among 720 c-Fos positive nuclei observed in lumbar segments of two animals, 80% were found in lamina I and 20% in lamina IIo.

Table 1. Mean values and SEM are shown for the amplitude, the rise and decay monoexponential time constants (τ_R and τ_D, respectively) and the frequency of occurrence of spontaneous (sIPSCs and sEPSCs) and miniature (mIPSCs and mEPSCs) events. Note that the mean values were not statistically different when recorded in the absence (spontaneous) or in the presence of tetrodotoxin (miniatures).
[Figure legend] Differential effects of a selective oxytocin receptor agonist (TGOT) on glutamatergic and GABAergic synaptic currents. The TGOT-induced increase in frequency was always associated with the appearance of slowly rising and decaying currents of lower amplitude. All changes observed during TGOT application were significant: p < 0.01 (**: τR, n = 6) and p < 0.001 (***: Amp and τD, n = 6), Student's t-test.
This was in good agreement with the small number of c-Fos positive lamina II neurons observed following PVN stimulation in vivo. It should be noted that although TGOT was able to increase the frequency of occurrence of sEPSPs (spontaneous excitatory postsynaptic potentials) in some recorded neurons, these events did not reach an amplitude or frequency sufficient to trigger action potentials.
Discussion
In this study, we show that only a subpopulation of glutamatergic lamina II neurons is directly activated by OT receptor stimulation.
Presynaptic effect of OT on glutamatergic lamina II neurons
As a first approach, we compared the effect of OT on spontaneous and miniature EPSCs/IPSCs. Whereas miniature currents (i.e., recorded in the presence of TTX) reflect only the activity of individual release sites, spontaneous synaptic currents also include action potential- and network activity-driven neurotransmitter release. TGOT did not change the frequency of occurrence of GABAA-R-mediated miniature IPSCs (Figure 2B) but significantly increased AMPA-R-mediated synaptic transmission in 50% of the recorded lamina II neurons (Figure 2A). This suggested the presence of functional OT receptors on the presynaptic terminals of a subpopulation of glutamatergic neurons and the absence of such receptors on the synaptic endings of GABAergic dorsal horn interneurons. Although c-Fos expression was not observed in GAD-positive neurons following PVN stimulation (Figure 4), we cannot fully exclude that OT receptors are expressed by GABAergic interneurons. Should this be the case, we can however conclude that their activation is certainly not sufficient to directly increase the release probability of GABA (i.e., as seen in the miniature transmission experiments; Figure 2) or to induce significant action potential discharge in these neurons (see Figure 5 and related results in the text). These results are in agreement with those reported previously on cultured laminae I-III dorsal horn neurons [25]. A major difference observed in slices compared to dorsal horn primary cultures concerned the recruitment by OT of a population of AMPA-R mEPSCs with slow rise and decay kinetics (Figure 3). These mEPSCs were rare under control conditions (indicated by the histogram skewness in Figure 3A) but were clearly seen after OT receptor stimulation. Their slow kinetics might indicate that they originate at synapses distant from the neuronal cell body, possibly on distal dendrites. In culture, dorsal horn neurons possess simpler dendritic trees, and synapses normally impinging on distal dendrites might instead be established on the cell body and/or proximal dendrites. The existence of silent synapses due to the absence of functional postsynaptic AMPA-R has previously been shown in the spinal cord, in normal and pathological situations [26,27]. Our results indicate for the first time that a pool of "presynaptically silent" synapses, which can be recruited or functionally turned on by a neuromodulator such as OT (Figure 3), might exist in the dorsal horn of the spinal cord. Although we cannot completely exclude a postsynaptic locus for the expression of this phenomenon [28,29], the fast onset and reversibility of the TGOT effect on glutamatergic transmission argue in favor of the unmasking (i.e., by increasing the release probability) of presynaptically silent or whispering synapses at distal dendrites [29].
A subpopulation of OT-sensitive glutamatergic neurons recruits GABAergic interneurons in lamina II
In contrast with the lack of effect on miniature IPSCs, OT receptor activation induced a massive and generalized increase in spontaneous GABAergic transmission, observed in all lamina II neurons from which we recorded (Figure 2). This result is a key finding of our study because it allows us to postulate that OT-sensitive neurons, which are unlikely to be GABAergic (i.e., no effect of OT on the frequency of GABAA-R mIPSCs), are functionally connected (directly or indirectly) to all GABAergic interneurons in lamina II and are responsible for the facilitation of GABA release. Under the same conditions, however, the increase in spontaneous AMPA-R-mediated EPSCs concerned only a subpopulation of neurons (37%), comparable to that in which OT receptor activation increased the frequency of mEPSCs. Our double immunofluorescence analysis indicates that neurons expressing c-Fos after PVN stimulation did not contain GAD, suggesting that the c-Fos positive neurons were likely to be glutamatergic. An interesting finding of the recent literature is the existence of a class of interneurons displaying no action potential discharge under resting conditions and receiving no apparent input from peripheral sensory fibers, even when the latter were stimulated at C-fiber strength [24].

Figure 6. Simplified diagram summarizing the OT effects seen in laminae I-II in the light of the present study.
Our results indicate that OT, which is physiologically released by hypothalamo-spinal fibers terminating in laminae I-II, activates a subpopulation of glutamatergic neurons (a) and, in particular, stimulates receptors present on presynaptic glutamatergic terminals (b). We also show that this population of OT-sensitive glutamatergic neurons recruits the whole population of GABAergic neurons (c), resulting in an elevated GABAergic inhibitory tone in lamina II (represented as a green box). This dissection of the spinal action of OT can now unify previous observations that were in apparent contradiction. First, a network-driven increase in GABAergic inhibition is likely to explain the specific inhibition of action potentials triggered by the nociceptive activation of C and Aδ fibers [21,24]. In this case, a direct presynaptic inhibition of these fibers may be responsible for this effect (d); alternatively, it remains possible that a specific C/Aδ network is inhibited through presynaptic (fiber) or neuronal inhibition (d). Neurons receiving convergent sensory inputs are nevertheless not the direct target of the OT effect, since only C/Aδ fiber activity is reduced (e). Second, experiments based on the revelation of c-Fos positive nuclei following PVN stimulation (Figure 4) were in line with our electrophysiological results: only a small number of neurons was labeled after PVN stimulation, and these neurons were located in lamina I and in the outer part of lamina II. It is interesting to note that this area is the main target in the dorsal horn for OT-positive hypothalamo-spinal fibers [8]. Recent work on spinal cords from rats or OT-knockout mice has shown that OT reduced glutamatergic monosynaptic EPSCs evoked by electrical stimulation of primary afferents in the superficial layers of neonatal spinal cord slices [18]. These results are not contradictory to our data since, as discussed in the work of Robinson [18], the presynaptic effect on primary afferent transmission might have been due to the activation of GABAergic neurons exerting an inhibitory effect on synaptic glutamate release from primary afferents.
Taken together, our results are therefore consistent with the hypothesis that the spinal excitation resulting from peripheral C and Aδ sensory neurons is reduced by a local network in which a subpopulation of glutamatergic neurons activated by OT excites GABAergic interneurons in lamina II (Figure 6). This synaptic organization might also explain earlier observations made in vivo showing that application of OT induced either excitation or inhibition of dorsal horn neurons [23].
Conclusion
In conclusion, we show that a descending hypothalamo-spinal control using OT as a neurotransmitter exerts its effects via a specific local neuronal network in lamina II. This network involves a subpopulation of OT receptor-expressing glutamatergic neurons that distribute their excitation to all GABAergic neurons in lamina II in order to limit or block nociceptive afferent messages originating from C and Aδ primary afferents [21]. These PVN- and OT-activated lamina II interneurons certainly represent an interesting target for the development of specific therapeutic strategies to reduce nociceptive sensory messages at the spinal level.
Patch clamp recordings, data acquisition, and analysis
All electrophysiological experiments were performed at room temperature. Lamina II neurons were recorded blindly using the whole-cell configuration of the patch-clamp technique and systematically dialyzed with biocytin in order to localize and determine the morphological characteristics of the recorded interneurons. Synaptic currents were recorded under voltage clamp using an Axopatch 200B amplifier. The patch pipettes were filled with a solution containing (in mM): 80 Cs2SO4, 8 CsCl, 2 MgCl2, 10 HEPES, 10 biocytin, adjusted to pH 7.3 with CsOH. In voltage-clamp mode, lamina II neurons were held at −60 mV and 0 mV to record glutamatergic and GABAergic synaptic currents, respectively. In a subset of experiments, we used the current-clamp mode to measure changes in membrane potential (Em). Em was initially adjusted to −60 mV (i.e., with a small current injection if necessary), a value often corresponding to the resting membrane potential under our conditions (no current injection: i = 0). For these experiments, cesium was replaced by potassium in the pipette solution described above. Synaptic currents were detected and analyzed individually using the WinEDR and WinWCP software packages (courtesy of Dr J. Dempster, University of Strathclyde, Glasgow, UK) to extract their frequency of occurrence, amplitude, monoexponential rise time (τR), and decay time constant (τD). Results are expressed as mean ± S.E.M. Statistical analysis was performed using parametric one-way ANOVA and Student's t-tests to compare means; data sets were considered different when p < 0.05.
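As a concrete illustration of the per-event kinetics analysis described above (not the actual WinEDR/WinWCP routines), the following Python sketch fits a monoexponential decay to the falling phase of an inward synaptic current. The function names, sampling rate, and initial guesses are assumptions introduced for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t_ms, amp_pA, tau_d_ms, baseline_pA):
    # Monoexponential relaxation from the event peak back to baseline
    return amp_pA * np.exp(-t_ms / tau_d_ms) + baseline_pA

def decay_time_constant(trace_pA, sample_rate_hz=10_000.0):
    """Fit the decay phase of an inward (negative-going) EPSC.

    Returns tau_D in ms, analogous to the per-event decay constants
    extracted for the histograms in Figure 3A.
    """
    peak_idx = int(np.argmin(trace_pA))  # inward current: most negative point
    decay = trace_pA[peak_idx:]
    t_ms = np.arange(decay.size) / sample_rate_hz * 1e3
    p0 = [decay[0], 3.0, 0.0]            # start near the observed ~3 ms mode
    (amp, tau_d, base), _ = curve_fit(mono_exp, t_ms, decay, p0=p0)
    return tau_d

# Synthetic test event: -30 pA peak decaying with tau_D = 3.2 ms plus noise
rng = np.random.default_rng(1)
t = np.arange(0, 200) / 10_000.0 * 1e3
trace = -30.0 * np.exp(-t / 3.2) + rng.normal(0.0, 0.5, t.size)
print(f"fitted tau_D = {decay_time_constant(trace):.2f} ms")
```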
Immunocytochemistry following in vivo stimulation of PVN
In anesthetized rats (alpha-chloralose 2% and urethane 20%), a bipolar electrode was implanted in the left paraventricular nucleus to deliver a 90-minute stimulation (1 ms, 100 nA, 20 Hz). Immediately after the stimulation, tissues were fixed by intracardiac perfusion of 4% paraformaldehyde. After a few hours of post-fixation in the same fixative, spinal cords were removed, embed- | 2018-04-03T02:20:47.420Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "039b52e2fa849dd0f0b46cd45022f8f1bd5f332a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/1744-8069-4-19",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "846113d64e7d8d581e233f835f75668b19d6fa42",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265471010 | pes2o/s2orc | v3-fos-license | The Agropyron mongolicum bHLH Gene AmbHLH148 Positively Involved in Transgenic Nicotiana benthamiana Adaptive Response to Drought Stress
While bHLH transcription factors have been linked to the regulation of various abiotic stressors, research on drought-related bHLH proteins and their molecular processes in Agropyron mongolicum has remained limited. In this study, a bHLH gene from A. mongolicum, designated AmbHLH148, was successfully cloned and isolated. AmbHLH148 was exclusively localized within the nucleus. Additionally, qRT-PCR analysis demonstrated a significant upregulation of AmbHLH148 in response to drought stress. When transferred into tobacco (Nicotiana benthamiana), the heterologous expression of AmbHLH148 led to enhanced drought tolerance. Under drought stress conditions, AmbHLH148-OE transgenic tobacco plants exhibited increased activities of antioxidant defense enzymes, such as SOD (superoxide dismutase), POD (peroxidase), and CAT (catalase). These enzymes efficiently mitigated the accumulation of reactive oxygen species (ROS) compared to wild-type plants. Furthermore, AmbHLH148-OE transgenic tobacco showed elevated levels of PRO (proline) and reduced MDA (malondialdehyde) content, contributing to enhanced stability of the plant's cell membrane system during drought stress. In summary, this study underscores that the overexpression of AmbHLH148 in transgenic tobacco acts as a positive regulator under drought stress by enhancing the plant's antioxidant capacity. These findings shed light on the molecular mechanisms involved in bHLH transcription factors' role in drought resistance, contributing to the discovery and utilization of drought-resistant genes in A. mongolicum for enhancing crop drought resistance.
As the discovery of bHLH transcription factors continues to expand, mounting evidence supports their significant role in plant responses to abiotic stresses. For example, the overexpression of EcbHLH57 from millet (Eleusine coracana) in tobacco significantly improved tolerance to both salinity and drought stress [38]. PebHLH35 from Populus euphratica, when expressed in transgenic Arabidopsis thaliana, bolstered drought resistance by regulating stomatal development and photosynthesis [39]. In Arabidopsis thaliana, the overexpression of bHLH122 in transgenic plants induced heightened resistance to drought, NaCl, and osmotic stress [40]. In wheat, the TabHLH1 gene increased drought tolerance by regulating the ABA pathway [41]. Similarly, the overexpression of OsbHLH148 in rice led to increased drought tolerance through the regulation of the JA pathway [42]. Additionally, the transgenic expression of VvbHLH1 from grape not only significantly increased flavonoid accumulation but also enhanced drought tolerance in Arabidopsis [43]. Collectively, these studies underscore the pivotal role of bHLH transcription factors in enhancing drought tolerance across diverse plant species, offering valuable insights into potential strategies for enhancing drought resistance in crops.
A. mongolicum Keng, a diploid (2n = 14) perennial grass, is a common forage crop in the deserts and grasslands of China [44]. A. mongolicum is a valuable forage grass resource that holds significant ecological value for its drought resistance, cold resistance, salt tolerance, and disease and insect resistance. It is often cultivated to improve grassland, support soil and water conservation, and establish windbreak forests [45,46]. Some of its exceptional genetic resources for resistance have been harnessed for the genetic improvement of grain crops such as wheat and barley [47]. Recent research on A. mongolicum has concentrated on genetic diversity, distant hybridization, and breeding. For instance, the MwLEA1 gene, crucial for drought tolerance, was cloned from A. mongolicum using the homologous sequence of wheat [48]. Other genes associated with drought tolerance have also been discovered: Ao et al. employed RT-PCR and RACE techniques to clone the MwAP2/EREBP genes from A. mongolicum, suggesting their involvement in the physiological processes linked to drought tolerance [49]. However, until now, there have been no studies exploring drought-responsive bHLH family genes in A. mongolicum. In a previous investigation that leveraged transcriptome sequencing data obtained from A. mongolicum during different drought treatment periods (NCBI, PRJNA742257 [50]), 23 bHLH genes were identified through bioinformatics screening. Among them, AmbHLH148 exhibited significant upregulation in response to drought stress [51]. To understand its potential role in drought tolerance, AmbHLH148 was heterologously expressed in tobacco plants. This research offers valuable insights into the function of AmbHLH148 in drought tolerance and provides genetic resources that could enhance drought resistance in other crops.
Experimental Materials and Growth Condition
Plump A. mongolicum seeds were carefully selected, their seed coats were peeled off, and the seeds were thoroughly sterilized. In detail, the seeds were placed into a 2 mL centrifuge tube and washed with 1.5 mL of sterilized ddH2O for 30 s. This washing procedure was repeated 5 times, and the liquid was then discarded. Subsequently, the seeds underwent a 30 s wash with 1.5 mL of 75% ethanol, and the ethanol was then discarded. The seeds were washed again with 1.5 mL of sterilized ddH2O for 30 s, and this process was repeated 3 times before discarding the liquid. A solution comprising sterilized ddH2O and sodium hypochlorite at a 1:1.3 ratio was used for a 10 min wash, followed by rinsing with sterilized ddH2O for 30 s. This rinsing procedure was repeated 5 times, and the liquid was discarded. After these washing steps, the seeds were transferred to an ultra-clean bench, where they were left to dry on sterilized filter paper. The sterilized seeds were then sown in a germination box containing sterilized filter paper, which was moistened with water twice daily at 8:00 a.m. and 8:00 p.m. The germination box was placed inside an artificial climatic chamber (RLD-1000D-4, Ningbo Ledian Instrument Manufacturing Co., Ltd., Ningbo, China), providing a controlled environment with 14 h of light and 10 h of darkness. The temperature was maintained at 25 ± 1 °C, with a relative humidity of 60% and a constant light intensity of 20,000 lx, until germination. Upon successful germination, the seedlings were transplanted into pots measuring 35 cm in length, 27 cm in width, and 10 cm in depth. These pots were filled with 3 L of 1/5 Hoagland's nutrient solution for hydroponic growth.
N. benthamiana L. seeds were evenly sown in pots containing a substrate mixture (nutrient soil, vermiculite, and perlite at a 3:1:1 ratio); the pots measured 10 cm in diameter and 10 cm in depth. These pots were initially sealed with cling film to create a controlled environment. Following seedling emergence at 4-5 days, small air vents were introduced in the cling film. After approximately 10 days of seedling growth, individual transplanting was conducted. The transplanted seedlings were placed in an artificial climatic chamber (RLD-1000D-4, Ningbo Ledian Instrument Manufacturing Co., Ltd., Ningbo, China) with 14 h of light and 10 h of darkness. The chamber maintained a consistent temperature of 25 ± 1 °C, with a relative humidity of 60% and a light intensity of 20,000 lx. The cling film was removed when the seedlings reached a height of 3-5 cm. For subcellular localization experiments, N. benthamiana seedlings were grown for approximately 20 days, a stage deemed suitable for use as test material.
T1-generation transgenic tobacco seeds were sterilized before being placed onto 1/2 MS solid medium containing 50 mg/L kanamycin for positive screening. The transgenic tobacco plants that passed the screening were subsequently transplanted into pots, each measuring 10 cm in diameter and 10 cm in depth. In each pot, four transgenic plants were placed, and this arrangement was replicated across four pots for each transgenic line. To serve as a comparative control, wild-type (WT) tobacco seedlings closely matching the growth height of the transgenic plants were selected and placed in an artificial climatic chamber under conditions identical to those described previously. Throughout the growth phase, the plants were watered every two days with 200 mL of water. Additionally, a water-soluble fertilizer, at a concentration of 1 g/L, was supplied to the plants on a weekly basis.
Plant Drought Treatment
When the A. mongolicum seedlings reached the three-leaf, one-heart stage of growth, a 25% PEG-6000 solution was added to the 1/5 Hoagland nutrient solution to induce PEG stress. Subsequent sampling involved the collection of 0.5 g of plant material from each individual. Leaves were collected at specific time intervals: 0 day (CK), 1 day, 3 days, 5 days, 7 days, and fs24 h (fs24 h indicates the sample collected after 24 h of restoration in 1/5 Hoagland's nutrient solution). The collected leaves were promptly placed in enzyme-free freezing tubes, flash-frozen in liquid nitrogen, and stored at −80 °C until further analysis.
In the drought treatment experiment, both WT and AmbHLH148-OE tobacco seedlings, cultured to approximately 5 weeks of age, were subjected to drought treatment by withholding water for 7 days. Upon completing the 7-day drought treatment, tobacco seedlings were harvested and washed thoroughly with distilled water in preparation for subsequent analysis. The experimental procedure encompassed the following steps: (1) selection of AmbHLH148-OE transgenic lines; (2) observation of the phenotypic characteristics of the tobacco plants; (3) measurement of the root length and root surface area of tobacco subjected to drought stress; (4) assessment of proline (PRO) and malondialdehyde (MDA) content, the activities of the antioxidant enzymes superoxide dismutase (SOD), peroxidase (POD), and catalase (CAT), as well as gene expression levels, using the 3rd leaf (counted from bottom to top).
Homology Analysis, Expression Analysis and Cloning of the AmbHLH148 Gene
The AmbHLH148 gene sequence was searched for homology using the blastn tool available on the NCBI website. Total RNA was extracted from A. mongolicum leaves using TRIzol reagent (Invitrogen, Carlsbad, CA, USA). Specific primers were designed using Premier 5.0 software, based on the full-length Open Reading Frame (ORF) sequence of AmbHLH148 obtained from A. mongolicum transcriptome data (GenBank accession: OR786931). For the reverse transcription reaction, 1 µg of total RNA was used, and cDNA was synthesized using the FastKing RT Kit (with gDNase) (TIANGEN KP116, Beijing, China).
A full-length sequence fragment encompassing the Open Reading Frame (ORF) of the A. mongolicum AmbHLH148 gene was successfully amplified from A. mongolicum. The PCR amplification system comprised the following components: 1 µL of cDNA template, 1 µL of AmbHLH148-F, 1 µL of AmbHLH148-R, 10 µL of 2× Taq PCR Master Mix (Tiangen Co., Beijing, China), and 7 µL of ddH2O. The PCR amplification program involved an initial denaturation step at 95 °C for 5 min, followed by 35 cycles, each consisting of denaturation at 95 °C for 30 s, annealing at 52 °C for 30 s, and extension at 72 °C for 90 s, with a final extension step at 72 °C for 10 min. All PCR products were stored at 4 °C for further analysis. The PCR products were detected by 1% agarose gel electrophoresis. Subsequently, they were purified and recovered following the operating instructions provided with the DNA Gel/PCR Purification Miniprep Kit (BIOMIGA, BIOMIGA Medical Technology, San Diego, CA, USA). The recovered products were ligated into a cloning vector (pEASY®-T1 Simple Cloning Vector, Beijing TransGen Biotech Co., Ltd., Beijing, China) and sequenced (BGI, Shenzhen, China).
The A. mongolicum seedlings were cultivated for 30 d, and leaves were collected from the untreated control (CK) and from seedlings subjected to simulated drought treatment for 1 d, 3 d, 5 d, and 7 d, as well as after 24 h of rehydration. The expression of the AmbHLH148 gene under drought stress conditions was detected using qRT-PCR. The qRT-PCR was performed on a real-time quantitative thermal cycler (FTC-3000P, Funglyn Biotech Inc., Toronto, ON, Canada) using the MonAmp™ SYBR® Green qPCR Mix kit (Mona Biotechnology Co., Ltd., Suzhou, China). The U6 gene was used as the internal reference gene. The qRT-PCR amplification system consisted of 10 µL of MonAmp™ SYBR® Green qPCR Mix, 0.4 µL of AmbHLH148-q-F, 0.4 µL of AmbHLH148-q-R, 1.2 µL of cDNA, and 8 µL of nuclease-free water. The reaction program consisted of an initial step at 95 °C for 30 s, followed by 40 cycles at 95 °C for 10 s, 57 °C for 10 s, and 72 °C for 30 s. The relative expression levels of the genes were quantified using the 2^−ΔΔCT method [52].
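For reference, the 2^−ΔΔCT calculation reduces to a few lines of arithmetic. The sketch below is a minimal Python illustration, not code from the study; the Ct values are hypothetical and chosen only to produce a fold change of the same order as the reported ~2.9-fold peak.

```python
def relative_expression_ddct(ct_target, ct_ref, ct_target_ck, ct_ref_ck):
    """Livak 2^-ddCT method: target gene normalized to the reference
    gene (here U6) and expressed relative to the calibrator (CK)."""
    d_ct_sample = ct_target - ct_ref
    d_ct_calibrator = ct_target_ck - ct_ref_ck
    return 2.0 ** (-(d_ct_sample - d_ct_calibrator))

# Hypothetical mean Ct values for a drought-treated sample vs. CK
fold_change = relative_expression_ddct(24.5, 18.0, 26.0, 18.0)
print(f"fold change vs. CK: {fold_change:.2f}")  # 2^1.5 ~ 2.83
```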
Subcellular Localization of AmbHLH148
For subcellular localization studies, the pBI121 vector plasmid (Miao Ling Biotechnology Co., Wuhan, China) was digested using XhoI and SalI restriction endonucleases (Takara, Beijing, China). The pBI121-AmbHLH148-EGFP recombinant vector was constructed by ligating the AmbHLH148 product, without its termination codon, into the pBI121-EGFP vector fragment. The 25 µL digestion system included 15 µL of pBI121, 1 µL each of XhoI and SalI, 1.3 µL of 10× K buffer, and 6.7 µL of ddH2O. The digestion was carried out overnight at 37 °C. The 10 µL ligation system included 1 µL of linearized pBI121 vector, 1 µL of AmbHLH148, 4 µL of Quick-clone Mix (Takara, China), and 4 µL of ddH2O. The ligation reaction took place in a water bath at 55 °C for 20 min and then at 30 °C for an additional 20 min, after which the ligation products were transformed into E. coli DH5α competent cells. Positive transformants were selected on LB agar plates containing 50 µg/mL kanamycin for PCR analysis. The PCR amplification system and procedure were consistent with those described above, and the products were sent for sequencing (BGI, China). The correctly sequenced recombinant vector pBI121-AmbHLH148-EGFP and the empty vector pBI121-EGFP were transformed into Agrobacterium tumefaciens (GV3101, Shanghai Weidi, Shanghai, China). The positive bacterial culture was added to 20 mL of LB liquid medium containing 100 µg/mL kanamycin, 100 µg/mL rifampicin (Rif), and 15 µmol/L acetosyringone (As). The culture was incubated overnight at 28 °C with agitation at 200 rpm in a thermostatic oscillator (to OD600 = 0.6). The culture was then centrifuged at 5000 rpm for 5 min to remove the supernatant. The bacterial pellet was resuspended in an equal volume of tobacco-specific infiltration solution (Beijing Coolaber Technology Co., Beijing, China; Product No. SL0911) containing 100 µmol/L As and appropriate amounts of MgCl2 and MES. The resuspension was adjusted to OD600 = 0.6 and left at room temperature for 2 h. The plasmid was introduced into tobacco leaves for transient expression by Agrobacterium tumefaciens infiltration. Tobacco infiltrated with pBI121-AmbHLH148-EGFP was incubated in the dark for 1 day and then under a 14 h light/10 h dark cycle for the next 2 days. In contrast, tobacco infiltrated with the pBI121-EGFP suspension was incubated in the dark for 1 day, followed by an additional day under a 14 h light/10 h dark cycle. Subsequently, the lower epidermis of the transfected tobacco was observed using a laser confocal microscope.
AmbHLH148 Vector Construction and Genetic Transformation
The pBI121 vector plasmid was digested with XhoI and SalI restriction endonucleases, and the AmbHLH148 product was subsequently ligated into the pBI121 vector fragment. The digestion system was prepared in a 25 µL volume and consisted of the following components: 15 µL of pBI121, 1 µL each of XhoI and SalI, 1.3 µL of 10× K buffer, and 6.7 µL of ddH2O. The digestion was carried out at 37 °C overnight. For the ligation, a 10 µL system was prepared, comprising 1 µL of linearized pBI121 vector, 1 µL of AmbHLH148, 4 µL of Quick-clone Mix, and 4 µL of ddH2O. The ligation reaction was performed in a water bath at 55 °C for 20 min, followed by an additional incubation at 30 °C for another 20 min. The ligated products were subsequently transformed into E. coli DH5α competent cells. Positive transformants were selected by plating on LB agar containing 50 µg/mL kanamycin, and PCR assays were conducted on these colonies. The PCR amplification system and procedure were consistent with the methods described earlier. The sequences of the correctly assembled recombinant vector were verified through sequencing services provided by BGI, Shenzhen, China. The correctly sequenced recombinant vector was then transferred into GV3101 competent cells (AC1001, Shanghai Weidi, Shanghai, China). After the identification of positively transformed Agrobacterium, tobacco plants were transformed using the leaf disk method. Leaves from positive transgenic tobacco seedlings were collected for DNA extraction using the Plant Genomic DNA Kit (DP305, Tiangen, Beijing, China). PCR assays were performed using the same PCR amplification system and procedure as described previously. The specific experimental steps for the genetic transformation of tobacco followed Japelaghi et al. [53]. The primer sequences used in this experiment are shown in Table S1.
qRT-PCR Analysis
Total RNA was extracted from both drought-stressed and normally watered young tobacco leaves (WT and transgenic lines), and cDNA was synthesized following the previously outlined procedure. qRT-PCR was performed using the MonAmp™ SYBR® Green qPCR Mix kit (Mona Biotechnology Co., Ltd., Suzhou, China) on a real-time quantitative thermal cycler (FTC-3000P, Funglyn Biotech Inc., Toronto, ON, Canada). Each treatment and control group was subjected to three biological replicates, with each biological replicate including three technical replicates. The L25 gene was used as the internal reference gene. The qRT-PCR amplification system consisted of 10 µL of MonAmp™ SYBR® Green qPCR Mix, 0.4 µL of forward primer, 0.4 µL of reverse primer, 1.2 µL of cDNA, and 8 µL of nuclease-free water. The amplification program involved an initial denaturation at 95 °C for 30 s, followed by 40 cycles of denaturation at 95 °C for 10 s, annealing at 58 °C for 10 s, and extension at 72 °C for 30 s. The relative expression levels of genes were quantified using the 2^−ΔCT method [54].
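Unlike the 2^−ΔΔCT calculation sketched earlier, the 2^−ΔCT variant normalizes only to the internal reference gene (here L25), without a calibrator sample. A one-line illustrative sketch with hypothetical Ct values:

```python
def relative_expression_dct(ct_target, ct_ref):
    # 2^-dCT: target expression normalized to the reference gene only
    return 2.0 ** (-(ct_target - ct_ref))

print(relative_expression_dct(26.0, 21.0))  # 2^-5 = 0.03125
```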
Determination of Physiological Indexes
Leaves weighing 0.2 g were collected from the corresponding parts of both WT and transgenic tobacco plants subjected to regular watering or to 7 days of drought stress. These samples were rapidly frozen in liquid nitrogen and then placed in a −80 °C freezer. The assay encompassed three biological replicates, with each biological replicate consisting of three technical replicates. The activities of SOD (A001-3), POD (A084-3-1), and CAT (A007-1-1) were evaluated using specific test kits (Nanjing Jiancheng Bioengineering Research Institute, Nanjing, China). Additionally, the contents of MDA (A003-1) and PRO (A107-1-1) were determined using test kits. To minimize errors introduced by variations in the external environment, the three biological replicates of each line were combined at the time of testing. Detailed experimental procedures for each kit assay were followed in accordance with the instructions provided inside the kit.
Statistical Analysis
Statistical analyses were performed using Microsoft Excel 2019 (Microsoft Corporation, Redmond, WA, USA) and SPSS 23 (SPSS Inc., Chicago, IL, USA). One-way analysis of variance (ANOVA) was used to test the significance of differences between samples, with a significance level of 0.05. To minimize errors and ensure the reliability of the results, the assay experiments were conducted with three biological replicates and three technical replicates. The standard deviation (SD) was reported to quantify the variability of the data sets.
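As an illustration of the statistical workflow described above, the sketch below runs a one-way ANOVA across three hypothetical groups at the 0.05 level using SciPy. The group labels and activity values are invented for demonstration and are not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical SOD activities, three biological replicates per line
wt   = np.array([210.0, 205.0, 215.0])
oe6  = np.array([228.0, 232.0, 225.0])
oe18 = np.array([222.0, 226.0, 219.0])

f_stat, p_value = stats.f_oneway(wt, oe6, oe18)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one line differs significantly at the 0.05 level")
```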
Analysis of AmbHLH148 Gene Expression in A. mongolicum under 25% PEG-6000 Stress
The seedlings of A. mongolicum were subjected to a simulated drought stress treatment with 25% PEG-6000, and the expression of the AmbHLH148 gene was assessed at different time points over the course of the stress treatment (Figure 1).
The expression of the AmbHLH148 gene exhibited significant variations between the A. mongolicum control (CK) and the various drought treatment periods. Importantly, it consistently surpassed the control levels during each treatment period. Notably, the gene expression displayed an overall increasing trend and reached its peak at the 5-day mark of drought stress, where it was approximately 2.9 times higher than in the control sample.
These results suggest that the AmbHLH148 gene in A. mongolicum is positively regulated in response to drought stress induced by the simulated 25% PEG-6000 treatment.
Homology Analysis of the AmbHLH148 Gene and Full-Length Amplification of the ORF
The homology of the A. mongolicum DN27532_c1_g3 gene was examined using the NCBI blastn tool to compare it with genes from other species. The results showed that the Aegilops tauschii AtbHLH148 gene (XP_020179773.1) exhibited the highest similarity to the A. mongolicum sequence, with a similarity of 90.02%. As a consequence of this high similarity, the gene from A. mongolicum was tentatively named AmbHLH148 (Figure S1).
The full-length ORF sequence of the AmbHLH148 gene was successfully amplified using the specific primers AmbHLH148-F/R, and the resulting bands were detected by 1% agarose gel electrophoresis (Figure S2). The electrophoresis results revealed that the target band fell within the expected size range of 500-700 bp, consistent with the projected size of 513 bp. The band was clear and well-defined, with no signs of smearing or trailing. This band, presumed to represent the AmbHLH148 gene, was subsequently purified and recovered. The isolated product was then ligated into the cloning vector and sequenced, and the sequencing results verified the successful acquisition of the A. mongolicum AmbHLH148 gene sequence.
Subcellular Localization of AmbHLH148 Gene
To verify the subcellular localization of AmbHLH148, a recombinant vector named pBI121-AmbHLH148-EGFP, carrying the enhanced green fluorescent protein (EGFP), was constructed. This recombinant vector was then introduced into tobacco epidermal cells. Observations made by confocal microscopy revealed that the GFP signal from the control pBI121-EGFP empty vector was distributed throughout all cell compartments of the tobacco lower epidermal cells. In contrast, the pBI121-AmbHLH148-EGFP fusion protein was exclusively localized in the nucleus (Figure 2).
Production and Selection of AmbHLH148 Transgenic Plants
Agrobacterium tumefaciens (GV3101) containing the pBI121-AmbHLH148 recombinant plasmid was used to transform sterile young tobacco leaves via the leaf disc method. Transgenic tobacco lines were subsequently obtained through shoot-induced differentiation and rooting culture (Figure S3). Positive transgenic plants were identified via PCR amplification, resulting in the acquisition of 11 AmbHLH148-overexpressing tobacco transgenic lines, denoted as OE lines. To investigate the potential role of the AmbHLH148 gene in enhancing drought tolerance in tobacco, the expression of AmbHLH148 in all 11 overexpressing tobacco transgenic lines was analyzed. Among these lines, OE6 and OE18 (as shown in Figure S4) were selected for further investigation due to their highest transcript levels.
Observation of Tobacco Phenotypes under Drought Stress
The natural drought treatment was initiated by discontinuing watering for both AmbHLH148-OE and WT tobacco, and data were gathered at two time points, namely at the initiation (day 0) and after 7 days of the drought stress period. We closely examined morphological variations between WT and AmbHLH148-OE tobacco under normal and drought stress conditions (Figure 3). Under normal growth conditions, there were no significant aboveground phenotypic differences between AmbHLH148-OE and WT plants. However, notable disparities emerged below the surface, with AmbHLH148-OE plants displaying an increased abundance of lateral roots compared to WT plants. Under drought stress, WT plants exhibited a high incidence of wilting leaves, including yellowed, wilted leaves near the base of the plant. In contrast, transgenic tobacco plants showed a remarkable reduction in wilted leaves and exhibited longer root systems and greater root surface area compared to the WT plants. This phenomenon indicates that the overexpression of AmbHLH148 under drought stress conditions may positively regulate the drought tolerance of tobacco by increasing root growth, a speculation that requires further validation.
AmbHLH148 Affects the Expression of Stress Response Genes
Plants adapt to stress conditions by mobilizing the expression of a large number of stress-related genes. To investigate the expression of AmbHLH148 and stress-responsive genes under drought stress, a comprehensive analysis of their transcript accumulation levels was conducted. The transcript accumulation of AmbHLH148 and some classical stress-responsive genes, such as NtNAC2 [55] and NtHSP70-8 [56], was detected via qRT-PCR (Figure 4). Both AmbHLH148-OE6 and AmbHLH148-OE18 tobacco plants exhibited higher levels of transcript accumulation under drought conditions compared to well-watered conditions (Figure 4A). Moreover, we found that the expression levels of the NtNAC2 and NtHSP70-8 genes were significantly higher in the overexpression lines (OE6 and OE18) than in the WT under drought conditions (Figure 4B,C). These results collectively indicate that AmbHLH148 plays a pivotal role in regulating plant responses to drought stress, and that its overexpression enhanced the expression of stress-responsive genes (NtNAC2, NtHSP70-8). This coordinated upregulation contributes to the improved drought tolerance observed in tobacco plants.
Overexpression of AmbHLH148 Enhanced Antioxidant Capacity of Tobacco
In order to examine the involvement of the AmbHLH148 gene in the antioxidant defense mechanism of transgenic tobacco, we assessed the levels of MDA and PRO, along with the activities of the antioxidant enzymes SOD, POD, and CAT, in both AmbHLH148-OE and WT tobacco plants under drought stress (Figure 5). At the 7-day mark of drought stress, the SOD activities of AmbHLH148-OE tobacco (OE6 and OE18) were 1.09 and 1.06 times higher than those of WT tobacco, respectively. Similarly, the POD activities of the transgenic tobacco were 1.10 and 1.13 times those of WT, respectively. These results indicate that the membrane defense enzymes SOD and POD were more active in the transgenic plants, suggesting that AmbHLH148-OE tobacco mounts a more effective stress response under drought. The CAT activities of the transgenic tobacco were 1.18 and 1.17 times those of WT, respectively. Furthermore, the PRO content of the transgenic tobacco was 1.50 and 1.35 times higher than that of WT, respectively. Proline, an important osmoregulator, plays a crucial role in maintaining osmoregulation and scavenging ROS to improve the stability of the cell membrane system when plants are exposed to adverse stress conditions [57]. In addition, at 7 days of drought stress the MDA content of WT tobacco exceeded that of the transgenic lines, by 0.9 and 0.62 times, respectively. This suggests that the cell membranes of the transgenic plants were less damaged under drought stress, whereas the membrane system of WT plants suffered severe damage. In conclusion, these results strongly suggest that overexpression of AmbHLH148 enhances the antioxidant capacity of tobacco, resulting in greater drought tolerance.
Discussion
The bHLH protein family, the second largest transcription factor (TF) family, plays a crucial role in plant responses to drought stress. Cui et al. used Camellia sinensis transcriptome data and identified 39 differentially expressed CsbHLH genes showing various expression patterns under drought stress conditions [58]. Further validation through qRT-PCR of nine selected CsbHLH genes, which corroborated the transcriptome data, suggested their responsiveness to drought stress. Overexpression of MfbHLH38 from Myrothamnus flabellifolia, on the other hand, was shown to increase drought tolerance in Arabidopsis thaliana; additionally, it increased the sensitivity of Arabidopsis thaliana to mannitol and abscisic acid and raised ABA levels during drought stress [59]. Similarly, in Malus domestica, MdbHLH130 acts as a positive regulator of the water stress response by regulating tobacco stomatal closure and the scavenging of ROS [60]. Moreover, AtbHLH68 is involved in the regulation of lateral root extension and the response to drought stress by modulating ABA signaling and/or metabolism in a direct or indirect manner [61]. However, there is a dearth of information on the role of bHLH genes in abiotic stress responses in A. mongolicum. This study bridges this knowledge gap by reporting the isolation of the AmbHLH148 gene from A. mongolicum. Sequence analysis using the NCBI blastn tool revealed that AmbHLH148 exhibits a high degree of similarity to AtbHLH148 (Aegilops tauschii), TabHLH148-like (Triticum aestivum), and TdbHLH148-like (Triticum dicoccoides). Nevertheless, the existing literature provides no insight into the functional characterization of these genes. Building upon previous research in which we examined 23 A. mongolicum bHLH genes and 9 other bHLH genes known for their drought-resistance capabilities, we constructed an evolutionary tree and conducted sequence comparisons. These analyses showed that AmbHLH148 from A. mongolicum forms a distinct branch with OsbHLH148 (O. sativa), with a similarity of 41%. Intriguingly, gene expression analysis revealed an upregulation trend of A. mongolicum bHLH genes with the duration of drought treatment [51]. In light of these findings, we hypothesized that AmbHLH148 might be related to drought stress. We therefore heterologously transformed this gene into tobacco and thoroughly examined its effects on overexpressing tobacco plants under drought stress conditions.
Numerous studies have consistently demonstrated that plant bHLH family proteins are predominantly localized within the nucleus. These nuclear regulatory proteins typically exert their influence by either activating or repressing the transcription of target genes, thereby regulating gene expression. Subcellular localization analyses of several bHLH proteins associated with drought resistance have consistently revealed their concentration within the nucleus. For instance, AbbHLH122 [62] and TabHLH49 [63] are noteworthy instances of such nuclear localization. In line with these findings, our study also identified the nuclear localization of the AmbHLH148 protein.
Plant root growth is closely related to drought tolerance. Studies have shown that plants with longer root systems and larger root surface areas exhibit enhanced tolerance to drought stress [64]. In this study, we selected two AmbHLH148 transgenic lines, OE6 and OE18, for drought stress experiments using transgenic tobacco. Our observations following the drought treatment revealed that the transgenic tobacco plants showed greater vitality. AmbHLH148-OE plants displayed longer roots and a larger root surface area when compared to their wild-type counterparts. This extended root system provides AmbHLH148-OE plants with a greater capacity to absorb water and nutrients from the soil, which may positively contribute to drought tolerance. Furthermore, our results also show that under drought stress, AmbHLH148-OE plants had higher SOD, POD, and CAT activities than WT plants. In addition, AmbHLH148-OE plants displayed elevated levels of PRO and reduced levels of MDA when contrasted with WT plants, aligning with the results reported by Song et al. [56]. This suggests that overexpression of the AmbHLH148 gene enhances the antioxidant capacity of tobacco plants, thereby enhancing their drought tolerance. These observations collectively suggest that AmbHLH148 may be actively involved in osmotic regulation within tobacco plants during drought stress, thus contributing to an overall enhancement in the plant's ability to endure such adverse conditions.
However, the precise mechanisms by which AmbHLH148 regulates responses to drought stress warrant further investigation. For example, the OsbHLH148 gene has been shown to enhance drought tolerance in rice by regulating the jasmonic acid metabolic pathway and interacting with the OsJAZ protein [42]. In addition, OsbHLH148 positively regulates the expression of the Osr40C1 gene, conferring drought tolerance in rice [65]. In future studies, we aim to further identify the signaling and metabolic pathways engaged by AmbHLH148 through techniques such as ChIP-seq, the electrophoretic mobility shift assay (EMSA), the yeast two-hybrid technique, and other experimental methods. We will also endeavor to screen potential downstream target genes of AmbHLH148, thereby gaining a comprehensive understanding of the regulatory network established by AmbHLH148 with its downstream target genes and elucidating the regulatory mechanism of AmbHLH148 in response to drought.
Conclusions
AmbHLH148 is a nuclear-localized protein that may act as a key positive regulator of the plant response to drought stress. These findings contribute to the identification of crucial candidate genes for genetic engineering aimed at enhancing crop drought tolerance.
Figure 1. Relative expression of the AmbHLH148 gene. Note: lower case letters in the above graph represent significant differences at the 0.05 level. CK represents the pre-treatment sample, while 1 d, 3 d, 5 d, and 7 d correspond to samples taken at different time points following exposure to the 25% PEG-6000 stress treatment. Additionally, fs24 h indicates the sample collected after 24 h of restoration in 1/5 Hoagland's nutrient solution. The values presented in the above column were calculated using the 2^−ΔΔCT method. All data are expressed as the mean ± SD.
Figure 3. Characterization of AmbHLH148 transgenic tobacco under drought stress conditions. (A): Phenotypes of WT and transgenic tobacco under normal watering conditions. (B): Phenotypes of WT and transgenic tobacco under drought for 7 days. (C): Root phenotypes of WT and transgenic tobacco under normal watering conditions. (D): Root phenotypes of WT and transgenic tobacco under drought for 7 days. (E): Root length of WT and transgenic tobacco under normal watering and drought for 7 days. (F): Root surface area of WT and transgenic tobacco under normal watering and drought for 7 days. (G): Germination of WT and transgenic tobacco under 5% PEG stress for 7 days. (H): WT and transgenic tobacco were assayed for germination at 7 days of 5% PEG stress. Note: lower case letters in the above graph represent significant differences at the 0.05 level. All data are expressed as the mean ± SD. (G): Three biological replicates were used in the experiment, and the number of seeds sown on each medium was 40.
Figure 4. The expression levels of the AmbHLH148 gene and stress-responsive genes in both control and drought conditions (7 days) in AmbHLH148-OE plants. (A): Expression levels of the AmbHLH148 gene in transgenic plants under control conditions and drought for 7 days. (B,C): Effects of normal watering and drought for 7 days on the expression levels of the NtNAC2 and NtHSP70-8 genes in wild-type and transgenic plants. In tobacco, the L25 gene was used as the internal reference gene. Note: lower case letters in the above graph represent significant differences at the 0.05 level. The values presented in the above column were calculated using the 2^−ΔCT method. All data are expressed as the mean ± SD.
Figure 5. Analysis of the antioxidant capacity of AmbHLH148-OE transgenic tobacco under both control and drought conditions. (A) SOD activity of WT and transgenic plants. (B) POD activity of WT and transgenic plants. (C) CAT activity of WT and transgenic plants. (D) PRO content of WT and transgenic plants. (E) MDA content of WT and transgenic plants. Note: lower case letters in the above graph represent significant differences at the 0.05 level. All data are expressed as the mean ± SD.
Figure S2. Electrophoretic detection of the AmbHLH148 PCR amplification. Figure S3. The workflow of the genetic transformation of AmbHLH148 transgenic tobacco. Figure S4. Relative expression levels of the AmbHLH148 gene in transgenic plants. Author Contributions: The study was conceived and designed by Y.M.; X.Z. and B.Q. conducted the experiments and analyzed the data; B.F. and Y.Z. (Yongqing Zhai) analyzed the data; Y.Z. (Yan Zhao) provided the seeds; X.Z. and Y.M. wrote the manuscript. B.Q., F.S., L.N., Y.F. and Z.Y. performed the editing of the manuscript. All authors have read and agreed to the published version of the manuscript. Funding: This study was funded by the National Natural Science Foundation of China (No. 31860670 and No. 32360338). | 2023-11-29T16:05:22.391Z | 2023-11-27T00:00:00.000 | {
"year": 2023,
"sha1": "24596b636444b8ce3e29de1a420a246fdf72b851",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4395/13/12/2918/pdf?version=1701101815",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d88912598db60e9e03db1447c9e5b46fd7096111",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
36283238 | pes2o/s2orc | v3-fos-license | NOTCH3 is expressed in human apical papilla and in subpopulations of stem cells isolated from the tissue
NOTCH plays a role in regulating stem cell function and fate decisions. It is involved in tooth development and injury repair. Information regarding NOTCH expression in the human dental root apical papilla (AP) and its resident stem cells (SCAP) is limited. Here we investigated the expression of NOTCH3, its ligand JAG1, and the mesenchymal stem cell markers CD146 and STRO-1 in the AP and in primary cultures of SCAP isolated from the AP. Our in situ immunostaining showed that, in the AP, NOTCH3 and CD146 were co-expressed and associated with blood vessels, with NOTCH3 located more peripherally. In cultured SCAP, NOTCH3 and JAG1 were co-expressed. Flow cytometry analysis showed that 7%, 16% and 98% of the isolated SCAP were positive for NOTCH3, STRO-1 and CD146, respectively, with a rare 1.5% subpopulation of SCAP co-expressing all three markers. The expression level of NOTCH3 decreased when SCAP underwent osteogenic differentiation. Our findings are the first step towards defining the regulatory role of NOTCH3 in SCAP fate decisions.
Introduction
Since the discovery of stem cells from apical papilla (SCAP), new clinical treatment concepts have emerged. A clinical regenerative protocol has been proposed based on some research studies and many clinical case reports. 1–5 The theory behind this protocol is that SCAP in the apical papilla may be induced to regenerate damaged or lost pulp tissue in the canal space. 6–8 This possibility is further supported by the capacity of SCAP to regenerate pulp–dentin-like tissues in vivo in animal models. 6,9 However, despite such clinical endeavors to practice regenerative treatments, the understanding of the biology of SCAP is still limited.
The NOTCH signaling pathway plays a key role in the development and morphogenesis of different organs and tissues in various species. It promotes or suppresses cell proliferation, initiates or inhibits cell differentiation, and determines the fate of different types of stem cells. 10 There are four NOTCH receptors (NOTCH1–4), which are activated by direct contact with their membrane-bound ligands (JAGGED (JAG) 1, JAG2, Delta-like (DLL) 1, DLL3 and DLL4) on neighboring cells. Upon activation of NOTCH receptors, enzymatic activities are triggered, resulting in the cleavage of the NOTCH intracellular domain (NICD), which is then translocated into the nucleus to activate the transcription of target genes, such as HEY1 and HEY2. 10,11 The NOTCH signaling pathway is also believed to be involved and to play an important role in the process of tooth development and pulp regeneration and repair after injury. 12–14 NOTCH3 and its ligand JAG1 are upregulated during tooth development in the vicinity of blood vessels and in the subodontoblast layer, but not in odontoblasts. A similar expression pattern was found in pulp tissue undergoing repair and regeneration after pulp exposure and capping. 12,15,16 Interestingly, NOTCH3 is expressed in the cervical loop (stem cell niche) of continuously erupting teeth (mouse incisors and vole molars), suggesting its possible role in maintaining the undifferentiated state of cells within that niche. 13 Little is known regarding the expression of NOTCH3 and its ligand JAG1 in apical papilla (AP) and SCAP. The aim of this study was to investigate whether and where NOTCH3 is expressed in AP and its expression along with JAG1 in cultured SCAP, as well as its co-expression with the mesenchymal stem cell markers CD146 and STRO-1.
Sample collection
This study was approved by the Human Research Ethics Committee of United Arab Emirates University (#11/10) and the Boston University Medical Institutional Review Board (#H-28882). Freshly extracted, intact human teeth were obtained from consented healthy patients aged 10 to 24 years (n = 5). The teeth were caries-free and had incompletely formed root apices. The root apical papillae were microdissected from extracted teeth and SCAP were isolated and cultured as described below, based on a previous report. 6

Immunohistochemical staining of apical papillae

Apical papillae were obtained as mentioned above and processed for cryosectioning; 8 μm-thick sections were fixed with cold acetone at −20 °C for 15 min, washed in PBS, treated with 1.5% hydrogen peroxide for 30 min and blocked with 2.5% normal horse serum (Vectastain Elite ABC kit; Vector Laboratories) for 1 h. Tissue sections were then incubated with mouse monoclonal anti-NOTCH3 antibodies (dilution 1:100, Abcam, USA) for 1 h at room temperature, followed by washing and incubation with biotinylated anti-mouse immunoglobulin G (secondary antibody) for another 1 h. After washing, avidin–peroxidase complex was added and incubated for 30 min, followed by washing and the addition of peroxidase substrate solution for 5 min. Sections were counterstained with hematoxylin solution (Sigma, USA). Negative control slides were prepared in parallel without adding the primary antibody.
For immunofluorescence staining, frozen sections of AP were fixed with cold acetone at −20 °C for 15 min, washed and blocked for 1 h. Sections were then incubated with primary antibodies, NOTCH3 (dilution 1:100, Abcam, USA) and CD146 (dilution 1:50, Invitrogen, USA), for 1 h at room temperature, followed by washing and incubation with the appropriate fluorophore-conjugated secondary antibodies for another 1 h.
Isolation and culture of SCAP
The papillae were minced and digested in a physiological solution containing 3 mg/mL collagenase type I (GIBCO/Invitrogen) and 4 mg/mL dispase (GIBCO/Invitrogen) for 45 min at 37 °C. Isolated cells were then plated in culture dishes containing alpha-modification of Eagle's medium (GIBCO/Invitrogen) supplemented with 10% fetal bovine serum (GIBCO/Invitrogen), 2 mM L-glutamine (GIBCO/Invitrogen), 100 U/mL penicillin and 100 μg/mL streptomycin (GIBCO/Invitrogen), and incubated in a humidified incubator (Thermo Scientific) at 37 °C in 5% CO2. Once the cells reached ~80% confluence, they were trypsinized and passaged.
Immunocytochemical and immunofluorescence staining of SCAP
For immunocytochemistry, isolated SCAP of passage 2 were seeded on sterile coverslips. At 80% confluence, cells were fixed using 4% paraformaldehyde (PFA) for 30 min, washed with phosphate-buffered saline (PBS) and permeabilized with 0.1% Triton-X100 (Sigma, USA). To inhibit endogenous peroxidase activity, cells on coverslips were incubated in 1% hydrogen peroxide in PBS for 35 min. Nonspecific binding was blocked by incubating cells in 1% bovine serum albumin (BSA) containing 0.5% Tween-20 in PBS for 45 min. Cells were then incubated with goat polyclonal anti-NOTCH3 antibody (dilution 1:25, clone M-20, Santa Cruz Biotechnology Inc., USA) overnight at 4 °C. Cells on coverslips were washed with PBS and then incubated with biotinylated donkey-anti-goat immunoglobulin G (dilution 1:500, Jackson ImmunoResearch Laboratories Inc., USA) for 1 h, and then were incubated in extravidin/peroxidase conjugate (dilution 1:1000, Sigma, USA) for 1 h. The antigen–antibody binding sites were revealed by incubation with 3,3'-diaminobenzidine tetrahydrochloride (DAB, Sigma, USA). Negative control slides were prepared in parallel without adding the primary antibody.
For immunofluorescence staining, SCAP were grown on chamber slides and were fixed, on reaching 80% confluence, in 4% PFA for 30 min, permeabilized, blocked and incubated with primary antibodies: NOTCH3 (dilution 1:100, Abcam, USA) and JAG1 (dilution 1:100, Santa Cruz Biotechnology, USA) for 1 h. The samples were subsequently incubated with the corresponding secondary antibodies for another 1 h, followed by counterstaining with DAPI (Invitrogen, USA). The samples were then observed and images recorded under a fluorescence microscope.
Flow cytometry of SCAP

SCAP at passage 3 were grown in a T25 flask to reach 80% confluence. They were harvested, washed with PBS, and resuspended in PBS containing 2% fetal bovine serum. Conjugated mouse IgG1 κ anti-human monoclonal antibodies specific for CD146-APC, STRO-1-FITC and NOTCH3-PE, or their corresponding isotype controls, were added according to the manufacturer's recommended concentrations (Biolegend, USA) for 30 min at 4 °C. Cells were washed and resuspended in PBS. Flow cytometry was performed on a FACSCalibur cytometer (BD Biosciences), and data were analyzed using FlowJo software (Tree Star Inc., Ashland, OR).
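As an illustration of how such subpopulation percentages are obtained (a hypothetical sketch: the gating thresholds, synthetic intensities, and function names below are our own assumptions, not values or code from this study), one can gate each channel against its isotype control and count co-positive events:

```python
import numpy as np

def subpopulation_fractions(notch3, stro1, cd146, thresholds=(100.0, 100.0, 100.0)):
    """Fraction of events positive for each marker and for all three combined."""
    pos_n = np.asarray(notch3) > thresholds[0]   # NOTCH3-PE channel gate
    pos_s = np.asarray(stro1) > thresholds[1]    # STRO-1-FITC channel gate
    pos_c = np.asarray(cd146) > thresholds[2]    # CD146-APC channel gate
    return {
        "NOTCH3+": pos_n.mean(),
        "STRO-1+": pos_s.mean(),
        "CD146+": pos_c.mean(),
        "NOTCH3+STRO-1+CD146+": (pos_n & pos_s & pos_c).mean(),
    }

# Synthetic example with 10,000 events drawn from log-normal intensities
rng = np.random.default_rng(0)
print(subpopulation_fractions(rng.lognormal(3, 1, 10_000),
                              rng.lognormal(3.5, 1, 10_000),
                              rng.lognormal(7, 1, 10_000)))
```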
Immunohistochemical analysis of the apical papilla
The immunohistochemical staining showed that NOTCH3 expression is associated with blood vessels (Fig. 1). Immunofluorescence staining for NOTCH3 and CD146 confirmed the results of the immunohistochemical staining, and showed that cells in and around the wall of blood vessels stained positive for NOTCH3 and CD146, with NOTCH3 located at the outer layer of the vascular vessel wall.
Detection of NOTCH3 in cultured SCAP
Immunocytochemical staining revealed the localization of NOTCH3 in the nucleus and the cytoplasm of SCAP, especially in the perinuclear region (Fig. 2A,B). Immunofluorescence double-staining showed that NOTCH3 and its ligand, JAG1, were co-localized in SCAP; NOTCH3 was detected in the perinuclear area of the cytoplasm and also in the nucleus, while JAG1 was concentrated in the perinuclear cytoplasm (Fig. 2C,D).
SCAP lose NOTCH3 expression after differentiation
To examine the relationship between SCAP differentiation and NOTCH3 expression, flow cytometry analysis was performed for SCAP grown in the regular medium or osteogenic differentiation medium. The flow cytometry results showed that NOTCH3 expression was detected on SCAP when cultured in the regular medium while the expression was absent when cultured in the osteogenic differentiation medium (Fig. 4).
Discussion
To understand the possible role of the NOTCH signaling pathway in SCAP, we undertook the first step by examining the expression of NOTCH3 and its ligand JAG1. We found that a subpopulation of SCAP expresses NOTCH3 (Figs. 2 and 3), similar to recent findings reported by Sun et al. 17 These results suggest a possible role of NOTCH3 in regulating SCAP properties and behaviors. Mitsiadis et al. found that during development, NOTCH3 is expressed in the subodontoblast layer but not in odontoblasts. 12 In another study, they found it expressed in the stem cell niche of the cervical loop of the mouse incisor, but this expression decreases and disappears as the cells leave the niche, move coronally and start to differentiate into ameloblasts. 13 They suggested that NOTCH3 could be responsible for maintaining the cells in their undifferentiated state, and that once the cells differentiate they lose NOTCH3 expression. Our finding, shown in Fig. 4, that SCAP lose NOTCH3 expression after osteo-induction suggests that NOTCH3 is associated with less differentiated SCAP. This indicates that NOTCH3 may play a role in maintaining SCAP in their undifferentiated state, as is suggested in the recent studies of SCAP and DPSCs by other investigators. 17,18 However, further experiments are needed to confirm this possibility and to explore the association of NOTCH3 with SCAP stemness and differentiation status.
STRO-1 and CD146 are considered early markers of many types of mesenchymal stem cells (MSCs), and it was found that 82% and 96% of the dental pulp-derived colony-forming cells were represented in the STRO-1+ and CD146+ subpopulations, respectively. 19 In addition, it was found that the STRO-1+ subpopulation demonstrated higher expression of embryonic and MSC markers and had a better odontogenic potential than the STRO-1− subpopulation. 20,21 To investigate whether NOTCH3+ SCAP share the expression of STRO-1 and CD146, our flow cytometry analysis showed that 7%, 16% and 98% of the isolated SCAP were positive for NOTCH3, STRO-1 and CD146, respectively. The latter two markers were examined by Sonoyama et al., and the respective percentages are similar to our findings. 6 Our flow cytometry using triple staining is the first to demonstrate the presence of these subpopulations of SCAP. While our data indicate that almost all isolated SCAP expressed CD146, there are 4 distinct subpopulations: ~7% NOTCH3+STRO-1−, ~16% NOTCH3−STRO-1+, ~75% NOTCH3−STRO-1−, and a rare population of NOTCH3+STRO-1+ (Fig. 3). Further investigation is warranted to explore the characteristics of these subpopulations, such as NOTCH3+ SCAP, especially in the context of regenerative potential, as other MSCs are known to be heterogeneous, containing different subpopulations of varying regenerative potential. 19,20,22 As mentioned earlier, studies have indicated that during tooth development and after pulp injury, NOTCH3 is upregulated and expressed on the wall of blood vessels. 15,16 Lovschall et al. found a similar expression pattern and that NOTCH3 was co-expressed with RGS5, which is one of the markers for pericytes. 15 These findings suggest that those NOTCH3-expressing cells likely reside in the perivascular niche and may be of pericyte origin. Our immunohistochemical staining indicates that NOTCH3 expression in apical papillae is exclusively associated with blood vessels. The double staining (NOTCH3 and CD146) further showed that both are restricted and co-localized in and around the vascular walls, with NOTCH3 located more in the periphery (Fig. 1). The reason why we chose CD146 in addition to NOTCH3 was that 1) all of the 7% NOTCH3+ SCAP express CD146, and 2) CD146 is one of the pericyte markers. 19,23 This expression pattern of NOTCH3 in SCAP and their localization on the outer layer of the vascular wall favor the possibility that NOTCH3+ SCAP are of pericyte origin in apical papillae.
In NOTCH signaling, the receptors are activated by binding to one of their ligands on neighboring cells, and this requires direct physical contact between two cells. 10 We attempted to identify other subpopulations of SCAP that provide the ligands to NOTCH3+ SCAP. To our surprise, we found that NOTCH3 and JAG1 were co-localized on the same cells in culture (Fig. 2C–H). To confirm this finding, we repeated this immunohistochemical localization with antibodies from different sources, and the results were consistent (data not shown). As mentioned earlier, NOTCH activation will result in the cleavage of NICD and its translocation to the nucleus. 10,11 Our data showed that NOTCH3 was also localized in the nucleus (Fig. 2C,F), and the anti-NOTCH3 antibodies we used were bound to the intracellular domain. This suggests that the receptor might have already been activated, as it was translocated into the nucleus. This NOTCH3 activation could be a result of binding to a ligand (JAG1 or other ligands such as Delta-1) on adjacent cells, or could occur via a mechanism different from the classical NOTCH activation mechanism, possibly in a cell-autonomous NOTCH activation mode, in which both the NOTCH receptor and its ligand are present on the same cell. The latter possibility is supported by the expression of both NOTCH3 and JAG1 in the same cells and by the NOTCH3 localization in the nucleus; this expression pattern is consistent whether cells are isolated (Fig. 2C–E) or in contact with other cells (Fig. 2F–H). This mode of activation has been reported in Drosophila and in vascular smooth muscle cells in pulmonary arteries. 11,24

Conclusion

Our findings set the basis for further studies to explore NOTCH3 signaling in SCAP and its role in controlling SCAP behavior and fate decision. A better understanding of SCAP biology may help develop clinical strategies to advance regenerative endodontics.
Conflicts of interest
All authors have none to declare. | 2018-04-03T03:44:09.244Z | 2015-05-30T00:00:00.000 | {
"year": 2015,
"sha1": "d4777fded4ecb75953c1160d98ff4a9854d99355",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.gendis.2015.05.003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a8656f5afc28633af2e8266be743e64d6e7b031e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
14806029 | pes2o/s2orc | v3-fos-license | Palliative care for patients with hematological malignancies—a case series
Dear Editor,
Despite significant therapeutic advances in the last decades, more than half of all adult patients with hematological malignancies (ICD 10: C81–C95) will eventually die from their disease (53.6% in Germany; [3]). Specialized palliative care (PC) institutions, which are characterized by a multidisciplinary team approach in order to improve the quality of life of patients with advanced and progressive disease, have gained broad acceptance in the care for patients with solid tumors. However, patients with hematological malignancies are considered to be underrepresented in specialized PC services [4, 5].
Patients with hematological malignancies had significantly later access to PC services, defined as the interval from first PC consultation until death (13 versus 46 days) [2], and a need for explicit PC concepts for patients with hematological malignancies undergoing stem cell transplantation has been claimed [1].
We therefore reviewed all patients with hematological malignancies at our specialized PC institution in order to define underlying clinical and disease-specific characteristics, the subjective needs of these patients, the environment of care, and the different modes of PC support, and tried to outline a specific PC approach that could serve as a first foundation and framework for closer PC/hematology cooperation.
Within 31 months (October 1, 2006–April 21, 2009), 79 patients with underlying hematological malignancies were treated at the Department of Palliative Medicine in Goettingen/Germany (5.1% of 1,555 PC patients in total). Most patients (34 patients; 43.0%) were treated on an outpatient basis/PC home care service, while 33 patients (41.1%) also required inpatient palliative care. Ten patient contacts (12.7%) were limited to the PC consultation service, and two patients (and their relatives) were solely supported by PC volunteer services, at home and on a hematology ward, respectively (Table 1). In a further 15 documented cases, PC services were contacted by the hematology department, but PC support was not initiated because care concepts were modified before transfer (6/15 = 40.0%) or due to a shortage of free PC unit beds (9/15 = 60.0%). Six of the waiting patients died while on the waiting list for a PC unit transfer (6/15 = 40%), one of them on the same day PC services were contacted.
Table 1
Demographic and clinical characteristics (n = 79)
Aggressive lymphoma and acute leukemia, as well as myeloma, were the most frequent entities.
Pain was prevalent in 29.1% of cases (23 patients); 43.4% of those suffered from myeloma. Instead, the request for PC was predominantly (56.9%) motivated by psychosocial or nursing-related demands or general support needs for the patient and his/her family.
For the referred patients, the PC approach encompassed
counseling and pharmacotherapy for pain, dyspnea, fatigue, loss of appetite, and other measures of symptom control
tailored therapeutic interventions like symptom-guided blood product substitution, anti-infective therapy or antineoplastic therapy, as well as wound care or individual modifications of nutrition and infusional therapy. Aspects of blood product substitution, the eventual changes in transfusion requirements, the indications to modify or even restrict previous transfusion concepts with regard to end-of-life care aspects were addressed in 31 patients/39.2% of cases.
supporting individual disease-coping strategies by psychological interventions. This proved especially helpful in situations where disease trajectories and prognostication were uncertain.
advance care planning of precautionary and preparative measures for possible emergencies: bleeding, dyspnea, pain, and other end-of-life crises were anticipated by open communication. Instructions and medications (like on-demand medication for dyspnea, restlessness, or pain) were given in order to facilitate home care and to avoid involvement of emergency medical services at the end of life.
inpatient consultative PC services: numerous patients have been cared for by hematology services for years, and antineoplastic and supportive measures may prove useful throughout the disease; therefore, an “as-needed” consultative approach will permit continuous hematological care.
strengthening medical home care and outpatient nursing: facilitating home care included day and night contact options, early involvement of social services, and a flexible readmission policy that offers inpatient service if home care failed.
involvement of relatives, friends, and other non-professionals like community healthcare providers into home care. As general support needs were high (e.g., due to general weakness) but focal symptoms requiring specialized care not always present, non-professional support proved helpful for continuing care at home.
In our case series, the major therapeutic challenges were related to social problems (discharge planning, organizational tasks, home care, and family support) or psychological problems. Psychological consultations often aimed to provide relief for the patients' (or the families') ambivalence towards realistically understanding the narrow prognostic limitations while still hoping for disease stabilization or even clinical improvement. Reassessment of transfusion requirements with regard to altered treatment goals proved to be a major medical and ethical controversy in almost half of our patients.
We found a cohort of patients that did not necessarily suffer from focal symptoms, as described in epidemiological surveys of general PC patient populations, where, for instance, pain is reported to be a major clinical problem in 81.7% and dyspnea in 20.2% [6].
Eleven patients (13.9%) came from an allogeneic stem cell transplantation background. For this particular group of patients, the transition from curative intervention to a PC approach is considered to be especially difficult, given the maximum efforts for cure from an otherwise fatal disease and the acceptance of high treatment-related morbidity and mortality. Transplanters who are about to decide to seek PC support might ask whether they had given up too soon, or whether they should have given up their curative efforts long ago. Several observational studies with stem cell transplantation patients showed that the introduction of a hospice team earlier in the disease process did not shorten survival or dismiss hope, but appeared to improve symptoms and allowed better planning [1].
Our case series illustrates the need for a specific PC concept for patients with hematological malignancies. From a PC perspective, traditional thinking in PC that suggests a "common pathway" for all disease entities at the end of life [7] has to be abandoned beforehand. The hematological perspective ought to consider that PC is more than pain therapy: it provides numerous multiprofessional competencies in the medical, nursing, social, psychological, spiritual, and ethical fields that can (and should) be utilized within continuous hematological care.
Both approaches to patient care, PC and hematology, can be expected to benefit mutually from intensified communication that meets the specific needs of our patients.
Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. | 2014-10-01T00:00:00.000Z | 2010-09-01T00:00:00.000 | {
"year": 2010,
"sha1": "89149770afd82aba8617be0b100876ec5f873958",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00277-010-1057-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "89149770afd82aba8617be0b100876ec5f873958",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15278866 | pes2o/s2orc | v3-fos-license | Renormalization of Anisotropy and Glueball Masses on Tadpole Improved Lattice Gauge Action
Numerical calculations for tadpole-improved U(1) lattice gauge theory in three dimensions on anisotropic lattices have been performed using standard path-integral Monte Carlo techniques. Using the average-plaquette tadpole renormalization scheme, simulations were done with temporal lattice spacings much smaller than the spatial ones, and results were obtained for the string tension, the renormalized anisotropy and the scalar glueball masses. We find, by comparing the `regular' and `sideways' potentials, that tadpole improvement results in very little renormalization of the bare anisotropy and reduces the discretization errors in the static quark potential and in the glueball masses.
I. INTRODUCTION
Compact U(1) gauge theory in (2+1) dimensions is one of the simplest models with dynamical gauge degrees of freedom and possesses some important similarities with QCD [1]. The model has two essential features in common with QCD, confinement [2,3] and chiral symmetry breaking [4]. The theory is interesting in its own right, for it has analytically been shown to confine electrically charged particles even in the weak-coupling regime (at zero temperature) [2,3,5,6,7,8]. The confinement is understood as a result of the dynamics of the monopoles which emerge due to the compactness of the gauge field. The string tension as a function of the coupling behaves in a similar fashion to that of the 4-dimensional SU(N) lattice gauge theory. This model also allows us to work with large lattices with reasonable statistics. Other common features of compact U(1) in (2+1) dimensions and QCD are the existence of a mass gap and of a confinement-deconfinement phase transition at some non-zero temperature. Thus, being reasonably simple and theoretically well understood in the weak-coupling limit, the U(1) model provides a good testing ground for the development of new methods and new algorithmic approaches. In a recent paper [9] we obtained the first clear picture of the static quark potential, showing very clear evidence of the linear confining behaviour at large distances. Evidence of the scaling behaviour of the string tension and the mass gap has also been observed in this model.
In the present paper we want to extend the analysis of Ref. [9] in various respects. Since the measured ratio of spatial to temporal lattice spacings is not the same as the input parameter in the action, it becomes important to determine the true, or renormalized, anisotropy ξ_phys as a function of the bare anisotropy ξ_0. An important advantage of using anisotropic lattices has been the need to measure the renormalization of the anisotropy in the simulation. The existing theoretical [10,11] and numerical studies [12] with the Wilson action for SU(3) lattice gauge theory have shown that at finite coupling g, the renormalized anisotropy ξ_phys differs appreciably from the bare anisotropy ξ_0. However, it has recently been shown that the use of improved actions, supplemented by tadpole improvement, besides providing a better discretization scheme for QCD, offers the advantage of a significant reduction of the renormalization of ξ_0 to a few percent [13,14]. The anisotropy parameter η (the ratio of the renormalized and bare anisotropies) and anisotropic coefficients have been calculated to one-loop order for improved actions in various recent studies [11,15,16,17,18]. These calculations have provided very reliable results, and the observed behaviour is confirmed non-perturbatively by large-scale simulations on fine lattices [14,19,20]. We investigate the influence of tadpole improvement on isotropic and anisotropic lattices for the U(1) model in (2+1) dimensions in reducing the renormalization of the bare anisotropy at weak and strong couplings. We also apply tadpole-improved U(1) lattice gauge theory to calculations of the static quark potential, the string tension and the scalar glueball masses, and compare the results with simulations of the Wilson action.
* Electronic address: mushe@phys.unsw.edu.au
The rest of this paper is organized as follows. After outlining the tadpole-improved U(1) gauge model in (2+1) dimensions in Sect. II, we describe the method for the determination of the renormalized anisotropy in Sect. III. We present results from our simulations on anisotropic lattices using both the standard and tadpole-improved Wilson gauge actions in Sect. IV. We compare the bare and renormalized anisotropies, the static quark potential and the scalar glueball masses from these actions. We conclude in Sect. V with a summary and an outlook on future work.
II. COMPACT U(1) MODEL IN (2+1) DIMENSIONS
The tadpole-improved U(1) gauge action on an anisotropic lattice can be written in the following form [12]:

S = β [ ξ_0 Σ_{x, i<j} (1 − Re P_ij(x))/u_s^4 + (1/ξ_0) Σ_{x, i} (1 − Re P_it(x))/(u_s^2 u_t^2) ],  (1)

where P_µν is the plaquette operator, ξ_0 = ∆τ = a_t/a_s is the bare anisotropy at the classical level, and u_s and u_t are the mean fields for the tadpole improvement. The notation used in Eq. (1) differs slightly from that used in Refs. [11,17], where the spatial and temporal mean-field improvement factors, u_s and u_t, were absorbed into the definitions of β and ξ_0. This, however, follows the notation introduced in Ref. [14].
On the anisotropic lattice, the mean fields are determined using the measured values of the average plaquettes [21]. We first compute u_s from the spatial plaquettes, u_s^4 = ⟨P_ij⟩, and then we compute u_t from the temporal plaquettes, u_t^2 u_s^2 = ⟨P_it⟩. Another way to determine the mean fields is to use the mean links in Landau gauge [22], where the lattice version of the gauge condition is obtained by maximizing the quantity Σ_{x,µ} Re U_µ(x). Since the temporal lattice spacing in our simulations is very small, we adopt the following convention [13,14,21] for the mean fields in tadpole improvement: u_t = 1, with u_s determined self-consistently from the average spatial plaquette, u_s = ⟨P_ij⟩^{1/4}. This prescription eliminates the need for gauge fixing, and the results yield values for u_s which differ from those using Landau gauge by only a few percent.
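As a rough illustration of this prescription (a sketch with our own naming, not code from the paper), the mean-field factors can be computed from measured plaquette averages as below; in practice u_s is tuned self-consistently by iterating simulation and measurement:

```python
import numpy as np

def tadpole_factors(spatial_plaquettes, temporal_plaquettes=None, small_at=True):
    """Mean-field factors from measured average plaquettes.

    u_s^4 = <P_ij>; on a strongly anisotropic lattice (a_t << a_s) the
    convention u_t = 1 is adopted, otherwise u_t^2 u_s^2 = <P_it>.
    """
    u_s = np.mean(spatial_plaquettes) ** 0.25
    if small_at or temporal_plaquettes is None:
        u_t = 1.0                                    # convention for a_t << a_s
    else:
        u_t = np.sqrt(np.mean(temporal_plaquettes)) / u_s
    return u_s, u_t
```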
III. RENORMALIZATION OF ANISOTROPY
Following the procedure of Klassen [12] and Shakespeare and Trottier [19], we measure the static quark potential extracted from Wilson loops in the spatial and temporal directions. Accordingly, on an anisotropic lattice there are two potentials, V_xt(R) and V_xy(R). The two potentials differ by a factor of ξ_phys and by an additive constant, since the self-energy corrections to the static potential are different if the quark and anti-quark propagate along the temporal or a spatial direction. Thus ξ_phys can be determined by comparing the static quark potential computed from logarithmic ratios of time-like Wilson loops, R(x, τ), with the potential computed from logarithmic ratios of space-like Wilson loops, R(x, y); for example, V_xt(x) is estimated from ln[R(x, τ)/R(x, τ + 1)] at large τ. Asymptotically, for large τ and y, the ratios R(x, τ) and R(x, y) approach

R(x, τ) = Z_xτ e^{−τ V_xt} + (excited state contr.),  (7)
R(x, y) = Z_xy e^{−y V_xy} + (excited state contr.).  (8)

To suppress the excited-state contributions, a simple APE smearing technique [23,24,25] was used. In this technique an iterative smearing procedure is used to construct Wilson loop (and glueball) operators with a very high degree of overlap with the lowest-lying state. In our single-link smoothing procedure, we replace every space-like link variable by

U_i(x) → P [ U_i(x) + α Σ_s U_s(x) ],  (9)

where the sum over s refers to the "staples", or 3-link paths bracketing the given link on either side in the spatial plane, and P denotes a projection onto the group U(1), achieved by renormalizing the magnitude to unity. We used a smearing parameter α = 0.7 and up to ten iterations of the smearing process. To reduce the statistical errors, the time-like Wilson loops were constructed from "thermally averaged" time-like links [24,25]. The links making up the space-like and time-like Wilson loops are smeared by the same amount so that the ratios R(x, τ) and R(x, y) have the same excited-state contribution. Similarly, the finite-volume corrections to R(x, τ) and R(x, y) are the same if the temporal and spatial extents are equal in physical units, i.e. N_s = ξ_phys N_t in lattice units. These statements are expected to hold only for large x, y and τ; otherwise there can be large O(a_s², a_t²) lattice errors. The physical anisotropy is determined from the ratio of the potentials V_xt(R) and V_xy(R) estimated from R_xt and R_xy, respectively. The unphysical constant in the potentials is removed by subtraction of the simulation results at two different radii,

ξ_phys = [V_xt(R_1) − V_xt(R_2)] / [V_xy(R_1) − V_xy(R_2)].  (10)

The measured renormalization of the anisotropy, η, is then determined from

η = ξ_phys / ξ_0.  (11)
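A minimal sketch of Eqs. (10)-(11), assuming the potentials have already been extracted at two radii (the names and data layout are our own):

```python
def renormalized_anisotropy(V_xt, V_xy, R1, R2, xi0):
    """xi_phys from subtracted potentials, and eta = xi_phys/xi0.

    V_xt and V_xy map radius -> potential in lattice units. With this paper's
    convention xi = a_t/a_s, the time-like potential carries a factor a_t and
    the space-like one a factor a_s, so the subtracted ratio gives xi_phys.
    """
    xi_phys = (V_xt[R1] - V_xt[R2]) / (V_xy[R1] - V_xy[R2])   # Eq. (10)
    eta = xi_phys / xi0                                        # Eq. (11)
    return xi_phys, eta
```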
IV. SIMULATION AND RESULTS
Simulations were performed on four lattices of N_s² × N_t sites, with N_s = 16 and N_t ranging from 32 to 48, with mean-link improvement, and on four lattices with the Wilson action. Configurations were generated using the Metropolis algorithm. The details of the algorithm are discussed elsewhere [9]. 50,000 sweeps were performed for thermalization of the configurations and the self-consistent determination of the tadpole factors. Configurations were stored every 250 sweeps thereafter. Ensembles of about 1000 configurations were used to measure the static quark potential, while 1,400 configurations, at coupling values from β = 1.0 to 2.5, were generated for the glueball mass. We fixed ξ_0 = 16/N_t in the first pass, so that the lattice size remains fixed at 16a_s in all directions. The simulation parameters of the lattices analyzed here are given in Table I. After measuring the Wilson loops at fixed values of β, we compute the ratios R_xt and R_xy. We find that the individual ratios reach their plateaus for τ ≥ 3 and y ≥ 3 at fixed x, as shown in Figures 1 and 2. These ratios are expected to be independent of τ and y for τ, y ≥ 3, respectively. The estimates of the potentials V_xt(R) and V_xy(R) can now be found from these ratios. Figure 3 shows a graph of the static quark potentials, computed from spatial and temporal Wilson loops, as a function of radius R at β = 1.306 and ∆τ = 0.5. The potential in lattice units obtained from the ratio of the time-like Wilson loops has been rescaled by the input anisotropy. To extract the string tension, the time-like potential is well fitted by a form including a logarithmic Coulomb term, as expected for classical QED in (2+1) dimensions, which dominates the behaviour at small distances, and a linear term, as predicted by Polyakov [3] and Göpfert and Mack [8], dominating the behaviour at large distances and showing clear evidence of the linear confining behaviour at large distances. We measured each anisotropy twice, using two different radii R_1 for subtraction, with fixed R_2. Setting R_2 = 4, we computed the anisotropy with R_1 = 2 and √2. The two determinations of the anisotropy, shown in Table II, are in excellent agreement. The numerical values of the renormalization of the anisotropy parameter η appear to be equal to unity even at large β. It is seen that with mean-field improvement the input anisotropy is renormalized by a few percent over the range of lattices analyzed here, whereas the measured value of the anisotropy is about 15−20% lower than the bare anisotropy with the standard Wilson action. This can be seen from Figure 4, where the renormalization of the anisotropy is plainly visible as a difference in slope of the potentials computed from R_xt and R_xy.
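As a rough sketch of such a fit (our own construction with illustrative names; the paper's actual fit ranges and error analysis are not reproduced), one might write:

```python
import numpy as np
from scipy.optimize import curve_fit

def potential(R, V0, c, sigma):
    # constant + logarithmic Coulomb term (classical QED in 2+1 D) + linear term
    return V0 + c * np.log(R) + sigma * R

def fit_string_tension(R, V, V_err):
    """Fit the time-like potential and return the string tension and its error."""
    popt, pcov = curve_fit(potential, np.asarray(R, float), np.asarray(V, float),
                           sigma=V_err, absolute_sigma=True)
    return popt[2], np.sqrt(pcov[2, 2])
```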
Glueball correlation functions C(τ) were also calculated,

C(τ) = ⟨Φ̄_i(τ) Φ̄_i(0)⟩,  (12)

where Φ̄_i(τ) is the optimized glueball operator found by a variational technique, following Morningstar and Peardon [14] and Teper [26], from a linear combination of the basic operators φ_i,

Φ̄_i = Σ_j c_ij φ_j.  (13)

The optimized correlation function was fitted with the simple exponential form

C(τ) = Z e^{−m τ}  (14)

to determine the glueball mass estimates.
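Assuming the single-exponential form above dominates at large τ (a standard assumption; the naming below is ours, not the paper's code), the mass can be read off from an effective-mass plateau:

```python
import numpy as np

def effective_mass(C):
    """m_eff(tau) = ln[C(tau)/C(tau+1)]; a plateau at large tau gives the mass."""
    C = np.asarray(C, dtype=float)
    return np.log(C[:-1] / C[1:])
```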
The results for the symmetric and the anti-symmetric glueball masses over the square root of the string tension are shown in Table III, along with the mean plaquette values at different β at ∆τ = 1.0. Figure 5 shows the behaviour of the logarithm of the antisymmetric mass gap over the square root of the string tension as a function of β. It can be seen that the ratio scales exponentially to zero in the weak-coupling limit, as it should in three-dimensional confining theories. The solid line is a fit to the data over the range 0.916 ≤ β ≤ 2.12. The slope of the data matches the predicted form [8]; however, the intercept of the scaling curve is large by a factor of 2 (our previous estimates of the constant coefficient for the standard Wilson action were large by a factor of 5.2 [9]). It would be interesting to test the sensitivity of the slope and intercept of the scaling curve by including the radiative corrections.
A plot of the mass ratio against the effective lattice spacing a_eff [9] is shown in Figure 6. At weak coupling, the theory is expected to approach a theory of free bosons [8], so that the symmetric state will be composed of two 0⁻⁻ bosons and the mass ratio should approach two. The mass ratio of the lowest glueball states scales against the effective lattice spacing towards a value close to 2.0, as expected for a theory of free scalar bosons. We note that the mass ratio exhibits scaling behaviour even with the Wilson action [9]; however, in contrast with the Wilson action, a significant reduction in the errors with the tadpole-improved action is apparent in the mass gap and the mass ratio.
V. SUMMARY AND OUTLOOK
Mean-field improved U(1) lattice gauge theory in (2+1) dimensions was applied to calculations of the static quark potential, the renormalized lattice anisotropy and the scalar glueball masses. We analyzed the mean-link improved action on isotropic and anisotropic lattices, and comparisons were made with simulations of the Wilson action. By comparing the static quark potentials computed from space-like and time-like Wilson loops, we determined the physical anisotropy of the tadpole-improved Wilson action. We found that with the mean-link improved Wilson action, the bare anisotropy is renormalized by less than a few percent, in contrast with the standard Wilson action, where the measured value of the anisotropy is found to be about 15−20% lower than the bare anisotropy on the lattices analyzed here. We found that tadpole improvement significantly reduces discretization errors in the static quark potential and the glueball masses. The mass ratio of the two lowest glueball states scales against the effective lattice spacing towards a value close to 2.0, as expected for a theory of free scalar bosons. We intend to extend PIMC techniques to Symanzik-improved U(1) lattice gauge theory. The intention is to study the effects of improvement on the scaling slope, the constant coefficients and the scaling behaviour observed in the weak-coupling regime of the theory. We also plan to study the one-loop correction to the anisotropy factor for the Symanzik-improved U(1) gauge action in three dimensions. We shall report on this work in the near future. | 2014-10-01T00:00:00.000Z | 2003-03-18T00:00:00.000 | {
"year": 2003,
"sha1": "6d5cb88f9e1274aef1a70ac9e6a130eedecc5b1f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-lat/0303011",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6d5cb88f9e1274aef1a70ac9e6a130eedecc5b1f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
56012615 | pes2o/s2orc | v3-fos-license | Distributed Drives Monitoring and Control: A Laboratory Setup
A laboratory setup of a distributed drives system comprising a three-phase induction motor (IM) drive and a permanent magnet synchronous motor (PMSM) drive is modeled, designed, and developed for the monitoring and control of the individual drives. The integrated operation of the IM and PMSM drives system has been analyzed under different operating conditions, and their performance has been monitored through a supervisory control and data acquisition (SCADA) system. The necessary SCADA graphical user interface (GUI) has also been created for the display of drive parameters. The performances of the IM and PMSM under parametric variations are predicted through sensitivity analysis. An integrated operation of the drives is demonstrated through experimental and simulation results.
Introduction
Monitoring and control of drives is a necessary prerequisite for quality control of a product as well as for energy conservation in automated process plants. Electrical energy is supplied to the motors through power electronic converters to obtain the desired torque/speed characteristics of the motors for motion control in industrial processes. This is achieved through modern motor drives, advanced control algorithms, and intelligent devices such as the programmable logic controller (PLC), digital signal processor (DSP), and microcontroller. This makes the operation of drives complex, sophisticated, and expensive [1]. Further, in a production plant, the process is distributed at the shop level based on functional requirements, which results in distribution of the various drives for different process operations. In a distributed drives system, the processing tasks are physically distributed among the various drives, which requires placement of the necessary computing, with an optimal volume of data, close to the process. Such a system also provides fault-tolerant and self-diagnostic capability and enhances the reliability of the overall system. Thus, a distributed drives system has partially autonomous local computing devices with input, output, and storage capability, interconnected through a digital communication link coordinated by a supervisory control and data acquisition system. The distributed system has the advantages of local as well as centralized control. In such cases, the SCADA and programmable logic controller coordinate the local controllers through a communication link [2].
In the past few decades, limited literature has been available on distributed drives control using PLC. Applications of PLC have been reported for the monitoring control system of an induction motor [3,4]. PLC has also been used as a power factor controller for power factor improvement and to keep the voltage-to-frequency ratio of a three-phase IM constant under all control conditions [5]. Also, a vector-oriented control scheme for the regulation of voltage and current of a three-phase pulse width modulation inverter, which uses a complex programmable logic device (CPLD) [6], has been reported. Remote control and operation of an electric drive need a large amount of data to be acquired, processed, and presented by the SCADA system [7,8]. In this paper, distributed control for a three-phase IM drive and a three-phase PMSM drive is configured, designed, and developed for experimental work, and integrated control operation is demonstrated through experimental and simulation results. Applications of adjustable speed drives (ASDs) for fans, pumps, blowers, and compressors do not require very precise speed control. A speed sensor in a drive adds cost and reduces the reliability of the drive. Therefore, for applications requiring moderate performance, a sensorless drive is a better option, and, hence, sensorless vector control is used for IM control [9-14]. On the other hand, PMSMs are generally used for low-power servo applications where very precise position control is required. A PID controller has been applied to position control [15], and a model reference adaptive control has been implemented for the PMSM [16]. As speed estimators and observers rely on knowledge of the motor parameters, they are inadequate for accurate position estimation. In the present work, a position feedback encoder is used for the PMSM, and an indirect field-oriented control is employed for its control [17-19].
A detailed study on distributed drives, including the design, development, and testing of prototype distributed drives, is demonstrated. The monitoring and supervisory control of the IM and PMSM drives, thereby validating the concept of distributed drives, is also described. Further, the developed experimental setup enables and facilitates imparting training and provides facilities for hands-on experimentation, research, and practical training. The necessary SCADA GUI has also been created for the display of drive parameters such as speed. The performances of the IM and PMSM are predicted by sensitivity analysis.
Control Algorithm
Sensorless control for IM and indirect field-oriented control for the PMSM have been used in distributed control of the drives.
Sensorless Control of Three-Phase IM Drive
2.1.1. Flux Estimator. The direct and quadrature rotor flux components (ψ_dr and ψ_qr) are estimated from the IM terminal voltages (v_a, v_b, and v_c), currents (i_a, i_b, and i_c), the stator resistance of the motor, R_s, the stator and rotor self-inductances, L_s and L_r, respectively, and their mutual inductance, L_m, as described in (1) to (3) [20]:

ψ_ds = ∫ (v_ds − R_s i_ds) dt,  ψ_qs = ∫ (v_qs − R_s i_qs) dt,  (1)
ψ_dr = (L_r/L_m)(ψ_ds − σ L_s i_ds),  (2)
ψ_qr = (L_r/L_m)(ψ_qs − σ L_s i_qs),  (3)

where σ = 1 − L_m²/(L_s L_r), and i_ds, i_qs, ψ_ds, and ψ_qs are the stator direct and quadrature axis currents and fluxes, respectively. Also,

ψ_r = √(ψ_dr² + ψ_qr²).  (4)

The correct alignment of the current, i_ds, in the direction of the flux, ψ_r, and of the current, i_qs, perpendicular to it, are needful requirements in vector control. This alignment is depicted in Figure 1 using the rotor flux vectors ψ_dr and ψ_qr, where the d_e-q_e frame rotates at synchronous speed with respect to the stationary frame d_s-q_s, and at any instant, the angular position of the d_e axis with respect to the d_s axis is θ_e, where

cos θ_e = ψ_dr/ψ_r,  sin θ_e = ψ_qr/ψ_r.  (5)
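A discrete-time sketch of Eqs. (1)-(5) is given below, assuming sampled stationary-frame voltages and currents and a sampling period Ts; the naming is ours, and in practice the pure integrators would be replaced by low-pass filters to limit drift:

```python
import numpy as np

def rotor_flux_step(state, v_ds, v_qs, i_ds, i_qs, Rs, Ls, Lr, Lm, Ts):
    """One Euler step of the voltage-model rotor flux estimator."""
    psi_ds, psi_qs = state
    psi_ds += Ts * (v_ds - Rs * i_ds)                    # Eq. (1), d-axis
    psi_qs += Ts * (v_qs - Rs * i_qs)                    # Eq. (1), q-axis
    sigma = 1.0 - Lm**2 / (Ls * Lr)                      # leakage coefficient
    psi_dr = (Lr / Lm) * (psi_ds - sigma * Ls * i_ds)    # Eq. (2)
    psi_qr = (Lr / Lm) * (psi_qs - sigma * Ls * i_qs)    # Eq. (3)
    psi_r = np.hypot(psi_dr, psi_qr)                     # Eq. (4)
    cos_t, sin_t = psi_dr / psi_r, psi_qr / psi_r        # unit vectors, Eq. (5)
    return (psi_ds, psi_qs), psi_dr, psi_qr, psi_r, cos_t, sin_t
```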
Speed Estimator.
The speed is estimated by using the data of the rotor flux vector (ψ_dr, ψ_qr), obtained in the flux estimator, as follows.
The rotor circuit equations [20] in the stationary frame are

dψ_dr/dt = (L_m/T_r) i_ds − (1/T_r) ψ_dr − ω_r ψ_qr,  (7)
dψ_qr/dt = (L_m/T_r) i_qs − (1/T_r) ψ_qr + ω_r ψ_dr,  (8)

where ω_r is the rotor electrical speed and T_r (i.e., L_r/R_r) is the rotor time constant. Also, from (5), ψ_dr = ψ_r cos θ_e and ψ_qr = ψ_r sin θ_e. Differentiating the aforementioned gives the flux derivatives ψ̇_dr and ψ̇_qr (13). Combining (7), (8), and (13) and simplifying, one yields

ω_r = (1/ψ_r²) [ (ψ_dr ψ̇_qr − ψ_qr ψ̇_dr) − (L_m/T_r)(ψ_dr i_qs − ψ_qr i_ds) ],  (14)

where ψ̇_dr and ψ̇_qr are the first derivatives of ψ_dr and ψ_qr, respectively.
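Continuing the sketch above, Eq. (14) maps directly into code once the flux derivatives are approximated numerically (e.g., by backward differences); again the naming is illustrative:

```python
def estimate_speed(psi_dr, psi_qr, dpsi_dr, dpsi_qr, i_ds, i_qs, Lm, Tr):
    """Estimated electrical rotor speed per Eq. (14); Tr = Lr/Rr."""
    psi_r2 = psi_dr**2 + psi_qr**2
    slip = (Lm / Tr) * (psi_dr * i_qs - psi_qr * i_ds)
    return (psi_dr * dpsi_qr - psi_qr * dpsi_dr - slip) / psi_r2
```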
The torque component of current, i_qs*, and the flux component of current, i_ds*, are evaluated from the speed control loop and the flux control loop, respectively, as follows:

i_qs* = G_1 (ω_r* − ω_r),  i_ds* = G_2 (ψ_r* − ψ_r),  (15)

where ω_r* and ψ_r* are the reference speed and flux; G_1 and G_2 are the gains of the speed loop and flux loop; ψ_r and ω_r are computed using the flux and speed estimators, respectively, as explained earlier.
The principal vector control parameters, i_qs* and i_ds*, which are DC values in the synchronously rotating frame, are converted to the stationary frame with the help of the unit vectors (sin θ_e and cos θ_e) generated from the flux vectors ψ_dr and ψ_qr as given by (5).
The resulting stationary-frame signals are then converted to phase current commands for the inverter [20]. The torque is estimated using (16) as

T_e = (3/2)(P/2)(L_m/L_r)(ψ_dr i_qs − ψ_qr i_ds).  (16)

The block diagram of the sensorless vector control for the IM drive is shown in Figure 2.
Control Scheme for
Three-Phase PMSM Drive. The rotor of the PMSM is made up of a permanent magnet of neodymium-iron-boron, which offers high energy density. Based on the assumptions that (i) the rotor copper losses are negligible, (ii) there is no saturation, (iii) there are no field current dynamics, and (iv) there are no cage windings on the rotor, the stator d-q equations of the PMSM in the rotor reference frame are as follows [17,21]:

v_q = R_s i_q + ω λ_d + dλ_q/dt,  (17)
v_d = R_s i_d − ω λ_q + dλ_d/dt,  (18)

with λ_q = L_q i_q and λ_d = L_d i_d + λ_f, where v_q and v_d are the q, d axis voltages, i_q and i_d are the q, d axis stator currents, L_q and L_d are the q, d axis inductances, λ_q and λ_d are the q, d axis stator flux linkages, λ_f is the flux linkage due to the rotor magnets linking the stator, while R_s and ω are the stator resistance and inverter frequency, respectively. The inverter frequency is related to the rotor speed as follows:

ω = (P/2) ω_m,  (19)

where P is the number of poles, and the electromagnetic torque is

T_e = (3/2)(P/2)[λ_f i_q + (L_d − L_q) i_d i_q].  (20)

This torque, T_e, encounters the load torque, the moment of inertia of the drive, and its damping constant. Thus, the equation for the motion is given by

T_e = T_L + B ω_m + J (dω_m/dt),  (21)

where T_L is the load torque, J the moment of inertia, and B the damping coefficient. Figure 3 shows the typical block diagram of a PMSM drive. The system consists of a PMSM, speed/position feedback, an inverter, and a controller (constant torque and flux-weakening operation, generation of reference currents, and a PI controller). The error between the commanded and actual speed is operated upon by the PI controller to generate the reference torque.
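As a minimal numerical sketch of Eqs. (20) and (21) (our own naming and discretization, not the authors' implementation):

```python
def pmsm_torque(i_d, i_q, Ld, Lq, lam_f, P):
    """Electromagnetic torque, Eq. (20)."""
    return 1.5 * (P / 2.0) * (lam_f * i_q + (Ld - Lq) * i_d * i_q)

def mech_step(omega_m, Te, TL, J, B, Ts):
    """One Euler step of the mechanical equation, Eq. (21)."""
    return omega_m + Ts * (Te - TL - B * omega_m) / J
```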
The ratio of the torque reference and the motor torque constant is used during constant-torque operation to compute the reference quadrature axis current, i_q*. For operation up to rated speed, the direct axis current is made equal to zero. From these d-q axes currents and the rotor position/speed feedback, the reference stator phase currents are obtained using Park's inverse transformation as given in (22),

[i_a*; i_b*; i_c*] = [cos θ, sin θ, 1; cos(θ − 2π/3), sin(θ − 2π/3), 1; cos(θ + 2π/3), sin(θ + 2π/3), 1] [i_q*; i_d*; i_0],  (22)

where i_0 is the zero-sequence current, which is zero for a balanced system.
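A sketch of this step is given below; note that the exact sign and scaling conventions depend on the Park-transform variant adopted, so this is one common form rather than necessarily the one used in the paper (zero-sequence current taken as zero):

```python
import numpy as np

def dq_to_abc(i_q_ref, i_d_ref, theta):
    """Reference phase currents from d-q commands and rotor electrical angle."""
    ia = i_q_ref * np.cos(theta) + i_d_ref * np.sin(theta)
    ib = i_q_ref * np.cos(theta - 2 * np.pi / 3) + i_d_ref * np.sin(theta - 2 * np.pi / 3)
    ic = i_q_ref * np.cos(theta + 2 * np.pi / 3) + i_d_ref * np.sin(theta + 2 * np.pi / 3)
    return ia, ib, ic
```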
The hysteresis PWM current controller attempts to force the actual motor currents to the reference current values using stator current feedback. The error between these currents is used to switch the PWM inverter. The output of the PWM is supplied to the stator of the PMSM, which yields the commanded speed. The position feedback is obtained by an optical encoder mounted on the machine shaft.
In order to operate the drive in the flux-weakening mode, it is essential to find the maximum speed. The maximum operating speed with zero torque can be obtained from the steady-state stator voltage equations. The flux-weakening controller computes the demagnetizing component of the stator current, i_d, satisfying the maximum current and voltage limits. For this direct axis current and the rated stator current, the quadrature axis current can be obtained from (23),

i_q = √(i_s,rated² − i_d²).  (23)

These d-q axes currents and the rotor position/speed can be utilized to obtain the commanded speed.
Sensitivity Analysis of IM and PMSM
Sensitivity analysis is used by designers of machines for the prediction of the effect of a parameter of interest on the performance variables of the motor. In the present study, sensitivity values of performance variables like power input, power output, efficiency, power factor, stator current, starting current, magnetizing current, developed torque, and starting torque, with respect to the equivalent circuit parameters, are obtained for the IM. The sensitivity is computed by (24); the sensitivity of a variable X with respect to a parameter can be represented as

S = (X′ − X)/X,  (24)

where X is the performance variable with nominal parameters, and X′ is the value of the performance variable when the value of the parameter is increased by a defined deviation value. A similar analysis is also carried out for the PMSM.
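A generic numerical version of this computation (a sketch: perf_fn, the parameter dictionary, and the 1% default deviation are illustrative assumptions) could be:

```python
def sensitivity(perf_fn, params, name, delta=0.01):
    """Fractional change in a performance variable per fractional change in
    one equivalent-circuit parameter, in the spirit of Eq. (24).

    perf_fn maps a parameter dict to a performance variable (e.g., efficiency).
    """
    base = perf_fn(params)
    bumped = dict(params, **{name: params[name] * (1.0 + delta)})
    return ((perf_fn(bumped) - base) / base) / delta
```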
Laboratory Setup of the Distributed Drives System
To analyse the utility of the distributed drives system, a laboratory setup has been designed and developed for research and development activities.
SCADA.
For the remote monitoring and control of the drives, GE Fanuc SCADA Cimplicity 7.5 software is used.
The SCADA software is loaded on the server PC, which provides supervision in the form of graphical animation and data trends of the processes on the window of the PC or the screen of the HMI. The Cimplicity project wizard window is used to configure various communication ports and the controller type, and also to create new points corresponding to addresses used in the controller. This graphical interactive window is used to animate the drive system. At present, controls like start, stop, speed control, and so forth are developed on the software window to control the drives remotely. Each drive can be controlled locally at the field level, through the PLC, or through the SCADA interface. The SCADA GUI developed for the speed control of the distributed drives is shown in Figure 5(b). The control algorithm has been implemented and tested for a three-phase squirrel cage induction motor and a three-phase PMSM drive. The technical specifications for these drives are presented in Tables 1 and 2.
Results and Discussions
A three-phase sensorless induction drive and a three-phase PMSM drive are configured in the SCADA system. The operation and performance characteristics of the drives are monitored and studied under varying torque and speed conditions. Simulation results are also described. In order to study the effect of parametric variation on the motor performance variables, sensitivity analysis is carried out for both the IM and the PMSM with respect to their respective equivalent circuits.
Performance of Three-Phase IM Drive under Different Load Conditions
Figures 6 to 11 show the variation of various parameters of the sensorless control induction motor drive during starting at no-load and 25% load conditions. Figures 6 and 7 show the variation of voltage and frequency. Both the voltage and frequency increase linearly till they attain values of 385 V and 48 Hz, respectively, at rated speed under the no-load starting condition. When the machine is started with a load of 25%, the variations in voltage and frequency are almost similar to those of the previous case. Figure 8 shows the variation of current during starting under no-load and 25% load conditions. The starting current was 4.38 A, which settled down to a steady-state value of 3.1 A in 2 s in the no-load case. With 25% load, the starting current was 5.7 A, which settled down to a steady-state value of 3.25 A in about 10 s.
Figure 9 shows the variation of torque during starting with no load and 25% load. At no-load starting, it is observed that the negative peak torque value is 5.9 Nm at the first instance; it then reaches a positive peak value of 10.3 Nm and finally settles down to a steady-state value of 0.7 Nm in about 10 s. With 25% load starting, the negative peak torque value at the first instance is 6 Nm, and the positive peak value is 20.1 Nm, which finally settles down to a steady value of 4.5 Nm in 12 s. Figure 10 shows the power variation at starting with no load and 25% load. The power drawn during the transient period is 0.18 kW, which decreases to 0.1 kW, then increases linearly to 0.3 kW and finally settles down to 0.14 kW in 11 s. When the machine is started with 25% load, the initial power drawn is 0.42 kW, which then increases to 1.3 kW and finally settles down to 0.8 kW in 13 s.
Figure 11 shows the speed response of the IM during starting at no load and 25% load. It is observed that the motor reaches its rated speed, i.e., 1440 rpm, in about 9 s under no-load starting and in about 10 s under 25% load starting. Figure 12 shows the simulated dynamic performance of the IM drive under no load at the rated speed of 1440 rpm. The motor attains the desired speed of 1440 rpm in about 5.5 s.
Starting Performance of Three-Phase PMSM Drive under No Load Condition
Figure 13 shows the dynamic performance of the PMSM under no load with a reference speed of 3000 rpm. The motor attains the set synchronous speed of 3000 rpm in about 300 ms. Figure 14 shows the simulated dynamic performance of the PMSM at no load with a reference speed of 3000 rpm. The motor attains the set synchronous speed of 3000 rpm in about 280 ms.
Sensitivity Analysis for Performance Variables of PMSM.
Motor parameters like the stator resistance and inductance vary depending on operating conditions, mainly the motor duty cycle, the effect of magnetic saturation, and so forth. The effect of parametric variations on the efficiency of the PMSM has been analyzed and is shown in Figure 15 for rated speed and rated torque conditions, where K represents the parameter variation coefficient and is defined as the ratio of the new parameter value to the actual parameter value, that is, K = R_s′/R_s = L_s′/L_s, where R_s′ and L_s′ are the new stator resistance and stator inductance, respectively. In the present analysis, K is determined for a 1% deviation in the motor parameters.
It is observed from Figure 15 that the variation in the efficiency is negligibly small with variation in R_s and L_s, and it follows a Gaussian distribution. The variation in efficiency due to the change in R_s and L_s is expressed as the fourth-order polynomial best-fit curve of (26) in Figure 15.
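The fourth-order best fit of the efficiency-versus-K curve can be reproduced along these lines; the synthetic (K, efficiency) samples below merely stand in for the Figure 15 data, and the coefficients of (26) are not reproduced here.

import numpy as np

# Hypothetical (K, efficiency) samples standing in for the Figure 15 data
rng = np.random.default_rng(0)
K = np.linspace(0.9, 1.1, 21)
eta = 0.92 - 0.5 * (K - 1.0) ** 2 + 0.001 * rng.standard_normal(K.size)

coeffs = np.polyfit(K, eta, deg=4)    # fourth-order best fit, as in Eq. (26)
eta_fit = np.polyval(coeffs, K)
print(coeffs)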
Sensitivity Analysis for Performance Variables of IM.
Table 3 shows the sensitivity of the different performance variables of the three-phase IM with respect to its equivalent circuit parameters. The sensitivity of the power input and power output with respect to is the highest and is the lowest with respect to . Motor efficiency is less sensitive to all the equivalent circuit parameters, with variation in and affecting it more compared to other parameters. The power factor is more affected by variation in and , while variations in other parameters have less effect on it. The stator current is more affected by variation in and least by variation in . The sensitivity of the starting current with respect to and is the highest, while with respect to it is the lowest. The magnetizing current is more sensitive to changes in and less sensitive to changes in . The developed torque and starting torque are mainly affected by variations in . The sensitivity of the developed torque is the least with respect to and , respectively. The sensitivity of the performance variables with respect to frequency and supply voltage is also obtained. It is observed that the frequency variation has the maximum effect on the starting torque and magnetizing current, followed by the starting current. The sensitivity of the developed torque with respect to frequency is the least. The sensitivity of the power input, power output, developed torque, and starting torque with respect to the supply voltage is 2% each. Similarly, the stator current, starting current, magnetizing current, and so forth change by 1% with variation in the supply voltage. The supply voltage variation has a negligible effect on efficiency, while the power factor is not affected by supply voltage variation.
Conclusion
A prototype distributed drives system, consisting of a three-phase IM drive and a PMSM drive, is designed, developed, and implemented as a laboratory setup. This prototype system demonstrates the operation and control of distributed drives through PLC and SCADA. The operation, control, and monitoring of various performance parameters of the PMSM and IM under different operating conditions are carried out in detail. A detailed sensitivity analysis is also carried out to observe the effect of parametric variations on the performance of the motors.
Figure 2: Block diagram of sensorless vector control for three-phase IM drive.
Figure 4: Schematic layout of distributed drives laboratory setup.
Figure 4 shows the laboratory setup, which incorporates industry-standard networking. It has an IEEE 802.3 compliant Ethernet data highway and currently supports a network of two operator consoles, a PLC, and two drives (an IM drive and a PMSM drive), all connected in a star topology. The PLC (GE Fanuc 90-30) coordinates the operation of these drives. The PLC passes real-time data to the operator console via the Ethernet interface using customized software, namely VersaMotion for the PMSM drive and DCT software for the IM drive. The input/output (I/O) units of the PLC and the drives communicate using Profibus-DP [22, 23]. The communication between the individual drives and the PCs (SCADA) is through the Modbus protocol.
Figure 5: (a) Ladder logic for integrated operation of IM and PMSM drives. (b) SCADA GUI developed for the speed control of the distributed drives.
Figure 6: Voltage variation of IM drive during starting at different loads and 1440 rpm speed.
Figure 7: Frequency variation of IM drive during starting at different loads and 1440 rpm speed.
Figure 8: Current variation of IM drive during starting at different loads and 1440 rpm speed.
Figure 9: Torque variation of IM drive during starting at different loads and 1440 rpm speed.
Figure 10: Power variation of IM drive during starting at different loads and 1440 rpm speed.
Figure 11: Speed response of IM drive during starting at different loads and 1440 rpm speed.
Figure 12: Simulated speed response of three-phase IM drive at no load and rated speed.
Figure 13: Experimental starting response of three-phase PMSM at no load and rated speed.
Figure 14: Simulated starting response of three-phase PMSM at no load and rated speed.
Figure 15: Effect of parametric variations on the efficiency of PMSM for rated speed and rated torque.
Table 1: Technical specifications of three-phase induction motor.
Table 2: Technical specifications of three-phase PMSM.
"year": 2013,
"sha1": "05d04037de7462fbec5aded7be295537bc2f06b5",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/je/2013/924928.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "05d04037de7462fbec5aded7be295537bc2f06b5",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
The cell response to the effect of heliogeophysical factors and extremely high frequency radiation of low intensity
The aim of the work was to investigate the effect of low-intensity electromagnetic radiation (EMR) of extremely high frequencies (EHF) on the metachromasia reaction of the yeast Saccharomyces cerevisiae depending on the heliogeomagnetic environment, and to determine the effect of phosphates in the culture medium on the cell response. The culture Saccharomyces cerevisiae Y-517 was grown on a solid nutrient medium made on the base of wort. The Sabouraud medium (without phosphates) and Sabouraud+ medium (with addition of phosphates) were also used. The yeast was reseeded daily; smears were prepared and stained with methylene blue, and the type of coloration was determined under a microscope, a color change indicating metachromasia. Yeast cells were exposed to EHF-radiation (120 μW/cm²) at 54, 65, and 73 GHz. Heliogeomagnetic activity was monitored using the Kp and Ap indices. The cell response to the effect of heliogeophysical factors was observed only when the yeast was cultivated in the medium with phosphates; the response changed under the influence of EHF-radiation, and the third coloration type did not appear. We suppose intracellular water to be the unified target for both physical factors. Probably, destabilization of the structured intracellular aqueous medium by EHF-radiation changes the strength of the impact of heliogeophysical factors on cell macromolecules.
Introduction
Heliogeophysical factors are a complex of physical factors associated with solar activity, the Earth's rotation, fluctuations of geomagnetic fields, and structural features of the atmosphere [1, 2]. All living organisms are extremely sensitive to the effects of heliogeophysical factors, abrupt changes in which can cause various disorders of the body's vital functions. At the same time, the constant impact of these factors within limits not exceeding the adaptive capabilities of biological objects is a necessary condition for the existence of life [1-3].
Molecular targets for the action of heliogeophysical factors have not been finally determined. The first scientific data on the effect of heliogeomagnetic activity on a cell were obtained in 1925 by S. Velhover. He noticed that the color of polyphosphate granules of Corynebacteria, when stained with methylene blue, changed from blue to red through violet, depending on the solar activity [4]. This phenomenon was called metachromasia.
The yeast Saccharomyces cerevisiae containing polyphosphate granules was used for long-term monitoring of the metachromasia reaction [5]. Correlations of this reaction with various heliogeophysical factors (solar activity, geomagnetic activity, solar wind density, cosmic rays) were determined [1,5]. Since it was proved that the metachromasia reaction is caused by changes in polyphosphates [6], the authors suggested that polyphosphate granules are a biophysical sensor of changes in heliogeomagnetic activity [5].
Earlier we studied the metachromasia reaction of S. cerevisiae depending on the heliogeomagnetic activity, which was controlled by the values of the Kp and Ap indices [7]. The third type of coloration (violet-red) was registered, as a rule, on the second or third day after a geomagnetic disturbance (Kp ≥ 16 and Ap ≥ 18), i.e., the cell response was delayed. An inverse relationship was found between the metachromasia reaction and the Ap and Kp indices of geomagnetic disturbance. The obtained data correlated with the results of long-term monitoring by the research group of E.N. Gromozova [1].
The influence of electromagnetic radiation (EMR) of extremely high frequencies (EHF) of low intensity (65 GHz) on the response of cells to heliogeomagnetic disturbances was also studied. Our interest in EHF-radiation is due to its high biological activity and ability to modify the response of biological systems to the action of chemical substances and physical fields [8,9], as well as the use of this radiation in medical practice [10].
It has been proven that water plays an important role in the perception of EHF-radiation by biological systems [9,11]. It is not only the environment in which biological membranes and cell macromolecules function, but also their integral structural component. Water largely determines the electrical characteristics of biological fluids and tissues. The frequencies of the rotational motions of water molecules lie in the millimeter and submillimeter ranges, this fact determines the resonant nature of the absorption of EHF-radiation by aqueous media [12].
We have shown [7] that the third type of coloration rarely appears if the yeast is exposed to EMR at a frequency of 65 GHz. It was assumed that the effect of EHF-radiation is associated with destabilization of the structure of the aqueous component of biosystems and, as a consequence, the conformation and activity of biomolecules.
The aim of the work was to investigate the effect of EHF-radiation of low intensity at different frequencies on the metachromasia reaction of the yeast Saccharomyces cerevisiae depending on heliogeomagnetic environment, and to determine the influence of phosphates in the culture medium on the cell response.
Materials and methods
The yeast culture Saccharomyces cerevisiae Y-517 was used in the experiments. The culture was grown on a solid nutrient medium made on the base of wort (6-7°) with an agar content of 2-2.5%. The media Sabouraud and Sabouraud+ (with addition of KH2PO4, 0.5 g/l, and Na2HPO4·12H2O, 0.9 g/l) were also used. Cultivation was carried out in a thermostat at +28 °C for 24 hours. The yeast culture was reseeded every day.
The generator G4-142 (Russia) was used as the source of EMR at 54, 65, and 73 GHz. The density of the radiation current was 120 μW/cm². The yeast culture was exposed to EMR using a pyramidal horn-type antenna with a length of 12 cm and an aperture of 42×50 cm², for 30 min, at a temperature of 21-22 °C. The distance between the antenna and the biological object was 15 cm and was not changed during the experiment. The irradiation of cells was performed after seeding on the solid medium or before smear preparation.
After 24-hour cultivation, the smears were prepared and stained with methylene blue. The color of polyphosphate granules was determined using a microscope ("Biomed-6", Russia, ×100) with a visualization system ("Type I": dark blue; "Type II": blue-violet; "Type III": violet-red).
Heliogeomagnetic activity was controlled using the values of the Kp-index and Ap-index obtained from the Yu.G. Shafer Institute of Cosmophysical Research and Aeronomy of the Siberian Branch of the Russian Academy of Sciences, Yakutsk, Russia. Geomagnetic activity is considered to be normal at Kp < 16 and Ap < 18, and increased at Kp ≥ 16 and Ap ≥ 18 [1]. The reliability of the results of the studies was confirmed by 6 parallel experiments with 100% coincidence of the type of cell response, as well as by the use of a computer integral image assessment, which confirmed the differences in cell coloration [1].
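To make the activity classification and the delayed-response criterion explicit, the short Python sketch below flags days of increased geomagnetic activity (Kp ≥ 16 and Ap ≥ 18) and the two-to-three-day window in which a delayed Type III response would be expected; the daily index values are illustrative.

def classify_days(kp, ap):
    """Flag increased geomagnetic activity (Kp >= 16, Ap >= 18) and the
    days 2-3 days later where a delayed Type III response is expected."""
    disturbed = [k >= 16 and a >= 18 for k, a in zip(kp, ap)]
    expect_type3 = [any(disturbed[max(0, d - 3):max(0, d - 1)])
                    for d in range(len(kp))]
    return disturbed, expect_type3

kp = [9, 22, 10, 8, 7, 30, 12, 9]   # illustrative daily index values
ap = [10, 25, 12, 9, 8, 40, 15, 10]
print(classify_days(kp, ap))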
Data and results
The studies were carried out over several years, generally in spring and autumn, when significant fluctuations in heliogeomagnetic activity are observed. Initially, low-intensity EMR (120 μW/cm²) with a frequency of 65 GHz was used in the experiments. Three samples of the yeast S. cerevisiae were used: 1) without exposure to EMR (control); 2) after irradiation at a frequency of 65 GHz; 3) repeatedly irradiated at a frequency of 65 GHz (before each reseeding). Smears were prepared, stained with methylene blue, and the type of coloration was determined. The total duration of the experiment was 100 days. For each sample of the yeast culture, the number of smears was summed up and the numbers of smears with Type I, Type II, and Type III coloration were expressed as percentages. The results are shown in Figure 1. It is seen in Figure 1 that all three types of coloration (the metachromasia reaction) appear in the control sample. It is important to note that the third type of coloration appeared, as in previous studies [7], on the second or third day after a geomagnetic disturbance. All types of coloration were also determined when using once-irradiated cells. The culture repeatedly irradiated during subculture did not give the third type of coloration.
Further, we studied the response of cells to heliogeophysical disturbances under the influence of EMR of different frequencies. The samples of the yeast culture were daily exposed to EHF-radiation at 65, 73, and 54 GHz and subcultured on the standard medium. The smears were prepared every day and stained with methylene blue, and the type of coloration was determined. The ratio of the coloration types of smears for each culture sample over the entire duration of the experiment (60 days) is shown in Figure 2.
It was found that the cells exposed to EHF EMR at the indicated frequencies did not acquire the third type of coloration. The greatest difference from the control was found in cells after prolonged irradiation at a frequency of 54 GHz, which has a high biological activity [13].
At the next stage, the influence of phosphates added to the nutrient medium on the metachromasia reaction of S. cerevisiae was determined. For this purpose, the yeast culture was grown on the Sabouraud medium (without phosphates) and on the Sabouraud+ medium (with the addition of phosphates). Two samples of the yeast culture were not exposed to EMR; the other two were irradiated at 54 GHz. The ratio of the coloration types of smears for each culture sample over the entire duration of the experiment (60 days) is shown in Figure 3. In Figure 3 it is seen that only cells of the S. cerevisiae culture grown on the Sabouraud+ medium exhibit the metachromasia reaction.
Discussion
Saccharomyces cerevisiae is a unicellular eukaryote usually used as a model for investigating the effect of electromagnetic fields of different frequency ranges on cell functioning [14]. In particular, the culture was used for studying nonthermal biological effects of EHF EMR on the division of cells [15]. With the help of this culture, a frequency of 54 GHz with a high biological activity was discovered [13]. The influence of radiofrequency EMR (40.68 MHz) on the physiology of development, the sensitivity of cells to stress factors, cell cycle kinetics, and the enzyme activity of a cell was investigated using this organism as well [14]. The choice of S. cerevisiae for this study is due to the presence of specific structural-morphological formations of the protoplasm, polyphosphate granules (volutin grains), in the cells [6]. Polyphosphates are an osmotically inert reserve of phosphates and energy, which enables cells to rapidly transition to intensive growth and reproduction under any suitable conditions. In addition, polyphosphates are involved in the processes of genetic and structural regulation and in the transport of substances through membranes.
Cytological detection of polyphosphates in cells is carried out by staining with dyes: toluidine blue, neutral red or methylene blue. It is known that dye molecules, when binding to polyanion molecules (polyphosphates), acquire an increased ability to dimerize, which leads to a change in the absorption spectrum of the dye. These interactions depend, on the one hand, on the polyphosphate chain length, its conformation, on the other hand, on pH, temperature, ionic strength, and the polymer / dye ratio [6]. It has been proved [16] that an increase in the length of the polyphosphate chain shifts the maximum in the absorption spectrum of methylene blue to a shorter region, which leads to a color change from blue to red.
Previously it was determined that polyphosphate granules in S. cerevisiae cells respond to heliogeomagnetic disturbances by changing color when the smears are stained with methylene blue (metachromasia) [5,7]. We supposed that heliogeophysical factors, as well as EHF-radiation, influence the condition and structure of intracellular water and ionic interactions, which leads to conformational changes in polyphosphates and results in metachromasia.
To confirm these assumptions, we studied the effect of heliogeophysical activity in combination with EHF EMR of low intensity on the metachromasia reaction of S. cerevisiae cells. It is known that EHF-radiation is non-ionizing; the quantum of its energy, for example, for λ = 1 mm, is of the order of 10⁻³ eV, i.e., waves of this range can affect the conformational states of molecules [8,10]. Although various possible mechanisms of reception of EHF-radiation at the cellular level are discussed in the scientific literature, all the authors agree that its primary molecular target is the intracellular water [9-12]. Thus, the change in the metachromasia reaction of yeast cells under the influence of EMR may point to a unified target for the action of EHF EMR and heliogeophysical factors.
At first, yeast cells once and repeatedly irradiated with EMR at a frequency of 65 GHz were used in the experiments. The response of cells to heliogeomagnetic disturbances was compared with the response of non-irradiated cells. The choice of the frequency of 65 GHz is due to the results of our previous studies: using different biological and physical models, it was discovered earlier that this EMR destabilizes the near-surface water structure [11]. Smears were prepared daily and stained with methylene blue. The first two samples of the culture upon staining gave the same type of coloration (Fig. 1), which depended on the level of heliogeomagnetic activity. Cells exposed to EMR on a daily basis did not show the Type III coloration, i.e., the cells did not respond to magnetic storms.
Further, the effect of EMR of different frequencies (65, 54, 73 GHz) on the yeast response to heliogeomagnetic disturbances was studied (Fig. 2). We chose frequencies that, according to scientific literature data, exhibit high biological activity [10] and, in particular, affect yeast [13]. Long-term irradiation of cells with EMR at the indicated frequencies changed the effect of heliogeophysical factors on cells, which is manifested in the absence of the Type III coloration. The smallest variations in the color of polyphosphate granules were observed when the culture was exposed to EMR at a frequency of 54 GHz, which has a high biological activity [13]. Possibly, the higher the level of the impact of EHF-radiation on a cell, the less it responds to the change in heliogeomagnetic activity.
Comparative cultivation of yeast in a medium without phosphates (Sabouraud) and in the same medium supplemented with phosphates (Sabouraud+) showed that the metachromasia reaction appears in yeast cells only if there is a sufficient amount of phosphates in them (Figure 3). When using the medium without phosphates, we observed only the Type I coloration in smears regardless of the heliogeomagnetic environment. This proves that the color of the granules depends on the length of the polyphosphate chain. When yeast was grown in the phosphate-free medium, irradiation at 54 GHz resulted in the Type II coloration. This may indicate conformational changes in polyphosphates under the influence of EMR. When yeast was cultivated in the medium with phosphates, irradiation promoted the appearance of only the Type II coloration, regardless of the heliogeomagnetic environment. The Type III coloration did not appear at all.
The research results allow us to confirm our assumption that intracellular water is a single target of the impact of heliogeophysical factors and EHF-radiation. According to the domain-cluster model [17], the intracellular environment is an ordered structure of macromolecules with hydration shells 3-4 nm in size, which is comparable to the size of the macromolecules themselves; water inside the cell is organized into clusters and has low entropy. Polyphosphate granules in yeast cells are hydrated macromolecules as well. Since an ordered water system is a good conductor of external signals [17], hydrated polyphosphates should respond to heliogeomagnetic disturbances. EMR destroys water clusters, increasing the entropy of water molecules inside cells, which leads to random fluctuations of polyphosphates. Due to the changes in size and energy, hydrated polyphosphate structures cease to respond to heliogeophysical factors. That is why no metachromasia reaction is observed in irradiated cells.
Thus, we suppose that exposure of yeast cells to EMR of low intensity at the pointed frequencies (54, 65, and 73 GHz) protects them from the influence of heliogeophysical factors. A similar effect can be expected for other eukaryotic cells.
Conclusions
1. The combined effect of heliogeophysical factors and extremely high frequency radiation of low intensity at the metachromasia reaction of the yeast Saccharomyces cerevisiae was studied.
2. It has been found that the response of Saccharomyces cerevisiae cells to the effect of heliogeophysical factors is changed by the influence of EHF-radiation (54, 65, and 73 GHz). The most noticeable change in the biological response was observed when using EMR at 54 GHz.
3. The metachromasia reaction of Saccharomyces cerevisiae has been shown to appear only when phosphates are present in the cultivation medium; consequently, the type of coloration of polyphosphate granules depends on the length of the polyphosphate chain.
4. We believe that intracellular water is a single target for the impact of heliogeophysical factors and EHF-radiation. Destabilization of the structured intracellular aqueous medium by EHF-radiation can change the strength of the impact of other external signals on cell macromolecules, i.e., EHF-radiation of certain frequencies (54, 65, and 73 GHz) can protect cells from the effects of heliogeophysical factors.
"year": 2021,
"sha1": "47be828976ba42b5223753ab151536ba8bcd9ecc",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/853/1/012020",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "47be828976ba42b5223753ab151536ba8bcd9ecc",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
High-Precision Automatic Identification of Fentanyl-Related Drugs by Terahertz Spectroscopy with Molecular Dynamics Simulation and Spectral Similarity Mapping
Fentanyl is a potent opioid analgesic with high bioavailability. It is the leading cause of drug addiction and overdose death. To better control the abuse of fentanyl and its derivatives, it is crucial to develop rapid and sensitive detection methods. However, fentanyl-related substrates have similar molecular structures, resulting in similar properties, which are difficult to identify by conventional spectroscopic methods. In this work, a method for the automatic identification of 8 fentanyl-related substances with similar spectral characteristics was developed using terahertz (THz) spectroscopy coupled with density functional theory (DFT) and spectral similarity mapping (SSM). To characterize the THz fingerprints of these fentanyl-related samples more accurately, the method of baseline estimation and denoising with sparsity was performed before revealing the unique molecular dynamics of each substance by DFT. The SSM method was proposed to identify these fentanyl analogs based on weighted spectral cosine-cross similarity and fingerprint discrete Fréchet distance, generating a matching list by stepwise searching the entire spectral database. The top matched list returned the identification results of the target fentanyl analogs with accuracies of 94.48~99.33%. Results from this work demonstrate the algorithm's reliability, which serves as an artificial intelligence-based tool for high-precision fentanyl analysis in real-world samples.
Introduction
Fentanyl (N-phenyl-N-[1-(2-phenylethyl)-4-piperidinyl]propanamide) is the prototype of the 4-anilinopiperidine class of synthetic opioids that was first synthesized by Paul Janssen in 1959 [1,2]. As a highly lipophilic µ-opioid receptor agonist that rapidly crosses cell membranes, fentanyl presents approximately 100 times more potency than the highly ionized and hydrophilic morphine, as reflected by its success as a prescription drug over the last decades [3,4]. Its pharmacological effects like analgesia and anesthesia can be retained or enhanced by modifying the functional groups on fentanyl [5,6]. This makes it easy to synthesize fentanyl analogs by replacing the phenethyl group with various other groups (except hydrogen); displacing the benzene ring attached to the nitrogen atom with substituted or unsubstituted monocyclic aromatic groups; substituting the propionyl group in the 4-anilido fragment with other acyl groups; and introducing substituents (e.g., methyl groups and methoxycarbonyl groups) at different positions on the piperidine ring [7,8]. According to the 25-year (1995-2020) monitoring data reported by the European Monitoring Centre for Drugs and Drug Addiction (EMCDDA), 36 new synthetic fentanyl derivatives have been detected in the European drug market since 2009 [9]. Recently, new fentanyl and its analogs, such as methoxyacetylfentanyl, acetylfentanyl, furanylfentanyl, butyrylfentanyl, 4-fluoroisobutyrfentanyl, carfentanil, and valerylfentanyl, have emerged in the illicit market as cheaper and more potent alternatives to heroin [10]. Acute intoxications and related fatalities caused by the abuse of fentanyl analogs have become a concerning global health threat [11,12]. Although many relevant programs and regulations have been launched nationwide to control the abuse of fentanyl-related substances, a series of non-pharmaceutical fentanyl analogs have been derived as illegal drugs that escape law enforcement by local governments [13,14].
The fundamental strategy to address this crisis is to develop fentanyl detection technology. To date, the most commonly used methods to identify fentanyl and its analogs are chromatography-mass spectrometry techniques, such as gas chromatography-mass spectrometry [15,16], liquid chromatography-mass spectrometry [17,18], and liquid chromatography-tandem mass spectrometry [19,20]. Although these methods are accurate and sensitive, they cannot meet the increasing requirements for rapid detection (e.g., in situ and street detection) due to their time-consuming processing procedures conducted in the laboratory [21]. In recent years, studies on rapid detection methods based on immunoassays, including fluorescent immunoassay [22], homogeneous enzyme immunoassay [23], enzyme-linked immunosorbent assay [24], electrochemical immunosensors [25], and lateral flow immunochromatography [26], have also been reported. These studies have greatly enriched research on fentanyl analogs, especially their metabonomics, but challenges remain in meeting the higher requirements for sensitivity. One promising method of identifying fentanyl-related substances is THz fingerprint spectroscopy, because this technique can probe both the inter- and intra-molecular vibrational modes of crystalline materials (fentanyl analogs are primarily powdered crystalline substances), yielding unique, molecularly specific vibrational spectra and thus enabling unique identification of chemical species. It should be noted that fentanyl-related molecules share similar structures and metabolic pathways, resulting in similar properties and metabolite production. Thus, it is difficult to identify the unique features of each fentanyl-related substance via conventional spectroscopic methods, such as near-infrared spectroscopy and ultraviolet spectrophotometry. THz time-domain spectroscopy (THz-TDS), as a rapid and sensitive technique to study the dynamics of molecular morphological structure, provides both chemical and structural information for fentanyl analysis. This enables THz-TDS to distinguish between fentanyl-related substances with relatively similar chemical structures, which facilitates the development of specific THz spectral databases of fentanyl and its analogs [27]. We believe that the intrinsic THz fingerprints of these fentanyl analogs can be used not only to improve the knowledge of their basic material properties but also for identification purposes.
This paper presents a method to calculate spectral matching accuracy, aiming to identify fentanyl analogs from the THz spectral database rapidly, automatically, and efficiently. Herein, we collected the THz absorbance spectra of 8 fentanyl-related substances by measuring pressed-pellet samples with a THz-TDS spectrometer. To characterize the THz fingerprints of these fentanyl-related samples more accurately, the method of baseline estimation and denoising with sparsity (BEADS) was performed before revealing the unique molecular dynamics of each substance by density functional theory (DFT). Specifically, we proposed an algorithm named spectral similarity mapping (SSM), a procedure that automatically calculates THz absorbance spectral similarity based on weighted spectral cosine-cross similarity (CCS) and fingerprint discrete Fréchet distance (DFD). It generates a matching list of entries with similar spectra to target analytes by searching the spectral database. Consequently, the top matching result returns quantifiable certainty to infer the identity of the target analyte. This spectral similarity matching and mapping strategy may even realize the automatic identification of counterfeit or illicit fentanyl drugs according to the THz spectral characteristics of the main components of fentanyl.
Molecular Geometric Configuration
The molecular geometric configurations of the fentanyl-related substances, from fentanyl (C22H28N2O) to its analogs, were optimized. Figure 1 shows the molecular electrostatic potential (MEP) of the optimized molecular structures. It can be seen from Figure 1 that all eight molecules have similar piperidine-ring structures, as well as structures in which a phenyl group is directly linked to the two nitrogen atoms (in blue) by a monocyclic aromatic group. Furthermore, the maximum MEP values of these eight fentanyl substances appear around the oxygen atoms (in red). These results indicate that similar fingerprint characteristics may arise at THz frequencies due to similar molecular vibrational modes of these fentanyl-related substances.
Comparison of Experimental and Theoretical Spectra
According to the optimized molecular geometry of the fentanyl-related substances, their theoretical spectra were simulated by DFT. To improve the spectral resolution, the BEADS method was used to reduce the noise and correct the baseline before comparing the experimentally obtained THz absorption spectra with the DFT simulated spectra. As shown in Figure 2, THz absorption spectra and peaks at different frequencies (marked with red dots) were found in the experimental spectra of the fentanyl-related substances. The peaks with high intensity were successfully marked by an automatic peak-finding algorithm. However, for several spectral characteristics marked with green circles, e.g., 2.83 THz and 3.27 THz of fentanyl (Figure 2b) and 1.07 THz of both acetylfentanyl (Figure 2d) and butyrylfentanyl (Figure 2i), it was difficult to determine whether they were fingerprint peaks due to the low resolution. Spectral processing results show that the spectral baselines were accurately estimated (dash-dotted curves) and these indistinct peaks were highlighted in the BEADS-processed THz spectra (black curves). The baseline-corrected experimental spectra and the DFT simulated theoretical spectra (blue curves) were in high agreement except for slight frequency shifts and several missing absorption peaks (marked with red circles), which was conducive to improving the theoretical analysis accuracy of the formation mechanism of the THz absorption peaks. The discrepancy between the experimental and theoretical spectra was mainly due to the different states of the tested sample. Because the experimental samples were compressed pellets of solid powders while the DFT simulations were conducted on isolated molecules, the intermolecular interaction, crystal field effect, and crystal resonance were excluded in the theoretical simulations. Furthermore, the experimental measurements were carried out at 294 K, but the simulations were conducted at 0 K, thus ignoring the thermal effect. There were more theoretical absorption peaks than experimental ones, which might be due to the limitation of the THz experimental instruments.
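Since BEADS itself is a published compound sparsity-based algorithm with several tuning parameters, the sketch below uses a simpler asymmetric least squares (AsLS) baseline as a stand-in to illustrate the baseline-correction step; it is not the BEADS implementation used in this work, and the smoothing and asymmetry parameters are assumed values.

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares baseline (Eilers-style): a smooth curve that
    hugs the spectrum from below, leaving absorption peaks above it."""
    L = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))
    DDT = lam * (D @ D.T)
    w = np.ones(L)
    for _ in range(n_iter):
        W = sparse.diags(w)
        z = spsolve((W + DDT).tocsc(), w * y)
        w = np.where(y > z, p, 1.0 - p)   # penalize points above the baseline less
    return z

# corrected_spectrum = spectrum - asls_baseline(spectrum)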
Assignment of Absorption Peaks
The formation mechanism of the THz characteristic absorption peaks was analyzed using the visualization function in GaussView 5.08 software (Gaussian Inc., Wallingford, CT, USA). Table 1 lists the assignment of the molecular vibration modes according to the DFT simulated results. There were six peaks for fentanyl, in which the peaks at 0.92, 1.38, 2.52, 2.83, and 3.27 THz were generated by the weak deformation vibrations of the molecular skeleton of the piperidine group, and the peak at 1.78 THz was formed by the weak deformation vibration of the phenylethyl group (2C-3C); three peaks of methoxyacetylfentanyl were found at 1.11, 1.35, and 2.62 THz. The peaks at 1.11 and 1.35 THz were formed by the out-plane deformation vibrations of 1C-2C and 14C-15C-16C of the acetamide group, respectively. The peak at 2.62 THz was assigned as the deformation vibration of benzene ring on phenylethyl; three peaks of acetylfentanyl were found at 1.07, 1.34, and 2.36 THz, in which the peaks at 1.07 and 2.36 THz were formed by the weak deformation vibrations of piperidine group, and the peak at 1.34 THz was generated by the out-plane deformation vibration between phenyl and ethyl on phenylethyl group (12C-13C); five peaks of furanylfentanyl were found at 0.83, 1.17, 1.70, 2.50, and 2.93 THz. The peak at 0.83 THz was assigned as the in-plane deformation vibration of 11C-10C single bond on piperidine group. The peak at 1.17 THz was assigned as the out-plane deformation vibration of 17C-16C single bond on phenylethyl group. The peaks at 1.70, 2.50, and 2.93 THz were assigned as the deformation vibrations of benzene rings in both benzene and phenethyl, the deformation vibration of benzene ring in phenethyl, and the deformation vibration of piperidinyl molecular skeleton, respectively; three peaks of butyrylfentanyl were found at 0.52, 1.07, and 1.99 THz. The peaks at 0.52 and 1.99 THz were assigned as the deformation vibrations of the molecular skeleton of piperidine groups. The peak at 1.07 THz was assigned as the out-plane deformation vibration of 3C-4C single bond on butyryl group; three peaks of 4-fluoroisobutyrfentanyl were found at 1.19, 1.57, and 2.11 THz. The peak at 1.19 THz was formed by the out-plane deformation vibration of 5C-9N single bond between phenyl and pyridine groups. The peak at 1.57 THz was formed by the in-plane deformation vibration of 7C-6N single bond on piperidine group. The peak at 2.11 THz was formed by the out-plane deformation vibration of 3C-2C-52C on isobutyryl group; four peaks of carfentanil were found at 1.14, 2.00, 2.52 and 3.45 THz. The peaks at 1.14 and 2.52 THz were generated by the deformation vibration of benzene rings in both phenylethyl and phenyl. The peak at 2.00 THz was generated by the deformation vibration of benzene ring in phenyl. The peak at 3.45 THz was generated by the out-plane deformation vibration of 5C-6C single bond on piperidine group; three peaks of valerylfentanyl were found at 0.91, 1.96 and 2.26 THz. The peak at 0.91 THz was assigned as the out-plane deformation vibration of 14C-8N single bond between phenylethyl and piperidine groups. The peak at 1.96 THz was formed by the combined interaction of out-plane deformation vibrations of 9C-10C and 13C-12C on piperidine group. The peak at 2.26 THz was assigned as the out-plane deformation vibration of 10C-11C-12C on piperidine group.
Identification of Fentanyl Based on Spectral Similarity
To identify the spectral characteristics of fentanyl-related substances that have similar molecular structures, the CS, CCS, DFD, and HD were utilized to calculate the THz spectral similarity between each pair of the eight fentanyl-related substances, as seen in Figure 3a-d. The CS curves illustrated in Figure 3a show the relatively low spectral similarity between carfentanil and the other fentanyl-related substances. As shown in Figure 3b, the CCS map shows the similarity values between each two fentanyl analogs, revealing the accuracy of identifying the fentanyl analogs based on their CCS scores. The DFD map (Figure 3c) reveals the spectral distance based on feature points; the smaller the distance, the higher the spectral similarity. The HD curves (Figure 3d) show the relatively higher spectral distance between carfentanil and the other fentanyl-related substances. These results indicate that the THz spectral characteristics of carfentanil are more specific than those of the other fentanyl analogs. The reason is that carfentanil has a high molecular weight and a unique amide structure compared to the other fentanyl molecules. Therefore, carfentanil can be easily identified among fentanyl analogs by THz spectroscopy. To provide a more intuitive characterization of spectral similarity, as depicted in Figure 3e, the color bars above the cut-line illustrate the normalized sum of CS and CCS, and the color bars below the cut-line illustrate the normalized sum of DFD and HD. The higher similarity and the corresponding lower distance imply the feasibility of identifying each fentanyl analog. However, acetylfentanyl, butyrylfentanyl, and 4-fluoroisobutyrfentanyl might be misidentified due to their similar spectral characteristics. It is difficult for methods such as CS, HD, and CCS to identify spectra with similar feature points through similarity matching. Therefore, a more specific method that matches both the spectral shapes and the feature points is needed to identify similar spectra.
Application of Mass Spectral Similarity Mapping to Fentanyl Analogs
The proposed SSM method fuses the spectral shape and peak characteristics for similarity evaluation, providing reliable identification of fentanyl analogs based on stepwise matching from the THz spectral database. This database consisted of the average THz absorption spectra of ten replications of the eight fentanyl analogs. Figure 4 depicts the spectral mapping chains and peak nodes when comparing the target spectrum with those in the database. By calculating the SSM accuracy based on the features of spectral shapes and peaks, the matched spectrum (colored curve) with the highest SSM accuracy was marked to identify the target fentanyl analog (black curve). Results show that the matched spectrum and the target spectrum have a high degree of similarity, which can successfully identify the category of the target fentanyl with the already known information stored in the spectral database.
The parameters α and β were adjusted to test the performance of the SSM algorithm. The testing results show that with α increasing from 0.1 to 0.6 (interval of 0.1), the top matching list can successfully identify the target fentanyl analog, and the SSM accuracy distribution between the target and the candidates to be matched was stable. Figure 5 plots the spectral similarity mapping to fentanyl analogs by SSM with the parameters α = 0.2 and β = 0.8. The matching list of the target fentanyl analog was sorted in descending order of SSM identification accuracy. The SSM accuracy was calculated to evaluate the probability of identifying the target as the matched fentanyl analog from the database. As can be seen from Figure 5a-h, the target (black curve) was accurately identified with the top hit by the mass spectral similarity mapping. Figure 5i depicts the t-SNE visualization of all the replicated spectra, which further reveals the spectral similarity of fentanyl analogs in the mapping space. The main advantage of this algorithm is that it can generate a spectral similarity hit map by searching the THz spectral database. It can accurately identify the fentanyl category of a given spectrum according to the returned top hit results by fusing spectral and feature information, making this method suitable for use as a reference in forensic laboratories and providing valuable support for combating the increasingly rampant abuse of fentanyl-related drugs.
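The mapping-space visualization described above can be reproduced along the following lines with scikit-learn's t-SNE; the placeholder data matrix and the perplexity value are assumptions.

import numpy as np
from sklearn.manifold import TSNE

# X: (n_spectra, n_frequencies) matrix of replicate THz absorbance spectra
X = np.random.default_rng(1).random((80, 300))     # placeholder data
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)
print(emb.shape)   # (80, 2) coordinates for the similarity map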
Preparation of Drug Pellets
The real-world fentanyl-containing samples, including fentanyl, methoxyacetylfentanyl, acetylfentanyl, furanylfentanyl, butyrylfentanyl, 4-fluoroisobutyrfentanyl, carfentanil, and valerylfentanyl, were provided by the Chinese Ministry of Public Security. Due to the extremely limited amount of these fentanyl-containing drugs, there was no uniform weighing of each sample; therefore, the percentage (or purity) of the drug was unknown. These powder samples mixed with polyethylene were pressed into pellets using a compression machine (BJ-15, Tianjin BoJun Technology Co., Ltd., Tianjin, China) with a diameter of 13 mm under a constant pressure of 30 MPa for 3 min. These pellets were used for THz spectral measurement without any further preparation.
THz Apparatus and Spectral Measurements
The THz-TDS experimental apparatus (CCT-1800, China Communication Technology Co., Ltd., Shenzhen, China) with its stable transmission-scanning mode, which has been described in detail in our previous works [28], was used to obtain spectra (with spectral resolution of 30 GHz over the range of 0.5-3.5 THz) of the drug pellets. Spectral measurements were performed at ambient temperature of 294 K with dry nitrogen purged into the apparatus to eliminate interference by moisture. The THz spectrum of dry nitrogen was obtained as the reference signal. The average of 100 time-domain scans was obtained as the spectrum of the tested drug sample, and the measurement of each sample was repeated 5 times. Finally, the average value of 5 measurements was taken as the standard spectrum of the test sample to further form the spectral database.
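As a rough sketch of how an absorbance spectrum over 0.5-3.5 THz can be derived from the averaged time-domain scans, the Python snippet below applies a simple transmission-mode reduction that ignores Fresnel losses and phase information; it illustrates the generic THz-TDS workflow rather than the instrument's internal processing.

import numpy as np

def absorbance(t, e_sample, e_ref, f_lo=0.5e12, f_hi=3.5e12):
    """Absorbance spectrum A(f) = -log10(|E_s(f)| / |E_r(f)|) over 0.5-3.5 THz,
    from sample and reference time-domain scans on a common time axis t (s)."""
    dt = t[1] - t[0]
    f = np.fft.rfftfreq(len(t), dt)
    Es = np.abs(np.fft.rfft(e_sample))
    Er = np.abs(np.fft.rfft(e_ref))
    band = (f >= f_lo) & (f <= f_hi)
    return f[band], -np.log10(Es[band] / Er[band])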
DFT Theoretical Simulations
The Becke three-parameter Lee-Yang-Parr (B3LYP) functional and 6-311G basis set (B3LYP/6-311G) implemented in the Gaussian 2016 software were utilized to optimize the theoretical structures and simulate the theoretical spectra of the 8 fentanyl-containing samples at DFT level. According to the results of DFT theoretical simulation, the THz fingerprint peak properties (e.g., vibrational modes, frequencies, and intensities) of samples were investigated by matching the peaks of the THz experimental spectrum with DFT theoretical spectrum.
Spectral Similarity Calculations
To facilitate the spectral searching and matching process, the sample spectrum and its standard spectrum were selected for spectral similarity calculation. The methods of CS and CCS were used to evaluate the THz spectral similarity between fentanyl-related substances, and the methods of HD and DFD were used to calculate the distance between two spectra (the smaller the distance between two vectors, the higher the similarity). CS is a measure of similarity between two vectors of an inner product space that measures the cosine of the angle between them (the cosine of 0° is 1, and it is less than 1 for any other angle) [29]. CCS is a function that calculates the cosine-cross similarity between the time series of threshold crossings with time lags; it also calculates the dominant lag and maximum value [30]. HD is a technique used to determine differences between spectral shapes as well as the degree of difference between two shapes [31]. DFD calculates the discrete Fréchet distance between curves P and Q, where P and Q are two sets of points that define polygonal curves with rows of vertices (data points) and columns of dimensionality; points along the curve are taken in the order in which they appear in P and Q [32]. These methods are efficient in evaluating spectral similarity and easy to realize. However, they cannot provide quantitative evaluation results from a single comparison of two spectra. Even with an exhaustive pairwise comparison of the spectra, the evaluation results are still far from the actual multi-source spectral similarity, which does not fundamentally solve the issue of multi-source spectral similarity evaluation. Herein, we present an SSM method that fuses weighted CS (Equation (1)) and DFD (Equation (2)) to directly evaluate the spectral matching accuracy on the two scales of spectral shape and spectral characteristics. The reason for fusing CS and DFD is that they are efficient methods to evaluate spectral similarity (i.e., between two curves) and characteristic distance (i.e., between two sets of feature points on curves), respectively; in addition, they are easy to realize without any parameters. The spectral matching accuracy can be calculated as follows: where n (i = 1, 2, …, n) is the number of spectra to be matched with a target, CS_i and DFD_i are the CS and DFD between the i-th spectrum (S_i) and the target spectrum (S_t), respectively, m (j = 1, 2, …, m) is the number of features on the spectrum, d(υs_i^j, υs_t^j) is the Fréchet distance between the features υs_i^j and υs_t^j (on the spectra S_i and S_t, respectively), α and β (α + β = 1) are the weights of CS and DFD, respectively, and Acc is the accuracy of specifying the i-th spectrum as the target spectrum.
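To make the matching procedure concrete, the sketch below implements cosine similarity, the discrete Fréchet distance, and a database ranking step in Python; since the explicit accuracy expression of Equations (1) and (2) is not reproduced above, the fusion Acc = α·CS + β·(1 − DFD/max DFD) used here is an assumed form, and the entry layout is hypothetical.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def discrete_frechet(P, Q):
    """Discrete Frechet distance between point sequences P, Q (rows = points),
    via the classic dynamic-programming recursion."""
    n, m = len(P), len(Q)
    ca = np.empty((n, m))
    d = lambda i, j: np.linalg.norm(P[i] - Q[j])
    ca[0, 0] = d(0, 0)
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d(i, 0))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d(0, j))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d(i, j))
    return float(ca[-1, -1])

def ssm_rank(target, database, alpha=0.2, beta=0.8):
    """Rank database entries against a target; each entry is a dict with
    'name', 'spectrum' (1-D array), and 'peaks' ((k, 2) array of
    (frequency, absorbance) peak nodes). The fusion used is an assumption."""
    cs = np.array([cosine_similarity(target["spectrum"], e["spectrum"])
                   for e in database])
    dfd = np.array([discrete_frechet(target["peaks"], e["peaks"])
                    for e in database])
    score = 1.0 - dfd / dfd.max() if dfd.max() > 0 else np.ones_like(dfd)
    acc = alpha * cs + beta * score
    return sorted(zip((e["name"] for e in database), acc), key=lambda t: -t[1])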
Conclusions
This work demonstrates the identification of fentanyl-related drugs with similar molecular characteristics by THz spectral analysis. The formation mechanism of the THz peaks of these drugs was investigated by Gaussian-accelerated molecular dynamics simulations, further explaining how these fentanyl molecules interact with THz radiation. It has been shown that it is difficult to identify fentanyl analogs created with slight modifications of the molecular functional groups by direct comparison of THz spectral peaks or spectral curves, due to their similar spectral features. Here, we presented a fentanyl analog identification method based on THz spectral similarity mapping that incorporates spectral curves and peak nodes. This method allowed the accurate identification of fentanyl analogs based on the top hit list returned by stepwise searching and matching of the database. Furthermore, whereas most traditional machine learning methods require large datasets for training and testing models, our method focuses on revealing spectral characteristics without requiring large datasets. The target fentanyl analog could be accurately identified with the top hit result. This work demonstrated the efficacy of a functional data analysis to identify molecules containing similar functional groups from their THz absorption spectra, providing valuable support for combating increasingly rampant fentanyl abuse.
"year": 2022,
"sha1": "d4cb2a1daeeebdf1ddeac1bcf5746fdc9036bc6c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/18/10321/pdf?version=1662549805",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bf1954604b56691b5cb9beb497985034a8bc76e4",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Diffusion of positrons in polymers studied by slow positron beam
Using a pulsed slow positron beam, we have measured the positron lifetimes in the systems of thin Ni films on polyethylene and polycarbonate substrates made by the molecular beam epitaxial method. We have estimated the one-dimensional diffusion lengths of positrons in polyethylene and polycarbonate.
Introduction
The variable-energy slow-positron beams have enabled studies of atomic vacancy-type defects at surfaces, in sub-surface regions, and in thin films [1]. The positron lifetime method with a slow-positron beam provides important information on the species of vacancy defects contained in thin films [2]. The positron mobility is an important quantity for understanding the positron trapping into defects and the positron reemission from surfaces observed using slow positron beams. In the case of polymers, the positron beam allows the determination of local free-volume properties of the polymers as a function of the depth from the surface [3]. Considering positronium (Ps) formation, the positron mobility in polymers is especially important [4]. Since various positron spur processes relevant to Ps formation are strongly influenced by the diffusion of positrons, its mobility plays a crucial role in determining the Ps yields in polymers.
Hirata et al. have reported that the diffusion coefficients of Ps in polymers are very small (D_Ps = 2.6-5.1 × 10⁻⁶ cm²/s) [5]. The diffusion mobility of free positrons in polyethylene has been estimated to be μ = 10 cm²/V·s [6], 10.3 cm²/V·s [7], 27.7 cm²/V·s [4], and 32 cm²/V·s [8]. The diffusion constant is given by Einstein's relation,

$$D = \frac{\mu k_B T}{e},$$

where k_B is the Boltzmann constant, T is the temperature, and e is the elemental charge. The diffusion length is then given by

$$L \equiv \sqrt{D\tau}.$$

Taking the free-positron lifetime τ = 0.52 ns for polyethylene and μ = 32 cm²/V·s, we obtain the one-dimensional diffusion length L ≈ 210 nm at room temperature. To avoid contributions from the surface (e.g., Ps states at the polymer surface), we deposited thin Ni films on the polyethylene and polycarbonate.
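The ≈210 nm figure can be checked with a few lines of arithmetic (a sketch; the physical constants are standard, and μ and τ are the values quoted above):

```python
# Check L = sqrt(D * tau) with D = mu * k_B * T / e (Einstein's relation).
k_B = 1.380649e-23    # Boltzmann constant, J/K
e = 1.602176634e-19   # elementary charge, C
T = 293.0             # room temperature, K
mu = 32.0             # free-positron mobility in polyethylene, cm^2/(V s) [8]
tau = 0.52e-9         # free-positron lifetime in polyethylene, s

D = mu * k_B * T / e           # diffusion constant, cm^2/s (~0.81)
L = (D * tau) ** 0.5           # one-dimensional diffusion length, cm
print(f"D = {D:.2f} cm^2/s, L = {L * 1e7:.0f} nm")  # ~205 nm, i.e. ~210 nm
```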
We compare the obtained one-dimensional diffusion lengths with those determined from the above Einstein's relation.
Experiment
Thin Ni films of thickness 25±5 nm and 75±5 nm were deposited on polyethylene substrates (15 mm × 15 mm × 1 mm, density 0.92 g/cm³; Japan Custom Co., Ltd.) by the molecular beam epitaxial (MBE) method under a pressure of 8.5 × 10⁻⁹ Torr at room temperature. Similarly, a thin Ni film of thickness 125±5 nm was deposited on a polycarbonate substrate (15 mm × 15 mm × 1 mm, density 1.20 g/cm³; Japan Custom Co., Ltd.). The positron annihilation lifetime measurements were carried out at room temperature using an intense pulsed slow positron beam generated by the electron linac of the National Institute of Advanced Industrial Science and Technology (AIST) LINAC facility.
Results and Discussion
First of all, we introduce the general principle used to determine the one-dimensional diffusion length of positrons. The positron implantation profile is given by

$$P(z) = \frac{2z}{z_0^2}\exp\!\left[-\left(\frac{z}{z_0}\right)^2\right],$$

where z_0 is given by

$$z_0 \simeq 1.13\,\bar{z}, \qquad \bar{z}\,(\mathrm{nm}) = \frac{40\,E^{1.6}}{\rho},$$

where E is the incident positron energy in keV and ρ is the material density in g/cm³. The fraction of implanted positrons that reach the boundary of the i-th layer by diffusion decreases exponentially with the distance from the boundary, with decay constant L_i, where L_i is the one-dimensional diffusion length of positrons in the i-th layer and T_i is the thickness of the i-th layer. The fraction of positrons (F_i) that annihilate in the i-th layer is then related to the stopping fraction obtained by integrating P(z) over that layer. Having the annihilation fraction determined from the measurements and the stopping fraction calculated by eq. (5) in the i-th layer, one can determine the one-dimensional diffusion length, L_i, in the i-th layer using eqs. (6) through (8).

Table 1 lists the positron lifetimes and their intensities reported for bulk polyethylene [9]. Table 2 lists the positron lifetimes and their intensities obtained for the system of the thin Ni film (25±5 nm thick) on a polyethylene substrate at E = 1, 3, and 4 keV. In the case of 1 keV, the observed lifetime spectrum is well fitted with two lifetime components. Most positrons annihilate within the Ni film, as shown in Fig. 1. The lifetimes τ1 and τ2 represent small vacancy clusters in the Ni layer and the pick-off annihilation of ortho-positronium at the Ni surface, respectively. For E = 3 keV and 4 keV, the observed lifetime spectra are well fitted with three lifetime components. The lifetime τ1 corresponds to the mixture of small vacancy clusters in the Ni film and free-positron annihilation in polyethylene. The values of τ2 and τ3 correspond to the annihilation of free positrons in polyethylene and the pick-off annihilation of ortho-positronium in polyethylene. The ratios I3/I2 are 1.51 and 1.52 for E = 3 keV and 4 keV, respectively. These results are consistent with those listed in Table 1 [9]. Thus, from the comparison between Tables 1 and 2, the annihilation fraction in the polyethylene substrate is estimated as I2/15.1 (15.1 = I2 in Table 1).

Table 3 summarizes the annihilation and stopping fractions (F and η in eq. (8)) in the Ni layer and the polyethylene substrate estimated for the system of the thin Ni film (25±5 nm thick) on the polyethylene substrate at E = 3 keV and 4 keV. We integrated the positron stopping profile within polyethylene, taking into account only the positron diffusion length in Ni (near-surface, 6 nm) [1]. Following the procedure explained at the beginning of this section, we obtained the one-dimensional diffusion length L ≈ 10±2 nm in polyethylene.
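The stopping fractions can be sketched numerically. The snippet below assumes the conventional Makhovian form given above (shape parameter m = 2, mean mass depth 40·E^1.6 in nm·g/cm³); it illustrates the stopping-fraction step only and is not a reconstruction of eqs. (5)-(8):

```python
import numpy as np

def stopping_fractions(E_keV, layers):
    # Fraction of positrons stopping in each layer, with the Makhovian
    # profile (m = 2) written in mass depth u = rho * z (nm * g/cm^3).
    u_mean = 40.0 * E_keV ** 1.6      # mean mass depth
    u0 = 1.13 * u_mean                # u0 = u_mean / Gamma(3/2)
    cdf = lambda u: 1.0 - np.exp(-(u / u0) ** 2)   # integral of P(u)
    fractions, u = [], 0.0
    for thickness_nm, rho in layers:  # use np.inf for the substrate
        u_next = u + thickness_nm * rho
        fractions.append(cdf(u_next) - cdf(u))
        u = u_next
    return fractions

# 25 nm Ni film (8.9 g/cm^3) on a thick polyethylene substrate (0.92 g/cm^3):
eta_Ni, eta_PE = stopping_fractions(3.0, [(25.0, 8.9), (np.inf, 0.92)])
print(f"eta_Ni = {eta_Ni:.2f}, eta_PE = {eta_PE:.2f}")
```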
We have measured the positron lifetimes for the system of a Ni film (75±5 nm thick) on a polyethylene substrate at E = 1, 2, 4, 6, and 8 keV. Subsequently, we obtained the one-dimensional diffusion length in the polyethylene to be L ≈ 10±2 nm, which is comparable to that obtained for the case of the 25±5 nm thick Ni on polyethylene. We have further measured the positron lifetimes for the system of a thin Ni film (125±5 nm thick) on a polycarbonate substrate at E = 1, 2, 6, and 8 keV. The one-dimensional diffusion length in polycarbonate was obtained to be L ≈ 32±2 nm.
The one-dimensional diffusion lengths obtained here (10-30 nm) for polyethylene and polycarbonate are much shorter than the value of ∼200 nm derived from the positron mobilities and Einstein's relation, as mentioned in the introduction. A positron in a polymer pushes on the polymer molecules to make space for itself, and this effect reduces the positron kinetic energy. A large spherical open space in the polymer, however, is costly in terms of polymer free energy, so the polymer molecules push back. The balance between the positron kinetic energy and the polymer compressibility, which is relatively low in comparison with other solid materials, results in a stable localized structure. For positron diffusion in this state, it might not be adequate to use Einstein's relation.
Conclusion
We have determined the one-dimensional diffusion lengths in polyethylene and polycarbonate based on the positron beam technique. The obtained diffusion lengths are one order of magnitude shorter than those derived previously from Einstein's relation.
"year": 2010,
"sha1": "1a73048037ecd07ae9c8558ca08c2911c664f64a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/225/1/012053",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ac92d5d17cb74da6164b730ba5005d7a4d1990e9",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
Aggressiveness in the HEXACO personality model
The aim of this research was to examine the relations between the HEXACO facets and aggressiveness components (anger, vengefulness, dominance, hostility, reactive aggression, proactive aggression, and indirect aggression). On a sample of 654 participants from the general population, the HEXACO-60, the Aggressiveness Questionnaire AVDH, the Reactive-Proactive Aggression Questionnaire (RPQ), and the Indirect Aggression Scale (IAS) were applied. The results of the community structure network analysis provided the most informative insight into these relations and showed that all aggressiveness components formed a single community with the Agreeableness facets. Thereby, the facet Patience was the strongest correlate of anger, and hostility had the highest clustering coefficient, which brings together the Agreeableness facets and aggressiveness components. Although Honesty-Humility formed a separate community, some of its facets had strong isolated connections with indirect aggression and dominance. The results revealed that Agreeableness is a dominant correlate of aggressiveness and captures all aggressiveness components, while Honesty-Humility is related to specific components, referring to a manifestation of aggressiveness in a more subtle and indirect way.
Trait aggressiveness is a multidimensional construct which captures specific behavioral (aggression), affective (anger), and cognitive (hostility) components. Aggressiveness was not a basic trait in the dominant personality models, at least not as a separate and unique dimension. In the lexical Big Five (BF) model (Goldberg, 1992) or in the Five Factor Model (FFM; Costa & McCrae, 1992), aggressiveness is commonly associated with the negative pole of Agreeableness, which refers to interpersonal antagonism. However, different components of aggressiveness are associated with different basic traits. Agreeableness is negatively related to the general tendency towards aggressive behavior, e.g., physical and verbal aggression (Gallo & Smith, 1998; Tremblay & Ewart, 2005), aggression under provoking and nonprovoking conditions (Bettencourt, Talley, Benjamin, & Valentine, 2006), and the tendency towards reactive and proactive aggression (Seibert, Miller, Pryor, Reidy, & Zeichner, 2010), and it influences whether there will be an affiliative or confrontational position in interpersonal conflicts (Caprara, Barbaranelli, & Zimbardo, 1996; Graziano, Jensen-Campbell, & Hair, 1996; Jensen-Campbell & Graziano, 2001). Other components of aggressiveness, such as anger and hostility, are more associated with Neuroticism (Gallo & Smith, 1998; Tremblay & Ewart, 2005). Neuroticism has an important role both in aggression under provoking conditions (Bettencourt et al., 2006) and in reactive aggression (Miller & Lynam, 2006; Miller, Zeichner, & Wilson, 2012; Seibert et al., 2010), mostly because these kinds of aggressive behavior are related to anger (Ramirez, 2009). However, for some specific aggressive behaviors, such as relational or indirect aggression, the main predictors are both Neuroticism and Agreeableness (Miller et al., 2012; Seibert et al., 2010).

In the lexical six-factor HEXACO model (Lee & Ashton, 2004), certain reconceptualizations were made which have implications for locating aggressiveness. The sixth factor, Honesty-Humility, includes a part of the variability attributed to Agreeableness from the BF, but it also contains new indicators of morality (Lee & Ashton, 2004). Apart from the sixth factor, the HEXACO model also differs in the way it defines Neuroticism and Agreeableness. Specifically, the indicators of anger and hostility that were part of Neuroticism in the BF are part of Agreeableness in the HEXACO model, while the indicators of sentimentality and empathy that are part of Agreeableness in the BF model are in the HEXACO model part of Emotionality, a trait closely related to Neuroticism. Because of this reconceptualization, Agreeableness from the HEXACO model had a higher correlation with reactive than with proactive aggression, while Honesty-Humility is highly correlated with both (Book, Volk, & Hosker, 2012). Accordingly, Agreeableness still represents a general tendency towards aggression, but now it includes the tendency towards reactive, i.e., immediate aggression followed by anger, which is not the case with Agreeableness from the BF. However, Honesty-Humility is to a greater extent related to selective, deliberate aggression, which corresponds more to proactive aggression (Book et al., 2012; Lee & Ashton, 2012). Relations between the HEXACO and different components of aggressiveness confirmed this distinction: while Agreeableness is more associated with anger and hostility, Honesty-Humility is more associated with vengefulness, and both traits are almost equally associated with dominance (Dinić, Mitrović, & Smederevac, 2014).
The present study
The aim of this study is to examine the relations between the HEXACO facets and the components of aggressiveness. Several components of aggressiveness which are relevant from the standpoint of personality psychology are included: anger, vengefulness, dominance, hostility, reactive aggression, proactive aggression, and indirect aggression. The first four are components which were extracted by examining the latent structure of items from agreeableness/aggressiveness scales from different personality inventories (Dinić et al., 2014). Namely, these personality scales do not measure the same content of aggressiveness and favor different aggressiveness components, so the purpose of including the four mentioned components is to make sure to capture all the main aggressiveness components present in agreeableness/aggressiveness scales. The remaining components refer to the behavioral aspect of aggressiveness. Given that different functions of aggression are related to different personality traits and types (Book et al., 2012; Miller & Lynam, 2006; Miller et al., 2012; Seibert et al., 2010), the reactive and proactive functions of aggression were included. Indirect aggression is also included, since it is close to proactive aggression (Miller et al., 2012).

Although Agreeableness from the HEXACO model is supposed to be the main correlate of aggressiveness and captures a wider range of aggressiveness components (Lee & Ashton, 2004), previous studies showed that Honesty-Humility is also closely associated with aggressiveness (Book et al., 2012; Dinić et al., 2014; Lee & Ashton, 2012). However, there are specific relationships between these basic traits and certain components of aggressiveness. Namely, Agreeableness includes facets of patience and gentleness, and thus it should be more (negatively) related to anger, hostility, and immediate or reactive aggression, while Honesty-Humility includes facets of sincerity and fairness, and thus it should be more (negatively) related to calculated or proactive aggression, including vengefulness (Book et al., 2012; Dinić et al., 2014; Lee & Ashton, 2012). Given the intentional nature of indirect aggression, it could be expected to be mostly related to Honesty-Humility.

However, the other traits should also be considered in their relations with aggressiveness, although their associations are not as strong. For example, Emotionality was moderately correlated with displaced aggression (Lee & Ashton, 2012) and had a small negative correlation with instrumental (proactive) aggression (Book et al., 2012). Moreover, Conscientiousness from the FFM was negatively related to both reactive and proactive aggression (Jones, Miller, & Lynam, 2011; Miller et al., 2012), and to indirect aggression (Gleason, Jensen-Campbell, & Richardson, 2004).

Considering the relations of almost all HEXACO traits with certain components of aggressiveness, the main question is what the location of the aggressiveness components in the HEXACO model is: are some HEXACO traits, such as Agreeableness, central to the aggressiveness components; are the aggressiveness components connected to several different HEXACO traits in a way which indicates that there is no grouping around one trait; or do the aggressiveness components form a separate group? To answer this question, confirmatory and exploratory approaches were applied. In order to ensure the same hierarchical level of variables, the analyses were done at the facet level of both the HEXACO traits and aggressiveness. Firstly, several proposed models were tested regarding the location of the aggressiveness components: 1) only in Agreeableness, 2) only in Honesty-Humility, 3) in both mentioned dimensions, according to theoretical expectations and previous research, and 4) in a separate factor. We also tested a model based on the structural equation modeling (SEM) approach in which a bifactor model of aggressiveness was built (with a general factor of aggressiveness and the aggressiveness components as specific factors) and regression paths were set from the Agreeableness and Honesty-Humility factors to the specific aggressiveness factors in order to test the significance of these relations. A version without the other HEXACO traits, except Agreeableness and Honesty-Humility, was also tested. In addition, a bifactor model was tested with the Agreeableness facets, Honesty-Humility facets, and aggressiveness components on a general factor and three additional specific factors which constitute these traits. The main idea of this model is to gain insight into the factor loadings of the aggressiveness components and the explained variance of the specific and general factors. Secondly, exploratory factor analysis (EFA) along with network community analysis was used. Compared to EFA, which should show the common latent structure of the HEXACO facets and aggressiveness components, the network community analysis should reveal the patterns of clustering of these variables without invoking latent factors. Network analysis enables us to identify both weak and central, cohesive elements of clusters, which could provide more informative insight into the specific relations between the HEXACO facets and aggressiveness components.
Participants and procedure
The sample included 654 participants (49.6% males) from the general population, aged between 18 and 73 (M = 30.49, SD = 12.37, Mode = 20). Most of the participants were highly educated: 41.5% were college students and 24.7% had finished a higher school or faculty, while 32.4% had finished only primary or secondary school (9 respondents, or 1.5%, did not answer the question about education). There were no significant gender differences in age (t(641) = -0.66, p > .05) or in education level (χ²(2) = 0.52, p > .05), but there was a significant difference in age with respect to education level (F(2, 632) = 156.93, p < .001), with students being the youngest. All procedures performed in the study were in accordance with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Before participating in the study, the participants were provided with information about the purpose of the study and signed informed consent. The testing was performed by trained MA students as a part of their pre-exam activities. Each student collected data from a specific number of participants, based on given age and sex quotas. The same sample was used in Dinić and Wertag (2018).
Measures
HEXACO-60 (Ashton & Lee, 2009). The HEXACO-60 is a shorter form of the HEXACO-PI-R, which measures the six dimensions of the lexical HEXACO model, each consisting of four facets. Each dimension was operationalized with 10 items (2 or 3 items per facet) with a 5-point Likert-type scale for answering (α's were in the range from .67 for Emotionality to .79 for Openness to Experience).

Indirect Aggression Scale - Aggressor version (IAS-A; Forrest et al., 2005). The IAS-A measures three forms of adult indirect aggression: social exclusion (n = 10, α = .89, e.g., "Purposefully left them out of activities."), use of malicious humor (n = 9, α = .87, e.g., "Used sarcasm to insult them."), and guilt induction (n = 6, α = .82, e.g., "Used emotional blackmail on them."). The instructions for the IAS-A require the participants to think about situations when they have exhibited such behavior towards another person in the past 12 months. A 5-point Likert scale for answering was provided, from 1 = never to 5 = regularly. Since these three forms are highly correlated (from .76 to .85) and form an isolated, dense cluster in the network analysis, the total score for the IAS-A was computed (α = .94).

Aggressiveness Questionnaire (AVDH; Dinić et al., 2014). The AVDH is based on examining the common structure of the items from agreeableness/aggressiveness scales from various personality inventories from the psycho-lexical paradigm (Big Five Inventory - BFI, IPIP-HEXACO-PI-R, and Big Five Plus Two - BF+2), as well as from psycho-biological models (Zuckerman-Kuhlman Personality Questionnaire-III-Revised - ZKPQ-III-R, Reinforcement Sensitivity Questionnaire - RSQ, and Multidimensional Personality Questionnaire - MPQ). Based on parallel analyses, four factors were extracted, which was followed by constructing new items referring to the factors' content. The AVDH contains 23 items with a joint 5-point Likert-type scale and measures four aggression traits: 1. Anger, which refers to the affective aspect of trait aggression, e.g., frequent and easy experiencing of anger and rage, hot temper, and overreacting in experiencing anger (n = 5, α = .79, e.g., "I get angry easily."); 2. Vengefulness, referring to the tendency to retaliate and to the cognitive aspect of aggression, such as planning and imagining revenge attempts that include the desire to harm or humiliate another person (n = 6, α = .90, e.g., "I simply must make mischief to people who annoy me."); 3. Dominance, which includes the tendency towards more subtle expressions of aggressive impulses through intrusion, argumentativeness, and initiating verbal disputes, with the need to assume social dominance and demonstrate superiority (n = 7, α = .85, e.g., "Nobody dares to contradict me."); and 4. Hostility, which includes bigotry and an unfriendly attitude towards others, which could be manifested as more covert or passive aggression, and also as reduced tolerance for other people's mistakes and concentration mostly on other people's imperfections (n = 5, α = .73, e.g., "Some people annoy me so much that I cannot stand their presence.").
Data analysis
Firstly, confirmatory factor analysis (CFA) was used to test seven models: 1) a model in which all aggressiveness components were located in the same factor as the Agreeableness facets; 2) a model in which all aggressiveness components were located in the same factor as the Honesty-Humility facets; 3) a model in which certain components were located in the Agreeableness factor (anger, hostility, reactive aggression) and certain in the Honesty-Humility factor (vengefulness, proactive and indirect aggression); because dominance could be located in both the Honesty-Humility and Agreeableness factors, we checked solutions in which dominance is only in Honesty-Humility, only in Agreeableness, and in both factors, and then compared its loadings in order to set the location of dominance; 4) a model with a separate aggressiveness factor which includes all aggressiveness components; 5) a SEM model containing a bifactor model of aggressiveness (with a general factor of aggressiveness and the aggressiveness components as specific factors) and regression paths from Agreeableness and Honesty-Humility to each of the specific aggressiveness factors; 5a) a variant of Model 5 with regression paths in line with the relations in Model 3; 6) a variant of Model 5a without the other HEXACO traits, except for Agreeableness and Honesty-Humility; 7) a bifactor model with Agreeableness, Honesty-Humility, and aggressiveness as specific factors and one general factor. In all models, correlations between factors were included. Because multivariate normality was violated, the robust maximum likelihood estimator (MLR) was applied. Several fit indices were calculated: the comparative fit index (CFI) and Tucker-Lewis index (TLI), which should be ≥ .95, and the root mean square error of approximation (RMSEA) and standardized root mean square residual (SRMR), which should be < .08 for acceptable model fit (Hooper, Coughlan, & Mullen, 2008). AIC and BIC were calculated for comparing the non-nested models. CFA was conducted in the R package "lavaan" (Rosseel, 2012).
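For reference, the two most commonly reported of these indices have the following standard definitions (textbook formulas, not reproduced from this article), both computed from the chi-square statistics of the fitted model (M) and the baseline independence model (B) with sample size N:

$$\mathrm{CFI} = 1 - \frac{\max\!\left(\chi^2_M - df_M,\, 0\right)}{\max\!\left(\chi^2_B - df_B,\ \chi^2_M - df_M,\, 0\right)}, \qquad \mathrm{RMSEA} = \sqrt{\frac{\max\!\left(\chi^2_M - df_M,\, 0\right)}{df_M\,(N-1)}}.$$

TLI and SRMR follow analogous chi-square-based and residual-based constructions, respectively.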
Secondly, principal axis factor analysis was applied, and the number of factors was determined based on parallel analysis, Velicer's minimum average partial (MAP) criterion, and the very simple structure (VSS) criterion. The analysis was conducted in the R package "psych" (Revelle, 2017).
Thirdly, network analysis was applied. The approach taken was the "network of personality components" approach (Cramer et al., 2012), which was used as a comprehensive, data-driven tool for investigating the underlying structural organization of the component interactions. The main goal of the network analysis in our study is discovering the partitioning of the network into distinguishable groups (communities) of variables whose intra-group relations are more frequent than extra-group ones. This kind of partitioning, or community structure, reveals dense localized patterns of interactions, which are essential for explaining the differences in the relations of the aggressiveness components with the facets from different HEXACO domains.

Although network models (or Ising models) are statistically equivalent to latent variable models, in the former all connections between observables are represented by two-way connections of the same type, while in the latter the direction of the effect from the indicator to the latent variable cannot change (Kruis & Maris, 2016; Marsman, Maris, Bechger, & Glas, 2015). This is a unique benefit of the network approach, as the analysis is focused on the localized, complex pattern of interaction between observable variables, with each effect being interpreted not only numerically, but in a relational manner. Therefore, this approach is a suitable addition to factor analysis, as it provides an alternative exploratory framework where the observed associations between manifest variables are not explained as a result of an underlying factor structure, but as a consequence of mutualistic relations between these variables (Kruis & Maris, 2016).

The basic element of a network is a node, which represents one variable, and an edge, which indicates the association between variables. More specifically, the edge weight is the strength of the association between two variables, while the form of associativity is determined by the type of the computed network. The algorithm used for the visual representation of each network was the Fruchterman-Reingold algorithm (Fruchterman & Reingold, 1991), which positions the nodes with the highest number of connections in the middle of the graph, while the nodes that are less connected to others are on the periphery. Also, this layout places groups of nodes which are densely connected with each other in proximity, which enables visual identification of potential communities. In this study, the concentration network of the given variables was computed, based on their partial correlations, using the adaptive LASSO method (Zou, 2006), as the most recommended approach (Costantini et al., 2015). This method is based on applying the LASSO penalty (Friedman, Hastie, & Tibshirani, 2008) to partial correlations of different intensity (the lower the coefficient, the proportionately higher the penalty), leading to small coefficients being reduced to zero. The R package "qgraph" (Epskamp, Cramer, Waldorp, Schmittmann, & Borsboom, 2012) was used for network computation. In order to describe the structure of node groupings, the community detection method was used. Communities are locally densely connected parts of the network. There is a difference between strong communities, where nodes are more likely to have edges located inside the community than outside of it, and weak communities, where only some nodes have this property. In weak communities, some nodes are more open towards the rest of the network, and their relation to their neighbors can be described with a lower clustering coefficient (Girvan & Newman, 2002). The community structure of the LASSO concentration network was detected using the "walktrap" detection algorithm (Pons & Latapy, 2006) from the R package "igraph" (Csárdi & Nepusz, 2006). This algorithm has shown accuracy in networks of the same size and density as the one in this study, and it results in a reliable community structure (Yang, Algesheimer, & Tessone, 2016). Most community detection algorithms calculate many different community partitions of the network and choose the one which has the highest modularity score. The modularity of a given network partition into communities is higher when there are many more edges between the nodes of the same community than could be expected by chance, i.e., in a random network (Newman, 2010). Once detected, the community structure shows which variables have strong local interactions with other variables of their own type. The local clustering coefficient was also used; it represents the ratio of the number of pairs of a node's neighbors that are connected to each other to the total number of pairs of neighbors that node has. While modularity is based on the total number of edges within one group, the clustering coefficient is concerned with triads (or triangles) of nodes, which means that a node with a high clustering coefficient is connected to neighboring nodes which are themselves connected with each other, which leads to triadic closure and the formation of node cliques or strong communities (Newman, 2010). The signed Zhang clustering coefficient (Zhang & Horvath, 2005) was used and computed using the R package "qgraph" (Epskamp et al., 2012).
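The community step can be illustrated with python-igraph in place of the R "igraph" package used here. This is a sketch under two assumptions not stated above: the regularized partial-correlation matrix is taken as given, and absolute edge weights are passed to walktrap, since the algorithm requires non-negative weights.

```python
import numpy as np
import igraph as ig

def walktrap_communities(pcor, labels, steps=4):
    # Build a weighted graph from a sparsified partial-correlation matrix
    # (zero entries = no edge) and detect communities with walktrap.
    n = pcor.shape[0]
    edges, weights = [], []
    for i in range(n):
        for j in range(i + 1, n):
            if pcor[i, j] != 0.0:
                edges.append((i, j))
                weights.append(abs(pcor[i, j]))  # walktrap needs weights >= 0
    g = ig.Graph(n=n, edges=edges)
    g.vs["name"] = list(labels)
    clustering = g.community_walktrap(weights=weights, steps=steps).as_clustering()
    print(f"modularity = {clustering.modularity:.2f}")
    return [[labels[v] for v in community] for community in clustering]
```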
Descriptives and correlations
Honesty-Humility and Agreeableness show stronger correlations with the aggressiveness components, with Honesty-Humility having slightly higher negative correlations with indirect aggression and vengefulness, while Agreeableness has a higher correlation with reactive aggression and anger (Table 1). Their facets also show different patterns of relations. The strongest negative correlation was found between the facet Patience from Agreeableness and anger, while Forgiveness, for example, has strong negative edges with vengefulness and hostility. Also, Fairness from the Honesty-Humility domain shows stronger negative correlations with aggressive behaviors, while Modesty has stronger negative correlations with dominance. However, some facets of the other traits also show relatively high correlations with certain aggressiveness components, e.g., Social Boldness from Extraversion with dominance, and Prudence from Conscientiousness with anger and vengefulness. Note. Theoretical ranges are between 1 and 5, except for RA and PA, which are between 0 and 2. Correlations ≥ ±.13 are significant at p < .001.
Confirmatory factor analysis
Firstly, we explored the difference in dominance loadings on both the Honesty-Humility and the Agreeableness factor in order to configure Model 3. The difference between these loadings was negligible (in Honesty-Humility it was -.46 and in Agreeableness it was -.38), and the loadings were also similar when dominance was placed only on the Honesty-Humility factor (-.81) or only on the Agreeableness factor (-.79), so dominance should be located on both factors. Judging by AIC and BIC, the fourth model emerges as the best compared to Models 1-3 (Table 2). However, none of the proposed models had satisfactory fit indices (Table 2; for factor loadings see Supplement A). Moreover, in all proposed models, the modification indices (MI) did not suggest a reallocation involving the aggressiveness components. For example, in the third model, MI suggested the inclusion of Social Boldness in the Agreeableness and Honesty-Humility factors, and, in the fourth model, in the Aggressiveness factor along with the two mentioned factors. In all models, the aggressiveness components had the highest loadings.

In Model 5 it could be noted that only vengefulness (-.28) and indirect aggression (-.26) significantly regressed on Honesty-Humility, and that only vengefulness (-.20) regressed on Agreeableness. However, the contribution of vengefulness in relation to Agreeableness was positive, which clearly indicates that the general factor of aggressiveness captures the common variance and the residual variance of the specific factors, so the interpretation of this model is biased. Therefore, only the relations proposed in Model 3 were kept, but as regression paths (Model 5a). All regression paths were significant, except between Honesty-Humility and proactive aggression, and the contributions were in the expected negative direction. However, this model also did not have satisfactory fit indices. There was no improvement in Model 6 either.

In the bifactor Model 7, the explained variance of the specific aggressiveness factor was 32.76%, while for the Agreeableness factor it was 13.61%, for the Honesty-Humility factor 10.70%, and for the general factor 42.93%. Anger, dominance, and hostility had higher loadings on the specific factor; proactive, indirect, and reactive aggression had higher loadings on the general factor; and vengefulness obtained relatively similar loadings on both the specific and the general factor. Although the explained variance of the general factor was the highest, the aggressiveness components had the main contribution to this factor, especially in the domain of aggressive behavior. The factor loadings of the aggressiveness components varied: some components had higher loadings on the specific factor, while others had high loadings on the general factor. Also, Fairness from Honesty-Humility and Patience from Agreeableness had relatively equal loadings on both the general and the specific factor. This result indicated that there are distinctions between the aggressiveness components. In the further analyses, we explore whether these distinctions reflect different relations with the HEXACO facets.
Exploratory factor analysis
Because CFA did not result in satisfactory fit indices, EFA was applied in order to explore the latent structure of the HEXACO facets and aggressiveness components. The results of the parallel analysis suggested 8 factors, while 5 factors were suggested by the MAP and VSS criteria. Because most of the criteria suggested 5 factors, this solution was kept and promax rotation was applied. The results show that the factor which encompasses all aggressiveness components includes all facets from Agreeableness and three facets from Honesty-Humility (Fairness, Greed Avoidance, and, marginally, Modesty; see Supplement B), but it is noticeable that the Honesty-Humility facets had the weakest loadings. Thus, the nature of the relations between the Agreeableness and Honesty-Humility domains and the aggressiveness components remains vague.
Network analysis
LASSO network. The LASSO network (Figure 1) shows the structure of relations between seven groups of variables (the facets from the six HEXACO traits and the aggressiveness components as the seventh group), each represented with a different color. The density of the network is 0.33, which means that the existing edges represent one third of all possible edges between all 31 nodes. The most striking observation resulting from visual inspection of this network is the structural position of the Patience facet from the Agreeableness domain, which is positioned next to the aggressiveness components, presumably due to its strong negative connection with anger, which outweighs the relations this facet has with its own group. The other results refer to the HEXACO structure, i.e., the displacement of the Anxiety facet from the Emotionality domain, which is slotted between the facets of the Conscientiousness domain. The visual topology of the network suggests that the relation between the Agreeableness, Honesty-Humility, and aggressiveness groups is not clear at this point. Facets from both HEXACO domains have strong negative edges with the aggressiveness components, with the exception of the Greed Avoidance facet from Honesty-Humility. Therefore, a community detection algorithm was performed on this network.
Community detection. The walktrap algorithm was used to detect communities in the LASSO network. The modularity of the resulting community partition is 0.46, which can be considered an optimal score for the given network. (The walktrap algorithm produced the optimal modularity scores compared to other algorithms suitable for the size and density of our network. Modularity scores for the other algorithms were: 0.44 (Infomap), 0.39 (Label propagation), and 0.18 (Edge betweenness). The Spinglass algorithm reported a negative modularity score, which means that there are more edges outside the found communities than in the communities themselves. The Louvain algorithm returned exactly the same partitioning as walktrap, with the same modularity.) According to the results, there are five different communities (Figure 2). The most important result regarding the relations between the aggressiveness components and the HEXACO facets is the presence of the community in which the components of aggressiveness and the Agreeableness facets are joined. This partition can be seen as a consequence of a strong negative edge between anger and Patience, positive edges between the aggressiveness components, and the lack of notable strong edges of the other Agreeableness facets towards nodes from other communities. The facets of the Honesty-Humility domain form a weak community, which has several strong edges with the previous community, notably with dominance and indirect aggression. However, the only node without any such edges is Greed Avoidance, and it represents the main cohesive element of this community. If this node is removed from the network, the Honesty-Humility community falls apart and merges with reactive, proactive, and indirect aggression into a new community. The remaining three communities consist of combined facets of the Extraversion and Conscientiousness domains (along with Anxiety), facets of the Emotionality domain (without Anxiety), and facets from the Openness domain.
In addition to these insights provided by the community analysis, it is useful to take a look at the local clustering coefficients of all the nodes, as the nodes with low clustering values have slightly more connections with the rest of the network, outside their communities.
Clustering coefficients. In order to compare the clustering of different nodes throughout the network, the relative clustering coefficients are shown, where the variable with the highest absolute coefficient has a relative coefficient of 1 (Figure 3). Hostility is the node with the highest relative clustering coefficient in the network, which means that it forms cohesive triads with the neighboring nodes and therefore brings together the Agreeableness facets and the aggressiveness components. These results shed new light on the cohesiveness of the Agreeableness and aggressiveness community. While the strongest edge in this community is the one between Patience and anger, hostility is the most clustered node. These results point out that both Patience and hostility are key nodes in this community, but their contributions to the cohesiveness of the community are different.

Of all the aggressiveness components, anger and proactive aggression have the lowest clustering coefficients, which means that they are not part of many closed triadic structures. For proactive aggression, this is a result of the absence of strong edges with dominance, hostility, and vengefulness. For anger, however, the issue is not a lack of edges towards the neighboring nodes, but high centrality and a large number of weak connections towards many nodes in the network. Nodes with high centrality in networks of medium density can be the weakest parts of any community if their centrality score is determined by relations outside the community, which means that their pattern of relations is dispersed through the entire network (Gupta et al., 2016).
Discussion
The results of the confirmatory approach regarding the relations between the HEXACO facets and aggressiveness components did not provide satisfactory fit indices for the proposed models. Considering that, an exploratory approach was applied, which included both EFA and network analysis. EFA resulted in a factor which includes the aggressiveness components along with all Agreeableness facets and most of the Honesty-Humility facets, which had the lowest loadings. The community detection algorithm on the LASSO correlation network provides additional exploratory insight into these relations. While the community analysis also ties all facets from Agreeableness together with the aggressiveness components, the Honesty-Humility facets remain associated together as a weak but stable community. The results of both exploratory analyses showed that, among the HEXACO traits and their facets, the Agreeableness domain is most closely related to the aggressiveness components and, as could be seen in the network analysis, together they form a joint community. Namely, the affective and cognitive aggressiveness components were mostly related to Neuroticism from the BF (e.g., Gallo & Smith, 1998; Tremblay & Ewart, 2005), but in the HEXACO model they are located at the negative pole of Agreeableness, together with the behavioral component. This result suggests that Agreeableness captures a wider range of interpersonal outcomes, including all main components of aggressiveness. This is in line with the theoretical assumptions (Lee & Ashton, 2004) and is not surprising.

However, there are two main implications regarding the position of aggressiveness in the HEXACO model. First, the nodes which bring together the Agreeableness domain and aggressiveness are the Patience facet and hostility, but in different ways. Patience assesses a tendency to remain calm rather than to become angry (Lee & Ashton, 2004); thus, it is strongly connected to anger as the affective component of aggressiveness. As mentioned before, the indicators of the affective component of aggressiveness in the BF model belong to Neuroticism (Goldberg, 1992), so the question is whether the allocation of these indicators makes Agreeableness from HEXACO the key correlate of aggressiveness, or whether it points to a stronger connection with aggressiveness. It seems that the latter is the case, given that the other facets of Agreeableness have dense, medium-strength relations with the aggressiveness components.

Hostility also has an important role in forming the Agreeableness + aggressiveness community. It is the most clustered node and is connected to almost all other nodes from its community. Previous studies showed mixed results regarding the cognitive (not hostility-specific) component of the trait. For example, in the Zillig, Hemenover, and Dienstbier (2002) study, in personality inventories based on the lexical paradigm (Big Five Inventory, Unipolar Adjective Trait Descriptors, and Revised Interpersonal Adjective Scale), the cognitive component was less represented in the Agreeableness domain. However, a recent study showed that lexically based inventories, such as the HEXACO-100 and BFI, accentuated the cognitive component in Agreeableness compared to inventories based on the psycho-biological paradigm (Dinić & Smederevac, 2018). What is certain is that although the affective component of aggressiveness has a clear facet in the HEXACO model, the cognitive component does not.
The second important implication regards the Honesty-Humility facets. Although in EFA almost all facets of this trait belong to the factor which contains the aggressiveness components and Agreeableness facets, they are tied together and form a separate Honesty-Humility community. This means that the Honesty-Humility domain is distinct from the Agreeableness domain, at least when relations with aggressiveness are analyzed, pointing out that Agreeableness is the dominant correlate of aggressiveness. The divergent validity of Honesty-Humility is well established (Ashton & Lee, 2007; Ashton, Lee, & de Vries, 2014), but previous studies showed that both Agreeableness and Honesty-Humility were related to certain aggressiveness components (Book et al., 2012; Dinić et al., 2014; Lee & Ashton, 2012). Therefore, a separate Honesty-Humility community, without any aggressiveness component, was not expected.

Of all the Honesty-Humility facets, Greed Avoidance is the most distant from the aggressiveness components. Greed Avoidance refers to the tendency to be uninterested in possessing lavish wealth, luxury goods, and signs of high social status (Lee & Ashton, 2004). These characteristics are more related to materialism and do not necessarily include the absence of, or low levels of, aggressiveness markers. What could also be concluded is that Greed Avoidance is distant from all other nodes in the network and acts as the strongest cohesive element of the Honesty-Humility community (note that in EFA, Greed Avoidance is located in the factor with the aggressiveness components and Agreeableness).

Although the facets of Honesty-Humility form a separate community, there are some strong isolated connections with certain aggressiveness components. For example, the Modesty facet has a strong negative relation with dominance, while the Sincerity and Fairness facets have strong negative relations with indirect aggression. Modesty and Greed Avoidance refer to Humility, while Sincerity and Fairness refer to Honesty (Ashton & Lee, 2007). However, the difference between Modesty and Greed Avoidance lies in the sense of superiority and entitlement, which is present in Modesty. These characteristics capture the maladaptive aspect of narcissism (Campbell, Bonacci, Shelton, Exline, & Bushman, 2004), which is related to socially undesirable outcomes, including aggression. The results of this study indicate that Modesty is negatively related to the aspect of aggressiveness that refers to the tendency towards social domination and showing off in arguments in order to demonstrate superiority. On the other hand, the tendency toward more subtle, indirect aggression is more negatively related to the Honesty aspect. Together, the results indicate that Honesty-Humility is related to very specific components of aggressiveness and does not represent a disposition for general aggressiveness. What is more important is that the form of aggression used (more subtle, indirect) is more indicative of Honesty-Humility than the function of aggression (reactive or proactive). This result is not in line with the suggestions based on previous studies (Book et al., 2012; Knight, 2016; Lee & Ashton, 2012), but in those studies distinctions between the forms and functions of aggression were not made. To conclude, the community network analysis enables us to compare the few stronger associations of the Honesty-Humility facets with aggressiveness components (indirect aggression and dominance), the internal structure of correlations inside this domain, and the associations of the Honesty-Humility facets with other domains; as a result, we can conclude that all Honesty-Humility facets form a weak but autonomous community.

Taken together, the results confirm that all aggressiveness components are, to some extent, included in the HEXACO model, given that the aggressiveness components do not form a separate factor or community. However, the percentage of variance in the aggressiveness components accounted for by the HEXACO facets is 31.78%, which indicates that there is a large amount of aggressiveness variance that the HEXACO facets do not cover. It could be assumed that the "core" of aggressiveness is captured by the HEXACO model, specifically by Agreeableness, but that there are some unique manifestations of aggressiveness which depend on other constructs beyond the basic personality traits. Such a construct could be a constellation of "dark" or malevolent personality traits, namely the Dark Tetrad, as some authors suggested (Paulhus, Curtis, & Jones, 2017).
A limitation of this study is that all the questionnaires used are self-reports, and the results could be influenced by shared method variance. Although self-report is a standard tool in personality psychology, behavioral assessments are warranted. In addition, due to the high correlations between the scales of indirect aggression, the set of different aspects of indirect aggression was reduced to a single variable, which limited the information obtained from the different aspects of this aggressiveness component. Also, the small percentage of shared variance could be the result of using the short form of the HEXACO inventory. In line with that, we recommend that future studies use the longer version of the HEXACO instrument and test whether the proposed CFA models would result in satisfactory fit indices and whether the same latent and community network structure would be obtained. Despite these limitations, the present findings provide better insight into both the shared and unique features of aggressiveness and the HEXACO facets, which may indicate different mechanisms underlying the process of variable grouping and clustering in the mutual network. Future studies could include facets from the BF and HEXACO models together, along with various components of aggressiveness.
Figure 1. LASSO concentration network of HEXACO facets and aggressiveness components.
Figure 2. Community identification based on the LASSO concentration network of HEXACO facets and aggressiveness components.
Table 1. Descriptives and correlations between HEXACO traits and aggressiveness components.
Table 2. Model fit indices. Note. Model 1 - all aggressiveness components are on the Agreeableness factor; Model 2 - all aggressiveness components are on the Honesty-Humility factor; Model 3 - anger, hostility, and reactive aggression are on the Agreeableness factor, while vengefulness, proactive, and indirect aggression are on the Honesty-Humility factor, and dominance is on both factors; Model 4 - aggressiveness components are in a separate factor; Model 5 - SEM hybrid model with a general aggressiveness factor and specific factors as aggressiveness components (bifactor model of aggressiveness), with regressions from Agreeableness and Honesty-Humility to each specific aggressiveness factor; Model 5a - variant of Model 5 with regressions in line with the relations in Model 3; Model 6 - the same as Model 5a, but only with the Agreeableness, Honesty-Humility, and aggressiveness factors; Model 7 - bifactor model with Agreeableness, Honesty-Humility, and aggressiveness as specific factors and one general factor. All χ²s are significant at p < .001.
"year": 2018,
"sha1": "0c8d0a5d3f992c7d8a815b47004c1988e3491675",
"oa_license": "CCBYSA",
"oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0048-57051800022S",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8fb5cd9ff08df5defb6f32f2112b75b5a3a8d6e8",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
OnabotulinumtoxinA for the treatment of neurogenic detrusor overactivity in children
Abstract Aims This study evaluated whether one (or more) of three doses of onabotulinumtoxinA were safe and effective to treat neurogenic detrusor overactivity (NDO) in children. Methods This was a 48‐week prospective, multicenter, randomized, double‐blind study in children (aged 5–17 years) with NDO and urinary incontinence (UI) receiving one onabotulinumtoxinA treatment (50, 100, or 200 U; not to exceed 6 U/kg). Primary endpoint: change from baseline in daytime UI episodes. Secondary endpoints: change from baseline in urine volume at first morning catheterization, urodynamic measures, and positive response on the treatment benefit scale. Safety was also assessed. Results There was a similar reduction in urinary incontinence from baseline to Week 6 for all doses (−1.3 episodes/day). Most patients reported positive responses on the treatment benefit scale (75.0%−80.5%). From baseline to Week 6, increases were observed in urine volume at first morning clean intermittent catheterization (50 U, 21.9 ml; 100 U, 34.9 ml; 200 U, 87.5 ml; p = 0.0055, 200 U vs. 50 U) and in maximum cystometric capacity (range 48.6−63.6 ml) and decreases in maximum detrusor pressure during the storage phase (50 U, −12.9; 100 U, −20.1; 200 U, −27.3 cmH2O; p = 0.0157, 200 U vs. 50 U). The proportion of patients experiencing involuntary detrusor contractions dropped from baseline (50 U, 94.4%; 100 U, 88.1%; 200 U, 92.6%) to Week 6 (50 U, 61.8%; 100 U, 44.7%; 200 U, 46.4%). Safety was similar across doses; urinary tract infection was most frequent. Conclusions OnabotulinumtoxinA was well tolerated and effective for the treatment of NDO in children; 200 U showed greater efficacy in reducing bladder pressure and increasing bladder capacity.
INTRODUCTION
Neurogenic detrusor overactivity (NDO) is a condition characterized by involuntary detrusor contractions (IDCs) during the bladder filling phase that can result in urinary incontinence (UI). 1 Any neurological condition that impacts the brain or spinal cord, resulting in the interruption of the signaling pathways that control bladder function (for example, spinal cord injury, multiple sclerosis, or spinal dysraphism), may lead to NDO. 2,3 Types of relevant spinal dysraphism include myelomeningocele (MMC), spina bifida occulta, split cord malformation (diastematomyelia), spinal cord lipoma (lipomyelomeningocele), dermal sinus tract, and tethered spinal cord. 4,5 MMC is the most common neurological disorder responsible for bladder dysfunction in pediatric patients, with traumatic and neoplastic spinal cord lesions being less frequent. [6][7][8] NDO can lead to elevated bladder pressures and, if not adequately managed with standard treatment, may require augmentation cystoplasty to prevent renal damage. 9 The primary goal of NDO treatment is to attain and maintain safe bladder storage pressures to avoid kidney damage. A detrusor pressure of 40 cmH2O has been cited as a critical threshold above which patients may be at increased risk for upper urinary tract dysfunction resulting in renal damage. 10 Joint guidelines from the European Society for Paediatric Urology and the European Association of Urology suggest that in children with NDO, starting the use of clean intermittent catheterization (CIC) early can help minimize upper tract changes, provide better bladder protection, and lower UI rates. 11 Similarly, the International Children's Continence Society recommends pharmacotherapy with oral anticholinergic medications in conjunction with CIC. 12 However, 10%-15% of these patients fail to respond to these treatments, and side effects may be limiting. [13][14][15] OnabotulinumtoxinA 200 U is a well tolerated and effective treatment option approved for adults with UI due to NDO inadequately controlled with anticholinergic therapy. 16 Although onabotulinumtoxinA is not currently approved for children with NDO, several published studies demonstrated positive efficacy with acceptable safety in this population at doses up to 360 U. A systematic literature review demonstrated that after onabotulinumtoxinA treatment, 32%-100% of pediatric patients were continent, with maximum detrusor pressure (MDP) reductions of 32% to 54%, often below the 40 cmH2O threshold. 17 However, to date, there is no consensus as to what dose has optimal efficacy and safety, and currently available information is inadequate to guide dosing decisions for the use of onabotulinumtoxinA in this population. The goal of the current program was to fill this gap and determine if one or more of three onabotulinumtoxinA doses (50, 100, and 200 U; not to exceed 6 U/kg) were safe and effective for the treatment of NDO in children inadequately managed with anticholinergic therapy.
Study population
Children (5-17 years) with NDO due to spinal dysraphism, transverse myelitis, or spinal cord injury, based on the presence of an IDC during urodynamics, were included. Patients were inadequately managed with anticholinergic agents (i.e., were still incontinent, experiencing intolerable side effects, or unwilling to continue the medication) and were regularly using CIC (≥3 times/day for ≥3 months before screening). Patients must have had ≥4 daytime UI episodes over a 2-day diary completed during screening. "Daytime" was defined as the time between waking up to start the day and going to bed to sleep for the night.
Patients were excluded who had cerebral palsy, spinal cord surgery within 6 months of screening, or previous/ current botulinum toxin therapy of any serotype for any urological condition. Patients could discontinue their anticholinergics within 7 days of the start of the screening, or continue at a stable dose throughout the study.
Study treatment
Patients were centrally randomized through an interactive web response system in a 1:1:1 ratio to one treatment of onabotulinumtoxinA 50, 100, or 200 U (not to exceed 6 U/kg). While nonclinical studies support dosing up to 8 U/kg, a conservative approach of 6 U/kg was taken due to the pediatric population being studied. The 50 U low dose was included in lieu of a placebo control arm (owing to ethical concerns in children).
Patients, physicians, and study staff were blinded to treatment. Medication was reconstituted by an independent drug reconstitutor not associated or involved with the study patients' care or assessments.
Patients received prophylactic antibiotic treatment. OnabotulinumtoxinA was delivered via cystoscopy as 20 intradetrusor injections of 0.5 ml excluding the trigone, under general anesthesia/conscious sedation or instillation of local anesthetic (only allowed for patients >12 years of age).
Patients had posttreatment follow-up clinic visits at Weeks 2, 6, and 12, then alternating telephone and clinic follow-up visits every 6 weeks up to 48 weeks.
Patients could request onabotulinumtoxinA retreatment ≥12 weeks after the first treatment, with the retreatment administered in a long-term extension study (ClinicalTrials.gov, NCT01852058). Retreatment criteria required ≥2 daytime UI episodes over a 2-day bladder diary. If patients did not request/qualify for retreatment during the 48 weeks of the study, they exited the study and could enroll in the extension study.
This study was conducted in conformance with Good Clinical Practice guidelines, the principles of the Declaration of Helsinki, and the applicable laws/regulations of the country in which the research was conducted. Assent was obtained from the patients, and informed consent was provided by parents/guardians.
Primary endpoint
The primary endpoint was the change from baseline in the daily average frequency of daytime UI episodes (from a 2-day bladder diary); the primary time point was Week 6. This endpoint was selected owing to regulatory requirements that the pediatric study mirror the Phase 3 pivotal NDO trials of onabotulinumtoxinA in adults.
Key secondary endpoints
Key secondary endpoints were the change from baseline in urine volume at first morning catheterization (collected "upon waking for the day") and the urodynamic change from baseline in MDP (cmH2O) during the storage phase.
Other secondary endpoints
Other secondary endpoints were the percentage of patients experiencing IDC, the change from baseline in maximum cystometric capacity (MCC), the proportion of patients with a positive treatment response on the modified treatment benefit scale (TBS), and duration of effect (time to patient request for retreatment). The TBS is a single-item measure of the patient's/parent's perception of posttreatment benefit (1 = greatly improved; 2 = improved; 3 = not changed; 4 = worsened). 18 A positive response was defined as the patient's condition having "greatly improved" or "improved." Urodynamic testing was administered at baseline and Week 6 and performed according to the standards of good clinical practice set forth by the International Continence Society and the International Children's Continence Society. 19-21 An independent central reviewer provided quality review and validation of urodynamic tracings and results for analysis.
Safety
Safety analyses included all patients who received the study drug, based on the actual treatment received, with patients allocated to the nearest dose group. As patients could request retreatment and move to the extension study from Week 12 onward, adverse events (AEs) and serious AEs (SAEs) were presented over the initial 12 weeks to allow for meaningful comparison across treatment groups, as well as over the entire treatment cycle. Urinary tract infection (UTI) was defined as a symptomatic infection that, in the investigator's opinion, required treatment.
Statistical analysis
Assuming 30 patients/group, a two-sided Type I error rate of 0.05, and a range of standard deviations (2-4) based on the 200 U dose group in the adult Phase 3 NDO studies, the confidence interval approach to sample size determination was used to show that the widths of the confidence intervals for the between-group difference in the primary efficacy variable were clinically acceptable. Thirty-four patients/group were planned for enrollment in this study (accounting for a potential attrition rate of 10% by Week 6).
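To illustrate this confidence-interval approach, here is a minimal R sketch (our code, not the study's actual computation; it assumes a simple two-sample comparison of means with a common standard deviation):

```r
# Half-width of the 95% CI for a between-group difference in means,
# with n patients per group and a common standard deviation sd.
ci_half_width <- function(n, sd, alpha = 0.05) {
  se_diff <- sd * sqrt(2 / n)                      # SE of the difference
  qt(1 - alpha / 2, df = 2 * n - 2) * se_diff
}
# Across the 2-4 range of standard deviations cited from the adult studies:
sapply(c(2, 3, 4), function(s) ci_half_width(n = 30, sd = s))
# e.g., sd = 4 gives a half-width of ~2.07 UI episodes/day with n = 30/group
```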
Efficacy data were analyzed using the modified intent-to-treat population, consisting of all randomized patients who received treatment. Patients were analyzed according to their randomized treatment assignment, except those who, owing to the 6 U/kg maximum, received a lower dose than assigned; these patients were assigned to the nearest dose group based on the dose actually received. The lowest dose (50 U) group was used as the comparator in all statistical testing.
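To make the 6 U/kg cap and the nearest-dose-group reallocation concrete, here is a hypothetical helper (function names and example weights are ours, not from the study protocol); it also explains the analyzed-dose ranges reported in the Results:

```r
# Dose actually administered, given the 6 U/kg ceiling
administered_dose <- function(assigned_u, weight_kg) {
  min(assigned_u, 6 * weight_kg)
}
# Analysis group: the dose group closest to the dose actually received
nearest_group <- function(actual_u, groups = c(50, 100, 200)) {
  groups[which.min(abs(groups - actual_u))]
}
# A 28-kg child randomized to 200 U receives 168 U and stays in the 200 U group:
administered_dose(200, 28)    # 168
nearest_group(168)            # 200
# A 20-kg child randomized to 200 U receives 120 U and is analyzed as 100 U:
administered_dose(200, 20)    # 120
nearest_group(120)            # 100
```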
The last-observation-carried-forward approach was used to impute missing values for the daily average frequency of daytime UI episodes. Pairwise treatment differences at each visit were obtained using an analysis of covariance model for continuous variables and the Cochran-Mantel-Haenszel method for categorical variables, controlling for baseline value, age (<12 years or ≥12 years), baseline daytime UI episodes (a total of ≤6 episodes or >6 episodes over the 2-day diary), and anticholinergic therapy use (no/yes) at baseline. The Kaplan-Meier method was used to provide median estimates for time-to-event data. All significance levels were two-sided, with p < 0.05 indicating statistical significance. Analyses were conducted using SAS version 9.4 statistical software (SAS Institute Inc.).
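A rough sketch of these analyses in R follows (our variable and column names; the study itself used SAS 9.4, and the CMH sketch stratifies on a single covariate for brevity). It assumes a data frame `dat` with one row per patient:

```r
library(survival)

# Contrasts vs. the 50 U comparator arm
dat$dose <- relevel(factor(dat$dose), ref = "50")

# ANCOVA for continuous endpoints, adjusted for the stated covariates
fit <- lm(change_ui ~ dose + baseline_ui + age_grp + ui_strata + anticholinergic,
          data = dat)
summary(fit)

# CMH test for categorical endpoints (stratified on age group only, for brevity)
mantelhaen.test(xtabs(~ dose + tbs_response + age_grp, data = dat))

# Kaplan-Meier median time to retreatment request
km <- survfit(Surv(weeks_to_retreat, requested) ~ dose, data = dat)
quantile(km, probs = 0.5)
```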
Baseline demographics and patient characteristics
Baseline demographic and disease characteristics are listed in Table 1. Spinal dysraphism was the primary cause of NDO.
Overall, 114 patients were enrolled and randomized; 100/114 (87.7%) completed the study (48-week completion or qualified for retreatment), and 14/114 (12.3%) discontinued the study early (Figure S1). In total, 113 patients received study medication; both the modified intent-to-treat and safety populations consisted of 38, 45, and 30 patients in the 50, 100, and 200 U onabotulinumtoxinA treatment groups, respectively. Due to the 6 U/kg maximum, the number of patients who received 200 U was smaller, as six patients were analyzed in one of the lower dose groups. Patients analyzed in the 200 U dose group received between 168 and 200 U, patients analyzed in the 100 U group received between 96 and 144 U, and patients analyzed in the 50 U group received between 50 and 72 U.
Efficacy
Improvements from baseline in the number of daytime UI episodes were observed in all dose groups (Figure 1A); each dose group resulted in statistically significant and clinically meaningful within-group reductions from baseline. There were no differences in daytime UI episodes for onabotulinumtoxinA 100 or 200 U compared with onabotulinumtoxinA 50 U (p = 0.9949 and p = 0.9123, respectively).
After 6 weeks, the majority of patients in each group reported "great improvement" or "improvement" on the TBS ( Figure 1B). The 100 and 200 U groups were not statistically significantly different from the 50 U group (p = 0.6884 and p = 0.6112, respectively). Improvements were sustained to Week 12.
A dose-dependent increase in functional bladder capacity, measured by the volume at first morning catheterization recordings, was seen with escalating dosages of onabotulinumtoxinA (Figure 2). The adjusted mean change from baseline at Week 6 was statistically significant and clinically meaningful for the 200 U versus 50 U doses (p = 0.0055). [Figure 2 caption: LS mean change from baseline in urine volume at first-morning catheterization over time (weeks). *Significant versus onabotA 50 U, p = 0.0055. Error bars reflect standard error. LS, least squares; onabotA, onabotulinumtoxinA.]
Furthermore, a significant improvement from baseline in urodynamic storage pressures was also seen with increasing dosages of onabotulinumtoxinA, with the largest decrease in MDP during the storage phase (Pdet max) seen in the 200 U arm when compared with 50 U (p = 0.0157; Figure 3A).
There was an increase from baseline to Week 6 in MCC in all dose groups, with no significant differences between the doses ( Figure 3B).
There was a reduction in all dose groups from baseline to Week 6 in the percentage of patients experiencing IDC, with a numerical trend favoring the 100 and 200 U groups ( Figure 3C).
Duration of effect, based on median time for patients to request retreatment, was 30.6, 24.1, and 29.6 weeks in the 50, 100, and 200 U groups, respectively.
Safety
The safety profile of onabotulinumtoxinA in this pediatric population was similar across doses. Over the entire study period, treatment-emergent AEs were reported in 71.1% to 76.7% of patients; SAEs in 6.7% to 10.5% of patients (Table S1). There were no deaths, cases of pyelonephritis, or evidence of distant spread of toxin in any treatment group. One patient in the 50 U group discontinued the study due to an AE; one event of cystitis was reported as serious. Over the first 12 weeks following onabotulinumtoxinA treatment, 47.4%-66.7% of patients experienced AEs (Table S1).
UTI was the most common AE reported, with no evident dose-dependent relationship (Table S1). Four incidences of UTI were classified as serious as these patients required hospitalization (50 U, 2/38 [5.3%]; 100 U, 2/45 [4.4%]). Most AEs of UTI occurred later than 2 weeks following treatment (Table 2). Annualized UTI event rates were calculated for the treatment period versus 6 months before treatment; no dose-related trend was seen, and there was no difference in the UTI event rate posttreatment compared with the 6 months before treatment ( Table 2).
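For clarity, an annualized event rate of the kind compared here is simply events divided by exposure time expressed in years; a minimal sketch (our code, with placeholder numbers):

```r
# Annualized event rate: events per patient-year of exposure
annualized_rate <- function(n_events, exposure_weeks) {
  n_events / (exposure_weeks / 52)
}
annualized_rate(n_events = 3, exposure_weeks = 26)   # 6 events/patient-year
# Computed separately for the 6 months before treatment and for the
# posttreatment period, the two rates can be compared directly.
```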
DISCUSSION
A fixed dosing approach of onabotulinumtoxinA 50, 100, or 200 U (not to exceed 6 U/kg) was utilized in this study. While it is common for pediatric dosing to be based on U/kg, the fixed-dose approach for this study was based on the nonlinear relationship between age and bladder capacity. Although bladder capacity in children increases sharply during infancy and early childhood, the rate of increase tapers off substantially around the age of 4 years. 22 Considering the maximum recommended dose for onabotulinumtoxinA spasticity indications is, in general, higher than that for urological indications (e.g., 300 U for adult spasticity vs. 200 U for adult NDO), the 6 U/kg safety cap was selected for the pediatric doses to reflect this experience (e.g., 8 U/kg for spasticity and 6 U/kg cap for NDO).

This study in a vulnerable pediatric population with poorly controlled NDO was not placebo-controlled owing to ethical concerns. While this may be considered a limitation, a 50 U low-dose arm was included in lieu of a placebo group in anticipation of this dose showing significantly reduced efficacy when compared with higher doses. This was the case for several objective endpoints demonstrating reduced bladder pressure and increased bladder capacity; however, it was not true for the more subjective endpoints related to UI.

Each dose of onabotulinumtoxinA (50, 100, and 200 U; not to exceed 6 U/kg) was well tolerated. As previous studies were mostly conducted using higher doses, it was surprising that all three doses demonstrated clinically significant improvements in UI in these children. No significant differences in UI reduction were seen between onabotulinumtoxinA 200 and 100 U versus 50 U, indicating similar treatment effects for each arm. This is supported by the finding that most patients across all dose groups gave positive responses on the TBS at Week 6, and duration of effect (time to request retreatment) was similar (approximately 6 months) in the three dose arms. As patient satisfaction and request for retreatment are mostly driven by the experience of UI, the alignment of these analyses would be expected.

Another limitation of this study is that collecting and interpreting incontinence episode data via a diary in children can be challenging, as many of these patients are in diapers and may be unable to perceive bladder fullness or leakage. In some patients, leakage would most likely be observed in undergarments or diapers only at the time of catheterization. Thus, changes in the frequency of incontinence between catheterizations may not be evident and collected in the diary for some patients, which may have contributed to the low (50 U), middle (100 U), and high (200 U) doses responding similarly for incontinence endpoints.
While the reduction in UI is important from a quality of life perspective, the primary goal in treating pediatric NDO patients is to attain and maintain safe bladder storage pressures and bladder capacities to prevent permanent damage to the bladder, ureters, and kidneys. Chronically raised bladder pressures are of great concern and have been shown to lead to renal dysfunction and even mortality. 9,23,24 Here, the 200 U dose of onabotulinumtoxinA showed clinically and statistically greater improvements versus 50 U in measures of Pdet max. With the ultimate goal to reduce bladder pressures as low and as close to that of a normal bladder as possible to avoid potential renal damage, a storage pressure of 40 cmH2O has been established as a critical threshold that patients should not exceed. 10 In this study, the 200 U dose demonstrated the most significant reduction in mean storage pressure.

A dose-response relationship was also observed for volume at first morning void, which is the best indicator of functional bladder capacity because it represents the natural filling of the bladder over a long period of time (i.e., overnight). Here as well, the 200 U dose demonstrated a significantly greater improvement in volume versus the 50 U dose.
These findings were also supported by the observation that the number of patients experiencing IDCs dropped for each of the three dose groups from baseline to Week 6, with the 200 U dose arm showing a numerically larger decline than the 50 U arm.
It was not surprising that the most common AE was UTI; however, it is interesting that a dose-response trend was not seen and reassuring that onabotulinumtoxinA injections overall did not result in more UTIs than before treatment.
Based on the similar safety profile across the low, medium, and high dose groups, and the clinically important improvements seen in reducing detrusor pressure and increasing bladder capacity, it appears appropriate to treat pediatric patients with NDO with the approved adult onabotulinumtoxinA dose of 200 U (not to exceed 6 U/kg).
CONCLUSIONS
OnabotulinumtoxinA 200 U (not to exceed 6 U/kg) is a well tolerated and effective treatment for children aged 5-17 years with signs and symptoms of NDO inadequately managed with anticholinergic therapy. While reductions in UI episodes were similar across doses, the 200 U dose demonstrated a statistically and clinically significant greater improvement in Pdet max, as well as increases in functional bladder capacity measured by the first morning catheterization, versus the low dose of 50 U. A long-term extension study is ongoing to evaluate the continued safety and efficacy following repeat treatment with onabotulinumtoxinA.
"year": 2020,
"sha1": "e3d6a324b375bb3279d3b40f8fa4fcdd2ec34eb0",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/nau.24588",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "1cf7f86ed08ce6bfc36351a64996cd56533bfe43",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Targeted Toxin-Based Selectable Drug-Free Enrichment of Mammalian Cells with High Transgene Expression
Almost all transfection protocols for mammalian cells use a drug resistance gene for the selection of transfected cells. However, this always requires the characterization of each isolated clone regarding transgene expression, which is time-consuming and labor-intensive. In the current study, we developed a novel method to selectively isolate clones with high transgene expression without drug selection. Porcine embryonic fibroblasts were transfected with pCEIEnd, an expression vector that simultaneously expresses enhanced green fluorescent protein (EGFP) and endo-β-galactosidase C (EndoGalC; an enzyme capable of digesting the cell surface α-Gal epitope) upon transfection. After transfection, the surviving cells were briefly treated with IB4SAP (α-Gal epitope-specific BS-I-B4 lectin conjugated with the toxin saporin). The treated cells were then allowed to grow in normal medium, during which only cells strongly expressing EndoGalC and EGFP would survive because of the absence of α-Gal epitopes on their cell surface. Almost all the surviving colonies after IB4SAP treatment were in fact negative for BS-I-B4 staining and also strongly expressed EGFP. This system would be particularly valuable for researchers who wish to perform large-scale production of therapeutically important recombinant proteins.
Introduction
The transfer of foreign genes into mammalian cells has long been used as a powerful experimental tool to evaluate the properties and functions of newly isolated genes. This technology also underlies the large-scale production of recombinant proteins used in the fields of pharmacology and medicine. Vectors that are generally used as a vehicle for the delivery of a transgene into cells are largely divided into two types, namely, viral and non-viral (plasmid) vectors; the latter have been widely used by researchers because of the convenience of plasmid preparation and transfection.
In most plasmid-based transfection experiments, the vectors contain a selectable marker gene (such as the neomycin resistance gene [neo]) that confers resistance against a specific drug. After transfection of these vectors, the cells have to be maintained in the presence of the drug for approximately one week to enrich the transfectants. Transfectants carrying a selectable marker gene express protein products that destroy (detoxify) the drug, whereas non-transfectants do not. However, the expression levels of the selectable marker gene and, probably, of a gene of interest (GOI) among the transfectants are variable. Therefore, to obtain transfectants with high transgene expression, the levels of the introduced transgene in individual isolated colonies have to be investigated independently, or the cells need to be segregated, for example, by fluorescence-activated cell sorting. These steps are not only laborious but also time-consuming. Moreover, this selectable marker-based system cannot be used for cells exhibiting multidrug resistance; therefore, a new drug-free system for the selection of cells with high transgene expression has long been awaited.
Almost all mammalian cells, except those from humans and Old World apes, express Galα1-3Gal (the α-Gal epitope) on their cell surface [1-4]. The α-Gal epitope is synthesized via cell surface-localized α-1,3-galactosyltransferase (α-GalT) and is a causative agent for hyperacute rejection upon pig-to-human xenotransplantation [5]. The Clostridium perfringens-derived endo-β-galactosidase C (EndoGalC) is known to digest the α-Gal epitope [6,7]. Therefore, introduction of an EndoGalC construct into the porcine genome has been considered a promising approach to generate genetically modified piglets suitable for xenotransplantation [7-9]. In addition, the absence of the α-Gal epitope can be easily monitored by staining cells with Bandeiraea simplicifolia isolectin-B4 (BS-I-B4, IB4), a lectin that specifically binds to the α-Gal epitope [1].
Targeted toxins consist of the ribosome-inactivating protein saporin (SAP) [10] conjugated to a target molecule recognizing a cell-specific marker. When administered to the cells of interest, the conjugate binds to, and is absorbed by, the target cells, which results in the release of SAP and subsequent ribosome inactivation. In contrast, cells not expressing the target molecule do not bind or absorb the conjugate and are not affected. Therefore, targeted toxins have been considered a powerful tool for removing unwanted cells from a pool of genetically modified cells. In fact, we have previously demonstrated successful application of this technology for the isolation of transfectants with high transgene expression from among porcine embryonic fibroblasts (PEFs) transfected with the EndoGalC construct [8]. Moreover, the elimination of unwanted cells, including untransfected cells and those still expressing the α-Gal epitope (considered as cells with low transgene expression), can be performed simply by incubating the target cells with the SAP-conjugated lectin (IB4SAP) and subsequently culturing them under normal conditions. As expected, the surviving cells are those that do not express the α-Gal epitope on their cell surface. Based on these findings, we propose that coexpression of a gene of interest and EndoGalC, with subsequent IB4SAP treatment, as depicted in Figure 1, would result in enrichment of α-Gal epitope-negative cells that strongly express the GOI.

Figure 1. Schematic diagram of the mechanism for targeted toxin-mediated, drug-free selection. Cells expressing the α-Gal epitope on their surface are targeted by IB4SAP, which subsequently leads to cell death. When the cells are transfected with a vector expressing EndoGalC, which digests the α-Gal epitope, the cells weakly expressing EndoGalC are killed by IB4SAP through binding to the residual α-Gal epitope on the cell surface. In contrast, the cells strongly expressing EndoGalC survive owing to the complete loss of the α-Gal epitope on their surfaces.
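The enrichment logic in Figure 1 can be caricatured with a short simulation (ours, not from the paper; all parameter values are arbitrary): assume each cell's residual α-Gal level decreases with its EndoGalC expression, and IB4SAP kills any cell whose residual α-Gal exceeds a binding threshold.

```r
# Toy simulation of the selection principle sketched in Figure 1.
set.seed(42)
n <- 1e4
endogalc  <- c(rep(0, 7000), rexp(3000, rate = 1))  # 70% untransfected cells
alpha_gal <- exp(-2 * endogalc)                     # residual epitope, 0-1 scale
survivors <- alpha_gal < 0.1                        # IB4SAP binding threshold
mean(endogalc[survivors])   # mean expression among survivors (high)
mean(endogalc)              # vs. pre-selection mean (low)
```

Only cells far out in the expression tail fall below the epitope threshold, so the surviving pool is strongly enriched for high expressors.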
In the current study, we examined whether the EndoGalC/IB4SAP-based selection system is effective for the isolation of transgene high-expressors.
Experiment 1: Inverse Relationship between EndoGalC and α-Gal Epitope Expression
As a preliminary test, PEFs were stained with serially diluted Alexa Fluor 594-labeled IB4 (hereafter referred to as AF594-IB4) to determine the optimal concentration exhibiting strong binding to the cells. As shown in Figure 2A, 50 and 10 µg/mL of AF594-IB4 were found to be highly reactive to the PEFs, and 2 µg/mL of AF594-IB4 yielded moderate staining for α-Gal epitope expression. Therefore, we decided to use more than 50 µg/mL of IB4SAP for isolation of α-Gal epitope-negative transfectants. [Figure 2D legend: The intensity of each cell was measured and plotted, with arbitrary fluorescence intensity shown on both the abscissa and ordinate axes. The green and blue dots indicate fluorescence measured from the AF594-IB4-stained cells that were transfected with pCE-29 and pCEIEnd, respectively. The red dots indicate fluorescence from the pCE-29-transfected cells that were stained with AF594-IB4 + 50 mM galactose.]
To explore the relationship between EndoGalC and α-Gal epitope expression, we transfected PEFs with the pCEIEnd plasmid (Figure 2B), which expresses enhanced green fluorescent protein (EGFP) and EndoGalC simultaneously because of the presence of an internal ribosomal entry site (IRES) [11-13] between the EGFP and EndoGalC genes. PEFs transfected with the pCE-29 plasmid were used as the control (Figure 2B). At two days after transfection, the cells collected by trypsinization were stained with AF594-IB4. In the case of pCEIEnd transfection, the cells strongly expressing EGFP were almost completely negative for IB4 staining (Figure 2C, arrows in g-i), whereas those not expressing or weakly expressing EGFP showed distinct staining (Figure 2C, arrowheads in g-i). In the case of pCE-29 transfection, all the cells were stained with the lectin, irrespective of EGFP expression (Figure 2C, arrows and arrowheads in a-c). However, incubation of the pCE-29-transfected cells with AF594-IB4 + 50 mM galactose abolished the IB4-specific staining (d-f in Figure 2C). The image analysis confirmed these observations (green dots vs. red dots in Figure 2D). Also, there was an inverse relationship between EndoGalC and α-Gal epitope expression in the pCEIEnd-transfected cells (blue dots in Figure 2D). Thus, transgene high-expressors exhibited greatly reduced expression levels of the α-Gal epitope.
Experiment 2: Enrichment of Transgene High-Expressors with Toxin-Conjugated IB4 Treatment
Next, we performed IB4SAP treatment to enrich transgene high-expressors exhibiting distinct EGFP fluorescence but reduced expression levels of the α-Gal epitope. As depicted in Figure 3A, PEFs transfected with pCEIEnd were cultured for four to six days in normal medium (without drug selection). The cells were subsequently split into two groups: one treated with IB4SAP and the other with SAP (control). After treatment, the cells were reseeded in a 60-mm tissue culture dish containing normal medium. Within two days, extensive cell death was observed in the IB4SAP-treated group, in contrast to the SAP-treated cells (data not shown). One week after reseeding, the cells in the control group reached confluency and were found to contain a mixture of fluorescent and non-fluorescent cells, as determined by fluorescence microscopy [Figure 3B(a)]. In contrast, almost all the colonies emerging two weeks after IB4SAP treatment exhibited bright and strong EGFP-derived green fluorescence [Figure 3B(b,c)], even though there were some nonfluorescent colonies (Table 1). Subsequently, some of these fluorescent colonies were subjected to staining with AF594-IB4. All cells exhibited green fluorescence [Figure 3C(b), arrows] but lost AF594-derived red fluorescence [Figure 3C(c), arrows], indicating greatly reduced levels of the α-Gal epitope on their surface. Long-term (more than six months; over 40 passages) cultivation did not alter their phenotype (high levels of EGFP expression and greatly reduced levels of α-Gal epitope expression) (data not shown). Moreover, staining of nonfluorescent colonies with the lectin resulted in the appearance of red fluorescence (data not shown), suggesting that they were likely non-transfectants that survived IB4SAP treatment. We also found that these nonfluorescent cells could be excluded by repeated treatment with IB4SAP (data not shown).
Experiment 3: Targeted Toxin-Based Enrichment of Transgene High-Expressors Is Applicable to Multidrug-Resistant Cells
To extend the usefulness of this novel system for multidrug-resistant cells, we introduced a transgene ( Figure 4A) into a multidrug-resistant porcine cell line THEPNBS, which carries two fluorescent marker genes (EGFP and tdTomato) as well as five drug resistance genes (puro, neo, hyg, Sh ble, and zeo) [14]. As expected, simultaneous expression of EGFP and tdTomato was observed in the parental THEPNBS cells [ Figure 4B(b,c)]. As depicted in Figure 4A, the THEPNBS cells were transfected with pCZIEnd carrying the lacZ gene that codes for E. coli-derived -galactosidase, which can be easily detected by cytochemical staining with its substrate, X-Gal. At two days after transfection, only a few cells exhibited lacZ activity [ Figure 4B(d), arrowheads], whereas the majority of cells had negative results for staining [ Figure 4B(d)]. At five to seven days after transfection, the cells harvested by trypsinization were subjected to IB4SAP treatment and then cultured in normal medium. Two weeks after reseeding, the emerging colonies were fixed and examined by cytochemical staining for lacZ activity. As expected, almost all the colonies (26/30) obtained were distinctly stained [ Figure 4B(e, ], and higher magnification also showed uniform distribution of lacZ activity throughout the colonies [ Figure 4B(f)]. However, a few colonies (4/30) were negative for X-Gal staining [ Figure 4B( ), arrowhead]. As mentioned previously, these colonies may be derived from non-transfectants that survived after IB4SAP treatment. Nevertheless, our results demonstrate that this IB4SAP-based drug-free selection system is applicable to multidrug-resistant cells. Figure 3A, the number of colonies emerging after transfection with pCEIEnd DNA and subsequent treatment with IB4SAP was recorded. Some colonies were inspected for EGFP-derived green fluorescence under a stereomicroscope and scored based on the strength of fluorescence. Experiment was performed in each different day. 2 The strength of fluorescence was classified as ++ (strong), + (moderate), +/ (faint) and (no fluorescence). Targeted toxin technology using IB4SAP was first applied for specific elimination of porcine cells that were untransfected, or weakly expressed the EndoGalC gene, to create genetically modified cells suitable for pig-to-human xenotransplantation [8]. On the basis of this previous study, we decided to utilize this novel system for selecting transgene high-expressors, as described in Figure 1. Here, we confirm that this system works well. Unfortunately, this system is only applicable to mammalian cells that express the -Gal epitope. Human and Old World monkey cells that do not express such an epitope due to mutations in the -GalT gene [15] cannot be used. Theoretically, if these -Gal epitope-negative cells are genetically engineered to express -GalT, then the system would become applicable.
The present system is based on the coexpression of the EndoGalC gene and a GOI in an α-Gal epitope-expressing cell. To achieve this, we used a 0.63-kb IRES sequence that enables simultaneous expression of at least two proteins from a single mRNA [11-13]. Subsequently, increased expression of EndoGalC accelerated the digestion of the α-Gal epitope on the cell surface. This inverse correlation between EndoGalC and α-Gal epitope expression was confirmed in the current study (see Figure 2D). Notably, because EndoGalC is produced by bacteria such as C. perfringens rather than by mammalian cells, there is a possibility that EndoGalC expression in mammalian cells would affect cellular properties such as proliferation rate, cell behavior (including cell migration and differentiation), and cellular metabolism. Watanabe et al. [16] attempted to address this problem by producing transgenic mice that exhibited systemic expression of EndoGalC. Their results showed that although mice at the newborn stage transiently exhibited growth retardation with abnormal keratinogenesis, those at adult stages gained weight normally and showed normal skin formation. Their internal organs were also normal, and reproductive ability was not impaired. The same research group later demonstrated that mouse NIH3T3 cells transfected with an EndoGalC-expression vector exhibited greater proliferative activity than the untransfected parental cells [17]. It is therefore likely that EndoGalC-expressing cells proliferate faster than intact cells, and careful attention should be paid to whether cellular behavior (including cell proliferation) is altered before and after introducing an EndoGalC-expression vector.
The most attractive property of this system is the simple acquisition of transgene high-expressors without drug selection and subsequent molecular biological and biochemical screening of isolated clones. In the traditional cloning approach for isolating transgene high-expressors, the isolation of drug-resistant cells and subsequent characterization of individual, clonally propagated clones are essential steps (Figure 5, previous system). Characterizing the isolated clones at the molecular biological and biochemical levels is time-consuming and laborious. In contrast, our EndoGalC/IB4SAP-based system requires neither drug selection nor subsequent characterization of the isolated clones (Figure 5, current system). The colonies that survive after transfection of an EndoGalC-expressing vector and subsequent IB4SAP treatment should strongly express the GOI. The results presented in this study prove our proposed hypothesis (Figure 1).

Our main concern regarding this new system is how to eliminate unwanted cells, which are likely to be untransfected cells escaping IB4SAP-mediated cell death. However, we experienced a very low incidence of this issue [Table 1; Figure 4B, arrowhead]. Because these cells still express the α-Gal epitope on their surface, they can be eliminated by repeated IB4SAP treatment.

Moreover, this EndoGalC/IB4SAP-based system for the acquisition of transgene high-expressors would be particularly valuable for researchers who wish to perform large-scale production of therapeutically important recombinant proteins (e.g., immunoglobulins) using mammalian cells. In this case, drug-free selectable cultivation of cells with high transgene expression is in great demand. To test whether our system can address this demand, we are currently attempting to produce recombinant proteins that can be secreted into the medium by introducing a gene encoding secreted alkaline phosphatase (SEAP). This system is also applicable to other α-Gal epitope-expressing cells; for example, we successfully obtained transgene high-expressors in mouse NIH3T3 cells using this technology (data not shown). Furthermore, this system appears to be useful in the xenotransplantation field. We have recently succeeded in isolating porcine cells with greatly reduced expression of the α-Gal epitope after transfection with a vector expressing small interfering RNA (siRNA) against α-GalT [18]. We showed that when these isolated siRNA-expressing cells were used as donor cells for somatic cell nuclear transfer (SCNT) experiments in pigs, the resulting cloned blastocysts exhibited a significant reduction in α-Gal epitope expression. This result has encouraged us to plan studies for producing α-Gal epitope-negative cloned piglets using SCNT.
Cell Cultures
The PEFs used throughout this study were primary cultures established from male fetuses of Clawn miniature swine (Japan Farm, Ltd., Kagoshima, Japan) at 30 days after insemination. Cells were grown in PEF culture medium (#124; Wako Pure Chemical Industries, Ltd., Osaka, Japan) supplemented with 10% fetal bovine serum (FBS) and 1× antibiotic-antimycotic solution (#A5955; Sigma-Aldrich Co. Ltd., St. Louis, MO, USA) at 38.5 °C in a humidified atmosphere of 5% CO2 in air. The cells were passaged 3-4 times and then frozen. Frozen cells were thawed and passaged for 7-13 generations prior to transfection.
Vector Construction
For construction of an EndoGalC expression plasmid conferring simultaneous expression of a gene of interest and EndoGalC in a transfected cell, a 1.53-kb fragment consisting of a 0.9-kb EGFP cDNA (Clontech Lab.) and a 0.63-kb IRES was inserted upstream of the 3-kb EndoGalC gene [6] in pCAG/EndoGalC [8]. The resulting construct was termed pCEIEnd (Figure 2B), in which expression of both EndoGalC and EGFP is under the control of the strong ubiquitous CAG promoter (cytomegalovirus enhancer/chicken β-actin promoter) [19]. The EndoGalC construct (GT + Endo) contains a cytoplasmic tail and a transmembrane domain: a stem region of pig α-GalT cDNA was inserted upstream of the full-length EndoGalC gene [6]. Therefore, the EndoGalC protein expressed in cells is retained at the cell membrane, where it is expected to exert enzymatic activity. pCEIEnd also contains the backbone of pBluescript SK(-) (Stratagene, La Jolla, CA, USA). pCZIEnd (Figure 4A) was constructed by replacing the EGFP cDNA with the lacZ gene. pCE-29 (Figure 2B; [20]), carrying a CAG promoter-driven EGFP expression unit as well as the pBluescript SK(-) backbone, was used as a control.
The fidelity of these plasmids was confirmed by restriction enzyme analysis and sequencing. Plasmids amplified in Escherichia coli (DH5α) were purified using the Qiagen Plasmid DNA Isolation Midi Kit (Qiagen GmbH, Hilden, Germany). Circular plasmids were used for the transient expression assay, whereas plasmids linearized by appropriate restriction enzymes were used for acquisition of stable transfectants.
Experiment 1
To explore the optimal concentration of AF594-IB4, PEFs recovered from dishes by trypsinization were incubated for 1 h at room temperature in a solution containing various amounts (0.08, 0.4, 2, 10, and 50 µg/mL) of AF594-IB4 (#I21413; Invitrogen, Carlsbad, CA, USA) in phosphate-buffered saline without Ca2+ and Mg2+ (PBS[-]; pH 7.4), 2% FBS, and 1 mM CaCl2 (hereafter referred to as PBS/FBS/CaCl2). After incubation, the cells were washed twice with PBS/FBS/CaCl2 and then inspected for fluorescence under a fluorescence microscope (BX60; Olympus, Tokyo, Japan). Micrographs were taken using a digital camera (FUJIX HC-300/OL; Fuji Film, Tokyo, Japan) attached to the fluorescence microscope and printed using a Mitsubishi digital color printer (CP700DSA; Mitsubishi, Tokyo, Japan). The specificity of IB4 for the α-Gal epitope was confirmed by the abolition of lectin staining in the presence of 50 mM galactose (Sigma-Aldrich Co. Ltd.). Briefly, AF594-IB4 (10 µg/mL) in PBS/FBS/CaCl2 was first mixed with 100 mM galactose at a ratio of 1:1 (v/v) for 2 h at room temperature. Cells were then incubated with the mixture for 1 h at room temperature prior to fluorescence observation.
Transient transfection of PEFs with circular plasmids was performed with a nucleofection system (Lonza GmbH, Wuppertal, Germany), as previously described [21]. A schematic flowchart of this experiment is shown in Figure 2B. Briefly, 10 µL of a solution containing circular pCEIEnd or pCE-29 DNA (6 µg) was mixed with 90 µL of the Nucleofector Solution (#11668-027; Lonza GmbH), which was then mixed with PEFs (5 × 10^5) for transfection. After transfection, cells were cultured in gelatin-coated 60-mm tissue culture dishes (#4020-020; Iwaki Co. Ltd., Tokyo, Japan) in PEF culture medium at 38.5 °C for 2 days. Cells harvested by trypsinization were subjected to cytochemical staining with 5 µg/mL of AF594-IB4, as described above.
Experiment 2
Transfection was performed as described in Experiment 1, except that linearized plasmid DNA was used. A schematic flowchart of this experiment is shown in Figure 3A. Briefly, linearized pCEIEnd DNA (6 µg) was mixed with PEFs (5 × 10^5) in the Nucleofector Solution (total volume of 100 µL). After transfection, cells were cultured in gelatin-coated 60-mm tissue culture dishes in PEF culture medium at 38.5 °C, and after 4-6 days, the cells were equally divided into two sets. One set was treated with IB4SAP (#IT-10; Advanced Targeting Systems Inc., San Diego, CA, USA), whereas the other set was treated with SAP (#PR-01; Advanced Targeting Systems Inc.) as a negative control. For IB4SAP treatment, cells were incubated at 37 °C for 2 h in a solution (20 µL) containing 80 µg/mL of IB4SAP in PBS/FBS/CaCl2. For SAP treatment alone, cells were incubated at 37 °C for 2 h in a solution (20 µL) containing 80 µg/mL of SAP in PBS/FBS/CaCl2. The treated cells were directly returned to a 60-mm dish containing normal PEF culture medium and cultured for an additional 1 or 3 weeks. In the IB4SAP-treated group, emerging colonies were picked using a small paper disc (3MM Whatman paper; width × length, 3 × 3 mm) dipped in 0.125% trypsin/0.01% EDTA and directly transferred into a gelatin-coated 48-well plate (#3830-048; IWAKI Co. Ltd.) containing PEF culture medium. Cells were cultured for 10-20 days until confluency. Upon passage, a portion of the cells was subjected to cytochemical staining with AF594-IB4, as described in Experiment 1.
Experiment 3
Transfection was performed as described in Experiment 2, except that THEPNBS cells were used. The schematic procedure is shown in Figure 4A. Briefly, linearized pCZIEnd DNA (6 µg) was mixed with THEPNBS cells (5 × 10^5) in the Nucleofector Solution. After transfection, the cells were split at a ratio of 1:10; the former (1/11 of total cells) were seeded in a gelatin-coated 30-mm tissue culture dish, and the latter (10/11 of total cells) were seeded in a gelatin-coated 60-mm tissue culture dish. After 2 days, cells in the 30-mm dishes were fixed with 2% paraformaldehyde in PBS(-) for 5 min at room temperature and then stained for lacZ activity in the presence of X-Gal (the substrate for lacZ) using the X-Gal Staining Assay Kit (Genlantis Inc., Abingdon, UK). The cells in the 60-mm dishes were harvested after 4-6 days by trypsinization and then treated with IB4SAP as described in Experiment 2. The treated cells were split 1:10; the former were seeded in a gelatin-coated 30-mm tissue culture dish, and the latter in a gelatin-coated 60-mm tissue culture dish. Two weeks after the IB4SAP treatment, cells in the 30-mm dishes were subjected to cytochemical staining for lacZ activity, as described above. Emerging colonies in the 60-mm dishes were picked using the paper method described in Experiment 2 and propagated for cell storage and confirmation of lacZ activity.
Image Analysis
Fluorescence in cells stained with AF594-IB4 was recorded using a digital camera, and image analysis was performed as previously described [8]. Since cytoplasmic fluorescence for both EGFP and AF594 was noted in cells transfected with either pCE-29 (control) or pCEIEnd (experiment), the intensity of fluorescence (green or red) throughout a cell was measured using Adobe Photoshop version 5 (Adobe System, Inc., Seattle, WA, USA). pCE-29-transfected cells stained with AF594-IB4 in the presence of 50 mM galactose were used as controls. Results from at least 12 cells randomly selected from each group were analyzed and plotted.
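As an illustration of this kind of per-cell intensity analysis (our sketch, not the original Photoshop-based workflow), a scatter plot in the style of Figure 2D could be drawn from extracted intensities roughly as follows; `egfp` and `af594` are hypothetical vectors of per-cell mean intensities:

```r
# Simulated per-cell intensities: weak expressors retain lectin staining,
# strong expressors lose it (the inverse relationship seen in Figure 2D)
set.seed(1)
egfp  <- c(runif(12, 10, 30), runif(12, 150, 250))
af594 <- c(runif(12, 120, 200), runif(12, 5, 25))
plot(egfp, af594,
     xlab = "EGFP fluorescence (arbitrary units)",
     ylab = "AF594-IB4 fluorescence (arbitrary units)",
     pch  = 19)
```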
Conclusions
In conclusion, we have shown here that the EndoGalC/IB4SAP-based targeted toxin system is useful for isolating transgene high-expressors with relative ease. This method would be especially helpful for the large-scale production of recombinant proteins as well as for the acquisition of genetically engineered multidrug-resistant cells.
"year": 2013,
"sha1": "a5357212e7137e9b24f78fe062c4c16d5863ef59",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-7737/2/1/341/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a5357212e7137e9b24f78fe062c4c16d5863ef59",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
The Dashboard Vitals of Parkinson's: Not to Be Missed Yet an Unmet Need
The vitals of Parkinson's disease (PD) address the often-ignored symptoms, which are considered either peripheral to the central core of motor symptoms of PD or secondary symptoms, which, nevertheless, have a key role in the quality of life (QoL) and wellness of people with Parkinson's (PwP) [...].
Commentary
The vitals of Parkinson's disease (PD) address the often-ignored symptoms, which are considered either peripheral to the central core of motor symptoms of PD or secondary symptoms, but which, nevertheless, have a key role in the quality of life (QoL) and wellness of people with Parkinson's (PwP) [1]. Unmet needs in PwP have recently been discussed, with many being related to motor symptoms and, specifically, non-motor symptoms (NMSs), which continue to pose a major challenge to PwP and their clinicians [2]. In addition, several other factors related to enablers of PD expression and progression, as well as comorbidity and co-medication issues, compound the wellness of PwP. We therefore propose that all PwP have a dashboard whereby clinical assessment of these symptoms is noted and managed bespoke to the individual person, a key element of modern personalized medicine for PD [3,4].
The key elements of the vitals to form a dashboard for PwP are shown in Figure 1. These include the essential motor assessment, which is completed in almost all clinics as the initial evaluation in consultations. Motor function can be graded by clinical examination and assignment of the Hoehn and Yahr (H&Y) staging [5], which, despite its clinimetric drawbacks, continues to be the most widely used clinical assessment for tangible and real-life motor assessment of PD and has stood the test of time. If time permits and there is capacity, then detailed motor examinations are possible using the Scales for Outcomes in PD (SCOPA)-motor [6], the Movement Disorder Society Unified PD Rating Scale (MDS-UPDRS) [7], or even the older UPDRS parts 3 and 4 [8]. In the future, PD-validated wearable monitoring scores from sensors, such as the Parkinson kinetograph (PKG), could be added [9,10].
Then, there is the burden of NMS assessment, which can be carried out and graded using either the validated NMS Questionnaire (NMS Quest) or, if time permits, the PD NMS Scale (NMSS) [11-14]. NMS burden (NMSB) should be assessed for every patient and graded, alongside the patients and their caregivers rating their top named bothersome NMS. NMSB is contributed to by a range of NMSs, from cognitive issues and neuropsychiatric problems, such as depression, apathy, and anxiety, to sleep dysfunction, hyposmia, and bladder, bowel, and upper gastrointestinal dysfunction, such as dribbling of saliva. NMSB has a direct correlation with QoL, and a guide to using the NMS Quest in the clinic has also been published. The NMSB score should be integral to the dashboard and ideally measured on a yearly basis [15].
Figure 1. A diagram of the essential "vitals" to be considered in Parkinson's disease, which should form a dashboard of symptoms to be considered and managed in every person with Parkinson's. NMSB, non-motor symptom burden; H pylori, Helicobacter pylori; P gingivalis, Porphyromonas gingivalis; CPS, comorbidity polypharmacy score.
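To illustrate how an NMS Quest-style tally could be recorded on such a dashboard, here is a minimal R sketch (ours, not from the cited instruments); the burden cutoffs below are placeholders and should be replaced with the published NMS Quest/NMSS grading:

```r
# Tally positive items on a 30-item yes/no NMS questionnaire.
# NOTE: grade cutoffs are illustrative placeholders only.
nms_burden <- function(responses) {          # responses: logical vector, length 30
  score <- sum(responses)
  grade <- cut(score, breaks = c(-1, 0, 5, 9, 13, 30),
               labels = c("none", "mild", "moderate", "severe", "very severe"))
  list(score = score, grade = grade)
}
nms_burden(c(rep(TRUE, 7), rep(FALSE, 23)))  # score 7 -> "moderate" under these cutoffs
```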
Vision is a critical aspect of living with PD and is rarely formally addressed in a PD clinic. A range of visual problems can occur in PD, and these have been explored in several studies [16-20]. Vision assessment is important for PwP who continue to drive; in this respect, night blindness (nyctalopia) and convergence insufficiency are particularly relevant. A patient may also have significant discomfort related to dry eyes (xerophthalmia), which are treatable with eye drops, as well as glaucoma. The NMS Quest also allows for declaration of diplopia, which is common in PD and may be related to dyskinesias or convergence insufficiency. Nyctalopia may be related to vitamin A deficiency and may require night-time bedroom lighting to prevent falls should the patient need to get out of bed at night, for instance, to go to the toilet. Significant issues need a referral to an ophthalmologist [21].
Bone health is an integral aspect of Parkinson's wellness and relates to a very high incidence of osteoporosis or osteopenia in PD and related risk of fractures with falls and frailty as well as subsequent risk of hospitalization. A global longitudinal study of osteoporosis in women, the GLOW study, reported PD to be the strongest and most robust contributor to risk of fractures compared with other studied factors [22]. Motor dysfunction, frailty, gait impairment and freezing, postural instability, diphasic or troublesome dyskinesias and falls, polypharmacy, and reduced bone density contribute towards the increased risk of fracture in PD [23-26]. Vitamin D deficiency along with disease duration and severity, age, and low body mass index (BMI) with secondary hyperparathyroidism may also contribute to low bone density and need to be evaluated in all PwP periodically and added to the dashboard [22].
When assessing PwP holistically, the issue of weight is often ignored in clinical consultations, although blood pressure, height, and weight are often routinely collected in the clinic. Low body weight poses a specific challenge in PD, and a low body weight phenotype in PD, the Park-weight phenotype, has been proposed to carry a high risk of dyskinesias, as well as possible links with cognitive dysfunction and hyposmia [27-29]. Weight and BMI, therefore, need to be noted at baseline in all PD cases and routinely charted for monitoring. Unexplained weight loss is a question asked in the NMS Quest and, in addition, may be a problem with some medications, such as intrajejunal levodopa infusion, as well as in those with severe dyskinesias. Unexplained weight loss coupled with rising frailty has also been linked to future cognitive dysfunction and, therefore, may have prognostic consequences [30,31].
Gut and oral health are other important enablers of wellness and health in PD and constitute important "vitals" for the dashboard. Gut dysfunction in PD is well documented and ranges from upper gastrointestinal dysfunction, such as dysphagia and delayed gastric emptying, to constipation [32].
While many of these symptoms are flagged in the NMS Quest and constitute part of the NMSB, some need key and focused attention as they are often ignored in clinics. These include:
1. Specific attention to, and queries about, oral health, gums, and gingivitis, with an examination by a dentist in all cases. Infection with Porphyromonas gingivalis, a Gram-negative anaerobic bacterium, can cause chronic periodontitis and, together with gingipains, possibly systemic inflammation, and may have an overall effect of worsening the Parkinsonian state and even contribute to pathogenesis [33]. A recent study suggested that a high serum C-reactive protein (CRP) level may be a good indicator of periodontitis; this should trigger a referral to a dentist and needs to feature in the dashboard [34].
2. Delayed oral drug absorption, as well as the clinical phenomena of "delayed on" or "no on", or even dyskinesias related to erratic absorption, may reflect delayed gastric emptying and "gastric blocks". Helicobacter pylori (H pylori) infection of the stomach, caused by a Gram-negative bacterium, is common in PD, and several case-control studies report that the prevalence of H pylori infection is five times higher in older PD patients, specifically those over 80 years of age, and up to three times higher in PD patients overall compared with healthy individuals [35].
3. Eradication of H pylori infection using combined antibiotic therapies can improve the bioavailability and pharmacokinetics of levodopa, increasing its absorption by 21 to 54%, despite one negative single-centre study. The latter study, however, did not address blood levels of levodopa and instead focused on quality of life and motor scores [36]. Any patient with a delayed time to 'ON' after oral levodopa, as well as upper gastrointestinal symptoms of heartburn, bloating, and reflux, must be tested for H pylori infection and, if positive, treated [37].
4. Severe constipation may arise from chronic dehydration and impacted faeces. This also interferes with oral drug absorption, and a simple abdominal X-ray may show dilated bowel loops and impacted faeces [38,39]. Treatment with regular laxatives, and even an enema, may then be warranted, as part of the vitals, in relevant cases.
Finally, there is the issue of comorbidity- and medication-related enablers of health, such as impulse control disorders (ICD), as well as medication management. Diabetes mellitus has been proposed as a risk factor for PD, and comorbid diabetes can affect PD [40-42]. Consequently, blood glucose is often listed, along with urate, as an associate in the revised MDS criteria for PD, while antidiabetic drugs are being examined for possible neuroprotection in PD [43]. Diabetes is a risk factor for worsening neurodegeneration, delayed gastric emptying, and cognitive dysfunction and, hence, should be actively listed in the dashboard [44]. Other important comorbidities proposed as risk factors for PD include REM sleep behaviour disorder (RBD), with 80% of RBD patients developing neurodegenerative diseases such as PD [45,46]. Development of PD dementia (PDD) has been proposed to be greater in those with higher UPDRS scores, male gender, hypertension, and, most commonly, a history of neuropsychiatric disorders [47]. As such, greater emphasis should be placed on managing cognitive and psychological disorders in PwP, given the risk of significant progression that can occur in these cohorts; the dashboard therefore includes the MoCA and MDS-NMS, both of which aid in the surveillance of the emergence and presence of psychiatric and other neurological comorbidities.
Polypharmacy is common in PD, related to comorbidities, and risks side effects, which include ICD with dopaminergic drugs, specifically dopamine agonists. Withdrawal of dopaminergic drugs, specifically dopamine agonists, also needs to follow a graded pattern to avoid dopamine agonist withdrawal syndrome [48,49]. The use of dopaminergic drugs carries with it side effects, which must be reviewed in each consultation with PwP to ensure adequate support and holistic care are provided. Side effects include ICD, which can range from hypersexuality, gambling, and binge eating to other impulsive behaviours, as well as neuropsychiatric effects (hallucinations, delusions) and dyskinesias [50,51]. The dashboard includes assessment of these concurrently during consultation (see Figure 2). Furthermore, specific attention needs to be given to anticholinergic drugs, with reference to the anticholinergic index of all drugs being given to PwP, as these drugs should not be used in the cholinergic subtype of PD and generally can worsen cognition and gait in PD. In this respect, a comorbidity polypharmacy score (CPS), defined as the sum of baseline medications and all known comorbidities, may be useful; the severity of CPS has traditionally been stratified as mild (CPS 0-7), moderate (8-14), severe (15-21), and morbid (≥22 points). Pill burden, comorbidity, and swallowing all come into play in this respect [52,53].
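A minimal sketch of how the CPS and its traditional strata could be tabulated follows (our code; the score is simply the sum of baseline medications and known comorbidities, graded with the strata given above):

```r
# Comorbidity polypharmacy score (CPS) with the traditional severity strata
cps <- function(n_medications, n_comorbidities) {
  score <- n_medications + n_comorbidities
  grade <- cut(score, breaks = c(-1, 7, 14, 21, Inf),
               labels = c("mild", "moderate", "severe", "morbid"))
  list(score = score, grade = grade)
}
cps(n_medications = 9, n_comorbidities = 4)   # score 13 -> "moderate"
```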
Conclusions
A dashboard of the vital symptoms, which are enablers of wellness in PD, needs to be considered in every patient with PD, regardless of stage and setting (see Figure 2). The process is simple and needs to be preferably recorded on an annual basis, as part of their regular review. Attention to these vitals would ensure continuing good care for PwP and function as the cornerstone of a holistic, personalised, modern, symptom-driven management strategy.
"year": 2022,
"sha1": "be09c5b256e6617213ba42a04b039d5c0d5de04d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1c50aa34cb1a5202114d06b173967a6e36f88031",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
fasano.franceschini.test: An Implementation of a Multidimensional KS Test in R
The Kolmogorov-Smirnov (KS) test is a nonparametric statistical test used to test for differences between univariate probability distributions. The versatility of the KS test has made it a cornerstone of statistical analysis across many scientific disciplines. However, the test proposed by Kolmogorov and Smirnov does not easily extend to multidimensional distributions. Here we present the fasano.franceschini.test package, an R implementation of a multidimensional two-sample KS test described by Fasano and Franceschini (1987). The fasano.franceschini.test package provides a test that is computationally efficient, applicable to data of any dimension and type (continuous, discrete, or mixed), and that performs competitively with similar R packages.
Introduction
The Kolmogorov-Smirnov (KS) test is a nonparametric, univariate statistical test designed to assess whether a sample of data is consistent with a given probability distribution (or, in the two-sample case, whether the two samples came from the same underlying distribution). First described by Kolmogorov and Smirnov in a series of papers (Kolmogorov, 1933a,b; Smirnov, 1936, 1937, 1939, 1944, 1948), the KS test is a popular goodness-of-fit test that has found use across a wide variety of scientific disciplines (e.g., Atasoy et al., 2017; Chiang et al., 2018; Hahne et al., 2018; Wong and Collins, 2020; Kaczanowska et al., 2021).
Due to its popularity, several multivariate extensions of the KS test have been described in the literature. Justel et al. (1997) proposed a multivariate test based on Rosenblatt's transformation, which reduces to the KS test in the univariate case. While the test statistic is distribution-free, it is difficult to compute in more than two dimensions, and an approximate test with reduced power must be used instead. Furthermore, the test is only applicable in the one-sample case. Heuchenne and Mordant (2022) proposed to use the Hilbert space-filling curve to define an ordering in R^2. The preimage of both samples is computed under the space-filling curve map, and the two-sample KS test is performed on the preimages. While it is theoretically possible to extend this approach to higher dimensions, the authors note that this would be computationally challenging and leave it as an open problem. Naaman (2021) derived a multivariate extension of the DKW inequality and used it to provide estimates of the tail properties of the asymptotic distribution of the KS test statistic in multiple dimensions. While an important theoretical result, practical usage is limited absent a method for computing exact p-values. Peacock (1983) proposed a test which addresses the fact that there are multiple ways to order points in higher dimensions, and thus multiple ways of defining a cumulative distribution function. In one dimension, probability density can be integrated from left to right, resulting in the canonical CDF P(X < x); or from right to left, resulting in the survival function P(X > x). However, since P(X < x) = 1 − P(X > x) (for continuous random variables), the KS test statistic is independent of this choice. In two dimensions, there are four ways of ordering points, and thus four possible cumulative distribution functions: P(X < x, Y < y), P(X > x, Y < y), P(X < x, Y > y), and P(X > x, Y > y). Since any three are independent, the KS test statistic will depend on which is used. To address this, Peacock (1983) proposed to compute a KS statistic using each possible cumulative distribution function, and to take the test statistic to be the maximum of those. Peacock (1983) suggested that for a sample {(X_i, Y_i) : 1 ≤ i ≤ n}, each KS statistic be maximized over the set of all coordinate-wise combinations {(X_i, Y_j) : 1 ≤ i, j ≤ n}. The complexity of computing Peacock's test statistic thus scales cubically with sample size, which is expensive and can become intractable for large sample sizes. Fasano and Franceschini (1987) proposed a simple change to Peacock's test: instead of maximizing each KS statistic over all coordinate-wise combinations of points in the sample, they are maximized over just the points in the sample itself. This slight change greatly reduces the computational complexity of the test while maintaining a similar power across a variety of alternatives (Fasano and Franceschini, 1987; Lopes et al., 2007).
In this article we present the fasano.franceschini.test package, an R implementation of the two-sample Fasano-Franceschini test. Our implementation can be applied to continuous, discrete, or mixed datasets of any size and of any dimension. We first introduce the test by detailing how the test statistic is computed, how we compute it efficiently, and how we compute p-values. We then describe the package structure and provide several basic examples illustrating its usage. We conclude by comparing our package to three other CRAN packages implementing multivariate two-sample goodness-of-fit tests.
Two-sample test statistic
Let S_1 = {X_1, ..., X_{n_1}} and S_2 = {Y_1, ..., Y_{n_2}} be samples of i.i.d. d-dimensional random vectors drawn from unknown distributions F_1 and F_2, respectively. The two-sample Fasano-Franceschini test evaluates the null hypothesis H_0: F_1 = F_2 against the general alternative H_1: F_1 ≠ F_2. In their original paper, Fasano and Franceschini (1987) only considered two- and three-dimensional random vectors, although their test naturally extends to arbitrary dimensions as follows.
For a given point x ∈ R^d, we define the i-th open orthant with origin x as

O_i(x) = {y ∈ R^d : e_{i,j}(y_j − x_j) > 0 for all j = 1, ..., d},

where e_i = (e_{i,1}, ..., e_{i,d}) ∈ {−1, 1}^d. For example, in two dimensions, the four combinations e_1 = (1, 1), e_2 = (−1, 1), e_3 = (−1, −1), and e_4 = (1, −1) correspond to quadrants one through four in the plane, respectively. In general there are 2^d such combinations, corresponding to the 2^d orthants that divide R^d. Using the indicator function I(y ∈ O_i(x)), we define the distance

D(x | S_1, S_2) = max_{1 ≤ i ≤ 2^d} | (1/n_1) Σ_{y ∈ S_1} I(y ∈ O_i(x)) − (1/n_2) Σ_{y ∈ S_2} I(y ∈ O_i(x)) |.   (1)

This is similar to the distance used in the two-sample KS test, but takes into account all possible ways of ordering points in R^d. Note that this distance does not depend on the enumeration of the orthants. The distance is then maximized over each sample separately, leading to the difference statistics

D_1 = max_{x ∈ S_1} D(x | S_1, S_2),   D_2 = max_{x ∈ S_2} D(x | S_1, S_2).

The two-sample Fasano-Franceschini test statistic is then defined as the average of the difference statistics scaled by the sample sizes:

D = sqrt(n_1 n_2 / (n_1 + n_2)) × (D_1 + D_2) / 2.   (2)
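To make the definitions above concrete, the following is a minimal brute-force sketch of the statistic in R (our own illustration, not the package's optimized C++ implementation; S1 and S2 denote numeric matrices with one d-dimensional observation per row, and the scaling follows Eq. (2) as reconstructed above):

# Minimal brute-force sketch of the two-sample Fasano-Franceschini statistic.
ff_statistic <- function(S1, S2) {
  d <- ncol(S1)
  # All 2^d sign vectors e_i in {-1, 1}^d, one per orthant.
  signs <- as.matrix(expand.grid(rep(list(c(-1, 1)), d)))
  # Fraction of the rows of S lying in the orthant with origin x and signs e.
  frac <- function(S, x, e) mean(apply(S, 1, function(y) all(e * (y - x) > 0)))
  # D(x): maximum over orthants of the difference in empirical fractions.
  D_at <- function(x) {
    max(apply(signs, 1, function(e) abs(frac(S1, x, e) - frac(S2, x, e))))
  }
  D1 <- max(apply(S1, 1, D_at))  # maximize over points of the first sample
  D2 <- max(apply(S2, 1, D_at))  # ... and over points of the second sample
  n1 <- nrow(S1); n2 <- nrow(S2)
  sqrt(n1 * n2 / (n1 + n2)) * (D1 + D2) / 2
}

Because every point of each sample is checked against every origin, this direct translation runs in roughly O(2^d N^2 d) time, which motivates the more efficient approach discussed next.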
Computational complexity
The bulk of the time required to compute the two-sample Fasano-Franceschini test statistic (Eq. 2) is spent evaluating sums of the form

Σ_{y ∈ S} I(y ∈ O_i(x)),

which count the number of points in a set S that lie in a given d-dimensional region. The simplest approach to computing such sums is brute force, where every point y ∈ S is checked independently. The orthant a point lies in can be determined using d binary checks, resulting in a time complexity of O(N^2) (where N = max(n_1, n_2)) to evaluate Eq. (2) for fixed d.
Alternatively, we can consider each sum as a single query rather than a sequence of independent ones. Specifically, both sums in Eq. (1) are orthogonal range counting queries, which ask how many points in a set S ⊂ R^d lie in an axis-aligned box (x_1, x_1') × ... × (x_d, x_d'). Range counting is an important problem in the field of computational geometry, and as such a variety of data structures have been described to provide efficient solutions (de Berg et al., 2008). One solution, first introduced by Bentley (1979), is a multi-layer binary search tree termed a range tree. Other slightly more efficient data structures have been proposed for range counting, but range trees are well suited for our purposes, particularly because their construction scales easily to arbitrary dimensions (Bentley, 1979; de Berg et al., 2008).

[Figure 1 (caption fragment): The fraction of each sample in each quadrant is shown in the corresponding plot corner, and the maximum difference over all four quadrants is shown above each plot. D_1 is taken as the maximum of these differences. To compute the Fasano-Franceschini test statistic, the same procedure would need to be repeated, but using points in the second sample to divide the plane instead.]
A range tree can be constructed on a set of n points in d-dimensional space using O(n log^(d−1) n) space in O(n log^(d−1) n) time. The number of points that lie in an axis-aligned box can be reported in O(log^d n) time, and this time can be further reduced to O(log^(d−1) n) (when d > 1) using fractional cascading (de Berg et al., 2008). To compute the two-sample Fasano-Franceschini test statistic, we construct one range tree for each of the two samples, and then query each tree 2^d times. Thus the total time complexity to compute the test statistic using range trees for fixed d is O(N log^(d−1) N), where again N = max(n_1, n_2).
As the range tree method has a better asymptotic time complexity than the brute force method, we expect it to perform better for larger sample sizes. However, for smaller sample sizes, the cost of building the range trees can outweigh the benefit gained by more efficient querying. For each dimension, we sought to determine the sample size N* at which the range tree method becomes more efficient than the brute force method (Figure 2). For d = 2, N* ≈ 25; for d = 3, N* ≈ 200; for d = 4, 5, and presumably all higher dimensions, N* > 5000. As goodness-of-fit tests are generally applied to samples of much smaller sizes than this, we stopped benchmarking here.
Based on these benchmarking results, our package automatically selects which of the two methods is likely faster based on the dimension and sample sizes of the supplied data. However, as we used equal sample sizes during benchmarking, and since computation time can vary depending on the geometry of the samples, the selected method may not actually be fastest. If users are interested in performing benchmarking for their specific dataset, the argument nPermute can be set equal to 0, which bypasses the permutation test and only computes the test statistic.
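For instance, assuming S1 and S2 hold the user's samples, the two methods could be timed along these lines (a minimal sketch; both calls use only arguments documented below):

# Compute only the test statistic (nPermute = 0 bypasses the permutation
# test) under each method and compare wall-clock times on this dataset.
> system.time(fasano.franceschini.test(S1, S2, nPermute = 0, method = "b"))
> system.time(fasano.franceschini.test(S1, S2, nPermute = 0, method = "r"))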
Significance testing
To the best of our knowledge, no results have been published concerning the distribution of the Fasano-Franceschini test statistic. Any analysis would likely be complicated by the fact that, unlike the KS test statistic, the Fasano-Franceschini test statistic is not distribution free (Fasano and Franceschini, 1987). In their original paper, Fasano and Franceschini (1987) did not attempt any analytical analysis and instead performed Monte Carlo simulations to estimate critical values of their test statistic for various two- and three-dimensional distributions. By fitting a curve to their results, Press et al. (2007) proposed an explicit formula for p-values in the two-dimensional case. However, this formula is only approximate, and its accuracy degrades as sample sizes decrease or the true p-value becomes large (greater than 0.2). While this still allows a simple rejection decision at any common significance level, it is sometimes useful to quantify large p-values more exactly (for example, in a cross-study concordance analysis comparing p-values between studies, as in Ness-Cohn et al. (2020)). Effort could be made to improve this approximation; however, it is still only valid in two dimensions, and thus an alternative method would be needed in higher dimensions.
To allow the fasano.franceschini.test package to be applicable to as broad a class of problems as possible, we compute p-values using a permutation test. Under the null hypothesis, the two samples were drawn from the same underlying distribution, and a permutation test leverages this to compute the null distribution of the test statistic. Permutation tests are distribution free, and can be applied to continuous, discrete, or mixed data of any dimension. The test procedure is as follows:

1. Compute the test statistic D for the original samples S_1 and S_2.
2. Pool the two samples, and label each element according to which sample it belongs to.
3. Permute the labels, and split the pooled sample into two new samples S_1^i and S_2^i according to the new labels.
4. Compute the test statistic D_i for S_1^i and S_2^i.
5. Repeat steps (3-4) for every permutation of the labels.
6. The p-value is the fraction of test statistics D_i at least as large as D.
However, as the sample sizes increase to even modest values, the total number of permutations of the labels increases rapidly, and it quickly becomes computationally infeasible to compute the test statistic for every permutation. Thus instead of considering all permutations, we select a fixed number of permutations M with replacement and compute a Monte Carlo approximation of the p-value, given by

p̂ = (b + 1) / (M + 1),

where b is the number of permuted test statistics D_i at least as large as D. If permutations are selected without replacement, this estimator is exact. However, if permutations are selected with replacement, this estimator is slightly more conservative than the exact estimator (Phipson and Smyth, 2010). Unless sample sizes are small, the loss of power will be minimal, as the likelihood of selecting the same permutation multiple times will be negligible.
We select permutations with replacement primarily to circumvent the computationally expensive step of ensuring that repeated permutations are not selected. An additional benefit is that we are easily able to compute a confidence interval for the true permutation p-value, as the number of test statistics for permuted samples at least as large as D is distributed binomially with a probability of success equal to the true permutation test p-value (Good, 2005). We compute the confidence interval using the binom.test function from the stats package, which computes an exact binomial confidence interval as given in Clopper and Pearson (1934).
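For illustration, the full procedure, including the conservative estimator and the Clopper-Pearson interval, can be sketched generically in R as follows (our own sketch, not the package's internal C++ code; statistic may be any function of two samples, such as the ff_statistic sketch above):

# Monte Carlo permutation test with the conservative estimator
# p = (b + 1) / (M + 1), where b counts permuted statistics at least as
# large as the observed one, plus an exact Clopper-Pearson interval.
permutation_test <- function(S1, S2, statistic, M = 100, conf.level = 0.95) {
  D <- statistic(S1, S2)
  pooled <- rbind(S1, S2)
  n1 <- nrow(S1)
  b <- 0
  for (i in seq_len(M)) {
    idx <- sample(nrow(pooled))            # permute the sample labels
    P1 <- pooled[idx[1:n1], , drop = FALSE]
    P2 <- pooled[idx[-(1:n1)], , drop = FALSE]
    if (statistic(P1, P2) >= D) b <- b + 1
  }
  ci <- binom.test(b, M, conf.level = conf.level)$conf.int
  list(statistic = D, p.value = (b + 1) / (M + 1), conf.int = ci)
}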
Package overview
The fasano.franceschini.test package is written primarily in C++, and interfaces with R using Rcpp (Eddelbuettel et al., 2022). The permutation test is parallelized using RcppParallel (Allaire et al., 2022). The package consists of one function, fasano.franceschini.test, for performing the two-sample Fasano-Franceschini test. The arguments of this function are described below.
• S1 and S2: the two samples to compare. Both should be either numeric matrix or data.frame objects with the same number of columns.
• nPermute: the number of permuted samples to generate when estimating the permutation test p-value. The default is 100. If set equal to 0, the permutation test is bypassed and only the test statistic is computed.
• threads: the number of threads to use when performing the permutation test. The default is one thread. This parameter can also be set to "auto", which uses the value returned by RcppParallel::defaultNumThreads().
• seed: an optional seed for the pseudorandom number generator (PRNG) used during the permutation test.
• p.conf.level: the confidence level for the confidence interval of the permutation test p-value. The default is 0.95.
• verbose: whether to display a progress bar while performing the permutation test. The default is TRUE. This functionality is only available when threads = 1.
• method: an optional character indicating which method to use to compute the test statistic. The two methods are r (range tree) and b (brute force). Both methods return the same results but may vary in computation speed. If this argument is not passed, the sample sizes and dimension of the data are used to infer which method is likely faster.
The output is an object of the class htest, and consists of the following components:

• statistic: the value of the test statistic D.
• estimate: the value of the difference statistics D 1 and D 2 .
• p.value: a Monte-Carlo approximation of the permutation test p-value.
• conf.int: a binomial confidence interval for the permutation test p-value.
• data.name: the names of the original data objects.
Examples
Here we demonstrate the basic usage and features of the fasano.franceschini.test package. We begin by loading the necessary libraries and setting a seed for reproducibility.
> library(fasano.franceschini.test)
> library(MASS)
> set.seed(1)

Note that to produce reproducible results, we need to set two seeds: the set.seed function sets the seed in R, ensuring we draw reproducible samples; and the seed passed as an argument to the fasano.franceschini.test function sets the seed for the C++ PRNG, ensuring we compute reproducible p-value estimates.
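As the remainder of the worked examples did not survive extraction, the following hedged sketch illustrates a typical call using the arguments and output components documented above (the bivariate normal samples are our own choice):

# Draw two samples from bivariate normals with different means and test
# whether they share an underlying distribution.
> S1 <- mvrnorm(n = 50, mu = c(0, 0), Sigma = diag(2))
> S2 <- mvrnorm(n = 50, mu = c(0.5, 0.5), Sigma = diag(2))
> result <- fasano.franceschini.test(S1, S2, seed = 1, verbose = FALSE)
> result$statistic   # the test statistic D
> result$estimate    # the difference statistics D1 and D2
> result$p.value     # Monte Carlo estimate of the permutation test p-value
> result$conf.int    # binomial confidence interval for the p-value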
Comparison with other R packages
In this section, we compare the fasano.franceschini.test package with three other CRAN packages that perform multivariate two-sample goodness-of-fit tests.
Peacock.test
The Peacock.test package (Xiao, 2016) provides functions to compute Peacock's test statistic (Peacock, 1983) in two and three dimensions. As no function is provided to compute p-values, we cannot directly compare the performance of this package with the fasano.franceschini.test package. However, a thorough treatment of the power of both Peacock and Fasano-Franceschini tests can be found in both the primary literature (Peacock, 1983;Fasano and Franceschini, 1987) and in a subsequent benchmarking paper (Lopes et al., 2007), which found that the two tests have similar power across a variety of alternatives.
cramer
The cramer package (Franz, 2019) implements the two-sample test described in Baringhaus and Franz (2004), which the authors refer to as the Cramér test. The Cramér test statistic is based on the Euclidean inter-point distances within and between the two samples. This statistic is not distribution-free, and several methods are provided to compute p-values. By default, p-values are estimated using a bootstrapping procedure.
diproperm
The diproperm package (Allmon et al., 2021) implements the DiProPerm test introduced by Wei et al. (2016). The test first trains a binary linear classifier to determine a separating hyperplane between the two samples. The data are then projected onto the normal vector to the hyperplane, and the test statistic is taken to be a univariate statistic of the projected data (by default the absolute difference of means). Like in the fasano.franceschini.test package, significance is determined using a permutation test.
Power comparison
To compare the fasano.franceschini.test package with the cramer and diproperm packages, we performed power analyses using three classes of alternatives: location alternatives, where the means of the marginals are varied; dispersion alternatives, where the variances of the marginals are varied; and copula alternatives, where the marginals remain fixed but the copula joining them is varied.
For location and dispersion alternatives, we used multivariate normal distributions. We denote the d-dimensional normal distribution with mean µ ∈ R^d and covariance matrix Σ ∈ R^{d×d} by N_d(µ, Σ), and sample from it using the MASS package (Ripley, 2021). The d × d identity matrix, which is sometimes used as a covariance matrix, is denoted by I_d. For copula alternatives, we consider the Gaussian copula with correlation matrix P(ρ), which has unit diagonal and all off-diagonal entries equal to ρ, and the Clayton copula with parameter θ ∈ [−1, ∞) \ {0}. We denote the d-dimensional distribution with standard normal marginals joined by a Gaussian copula with correlation matrix P(ρ) by G_d(ρ), and the d-dimensional distribution with standard normal marginals joined by a Clayton copula with parameter θ by C_d(θ). Both distributions are sampled from using the copula package (Hofert et al., 2022). For all power analyses performed, power was approximated using 1000 replications, a significance level of α = 0.05 was used, all samples were of size 50, and all R functions implementing tests were called using their default arguments.
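To illustrate how such a power analysis can be assembled, the following is a minimal sketch for a single Gaussian copula alternative (our own illustration; for speed it uses fewer replications than the settings described above, and assumes fasano.franceschini.test is already loaded):

> library(copula)

# Estimate power against the alternative S1 ~ G_2(0), S2 ~ G_2(0.6):
# standard normal marginals joined by Gaussian copulas with rho = 0 and 0.6.
power_sim <- function(rho, n = 50, reps = 200, alpha = 0.05) {
  rejections <- replicate(reps, {
    # rCopula draws from the copula on [0,1]^2; qnorm gives N(0,1) marginals
    S1 <- qnorm(rCopula(n, normalCopula(0, dim = 2)))
    S2 <- qnorm(rCopula(n, normalCopula(rho, dim = 2)))
    fasano.franceschini.test(S1, S2, nPermute = 100)$p.value <= alpha
  })
  mean(rejections)
}
> power_sim(0.6)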
We first examined the power of the tests on various bivariate alternatives. All three tests had similar power across location alternatives, although the Cramér and DiProPerm tests did tend to slightly outperform the Fasano-Franceschini test. Across dispersion alternatives, the Cramér and Fasano-Franceschini tests had very similar powers. On both copula alternatives, the Fasano-Franceschini test had a consistently higher power than the Cramér test. The DiProPerm test was unable to achieve a power above the significance level of α = 0.05 on any of the dispersion or copula alternatives. This is likely due to the fact that in these instances, there is significant overlap between the high-density regions of the two distributions, making it difficult to find a separating hyperplane between samples drawn from them.

[Figure 3: Visualization of the distributions used in power analyses. Each plot shows two samples consisting of 10000 points each. The first sample S_1 is shown in blue, and the second sample S_2 is shown in red. (a) S_1 ∼ N_2(0, I_2) and S_2 ∼ N_2(0.4, I_2). (b) S_1 ∼ N_2(0, I_2) and S_2 ∼ N_2(0, I_2 + 1.5). (c) S_1 ∼ G_2(0) and S_2 ∼ G_2(0.6). (d) S_1 ∼ C_2(1) and S_2 ∼ C_2(8).]

[Figure 4: Comparison of power of the Fasano-Franceschini, Cramér, and DiProPerm tests on various bivariate alternatives. (a) Location alternatives, with S_1 ∼ N_2(0, I_2) and S_2 ∼ N_2(µ, I_2). (b) Dispersion alternatives, with S_1 ∼ N_2(0, I_2) and S_2 ∼ N_2(0, I_2 + ε). (c) Gaussian copula alternatives, with S_1 ∼ G_2(0) and S_2 ∼ G_2(ρ). (d) Clayton copula alternatives, with S_1 ∼ C_2(1) and S_2 ∼ C_2(θ).]
We next examined how the power of the three tests varied when the two sampling distributions were kept fixed but the dimension of the data increased. On the location alternative, the power of the Cramér and DiProPerm tests was quite similar, monotonically increasing to one as dimension increased. The power of the Fasano-Franceschini test increased until d = 5 and then monotonically decreased to α = 0.05 by d = 20. We see similar results for the Cramér and Fasano-Franceschini tests on the dispersion alternative. On copula alternatives, both the Cramér and Fasano-Franceschini tests have monotonically increasing power as dimension is increased. However, whereas the Fasano-Franceschini test is able to achieve a power of nearly one near d = 10 on both alternatives, the Cramér test's power grows at a much slower rate. The DiProPerm test is still unable to attain a power above α = 0.05 on the dispersion alternatives or either of the copula alternatives.
Overall, the Cramér and DiProPerm tests perform better than the Fasano-Franceschini test on location alternatives, especially as dimension increases. On dispersion alternatives, the Fasano-Franceschini and Cramér tests have comparable performance for low dimensions, but the Cramér test maintains a higher power for high dimensions. However, in these cases the marginal distributions differ, and thus a multivariate test is not strictly necessary as univariate tests could be applied to the marginals independently (with a multiple testing correction) to detect the difference between the multivariate distributions. On copula alternatives, where a multivariate test is required, the Fasano-Franceschini test consistently outperformed both the Cramér and DiProPerm tests.
Summary
This paper introduces the fasano.franceschini.test package, an R implementation of the multidimensional two-sample goodness-of-fit test described by Fasano and Franceschini (1987). We provide users with a computationally efficient test that is applicable to data of any dimension and of any type (continuous, discrete, or mixed), and that demonstrates competitive performance with similar R packages. Complete package documentation and source code are available via the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/fasano.franceschini.test and the package website at https://nesscoder.github.io/fasano.franceschini.test.
Computational details
The results in this paper were obtained using R 4.1.1 with the packages fasano.franceschini.test 2.1.1, diproperm 0.2.0, cramer 0.9-3, MASS 7.3-54, and copula 1.1-0. All computations were done using the Quest high performance computing facility at Northwestern University. R itself and all package dependencies are available from CRAN at https://cran.r-project.org.
"year": 2021,
"sha1": "08776db6ec83929de0ff87dbc374a09edf445bf5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "08776db6ec83929de0ff87dbc374a09edf445bf5",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics",
"Biology"
]
} |
Critical Components of Airport Business Model Framework: Evidence from Thailand
Because of the scarcity of extant studies in the literature on airport business models, this study aims to identify a framework for airport business model design. Exploratory research obtained from key Thai respondents was used, and the data analysis was further enhanced by an extensive review of related grey literature available in public domains. With our qualitative data analysis, we propose the generic airport business model framework as a foundation for designing business models. Strategic partners, core business activities, human resources and sustainability-related projects should be considered basic components driving an airport to achieve high performance. The remaining business model components should be customised depending on business environments and location contexts.
Introduction
Deregulation of the airline industry exerted pressure on airports as service providers in the aviation sector. The growth in demand for air transport caused airports to invest in the development of infrastructure and service quality [1][2][3]. Therefore, airports, which are mostly owned and operated by governments, shifted to public enterprise management and multi-business companies to become more competitive and profitable [4][5][6]. Massive funding was required to refurbish airports and improve their cost efficiency. The airports were tasked with finding managerial instruments to provide a new business model [7]. Consequently, many scholars in the field are investigating the factors that affect airport efficiency, including measurements, benchmarks and other airport development tools that help retain airport strategic planning and nurture competitive advantage [8]. However, little attention has been paid to airport business model (ABM) propositions [6,[9][10][11][12], despite the positive effects of business models on a firm's performance [13][14][15][16][17][18][19][20].
This paper makes an academic contribution to the ABM literature. Firstly, it provides the framework for a business model design, by providing the basic components used to illustrate focal business operations that create value for users. As the business model has been the unit of analysis in the management science literature for decades [21,22], there are few works related to ABM. Secondly, the study seeks to enhance data analysis, by examining lessons learned from the World's Best Airport. The associated findings shed light on the details of each business model component that enabled this airport to receive the latest World's Best Airport award from Skytrax. Finally, the study presents in-depth information collected from airport management in Thailand. Key informants from various airport ownership patterns were invited to express their own ideas regarding the improvement of airport performance and key activities that enhance operational efficiency.
The key components were determined using these methods to construct an ABM analytical framework.
This paper develops a framework to address the components that airport management should focus on to design an ABM in order to improve airport performance. Exploratory research was used to discover the essential ABM components. Then, the data analysis was enhanced by an extensive review using the lessons learned from recipients of the highest honour of the World Airport Awards instituted by Skytrax: namely, the World's Best Airport award. This paper is organised as follows: Section 2 reviews the literature relevant to business models as they have been studied in the airport literature; Section 3 explains the qualitative research design; Section 4 presents the findings and discussions; finally, the managerial implications are provided in Section 5.
Business Model Conceptualisations
It is essential to define a business model, because it shows the relationships between a firm, strategy and performance [23]. According to Table 1, business model terminology is fragmented, and consists of diverse understandings of its terms from various scholars in different fields of study [24]. The definitions of terms are therefore unclear, and lack a theoretical background. This is something often misused and misinterpreted by scholars, practitioners and the business sector, despite its frequent use in the literature [25][26][27]. This is because most works adopt the case study methodology, especially in information technology businesses, instead of empirical testing and theoretical development. Consequently, the growing body of empirical testing and theoretical development is understudied [28].
Although business model terminology is diverse, we can see consolidation trends in its conceptualisation. Table 1 presents the evolution of business model terminology, and some selected conceptualisations of the term. It is noticeable that business model terms fall into two categories. The first group defines a business model as a model [27,29,30], a way [31,32], a component [23], a template [20], a means [25], a tool [33], a plan [34], a system [35] and a set or bundle of activities [13,36], indicating how a firm performs overall as a business.
Table 1. Business model definitions.

Porter (2001) [27]: The definition of a business model most often refers to how a firm does business and creates revenue. Simply put, this model sets a low bar for setting up and building a firm's operation.

Chesbrough and Rosenbloom (2002) [31]: In a general sense, a business model is how a business is run, whereby a firm can organise itself to generate revenue. It shows how a firm makes money by indicating its standing in the value chain.

Magretta (2002) [38]: A business model tells the story of how a firm sells its products and delivers value.

Hedman and Kalling (2003) [23]: Business models are used to illustrate the key components of a company.

Morris et al. (2005) [30]: It is the firm's economic model. It involves profit generation, revenue sources, methods of pricing, cost structures, profit margins and expected volumes.

Osterwalder et al. (2005) [40]: A business model is a conceptual tool containing elements that show the relationship and present the logic of a specific business. It describes the company value offered to various customer segments. In addition, it shows the architecture and networks of partners that deliver value to create and sustain revenues and profits.

Chesbrough (2007) [42]: Business models perform two crucial functions. They act as value creators and value captors. They define a series of activities from purchasing to final customers.

Zott and Amit (2007) [19]: A business model explains how a firm is connected with external parties, and how a firm interacts in economic exchanges to generate value for external stakeholders.

Zott and Amit (2008) [20]: A business model is a structural template describing a firm's focal transactions with all stakeholders.

Baden-Fuller and Morgan (2010) [25]: A business model is a means of describing and classifying businesses. It operates as a site for scientific investigation and provides guidelines to managers.

Amit and Zott (2010) [36]: A business model is the bundle of activities aimed to serve market needs and parties, and it represents how these activities are linked together.

Chesbrough (2010) [37]: A business model is a model fulfilling these functions: conveying the value proposition; classifying market segments and identifying the revenue creation mechanism; identifying the structure of the value chain; describing the revenue mechanisms that a firm offers; assessing the structure of profit, revenue and cost; illustrating the standing of a firm within the network connecting customers and suppliers; and preparing competitive strategies.

Demil and Lecocq (2010) [43]: A business model may refer to the articulation of various company activities designed to provide value to customers.

Giesen et al. (2010) [15]: Business model components relate to these questions: what value is handed to customers; how the value is delivered to customers; how a firm's revenues are created; and how a firm posits itself in the industry.

Osterwalder and Pigneur (2010) [45]: A business model is a description of the rationale of how a firm creates, delivers and captures value.

Teece (2010) [47]: A business model explains the architecture of value creation and delivery, and captures the business mechanisms it uses.

Zott and Amit (2010) [49]: A business model acts as a system of activities transcending the firm's pinnacle and boundaries that allows a firm to create and share value.

Casadesus-Masanell and Ricart (2011) [44]: A business model contains the components that inform managerial decisions as to the manner in which a firm should operate, the consequences of those managerial decisions and their impacts on the firm's strategy for value creation and value capture.

Cavalcante et al. (2011) [33]: A business model is a tool to provide stability for the development of a firm's activities. This model is flexible and subject to change.

Zott et al. (2011) [50]: Business models provide a holistic view of how a firm runs its business. They explain not only how value is generated but also how it is captured.

Trimi and Berbegal-Mirabent (2012) [39]: A business model explains how a firm delivers value to users, where to allocate the money for the firm's sustainability and how to run the company.

Boons and Lüdeke-Freund (2013) [34]: A business model provides a plan that indicates how new ventures are able to become profitable.

Zott and Amit (2013) [32]: Business models depict the ways a firm does business. They are crafted to best provide customer satisfaction.

Bocken et al. (2014) [51]: A business model is defined by three components: value proposition; value creation and delivery; and value capture.

DaSilva and Trkman (2014) [26]: A business model is the combination of resources through transactions to create value for a firm and its customers.

Amit and Zott (2015b) [21]: A business model explains the system of activities carried out by a firm, its parties and the mechanisms linking these business activities to one another.

Joyce and Paquin (2016) [52]: A business model is a rationale of how a firm creates, delivers and captures value.

Wirtz et al. (2016) [24]: Apart from value creation and market component considerations, a business model simplifies and represents a firm's related activities to secure a competitive advantage.

Massa et al. (2017) [29]: A business model explains how a firm is run in order to achieve its goals, such as profitability, growth, and interaction with society and impacts, among others.

Saebi et al. (2017) [46]: A business model is an architecture linking a firm's value proposition, market segment, value chain structure and value capturing.

Geissdoerfer et al. (2018) [53]: Business models are defined as simplified versions of value proposition, creation, delivery and capture. They represent the interactions among these elements within a firm's unit.

Hahn et al. (2018) [54]: A business model is the content, structure and control of transactions designed to create value over the exploitation of business opportunities.

Teece (2018) [48]: A business model illustrates the architecture whereupon a firm generates and delivers value to users. It describes the mechanisms for capturing a share of value. It is a combined set of components, including costs, revenues and profits.

Afuah (2019) [13]: A business model is a set of activities performed to generate and utilise business resources in order to create, deliver and monetise benefits to customers.

Di Tullio et al. (2021) [55]: A business model is a communication device that underlies the value-creation process.
Airport Business Model Literature
In contrast to business model definitions, ABM has received less attention in the literature in terms of both conceptualisation and related studies [6,[10][11][12]56]. According to Frank [10], the first works describing overall airport operations were those by De Neufville and Odoni [2] and Gillen [57]. According to our review, however, we argue that the concept of ABM is not succinctly presented in these articles, because the authors explored how airport systems adapted to changes; hence, the literature review on these issues was much less comprehensive.
To the best of our knowledge, the only literature containing the keywords 'airport business model' or 'business models for airports' comprises the works of Baker and Freestone [58], Frank [10], Kalakou and Macário [12], Everett Jr [59], Efthymiou and Papatheodorou [60] and Rotondo [6]. Table 2 presents this literature, together with the conceptualisations and findings of the studies that include the keywords 'airport business model' in their titles.
Table 2. Airport business model literature.

Baker and Freestone (2010) [58]. Definition: not clearly specified, but we can infer that they intended to describe how those airports do business. Aspect of study: the paper compared how two sampled airports of different scales embraced the airport city concept to develop their properties commercially in response to changes.

Frank (2011) [10]. Definition: the business model analyses and depicts the way the firm operates. Aspect of study: the author suggested a structure for airport business models, comprising the customer value proposition, breakthrough rule changing, regulators, key profit formula, stakeholders, governance mix, reform opportunity cost, key resources, key processes, network value, risk and externalities.

Kalakou and Macário (2013) [12]. Definition: an attempt to conceptualise business operations through a model, treating it as an operational tool to improve the firm's performance and revenues. Aspect of study: they explored a new framework for airport business model design by adapting elements from Osterwalder and Pigneur (2010). The authors presented additional building blocks, including the so-called regeneration factor, which includes expected investments and expected returns. The study concluded that high-performance airports shared the same airport business model components.

Everett Jr (2014) [59]. Definition: a business model is part of a business plan; this schematic model provides an overall picture of a firm, and is more comprehensive than other revenue or operating models. Aspect of study: the paper presents the framework for developing airport operations in a changing business environment. Using the example of a small airport in the USA, the author adopted components from Osterwalder and Pigneur (2010) to illustrate the application of the framework.

Efthymiou and Papatheodorou (2018) [60]. Definition: not given, but we can interpret that it means how airports run businesses under changing environments. Aspect of study: the authors present how airport businesses evolve their operations during different periods of the aviation industry, in response to changing airline business models.

Rotondo (2019) [6]. Definition: the author defines a business model using three elements: structure, value proposition and the market. Aspect of study: the study aims to develop a systematic and theoretically founded framework with which to interpret airport business models. It provides a structured and comprehensive examination of strategic methods using an approach to evaluate business models, and demonstrates the application of the concepts using airports in Italy.
In the literature, Baker and Freestone [58] explained how Brisbane Airport and Athens Airport adapted the airport city concept to their business operations. Although their work contains keywords relating to the ABM, their discussion lies quite far from this paper's research objective. Similarly, Efthymiou and Papatheodorou [60] discussed how airports ran businesses from pre- to post-deregulation, and described how airport businesses apply the concept of the airport city, or Aeropolis, to their operations.
By contrast, Frank [10] employed exploratory research using in-depth interviews to examine airport business practices, and proposed different types of ABM for the Talip International Airport (TIA), Mills International Airport (MIL) and Malik Airport (MAK). The author proposed the airport business model matrix, the components of which included customer value propositions, key profit formulas, stakeholder rewards, key processes, network value, and innovation and key resources. She concluded that the ABM design should be heterogeneous in nature, and that it should supply a holistic view of airport operations.
Kalakou and Macário [12] used the common Business Model Canvas (BMC) to conduct an analysis of 20 ABMs, because the authors believed that this model captured the overall airport operations as well as the business environment. They found that types and volume of traffic have a high impact on business models; in addition, they further developed Osterwalder and Pigneur's [45] analytical framework, with the inclusion of a regenerator factor, which reflected expected investments and returns. The authors agreed with Frank [10], that an ABM should not be static, and should reflect present operations for future model development. Moreover, the authors explained that each element in the BMC illustrated the innovative process of airport business modelling. This is because all elements of the BMC affect the new value proposition; it therefore creates innovation building on current airport operations. Everett Jr [59] employed the same framework to explain small airport operations in Eastern Pennsylvania, which were operated by Lehigh-Northampton Airport Authority (LNAA). The author employed nine element-building blocks, and holistically described the current operations related to the airport business environment. The author presented the overall operations for the selected airport.
The work by Rotondo [6] has recently enriched the ABM literature. In this study, Rotondo [6] made a distinct attempt to interpret and provide an ABM framework, by conducting a review of the strategic management and airport-related literature on capturing the environment affecting airport business operations. On the basis of Casadesus-Masanell and Ricart [44], he constructed the ABM framework by asking the questions that represent the components underlying the core logic for creating and capturing value. His goal for developing this framework was to assess the Italian airport system. However, Rotondo's ABM framework [6] lacks in-depth information from airport management that can potentially be crucial for ABM development. Therefore, the current study goes beyond his study by employing an exploratory research approach to build upon his findings and add value to ABM literature.
However, owing to the limitations of ABM conceptualisation, we began by using Rotondo's [6] ABM framework and the BMC of Osterwalder and Pigneur [45] as guidelines for developing the ABM analytical framework of this study, because these two frameworks share similar conceptualisations. Rotondo's [6] ABM framework provides details on each business model component, especially for the airport business. The BMC illustrates more comprehensive business model components, and shows the linkage between business activities and value creation. It provides a concrete model and visual presentation [51] that allows an understanding of business operations [61] and ideal foundations [52,62] for further study on developing an ABM analytical framework.
Analytical Framework of Business Model Design
To design business models, the components adopted in the business model ought to be consistent with the goals of the firm [19], and aligned with the business model definition employed. This is because differences in definitions create disparities in business model components and designs. Various suggestions have been made as to what the appropriate business model components should be. Each definition provides different business model components that affect how firms design business models, such as the proposal by Hedman and Kalling [23], who suggested that business model components include customers, competitors, offerings, activities and organisation, business resources and production factors. However, some studies present a common systematic process to design business model archetypes that corresponds to the business model definition given in this study, such as the BMC published by Osterwalder and Pigneur [45]. Because the BMC components are classified into value and efficiency parts [12], the BMC was adopted as the elementary framework for qualitative analysis in this study (Figure 1).

1. Customer Segments (CS) comprise the various groups of customers who are the source of earnings in a business. If a firm offered products and services to various CS, it would be required to justify and prioritise them to deliver the right value to the right groups. CS can be categorised into mass markets, niche markets, segmented markets, diversified markets and multi-sided platforms or multi-sided markets, which are specifically regarded as segmented for airport businesses.
2. Value Propositions (VP) are the goods and services a firm offers that create value for each customer segment. They also indicate customer pain points and suggest solutions. VP involve these factors: newness, performance, customisation, design, brand, getting the job done, price, cost and risk reduction, and accessibility and usability.
3. Channels (CH) refer to the selected channels through which a firm communicates with each customer segment about proposing value. Finding the right channel helps a company raise awareness among customers about its products, and allows the company to assess the best mode to convey messages to customers.
4. Customer Relationships (CR) elucidate the forms of interaction between a firm and each specific customer segment. CR can be divided into several categories: personal assistance, dedicated personal assistance, self-service, automated services, communities and co-creation.
5. Key Resources (KR) enable VP to customers and markets, maintain CR with CS and generate revenues. KR can be classified as physical, intellectual, human and financial.
6. Key Activities (KA) are the set of activities a firm needs to drive its business model. They explain the main activities a firm should undertake to deliver VP. Such activities include production, problem solving, platform provision or network management.
7. Key Partnerships (KP) are the networks underlying a supplier-firm partnership. The aims of networking partnerships are optimisation and economies of scale, reduction of risk and uncertainty, and acquisition of activities and business resources to extend a firm's capabilities.
8. Revenue Streams (RS) show the revenue stream from each customer segment. This involves two different RS: transaction revenues, which are payments from one-time customers, and recurring revenues, which are continuous payments from customers. To generate RS, a firm might sell assets; collect usage, brokerage and subscription fees; or lend, rent, lease, licence or sell advertising.
9. Cost Structure reflects the important costs incurred by the other eight block operations. Once the other blocks are detailed, it is possible to calculate all inherent costs, which can then be minimised. However, this depends on the type of business model, which might fall between being cost-driven and value-driven.
Research Methodology
Exploratory research was employed to answer the research question, as to what components airport businesses should use in order to construct a business model to improve airport performance. This qualitative method was used to discover a study in grounded theory, and to seek additional information due to the limitations of the literature on this issue [63][64][65]. Firstly, we conducted in-depth interviews to search for business model components essential to efficient airport business operations. Secondly, we enhanced the data analysis further by examining the operations of Singapore Changi International Airport, recipient of the World's Best Airport award from Skytrax, to draw lessons learned about constructing the ABM framework.
Management groups from various airport ownership patterns in Thailand, and airport scholars, were contacted for in-depth interviews, both to collect data from key informants and to allow data triangulation (Table 3). The inclusion criterion was that key informants held management positions, or at least the position of director, and had experience in strategic airport planning and business management. Key informants from the Airports of Thailand (AOT), representing privatised airports, and from Bangkok Airways Plc., which administrates private airports in Thailand, were invited to join the interview sessions. The opinions of the management of the Department of Airports (DOA), a public airport agency in Thailand, from both central and regional units were obtained. We invited airport scholars experienced in conducting at least one national airport development research project, or who held the position of a member of the advisory board of the Network of Thailand Civil Aviation Development (NTCAD), to give their opinions on the topics. Table 4 contains the set of semi-structured questions, developed from Osterwalder and Pigneur [45] and Rotondo [6], that was asked of the key informants during the in-depth interview process. With information collected from 11 key informants, the qualitative dataset met the data saturation principle, a benchmark for discontinuing data collection [66]. After data transcription, the dataset was analysed using thematic analysis, a suitable method for exploratory research [67]. Thematic analysis is used to identify and organise information into patterns of meaning, through a process of coding and grouping keywords across qualitative datasets [68,69].

Table 3. Key informants for in-depth interviews: privatised airports, 2; private airports, 1; public airports (central unit), 2; public airports (regional units), 4; airport scholars, 2; total key informants collected, 11.

Moreover, to enrich the analysis of data from the in-depth interviews, we gathered scientific grey literature available in the public domain (comprising airport newsletters, annual reports, corporate publications, airport websites and fact sheets) [70,71] to draw the lessons learned from Singapore Changi International Airport, the recipient of the World's Best Airport award from Skytrax. According to Song, Guo and Zhuang [72], Skytrax, as an organisation, provides yearly performance benchmarks in terms of overall quality. It is one of the most well-known world airport ranking organisations, and is considered a leader in air travel research [73]. Singapore Changi International Airport has frequently been rated the top airport on several airport charts [74]. It received the World's Best Airport award for the first time in 2000, achieved it for eight consecutive years from 2013, and has received it more than 10 times in total [75]. Therefore, we selected this airport as a case study to supplement our data analysis by tracking its business operations accomplishments.
Findings
Using data triangulation, we found four main business model component keywords that met the data saturation principle. Strategic partnerships, core business activities and human resources were the most common domains identified during the thematic analysis process, and most of the key informants agreed that these elements, together with sustainability-related projects, play critical roles in airport business operations.
Strategic Partnerships
To improve business performance, strategic partnerships should be a focus. Airport strategic partnerships, in this sense, comprise business and non-business partners. Most of the key informants agreed that the airport authority should encourage stakeholders to participate in business planning. For example, one of the key informants said: 'Airport development is the responsibility not only of airport management but also of other parties in the area, such as provincial government agencies, local entrepreneurs, trade chamber organisations, tourism authorities and educational institutions. They can actively work together as partners to develop the airport business beyond being only a transportation platform.'
What we learned from the in-depth interviews was corroborated by the lessons learned from Singapore Changi International Airport. We found that the airport connects strongly with its business partners, which creates cooperation among them and pools business resources, enabling them to create impressive mega-projects, such as the Jewel Changi Project, in the airport. Moreover, the airport's strategic partnerships align in proposing value to all stakeholders by eliciting cooperation. The airport holds business meetings with its strategic partners to discuss ongoing and future activities, and even cross-industry partnerships are found in its business operations. The airport has developed various channels to communicate with its users, using offline and online media to listen to customers' and partners' complaints and expectations, in order to equip the airport to respond to their needs.
Besides its strategic business partners, the airport also connects with communities and educational institutions. Strong partnerships, built through airport stakeholder engagement projects such as the mentorship for the Saturday Night Lights sport volunteer event programme, the 5-Day job attachment programmes, the hands-on-experience internship programmes, the CAG scholarships and the youth passport programme, provide major benefits. Such partnership projects create a sense of belonging, and engage the surrounding communities and universities.
Core Business Activities
According to the in-depth interviews, we found two sub-keywords under 'airport business activities'. The key airport business activities that foster airport performance should be based on business development and destination development within an airport. For example, one of the key informants said: 'If an airport posits itself as a 1.0 airport, then it can be only a transportation platform. But if it develops itself as a destination using the concept of aero marketing for developing its businesses, then it can achieve better operations.'
(1) Business development: Airport management should provide training for positions involved in airport business development, because budget cuts have put pressure on airport operations. In addition, an airport needs to proactively increase utilisation by attracting airlines to operate more flights. Since non-aeronautical revenues now play a crucial part in airport revenue generation, an airport should convert available areas into commercial platforms. To develop airport businesses efficiently, an airport needs to listen to stakeholders and build KP. Public hearings are necessary, because they not only reduce the chances of losses due to abandoned projects, but also make management aware of the expectations and dissatisfactions of all airport users.
(2) Destination development: To develop airport businesses together with destination development, an airport needs to develop its individual identity. The attractions of destinations near an airport should be researched and promoted. To link the attractions with airport businesses, airport staff should work with provincial authorities and other KP, such as government agencies, communities, airlines, local brands and well-known brands.
This idea from the key informants was consistent with the findings from an extensive review of the World's Best Airport. We learned that Singapore Changi International Airport has implemented several proactive strategies to enhance airport revenues through commercial activities, using e-commerce channels to reach out to airport customers. These activities have been developed not only for passengers but also residents, athletes, gastronomes and tourists. These groups of airport users have the potential to increase non-aeronautical revenues. We found that the airport develops the areas effectively by arranging monthly and yearly business and leisure events. The airport turns itself into a destination by way of business partnership collaborations. Projects at the Singapore Changi airport include the HSBC Rain Vortex, the Shiseido Forest Valley, Canopy Park and the Changi Experience Studio.
Human Resources
Most of the key informants mentioned the importance of human resources, because these resources play a significant role in airport performance development. The sub-keywords put forward by the key experts can be classified as follows: (1) Skills necessary for airport people: Working in an airport requires specific knowledge for specific job functions. However, most workers in an airport lack a solid foundation in airport business, and are unaware of the goals and mind-sets related to airport operations. Some of the management staff may have been promoted from non-airport organisations because of political motivations; therefore, they do not have the relevant background, and do not realise the importance of an airport with regard to social and local economic development. One of the key informants said: Many of the top management staff still have a perspective that focuses on infrastructure development, despite the fact that the airport business itself is useful in terms of economic aspects.
In addition to having an airport business orientation, management should have skills relevant to business development and aero marketing. At present, the government budget for public airports is declining, and the airports are forced to generate revenues themselves. Hence, skilled airport staff who are motivated to develop the airport businesses and do the marketing are indispensable KR.
(2) Incentives towards their operations: The structure of the civil servant system has a direct impact on some operational airport staff. Because airport budgets have been slashed, some airports have been forced to outsource employment or hire limited numbers of permanent and temporary employees. As previously mentioned, working in an airport requires specific knowledge, especially in positions related to safety and security; therefore, the training budget is largely spent on temporary employee positions. However, because there are no promotions or salary increments for temporary employees, motivation for employee engagement is almost zero. This lack of motivational incentives results in operational inefficiency.
(3) Manpower planning: The shortage of manpower in an airport is another issue that has been raised. Some airports offer only a few civil servant positions, and hire limited numbers of permanent and temporary employees. This means that some staff are required to work double shifts, which leads to fatigue and inefficiency in airport operations.
Personnel development is a key resource for airport business operations. Although many job functions have been replaced by technological devices, passengers prefer to communicate with other humans rather than communicating with artificial intelligence devices. Therefore, some of the key experts insisted that forming a team that has an airport business and goal orientation is an essential factor in improving efficiency.
Singapore Changi International Airport is run by CAG, a corporatised company, so manpower planning and other relevant human development issues are manageable. The airport focuses on talent pool management. It provides various engagement and training programmes for its staff, and creates an inclusive, open, collaborative and encouraging culture through crowdsourcing, personal development and growth. Moreover, the airport offers scholarship programmes to attract talented young people from local universities. This ensures that the airport attracts and retains a good staff composition.
Sustainability-Related Projects
Airport operators need to prepare a systematic, well-developed plan for issues related to sustainability. There are several factors and dimensions to be considered. Firstly, a plan should be developed to absorb the necessary expenditure for compliance with laws regarding noise pollution, waste management, carbon footprint and other environmental problems. In addition, airport management needs to consider the potential effects of airport operations on local communities. For example, if airport expansion is being contemplated due to a growth in air travel, then operators are required to address the impacts of an increase in the frequency of flights. Moreover, business operations connected to shared values among airport stakeholders are an important part of improving sustainable business development.
During our extensive review, we found Singapore Changi International Airport itself engaging in several sustainability-related projects. The airport focuses on Sustainable Development Goals (SDGs) in compliance with the United Nations. Such projects include the Singapore Climate Action Plan, and the Singapore Zero Waste and food-waste digester programmes. It also founded the Sustainability Working Group and Changi Foundation to begin corporate social responsibility programmes for airport stakeholders, such as residents and local educational institutions.
Discussion and Implications
In recent decades, airport development has progressed beyond merely providing the infrastructure required for flights, and offering services only to airlines and passengers. Given the importance of commercial revenue for airport operations [3,11], attempts have been made to investigate innovative methods of developing a business model to improve airport performance.
Although we see a trend toward consolidation of business model conceptualisation, the ABM is in the process of development. Frank [10] presented the matrix of reference for the ABM. The author used 12 building blocks, and some of them shared the same elements as those found in Osterwalder and Pigneur [45]. She added other components that play a part in airport operations, such as ownership and government, regulators, externalities, risk management and reform opportunity cost. Conversely, Everett Jr [59] reconsidered and analysed the ABM of a small airport in Pennsylvania, using the conventional Osterwalder and Pigneur [45] model. Kalakou and Macário [12] modified the BMC of Osterwalder and Pigneur [45] by considering the life cycle of the ABM. The most recent work on ABM was that by Rotondo [6]. He illustrated the ABM by reviewing the business model literature, and created a framework using structure, VP and markets.
Although those studies attempted to suggest the ABM framework, none of them addressed the critical components as a foundation for designing the ABM. Based on our data analysis, we propose the generic airport business model (GABM) as a fundamental component for designing an ABM (Figure 2). The GABM should be founded on the basis of four main critical base components as a tool for creating value for airport users: Strategic Partnerships, Core Business Activities, Human Resources and Sustainability-related Projects. The four main airport business components in GABM have a close connection, because they ultimately affect the cost and revenue of airport business operations.
Figure 2. The generic airport business model framework.
With strong, engaged strategic partnerships (such as airlines, central and local government agencies, chambers of commerce, tourism authorities, entrepreneurs and educational institutions), a variety of business development activities may benefit disparate airport users. The capital-intensive nature of airport businesses [76], and the diverse groups of airport users, affect the different business development activities and values that the airport has to deliver. These business partners perform critical roles in driving core business activities, and airport outputs depend on the levels of commercial partner collaboration [10]. Because of the heterogeneous users in the airport business, we argue against Gillen's [57,77] proposition that an airport should be operated as a two-sided platform.
The employment of skilled airport staff, keen on airport business administration, is also key to proactively driving an ABM. An appropriate quantity of manpower and a high-quality workforce will play increasingly important roles in pushing other business model components forward. This study suggests another business model component: sustainability-related projects. These should be added to the ABM framework because sustainability projects create a relationship, and a sense of belonging, among airport stakeholders that encourage commitment and collaboration in other business development projects. Although this component plays a large part in the sustainable business model, it is rarely discussed in the business model literature [51,52].
The rest of the components are customised ABM components that should be developed on the basis of contextual circumstances. In other words, the ABM should be tailored, with regard to contexts and available resources, around the airport location [10,12]. There is a diverse range of airport user markets, which affects how airport management designs a business model. For example, if the airport is in a military area, the design of an airport business model should take account of military legislation and related policy, since the military is one of the airport stakeholders. ABMs therefore need to be dynamic in nature. We suggest the deployment of a decentralised, contextualised airport management policy that aligns with the local business environment and location.
Conclusions
Because of recent developments in the aviation industry, airports have been forced to find their own sources of finance and improve their efficiency. Therefore, many scholars in the industry have focused their attention on airport development tools and performance improvement. Although business models have been shown to be an effective tool for improving a firm's performance, the literature relating to the ABM is still far from complete. Using an exploratory research approach, this study developed an ABM framework to address this shortcoming.
Drawing from the literature review and our data analysis, we filled the gap in the literature by proposing that the ABM is an illustration of overall business operations that should be structured with strategic partners, core business activities, human resources and sustainability-related projects that assist airport operators in creating value for users. We introduced four ABM components as basic components for further designing an ABM, the remaining components of which should be heterogeneously innovated with regard to the context of airport surroundings.
Based on our analysis, this study has some limitations that should be addressed in future ABM studies. The implementation of the GABM as a basic component should be observed and put into practice, by designing such a proposed framework, together with the addition of other business model components depending on business environment and location contexts, for general airports. This is to verify ABMs in different contexts. Moreover, it will further enrich the ABM literature. Future research could employ empirical analysis to investigate the relationships among our proposed ABM components.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Conflicts of Interest:
The authors declare no conflict of interest.
Hot Tear Formation During the Casting of Al–Zn Binary Alloys
The hot tearing susceptibility of Al–Zn binary alloys with solute contents of Al–0.5%Zn, Al–1%Zn, Al–2%Zn, and Al–4%Zn, with all compositions being in wt%, is investigated through physical experiments and numerical simulation. The temperature at the hot spot, the designed critical point for hot tearing, and the load at the end of the test bar are measured and compared with the simulation results. The test samples with 0.5 and 1.0% Zn show hot tears while those with 2.0 and 4.0% Zn have no sign of such tearing. The correlation between solid fraction, strain, stress, and hot tearing indicator is revealed using the simulation results of the Al–1%Zn and Al–4%Zn alloys. Just before solidification, the Al–1%Zn alloy is found to have not only a much higher strain rate at the hot spot of the test bar than that of the Al–4%Zn alloy, but also a higher strain gradient along the longitudinal direction. It is believed that both the high strain rate and high strain gradient are attributable to the formation of hot tears in the Al–1%Zn alloy.
Introduction
Due to their abundant availability and distinctive material properties, aluminum alloys have found wide applications in the transportation, packaging, and construction industries as the second most used industrial metal after steel. [1,2] Over the past two decades, global production of aluminum alloys has more than doubled, from 25 to 65 million tons per annum. Direct chill (DC) casting presents a good opportunity for the mass production of aluminum alloys to satisfy the rising demand. In fact, it has become the main production technique for wrought aluminum alloys. A challenge arises from a general trend of increased casting defects as casting speed, billet size, and alloy content interact. Billet casting facilities face the challenge of producing larger-diameter and more highly alloyed billets without compromising productivity or experiencing a considerable rise in scrap rejection rates due to casting defects.
Hot tearing is one of the most serious of these defects. [5,6] This phenomenon has been extensively investigated in both casting and welding processes from many perspectives since the 1950s. Hot tearing is often attributed to insufficient feeding in the mushy zone and/or to internally generated stresses in the mushy zone exceeding the strength of the partially solidified metal. [7,8] Past work has shown that hot tears initiate when the liquid flow through the mushy zone becomes insufficient to fill initiated cavities [9] and that the localized zone at this stage is nearly 100% solidified. [10] Past work has also shown that solidification shrinkage and the accompanying thermal contraction generate stresses, these being concentrated at areas with sudden changes in cross section, at the hot spots where the casting solidifies last, or in areas adjacent to either. [11] To identify the practices necessary to minimize the occurrence of hot tearing in day-to-day foundry operations, it is crucial to understand the underlying physics of hot tearing. The study of its behavior in alloys often involves both experiments and the use of hot tearing criteria. Numerous experiments have been developed using different configurations and various levels of complexity. Ring molds, [12,13] constrained-rod casting molds, [14,15] and cast hot tearing (CHT) test rigs [16-18] have been used to understand the relative hot tearing susceptibility (HTS) of various alloys. [18,19] The advancement in computer and software technology has also driven the development of casting process simulation and commercial software tools. Many such tools have demonstrated their capability to investigate thermal-fluid transport phenomena, the effects of alloy chemistry, and the formation of some casting defects such as porosity and distortion. On the other hand, significant research and development work is still required to predict hot tearing because of the complexity of the material behavior in the mushy zone.
Based on the fundamental understanding of hot tearing formation, criteria proposed to predict hot tearing occurrence can be classified as either mechanical or nonmechanical. [9] The former are directly associated with the mechanical behavior of semisolid metals, including stress, strain, and strain rate. The latter include the vulnerable temperature range, the phase diagram, and process parameters such as pouring temperature, mold temperature, and so on. Numerical models and criteria mainly address one of the main mechanisms, such as those previously mentioned: inadequate feeding during solidification or excessive thermally induced deformation. From work done using models based on fundamental continuum mechanics that quantify thermally induced deformation and its associated stresses, overcritical values of stress, strain, or strain rate have been proposed as the triggering factor for hot tearing. Hot tearing criteria have been implemented in commercial software such as ProCAST [20] and MAGMAsoft. [21] ProCAST has been commonly used in foundries to simulate casting processes to help understand the physics behind those processes. Two modules are embedded in the software to predict hot cracking and hot tearing. [22] For steady-state conditions, as encountered in DC casting, the hot cracking susceptibility (HCS) based on the Rappaz, Drezet, and Gremaud (RDG) criterion [13] is recommended. For normal casting processes, the hot tearing indicator (HTI) based on the Gurson constitutive model [20] is recommended.
Al-Si, Al-Mg, and Al-Cu binary alloys have been intensively investigated, and lambda curves have been used to describe the relationship between alloy content and HTS. The Al-Mg and Al-Cu systems exhibit a similar "lambda" shape in their composition-HTS curves. [3] Increasing their Cu/Mg content extends the solidification range and thus increases the amount of linear contraction and the HTS, while further alloying beyond the solid solution limit generates more eutectics, thereby reducing the HTS. [3] Zn is used as the principal alloying element in 7000 series aluminum alloys. For nonrefined Al-xZn-2Mg-2Cu (x = 2-12) alloys, the minimum and maximum HTS values occurred at Zn contents of 4 and 12 wt%, respectively. [25] The HTS of Al-Zn binary alloys has also been studied. However, due to the inconsistency between the plots of composition against HTS reported by different researchers, [17,26] a lambda curve could not be constructed. This is explained by a significant change in the curvature of the solidus line in the Al-Zn phase diagram, which manifests itself in significant changes in the partition coefficient of zinc. Bazhenov found that Al-Zn alloys exhibit no direct relationship between the effective solidification range and HTS; in that study, the Al-25%Zn alloy had the maximum HTS. The increase in the HTS at up to 25% Zn content was explained by a decrease in the solid fraction growth rate during non-equilibrium solidification at the end of the process. The decrease in the HTS beyond 25% Zn content was related to a significant fraction of the nonequilibrium eutectic and its predominant influence on the HTS. [26] Pumphrey and Lyons [27] tested alloys of 2-20% Zn using the ring test and found a peak in the HTS at 6%. Clyne and Davies [28] tested alloys of 5-50% Zn using a restrained bar mold, finding the percentage cracking to be constant over the range 5-40% before dropping dramatically between 40 and 50%.
Spittle and Brown [29] used a numerical model to predict the change in permeability as a function of fraction solid and composition in the Al-Cu, Al-Si, Al-Mg, and Al-Zn binary alloy systems, and highlighted that permeability plays a significant role in the formation of hot tearing. There was fairly good correlation between the composition corresponding to minimum permeability and the composition corresponding to maximum HTS for the Al-Si, Al-Cu, and Al-Mg binary systems. The only exception was the Al-Zn binary system, in which no correlation exists between data from different hot tear experiments and the predicted minimum permeability data. Viano et al. [17] reported a peak HTS at 0.5% Zn, a considerably lower solute concentration than in other experimental studies, as well as correlation with Spittle and Brown's permeability model.
Using both physical experiments and numerical simulations, this article seeks enhanced insight into the mechanical aspects of hot tearing formation in Al-Zn alloys, including the correlation between the development of stress and strain and the formation of hot tears. The experiments and numerical modeling are outlined in Section 2, results are presented in Section 3, and findings are discussed in Section 4.
Alloy Preparation and Test Apparatus
The Al-Zn alloys were prepared from commercially pure aluminum (99.7%) and pure zinc (99.999%) using an induction furnace, in 5 kg batches. ThermoCalc software was used to calculate liquidus and solidus temperatures. Table 1 presents the chemical composition of the Al-Zn alloys.
The details of the CHT rig used have been reported elsewhere. [16,17] For this work, as illustrated in Figure 1, the CHT rig was used with modifications to the load measurement unit to investigate the hot tearing of Al-Zn alloys: the casting had two restrained bars fed by a center sprue. One bar was fully restrained at both ends and was used to quantify hot tearing, while the other bar was used to measure the temperature and tensile load. In the measurement bar, one end was restrained, and the other end contained a sliding end that was attached to the load cell via a connecting rod. The load cell had a 2 kN capacity, a rated output of 1.5 mV V⁻¹, and 24 VDC excitation. A type K thermocouple was placed at the hot spot, as indicated by T/C in Figure 1a, to record the hot spot temperature. Chill blocks on the ends, combined with ceramic fiber insulation along the side walls of the mold, promoted directional solidification toward the center of the casting. A 1.5 kg quantity of the prealloyed material for each composition was remelted and heated to 750 °C inside a graphite clay crucible for 15 min. The melt was then poured into the mold cavity (preheated to 250 °C). A data acquisition system with eight channels (Toprie TP700) was used to record temperature and load development.
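As an aside on the instrumentation, the force can be recovered from the load-cell bridge signal using the rated output and excitation voltage quoted above. The following minimal sketch is our own illustration (the paper does not give the conversion), assuming a linear, zero-offset calibration:

```python
# Convert a strain-gauge load-cell bridge signal to force.
# Illustrative only: assumes a linear, zero-offset calibration.

CAPACITY_N = 2000.0    # 2 kN load-cell capacity
RATED_OUTPUT = 1.5e-3  # 1.5 mV/V rated output at full capacity
EXCITATION_V = 24.0    # 24 VDC bridge excitation

def bridge_signal_to_force(v_signal: float) -> float:
    """Map the measured bridge voltage (V) to force (N)."""
    full_scale_v = RATED_OUTPUT * EXCITATION_V  # 36 mV at full capacity
    return CAPACITY_N * v_signal / full_scale_v

# Example: a 15.8 mV bridge signal corresponds to roughly 878 N
print(f"{bridge_signal_to_force(15.8e-3):.0f} N")
```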
Finite-Element Model
In this study, the commercial casting simulation software ProCAST was used to simulate stress and strain development and hot tear formation. Figure 2 shows the computer-aided design model of the CHT rig. The model was assembled from three separate parts, splitting the H-shaped casting cavity so as to accommodate different casting-to-mold interfacial heat transfer conditions in different parts of the mold.
Heat Transfer Coefficient
As shown in Figure 2a, two sets of interfacial heat transfer coefficients were assigned to the interface between the mold and casting to duplicate the designed rig setup. For the far ends of the bars (shown in white), the interfacial HTC1 was set to 500 W m⁻² K⁻¹, and in the center part of the bars (shown in red), the interfacial HTC2 was set to 200 W m⁻² K⁻¹.
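For readers less familiar with how these coefficients enter the model, the interfacial heat flux follows a Newton-type law, q = h(T_casting − T_mold). The sketch below is our own illustration (not ProCAST code) of why the two zones drive solidification from the bar ends toward the center:

```python
def interfacial_heat_flux(h: float, t_casting: float, t_mold: float) -> float:
    """Newton-type interfacial flux q = h * (T_casting - T_mold), in W/m^2."""
    return h * (t_casting - t_mold)

# End zones (HTC1) vs. the insulated central zone (HTC2), e.g. for a
# casting at 650 C against a mold preheated to 250 C:
q_end    = interfacial_heat_flux(500.0, 650.0, 250.0)  # HTC1 = 500 W m^-2 K^-1
q_center = interfacial_heat_flux(200.0, 650.0, 250.0)  # HTC2 = 200 W m^-2 K^-1
print(q_end, q_center)  # 200000.0 80000.0 -> stronger cooling at the ends
```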
Restrained Surfaces for Stress Simulation
As shown in Figure 2b, to reflect the actual casting process, only one end was attached to a dynamometer for load measurement while the other three ends were displacement restrained.
Material Model
An elastic-plastic model was used to simulate the stress and strain in this study. Its properties included the Young's modulus, the Poisson's ratio, and the thermal expansion coefficients in the elastic range, the last being temperature dependent. These were defined for the four alloys. The hardening coefficient and yield stress for the plastic model were defined as temperature dependent, with the former corresponding to the slope of the stress-strain curve in the plastic range. The linear hardening law is defined as follows: [22]

σ = σ_0 + H ε_pl   (1)

where σ_0 is the yield stress, ε_pl is the plastic strain, and H is the plastic modulus.
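A minimal numerical sketch of Equation (1) follows. Note that in the actual model σ_0 and H are temperature-dependent tables; the constant values below are placeholders of our own, chosen only to illustrate the law:

```python
def flow_stress(sigma_0: float, h_mod: float, eps_pl: float) -> float:
    """Linear hardening law, Eq. (1): sigma = sigma_0 + H * eps_pl (MPa)."""
    return sigma_0 + h_mod * eps_pl

# Placeholder values (illustrative only; the real model uses
# temperature-dependent yield stress and plastic modulus):
sigma_0 = 10.0  # MPa, yield stress near the solidus
h_mod = 200.0   # MPa, plastic modulus (slope in the plastic range)
for eps in (0.0, 0.01, 0.02):
    print(f"eps_pl = {eps:.2f} -> sigma = {flow_stress(sigma_0, h_mod, eps):.1f} MPa")
```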
The material properties for the four alloys were calculated using ThermoCalc. A back-diffusion model [30] was used to calculate the thermal property data for the four alloys. The fraction solid profile for each alloy was determined from JMatPro using the Thermotech TT-Al database, assuming back-diffusion conditions.
Hot Tearing Criteria
To predict the HTS, the HTI was used. This model is based on the total strain occurring during solidification, that is, it is "strain driven". It computes the elastic and plastic strain at a given location when the solid fraction is between 50 and 99%. The HTI is defined by the accumulated plastic strain in the mushy zone that corresponds to the void nucleation described in the Gurson model: [22]

HTI = (1/ε_p^ht) ∫_{t_c}^{t_s} (dε_p/dt) dt   (2)

where ε_p^ht is the critical accumulated effective plastic strain for the initiation of hot tearing, dε_p/dt is the effective plastic strain rate, t_c denotes the time when the coherency temperature is reached, and t_s is the time when the solidus temperature is reached. The coherency temperature refers to the state of a solidifying alloy at which a coherent dendrite network is established during the formation of grains. [31]
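Conceptually, the indicator accumulates effective plastic strain between the coherency and solidus times, gated to the 50-99% solid-fraction window. The schematic sketch below is our own paraphrase of that definition, not ProCAST's internal implementation; all numbers are synthetic:

```python
import numpy as np

def hot_tearing_indicator(t, eps_pl_rate, f_s, eps_pl_crit=1.0):
    """Accumulate effective plastic strain rate over the mushy-zone window.

    t           : time samples (s)
    eps_pl_rate : effective plastic strain rate at each sample (1/s)
    f_s         : solid fraction at each sample (0..1)
    eps_pl_crit : critical accumulated plastic strain (normalisation)
    """
    mask = (f_s >= 0.5) & (f_s <= 0.99)  # 50-99% solid fraction window
    return np.trapz(eps_pl_rate[mask], t[mask]) / eps_pl_crit

# Toy history: solid fraction ramps 0 -> 1 over 100 s, and the strain
# rate spikes in the last stages of solidification
t = np.linspace(0.0, 100.0, 501)
f_s = t / 100.0
eps_rate = 1e-4 + 4e-4 * (f_s > 0.9)  # faster straining near the end
print(f"HTI = {hot_tearing_indicator(t, eps_rate, f_s):.4f}")
```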
Model Validation
The model was validated using temperature and load data measured from the physical experiments. Unlike the high level of agreement obtained between the measured and simulated temperature evolution, the load data from the measurement and the stress from a selected location in the simulation cannot be compared directly, since they are different quantities. Nevertheless, load development in the experimental measurements showed a similar trend to the simulated stress development, with only a minor deviation between the two. As shown in Figure 3a-d, the simulated stress built up steadily during the isothermal solidification process, reaching about 10 MPa by the end. Subsequently, the simulated stress exhibited a sharp rise, nearly inversely proportional to the temperature decrease. On the other hand, the load obtained through experimental measurement did not exhibit a pronounced increase until the latter stages of the isothermal solidification process. As depicted in Figure 3, load measurement initiation was delayed, commencing around 60-80 s into the test; it is likely that friction between the load cell block and the mold wall caused the delayed response. Despite this delayed start, the measured load rose rapidly in the latter stages of the isothermal solidification process, subsequently displaying a sharp increase.
Results

Stress and Load Development at the End of the Test Bar
It is worth noting that the Zn content of the studied Al alloys had an inverse effect on the load/stress build-up in both the experimental measurements and the simulation. As the Zn content increased, the simulated stress at 150 s showed a descending trend, from ≈75 MPa in the Al-0.5%Zn alloy to ≈72, 68, and 64 MPa in the Al-1%Zn, Al-2%Zn, and Al-4%Zn alloys, respectively. The measured load likewise decreased steadily: 880, 860, 830, and 750 N in the Al-0.5%Zn, Al-1.0%Zn, Al-2%Zn, and Al-4%Zn alloys, respectively.
Hot Tearing Formation and the Prediction Using HTI

Figure 4a2-d2 shows the distribution of the HTI along the cast bar at the end of the solidification process for the studied Al alloys. It can be seen that the Zn content of the studied alloys has a direct influence on the distribution of the HTI. In all instances, the peak HTI value is found at the center of the cast bar. However, in the low-alloyed Al-0.5%Zn alloy, the peak HTI (≈0.020) is highly concentrated and drops off rapidly away from the center. Additional secondary peaks occur at a distance of one-third of a half-bar length from the center, on both the left and right sides; the value associated with these peaks is ≈0.0058.
The HTI distribution became more dispersed with increasing Zn content in the Al-1%Zn, Al-2%Zn, and Al-4%Zn alloys, as shown in Figure 4b2-d2. Despite this, the primary HTI peaks remained at the center of the cast bars, maintaining a value of ≈0.020. Two major changes in the HTI distribution can be observed with increasing Zn content in these alloys. First, the tails of the major peak become significantly wider and flatter with increasing Zn concentration. Second, the secondary peaks also show higher values and wider distributions.
Development of Stress and Strain and Formation of Hot Tearing
Figure 5 shows the two extreme cases, with an evident hot tearing phenomenon (Figure 5, left side, Al-1%Zn) and no hot tearing defect (Figure 5, right side, Al-4%Zn), and the evolution of the solid fraction, effective stress, effective strain, and HTI profile longitudinally along the cast bar over time. Due to the symmetric nature of the cast bar design and the load-stress distribution, only the right half of the cast bar was analyzed and presented. As shown in Figure 5b1,b2, the solidification of the Al-1%Zn and Al-4%Zn alloys both commenced at the far end of the cast bar and propagated toward the center, indicating a good sequence of directional solidification. The Al-1%Zn alloy achieved complete solidification 86.5 s into the test, slightly earlier than the Al-4%Zn alloy at 94 s. The build-up of effective stress begins at the far end of the cast bar at the onset of solidification and propagates toward the center. After 48 s, when the right half of the studied bar had achieved full solidification, the effective stress in the Al-1%Zn and Al-4%Zn alloys showed distinctive peaks in the 55-80 mm and 60-75 mm regions, respectively. As solidification proceeded, the effective stress in Al-1%Zn continued to build up with no additional change in pattern and eventually settled with an evident peak in the 55-75 mm region with a value of 19.5 MPa. An additional effective stress peak built up in the 40 mm region of the Al-4%Zn alloy after 68 s, and its effective stress distribution is wider (40-75 mm) with a lower peak value of 16 MPa.
As the initial solidification (0-58 s) progresses from the end of the cast bar toward the center, the effective strain gradually builds up in the solidification direction in both alloys, with a first peak value of 0.15 observed at the most recently solidified site (around 35-40 mm). After 58 s, the effective strain drops slightly at around 25-35 mm. As solidification proceeds toward the central region of the casting, the effective strain reaches its peak, measuring 0.225 and 0.210 for the Al-1%Zn and Al-4%Zn alloys, respectively. There is a substantial increase in effective strain at the midpoint of the bar when the solid fraction surpasses 60%. In the case of the Al-1%Zn alloy, the effective strain increases sharply from 0.00225 to 0.02225 as the solid fraction increases from 60% to 100%. In contrast, the rise in the effective strain at the center of the Al-4%Zn alloy is more gradual, from 0.00725 to 0.020 over the same solid-fraction interval.
The progress of the HTI at different sites follows a similar pattern to the effective strain: a gradual increase to a weak peak in the 35-40 mm region, followed by a slight drop until 10 mm. After that, the HTI increases rapidly as the center of the bar reaches the final stage of solidification (above 60% solid fraction). A distinct difference in the HTI distribution along the solidification direction can be noted between the Al-1%Zn and Al-4%Zn alloys. The major HTI peak at the center of the cast Al-1%Zn bar is characterized by its narrow distribution: initially measuring 0.0225, it swiftly diminishes to 0.0005 within a 10 mm distance from the center. A secondary HTI peak appears at the 35 mm location in the Al-1%Zn alloy, reaching 0.010 and then diminishing to 0 at 72 mm. In the Al-4%Zn alloy, on the other hand, the peak HTI distribution is significantly wider, with a peak value of 0.022 that drops to 0.008 at 10 mm from the center. The secondary peak also shows a wide distribution with a relatively high peak value of 0.015, which gradually drops to 0 at 80 mm. These findings suggest that a relatively uniform HTI distribution along the solidification length may be the key to reducing the HTS of Al-Zn alloys.
Discussion
Previous studies have found it difficult to use the lambda curve to describe the correlation of HTS with Zn content for Al-Zn binary alloys, since there is no agreement between the experimental data sets from different research groups. Viano found the peak HTS to be at 0.5% Zn, a much lower solute concentration than in other experimental studies, and the compositional range affected by hot tearing was significantly narrower, something which emphasizes the importance of good feeding. [17] The current study shows hot tearing at 0.5% and 1.0% Zn, and no hot tearing at 2.0% and 4.0% Zn, supporting Viano's findings.
Two mechanisms have been proposed for hot tearing. 1) The void mechanism occurs only when feeding is unable to compensate for shrinkage and contraction before tensile coherency. Once the void has formed, the adjacent grains are free to grow and complete solidification unconstrained. It is proposed that this is the dominant failure mechanism for low-solute alloys to the left of the peak in the lambda curve. 2) The classic tear mechanism is observed when feeding is unable to compensate for shrinkage and contraction after tensile coherency occurs. The amount of tearing, and how localized it is, depends upon how early tensile coherency occurs and how long afterward feeding becomes insufficient. It is proposed that this is the dominant failure mechanism for high-solute alloys to the right of the peak in the lambda curve.
The strain imposed on the hot spot region by thermal contraction contributes to hot tearing formation. The imposed strain depends on factors such as the length of the restrained bar, the mold preheat, and the melt superheat. Campbell [32] quantifies the strain theory of hot tearing as

ε = α ΔT L / l   (3)

where α is the coefficient of thermal expansion, ΔT is the change in temperature, L is the length of the casting, and l is the length of the hot spot.
From Equation (3), the strain on the hot spot region increases with both the length of the casting and the cooling rate. The experimental rig and the cooling conditions in the current study determine the level of strain imposed on the hot spot region.
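As a numerical illustration of Equation (3), a long restrained bar contracting over a short hot spot concentrates substantial strain there. The values below are placeholders of our own; the paper does not report them:

```python
def hot_spot_strain(alpha: float, delta_t: float, length: float, hot_spot: float) -> float:
    """Campbell's estimate, Eq. (3): eps = alpha * dT * L / l."""
    return alpha * delta_t * length / hot_spot

alpha = 23e-6    # 1/K, thermal expansion coefficient of aluminum
delta_t = 100.0  # K of cooling after coherency (assumed)
L = 200.0        # mm, restrained bar length (assumed)
l = 10.0         # mm, hot-spot length (assumed)
print(f"eps ~ {hot_spot_strain(alpha, delta_t, L, l):.3f}")  # ~0.046
```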
To understand the strain differences between the two alloys, the solid fraction and effective strain profiles in the center of the test bars during the last six seconds before full solidification are plotted in Figure 6. The fraction solid profile along the length of the cast bar at a late stage of solidification is seen in Figure 6a. In the Al-1%Zn alloy, the solid fraction increased more quickly along the longitudinal distance, with a narrower semisolid pool of 9 mm and a faster increase of solid fraction, from 0.48 to 1 in 6 s, at the center point. Most of the solidification of the Al-1%Zn alloy is concentrated in a narrow band at the center of the test bar, with a high rate of solidification. The rapid increase in fraction solid translates into an order-of-magnitude drop in permeability in the last stages of solidification. By comparison, the Al-4.0%Zn alloy shows a wider semisolid region of 12.5 mm, and its solid fraction increases much more slowly (from 0.63 to 1.0 in 6 s). The solidification of the Al-4%Zn alloy proceeds at a slower rate across a wider area of the hot spot. This broader region of slower solidification growth leads to a higher permeability in the mushy zone. Good permeability is associated with a high ability to heal micropores and therefore to reduce hot tears.
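The permeability argument can be made concrete with the Carman-Kozeny relation often used for mushy zones (for instance in the RDG criterion), K = λ₂²(1 − f_s)³/(180 f_s²). This specific form and the dendrite arm spacing below are our own assumptions, not values taken from the paper:

```python
def carman_kozeny_permeability(f_s: float, lambda2_m: float) -> float:
    """Mushy-zone permeability K = lambda2^2 * (1 - f_s)^3 / (180 * f_s^2), m^2."""
    return lambda2_m**2 * (1.0 - f_s) ** 3 / (180.0 * f_s**2)

lambda2 = 50e-6  # m, assumed secondary dendrite arm spacing
for f_s in (0.90, 0.95, 0.99):
    print(f"f_s = {f_s:.2f}: K = {carman_kozeny_permeability(f_s, lambda2):.2e} m^2")
# K falls by roughly three orders of magnitude between f_s = 0.90 and 0.99,
# which is why a rapid late rise in solid fraction starves liquid feeding.
```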
The changes of effective strain in the center zone for the two alloys are also quite different, as shown in Figure 6b. The effective strain in the adjacent zone, located 15 mm away from the center point, is slightly lower for the Al-1%Zn alloy than for the Al-4%Zn alloy at the same location. The gradients of the effective strain for Al-1%Zn and Al-4%Zn were calculated as 0.0026 mm⁻¹ and 0.0012 mm⁻¹, respectively; the strain gradient of the Al-1%Zn alloy was thus much higher than that of the Al-4%Zn alloy. The effective strain at the center point also increases much faster, from 0 to 0.021, for the Al-1%Zn alloy, while for the Al-4%Zn alloy it increases from 0.0075 to 0.020.
The Al-4%Zn alloy has a much wider solidification range (23 °C) than the Al-1%Zn alloy (11 °C). This results in a broader hot spot and a slower solidification rate, both of which contribute to a smaller strain in the hot spot. While the simulation results showed only a minor reduction of the effective strain in the hot spot, the Al-1%Zn alloy shows a significantly higher strain rate in the last moments of solidification at the hot spot, and a higher strain gradient; these may be the main mechanisms for the formation of hot tearing.
Conclusion
The HTS of Al-Zn binary alloys has been investigated through physical experiments and numerical simulation. The test samples with 0.5% and 1.0% Zn solute contents showed hot tears, while those with 2.0% and 4.0% Zn had no sign of such tearing. The simulation results have been used to investigate in detail the evolution of the solid fraction, effective stress, effective strain, and HTI, to reveal the correlations between these and provide insight into what is happening. A detailed comparison has been made between the Al-1%Zn and Al-4%Zn alloys to reveal the mechanism of hot tearing formation.
In the last moments before the alloy solidified, most of the solidification of the Al-1%Zn alloy was concentrated in a narrow band at the center of the test bar, with a high rate of solidification. The rapid increase in fraction solid translates into an order-of-magnitude drop in permeability in the last stages of solidification. By comparison, the solidification of the Al-4%Zn alloy proceeds at a slower rate across a wider area of the hot spot. This broader region of slower solidification growth leads to a higher permeability in the mushy zone and therefore a higher ability to heal micropores and reduce the HTS.
No notable difference in effective strain between the Al-1%Zn and Al-4%Zn alloys was found. Interestingly, however, the Al-1%Zn alloy showed not only a much higher strain rate at the hot spot of the test bar than the Al-4%Zn alloy, but also a higher strain gradient along the longitudinal direction.
It is believed that the formation of hot tears in the Al-1%Zn alloy is caused by the low permeability, high strain rate, and high strain gradient generated in the last moments of solidification.
Figure 1. a) Schematic view of the mold and b) casting with mold.
Figure 2. The heat transfer coefficient settings of the four end regions and the central region: a) the two interfacial heat transfer coefficients; b,c) the restrained surfaces (T refers to the node at which the hot spot temperature was measured and, similarly, S refers to the node at which the end stress of the bar was measured).
Figure 3. Measured temperature/load evolution and the associated simulated temperature/stress at the nodes T and S during the solidification process of the studied Al-Zn alloys. The simulated temperature evolution closely matched the measured data for all four alloys studied. At the start of the measurement, the alloy melt underwent rapid cooling from 700 to 650 °C over 20 s. Isothermal solidification then began at around 650 °C and continued until ≈85 s from the onset of the measurement. Subsequently, continuous cooling of the alloy proceeded at around 2 °C s⁻¹ until, 150 s after the start of the experiment, the temperature dropped below the region of interest to 525 °C.
Figure 4. Hot tearing observed in the studied Al-Zn alloys and the associated simulated HTI distribution on the cast bars. According to Figure 4a1,b1, the Al-0.5%Zn and Al-1.0%Zn alloys suffered hot tearing damage; the cracks penetrated perpendicular to the length of the cast bar and extended toward the center of the casting. The alloys with Zn contents of 2% and 4% showed no hot tearing damage, as shown in Figure 4c1,d1.
Figure 5. Simulated solid fraction, stress, strain, and HTI versus bar length and time for a) Al-1.0%Zn and b) Al-4.0%Zn.
Figure 6. a) Solid fraction and b) effective strain in the later stage of solidification for the Al-1%Zn and Al-4%Zn alloys.
"year": 2024,
"sha1": "9a555bb62ffe70dba2592b7dffdd2197a18e2991",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/adem.202301471",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "b6cebbb7e319323a5b98dd4c026d8fbb1a028a1c",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": []
} |
Influence of Dipeptidyl Peptidase IV on Adenosine Deaminase
The importance of ADA (adenosine deaminase) in the immune system, and the role of its interaction with an ADA-binding cell membrane protein, dipeptidyl peptidase IV (DPPIV), identical to the activated immune cell antigen CD26, has attracted the interest of researchers for many years. To investigate the specific structure-function properties of the ADA/DPPIV-CD26 complex, its soluble form, identical to large ADA (LADA), was isolated from human blood serum, human pleural fluid and bovine kidney cortex. The kinetic constants (Km and Vmax) of LADA and of small ADA (SADA), purified from bovine lung and spleen, were compared using adenosine (Ado) and 2'-deoxyadenosine (2'-dAdo) as substrates. The Michaelis constant, Km, evidences a higher affinity of both substrates (in particular of the more toxic 2'-dAdo) for LADA, and points to modulation of toxic nucleoside neutralization in the extracellular medium by complex formation between ADA and DPPIV-CD26. The values of Vmax are significantly higher for SADA, but the efficiency, Vmax/Km, of LADA-catalyzed 2'-dAdo deamination is higher than that of Ado deamination. The interaction of all enzyme preparations with derivatives of adenosine and of erythro-9-(2-hydroxy-3-nonyl)adenine (EHNA) was studied. 1-DeazaEHNA and 3-deazaEHNA demonstrate stronger inhibiting activity towards LADA, the DPPIV-CD26-bound form of ADA. The observed differences between the properties of the two ADA isoforms may be considered a consequence of SADA binding with DPPIV-CD26. Both SADA and LADA showed a similar pH profile of the adenosine deamination reaction, with an optimum at pH 6.5-7.5, while the pH profile of the dipeptidyl peptidase activity of the ADA/DPPIV-CD26 complex lies in a more alkaline region.
INTRODUCTION
Dipeptidyl peptidase IV (DPPIV, EC 3.4.14.5) is a widely distributed multifunctional glycoprotein, expressed as a non-covalently linked homodimer of 210 kDa at the cell surface in nearly all mammalian tissues. It was defined as the antigen CD26 on activated human T lymphocytes and medullary thymocytes, and plays a significant role in the immune and endocrine systems, bone marrow mobilization, cancer growth, cell adhesion, etc. (Gorrell et al., 2001; Boonacker & Van Noorden, 2003; Mentlein, 2004; Gorrell, 2005). DPPIV substrates include a number of chemokines and neuropeptides, the glucagon-like peptide and the glucose-dependent insulinotropic peptide essential in diabetes mellitus, etc. In addition to the integral membrane form, a soluble form of DPPIV-CD26 occurs in serum, seminal fluid, pleural fluid, bile, and kidney.
Adenosine deaminase (ADA, adenosine aminohydrolase, EC 3.5.4.4), an enzyme involved in the metabolism of purine nucleosides, catalyses the irreversible hydrolytic deamination of adenosine (Ado) and 2´-deoxyadenosine (2´-dAdo) to inosine and 2´-deoxyinosine, respectively. The enzyme is widely distributed in vertebrate tissues and plays a critical role in a number of physiological systems (Senba et al., 1987; Hirschhorn, 1990). In nature, several isoforms of ADA are known that differ in molecular mass, kinetic properties and tissue distribution (Van Der Weyden & Kelley, 1976). These isoforms are the minor isoenzyme, ADA2 (Shrader et al., 1978), and two molecular forms of the major isoenzyme: small (SADA), a monomer with a molecular mass of 35-40 kDa, and large (LADA), with a molecular mass of 280-300 kDa. The latter is formed by complex formation between the catalytic subunit, SADA, and a protein named ADA complexing protein (ADAcp), recently identified as DPPIV-CD26 (Fox et al., 1984; Kameoka et al., 1993; Dong et al., 1996). First of all, the membrane-localized ADA/DPPIV-CD26 complex protects cells from Ado- and 2´-dAdo-mediated apoptosis and inhibition of proliferation. Co-localization of this complex and the Ado receptor, A1R, has been shown in different cells (Franco et al., 1998; Beraudi et al., 2003). The interaction between these proteins could be a functional basis of the extra-enzymatic role of ecto-ADA in modulating ligand-induced signalling, desensitization and internalization of A1Rs (Gines et al., 2001).
The importance of ADA in the immune system in general (Giblett et al., 1972; Van der Weyden & Kelley, 1977; Hirschhorn, 1990), and the unresolved questions regarding the role of its interaction with ADAcp-CD26-DPPIV, expressed in activated immune cells (De Meester et al., 1994; Martin et al., 1995; Dong et al., 1996), have been of interest to researchers for many years. The 3D structure of the ADA/DPPIV-CD26 complex shows that their interaction does not result in steric interference and cannot modulate the catalytic sites of the enzymes (Ludwig et al., 2004) or affect their enzymatic activities (De Meester et al., 1994). The available information comparing the two ADA isoforms is contradictory: Akedo et al. (1972) observed a higher activity of SADA compared with that of LADA; an increase of the resultant activity upon LADA formation by SADA binding with ADAcp was shown in the works of Schrader and Stacy (1977) and Weisman et al. (1988); no differences in LADA and SADA catalytic properties were found by other researchers (Van der Weyden & Kelley, 1976; Trotta et al., 1979; Fonoll et al., 1982).

In the present work we intended to compare the catalytic peculiarities of SADA and of its natural complex with DPPIV-CD26, LADA. Hopefully, the obtained information will contribute to a problem of potential therapeutic significance: whether or not the binding of SADA with DPPIV-CD26 is involved in regulating the enzyme's interaction with substrates, substrate analogs, and inhibitors. To this end, we isolated SADA from bovine spleen and lung, and LADA from human blood serum, human pleural fluid, and bovine kidney cortex. Significant differences were observed in the kinetic constants of the two isoforms in the reactions of Ado and 2´-dAdo deamination, and in the inhibition of these reactions by derivatives of adenosine and EHNA, used here for the first time in LADA-catalyzed reactions. Obviously, it is reasonable to consider the observed differences between the enzymatic properties of the two ADA isoforms as a consequence of the catalytic subunit's binding with DPPIV-CD26.
In the present work we intended to compare the catalytic peculiarities of SADA and its natural complex with DPPIV-CD26, LADA.Hopefully, the obtained information will contribute to the problem of potential therapeutic significance: whether or not the SADA binding with DPPIV-CD26 is involved in regulation of the enzyme interaction with substrates, substrate analogs, and inhibitors.To this end, we isolated SADA from bovine spleen and lung, and LADA -from human blood serum, human pleural fluid, and bovine kidney cortex.Significant differences in the kinetic constants of the two isoforms in the reactions of Ado and 2´-dAdo deamination, and in inhibition of these reactions by the derivatives of adenosine and EHNA, used for the first time in the LADA-catalyzed reactions, were observed.Obviously, it is reasonable to consider the observed differences between the enzymatic properties of the two ADA isoforms as a consequence of the catalytic subunit binding with DPPIV-CD26.
Preparation of SADA and LADA.SADA from bovine spleen and lung was isolated and purified as described earlier (Sharoyan et al., 1994).In the present work, electrophoretically homogeneous preparations of specific activity about 350 µmol/min per mg of protein were used.LADA from bovine kidney cortex, human blood serum and pleural fluid was isolated using the chromatographic procedures described below.
Kidney cortex was separated from fat and medulla, washed with 0.15 M NaCl, and homogenized with a PT-1 homogenizer at 8 000 r.p.m. for 1.5 min in 10 mM K,Na-phosphate buffer, pH 7.4 (buffer А), at a ratio of 1:5 (w/v).The homogenate was centrifuged at 15 000 g for 20 min, 4 o C. Then the supernatant was batch-bound to DEAE-cellulose (pre-equilibrated in buffer A) for 45 min at 4 o C. The settled cellulose was packed into a glass column, washed Influence of dipeptidyl peptidase IV on adenosine deaminase with three column volumes of buffer A, and the same containing 20 mM KCl.The proteins adsorbed on the cellulose were eluted with buffer A containing 0.3 М KCl.Fractions of 10 ml were collected and analyzed for both ADA and DPP activities.The blood serum and pleural fluid were subjected to similar procedures of batch-binding and elution on DEAE-cellulose after preliminary dialysis against 20 vol. of buffer А at 4 o C, overnight.
The active fractions of the DEAE-cellulose eluate were pooled and subjected to gel filtration, in 10 ml portions, through Sephadex G-200 medium (3 × 45 cm) equilibrated with buffer A containing 0.1 M KCl (buffer B). The high-molecular-mass fractions possessing ADA activity were collected, dialyzed against 10 vol. of buffer A at 4 °C overnight, and put on a column of DEAE-Sephadex A-50 (2 × 12 cm) pre-equilibrated in buffer A. After washing the column with the same buffer, the adsorbed proteins were eluted with a linear gradient of 0.02-0.2 M KCl, 200 ml total volume. The protein fractions eluted at 0.13-0.15 M KCl contained both ADA and DPP activities. They were pooled, dialysed against 10 vol. of 20 mM K,Na-phosphate buffer, pH 7.4, containing 0.1 M KCl (buffer C) at 4 °C overnight, and subjected to affinity chromatography on bovine lung SADA-liganded CNBr-Sepharose (pre-equilibrated in buffer C). The column was washed with buffer C containing increasing concentrations of KCl, typically 0.1, 0.3, 0.5, 0.8 and 1 M, 10 ml of each. Afterwards, protein fractions were eluted with buffer C containing 6 M urea, pooled, and dialysed against buffer A to remove the urea. This fraction also possessed both of the enzymatic activities. It was concentrated using a micro-concentrator and subjected to gel filtration through a Sephadex G-200 superfine column equilibrated in buffer B.
Disc electrophoresis under non-denaturing conditions indicated nearly 90% purity of the obtained preparations. They were kept at −20 °C for 2 months without significant loss of activity.
Enzyme assays. ADA activity was assayed by determination of the ammonia liberated in the deamination of the substrate, Ado or 2´-dAdo, using a phenol-hypochlorite colorimetric method described earlier (Mardanyan et al., 2001). The enzyme concentrations in the assay mixture were 0.1 µg/ml for SADA from lung and spleen, and 30, 100, and 150 µg/ml for LADA from kidney cortex, pleural fluid and blood serum, respectively.
DPP activity was assayed using Gly-Pro-pNA as a substrate. Usually, 500 µl of assay mixture contained 40 mM K,Na-phosphate buffer, pH 7.5, and an enzyme sample (30-60 µg of protein). The reaction was initiated by addition of a 2 mM stock solution of substrate in water to a final concentration of 0.24 mM. After incubation for 20-30 min at 37 °C, the reaction was stopped by cooling the assay mixture in an ice bath. The amount of product was evaluated from the differential absorbance at 390 nm of a sample against an identical mixture without the enzyme, using the absorption coefficient of the chromogenic group, nitroaniline, at this wavelength of 9.9 mM⁻¹ cm⁻¹ (Mentlein & Struckhoff, 1989).
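The product concentration follows from the Beer-Lambert law, c = ΔA/(ε·l). A small sketch of the assay arithmetic is given below; the 1 cm light path and the example absorbance are our assumptions, not values stated in the text:

```python
def dpp_product_umol(delta_a390: float, volume_ml: float = 0.5,
                     eps_mm: float = 9.9, path_cm: float = 1.0) -> float:
    """Nitroaniline released (umol) from the differential absorbance at 390 nm.

    Beer-Lambert: c (mM) = dA / (eps * l), with eps = 9.9 mM^-1 cm^-1.
    """
    conc_mm = delta_a390 / (eps_mm * path_cm)  # mmol/L = umol/ml
    return conc_mm * volume_ml                 # umol in the 0.5 ml assay

# Example: dA390 = 0.20 over the incubation -> ~0.0101 umol of product
print(f"{dpp_product_umol(0.20):.4f} umol")
```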
Protein determination was performed by the method of Bradford (1976) using BSA as a standard, and/or spectrophotometrically, from the difference of absorbance of a protein solution at 215 and 225 nm (Murphy & Kies, 1960).
Determination of kinetic constants. Kinetic constants were obtained by analyzing the dependence of enzyme activity on substrate concentration using the Michaelis-Menten equation for enzyme kinetics, with the Lineweaver-Burk transformation. The inhibition constant, Ki, was determined using the Dixon graphical method described earlier (Mardanyan et al., 2001). These enzymatic parameters were determined using the GraFit software (Leatherbarrow, 2001).
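As a reminder of the fitting step: the Lineweaver-Burk transformation linearises the Michaelis-Menten equation as 1/v = (Km/Vmax)(1/[S]) + 1/Vmax, so Km and Vmax follow from a straight-line fit. The sketch below uses synthetic substrate and rate values of our own, purely for illustration:

```python
import numpy as np

# Synthetic data generated from v = Vmax*S/(Km + S) with Vmax = 300, Km = 50
S = np.array([10.0, 25.0, 50.0, 100.0, 200.0])  # substrate, uM
v = 300.0 * S / (50.0 + S)                      # rate, umol/min per mg

# Lineweaver-Burk: 1/v = (Km/Vmax)*(1/S) + 1/Vmax
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
v_max = 1.0 / intercept
k_m = slope * v_max
print(f"Vmax = {v_max:.1f} umol/min per mg, Km = {k_m:.1f} uM")  # recovers 300, 50
```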
pH dependences. The ADA and DPP activities were determined over the pH range 5.0-9.3, using 50 mM K,Na-phosphate buffer for the pH interval 5.0-7.5, and 50 mM borate buffer for pH 7.5-9.3. The enzyme assay mixtures were preincubated at 37 °C for 5 min at the appropriate pH before the reaction was started by substrate addition. The double-bell pKa equation of the GraFit software (Leatherbarrow, 2001) was used to analyze the obtained bell-shaped pH profiles and to evaluate the apparent pKa and pKb values in the acidic and basic slopes, respectively.
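One common parameterisation of such a bell-shaped profile (presumably similar to GraFit's double-pKa equation, though we have not verified its exact form) treats the activity as limited by one acidic and one basic ionisation, v = v_lim/(1 + 10^(pKa−pH) + 10^(pH−pKb)). A fitting sketch on synthetic data of our own:

```python
import numpy as np
from scipy.optimize import curve_fit

def bell(ph, v_lim, pka, pkb):
    """Bell-shaped pH profile: v = v_lim / (1 + 10**(pKa-pH) + 10**(pH-pKb))."""
    return v_lim / (1.0 + 10.0 ** (pka - ph) + 10.0 ** (ph - pkb))

# Synthetic activities with pKa = 6.0 and pKb = 8.0 plus a little noise
ph = np.linspace(5.0, 9.3, 15)
rng = np.random.default_rng(0)
v_obs = bell(ph, 1.0, 6.0, 8.0) + rng.normal(0.0, 0.01, ph.size)

popt, _ = curve_fit(bell, ph, v_obs, p0=(1.0, 6.5, 7.5))
print("v_lim, pKa, pKb =", np.round(popt, 2))  # recovers ~1.0, 6.0, 8.0
```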
Spectral measurements. Spectral measurements were performed at 25 °C in Specord M-40 and 50PC UV-VIS spectrophotometers (Germany) equipped with thermostated holders.
Statistical analyses. Statistical analyses were performed using the statistical software InStat, version 3 for Windows (GraphPad Software, Inc., San Diego, CA, USA). Specific differences were tested using Student's t-test. Results are expressed as means ± S.D.
RESULTS AND DISCUSSION

Purification of LADA
Figure 1 shows the elution diagram obtained upon gel filtration on Sephadex G-200 medium of a sample from kidney cortex eluted from DEAE-cellulose (as described in Materials and Methods). Identical diagrams were obtained for the pleural fluid and blood serum samples. Curve 2 in this figure evidences that in these tissues low-molecular-mass SADA is present in a negligible amount. The fraction possessing most of the ADA activity corresponds to a molecular mass of 280-300 kDa, and we identified it as the LADA isoform. Curve 3 shows that the same protein fractions also possess the DPP activity. We tried to separate the ADA and DPP activities to obtain DPPIV free of ADA activity. For this we used several treatments: thermal (incubation at 56 °C), acidic (incubation at pH 3.5), and urea (incubation in 6 or 8 M urea). After passing all of these samples through identical Sephadex G-100 superfine columns equilibrated with buffer B, the same protein fraction, eluted in the void volume of the column, had both the ADA and DPP activities.

Figure 1. The protein fractions eluted from DEAE-cellulose by buffer A containing 0.3 M KCl were pooled and subjected to gel filtration as described in Materials and Methods. The obtained fractions were analysed for protein content and for ADA and DPP activities. The diagrams represent: 1, protein concentration (absorbance at 280 nm); 2, adenosine deaminase activity (absorbance of the assay mixture at 625 nm); 3, dipeptidyl peptidase activity (absorbance of the released nitroaniline chromogenic group at 390 nm).
Table 1 shows the specific activity of LADA from the three sources at all purification stages following gel filtration through G-200 medium. It is worth noting that the specific activities of the obtained enzyme preparations are tissue dependent: they differ both at the initial and the final stages (Table 1, rows 1 and 4, respectively).
The DPP activity of the final LADA preparations from bovine kidney cortex and human blood serum in the reaction of chromogenic group release from Gly-Pro-pNA was characterized by approximate Km values of 105 and 80 µM, and by Vmax values differing by one order of magnitude, 10.0 and 1.0 µmol/min per mg, respectively.
We should note that in the obtained preparations of LADA, the 'ADA complexing protein' DPPIV-CD26 was not saturated with SADA; it could bind an additional amount of SADA: after 2 h of incubation of LADA at room temperature in the presence of SADA and passing of the mixture through a Sephadex G-100 column, the specific ADA activity of the high-molecular-mass protein fraction eluted in the void volume was higher than that of the initial LADA sample (not presented). This observation confirms LADA formation as a SADA complex with a multisubunit protein that contains unoccupied sites (Trotta, 1982).
Differences in the catalytic properties of LADA and SADA
In Table 2, the values of the Michaelis constant, Km, and of the enzyme activity at saturation with substrate, Vmax, are reported for the two SADA and three LADA preparations in the reactions of both Ado and 2´-dAdo deamination. At a glance, we can see the similarity of the parameters for the SADA samples from bovine lung and spleen (rows 1 and 2, respectively) and of those for the LADA samples from bovine kidney cortex, human pleural fluid and blood serum (rows 3, 4 and 5, respectively), alongside significant differences between the parameters of the two isoforms.
The Km data demonstrate a higher affinity of both substrates for LADA compared with SADA: the Km values of Ado in the reaction of deamination by SADA are almost twice as high as those for LADA, while the Km values of 2´-dAdo for SADA are approximately four times as high as those for LADA. These differences prove that the affinity for both substrates (especially 2´-dAdo) is higher when SADA is in complex with DPPIV-CD26.
The V max values for SADA, compared with those for LADA from kidney, are higher by nearly two orders of magnitude, and compared with those for LADA from pleural fluid and blood serum, by three orders of magnitude, for both substrates. Such significant differences cannot be considered a consequence of the difference in the molecular masses of the two isoforms only (no more than one order of magnitude: 35 and 300 kDa for SADA and LADA, respectively).
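Kinetic parameters such as these are typically extracted by fitting initial-rate data to the Michaelis-Menten equation. The sketch below shows one way to do such a fit with SciPy; the substrate concentrations and velocities are invented for illustration and are not the measurements reported in Table 2.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical initial-rate data: substrate [uM] vs velocity [umol/min/mg].
S = np.array([10.0, 20.0, 40.0, 80.0, 160.0, 320.0])
v = np.array([0.9, 1.6, 2.6, 3.6, 4.4, 4.8])

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

popt, pcov = curve_fit(michaelis_menten, S, v, p0=(5.0, 50.0))
Vmax, Km = popt
print(f"Vmax = {Vmax:.2f} umol/min/mg, Km = {Km:.1f} uM")
print(f"efficiency Vmax/Km = {Vmax / Km:.4f}")
```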
It is worth noting that the above-mentioned observations and other publications (Daddona & Kelley, 1978; Trotta, 1982) allow one to conclude that SADA does not saturate all the binding sites of DPPIV-CD26 in LADA, and the enzyme preparation usually represents a mixture of DPPIV-CD26 molecules differently saturated by the catalytic subunit SADA. This circumstance hinders the evaluation of the number of catalytic units in the assay mixture and the evaluation of the enzyme catalytic centre activity. Therefore, we were limited to the evaluation of the reaction efficiency, V max /K m. The data in the last two columns of Table 2 indicate that this parameter for SADA is 20-400 times as high as that for LADA. Let us note that it does not differ significantly in SADA-catalyzed deamination of the two substrates, but the efficiency of LADA-catalyzed 2´-dAdo deamination is higher than that for Ado.
The observed differences in the kinetic constants for the two molecular forms of ADA reflect the importance of neutralization of toxic nucleosides in the extracellular medium by DPPIV-CD26-bound ADA, and of modulation of this function by the complex formation between them.
Interaction of LADA and SADA with inhibitors
In Table 3, the inhibition constants, K i, are reported for derivatives of adenosine (1-dAdo and 3-dAdo), and for EHNA and its derivatives (1-dEHNA and 3-dEHNA), for SADA and LADA in the reaction of Ado deamination. These data show similar inhibitory efficiencies of 1-dAdo, 3-dAdo, and EHNA for the two ADA molecular forms. In contrast, the K i values of 1-dEHNA and 3-dEHNA for LADA are 2-3 times lower than those for SADA, evidencing a higher affinity of these compounds for LADA. In accordance with the proposed mechanism of ADA interaction with EHNA (Frieden et al., 1980), the binding of EHNA derivatives to the enzyme complexed with DPPIV-CD26 results in a more pronounced conformational rearrangement near the active centre, leading to the formation of more stable enzyme-inhibitor complexes than in the case of SADA. Probably, if ADA is bound to DPPIV-CD26, its hydrophobic binding site becomes more exposed and binds the hydrophobic erythro-nonyl moiety of EHNA derivatives more strongly. This suggestion correlates with our previous observation that 3-dEHNA prevented SADA from chemical modification by N-bromosuccinimide more effectively than EHNA did (Mardanyan et al., 2001).
In Table 4, K i values are presented for the same inhibitors for SADA from lung and LADA from kidney cortex and blood serum in the reaction of 2´-dAdo deamination. These data do not show significant differences in the K i values of all five inhibitors for the two ADA isoforms. However, the K i values for EHNA and its derivatives for SADA indicate that the deamination of 2´-dAdo could be inhibited by these compounds about three times more effectively than Ado deamination. Apparently, the 2´-deoxyribose moiety of 2´-dAdo hinders its competition with EHNA and its derivatives for the SADA binding site.
pH dependences of ADA and DPP activities
The pH dependences of SADA and LADA activities in the deamination of Ado were studied in the pH interval 5.0-9.3, as described in Materials and Methods. Bell-shaped pH profiles were obtained (not shown), demonstrating a similar broad pH optimum for the two isoforms of ADA. In Table 5, the obtained pK a and pK b values are presented for the acidic and basic sides, respectively, for SADA from lung (identical data were obtained for the enzyme from spleen) and for LADA from kidney and blood serum (the data for the enzyme from pleural fluid were identical).
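Bell-shaped pH profiles of this kind are commonly fitted with a diprotic model in which activity is lost on protonation below pK a and deprotonation above pK b. The sketch below shows one such fit; the model choice and all data values are illustrative assumptions, not the actual GraFit analysis used here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical relative-activity data over the pH interval studied.
pH = np.array([5.0, 5.4, 5.8, 6.2, 6.6, 7.0, 7.4, 7.8, 8.2, 8.6, 9.0, 9.3])
act = np.array([0.25, 0.45, 0.70, 0.88, 0.97, 1.00,
                0.99, 0.96, 0.88, 0.72, 0.50, 0.33])

def bell(pH, vmax, pKa, pKb):
    # Activity falls off below pKa (acidic side) and above pKb (basic side).
    return vmax / (1.0 + 10.0 ** (pKa - pH) + 10.0 ** (pH - pKb))

popt, _ = curve_fit(bell, pH, act, p0=(1.0, 5.5, 9.0))
print("vmax = %.2f, pKa = %.2f, pKb = %.2f" % tuple(popt))
```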
The observed identity of the pK a and pK b values of SADA and LADA indicates that the rearrangement of the SADA tertiary structure upon its complexation with DPPIV-CD26 does not influence the role of the essential amino acids (acidic, Glu217 and Asp295, and basic, His238 (Wilson et al., 1991)) in the enzymatic activity.
In the last row of Table 5, the pH parameters of LADA-catalysed dipeptidyl peptidase activity are presented. The spontaneous depletion of the substrate restricted the investigation of this reaction in the pH region above 9. Nevertheless, an analysis of the obtained data with the GraFit software shows that the pH optimum of the DPP activity of LADA lies in a more alkaline and wider region than that of its ADA activity.
In conclusion, we can state that the detailed comparison of the kinetic parameters of the two molecular forms of ADA shows several significant differences between free SADA and that in a soluble complex with DPPIV-CD26 (LADA). Obviously, the kinetic characteristics of SADA (actually, the catalytic subunit of ADA) are modulated by the interaction with DPPIV-CD26 in favour of neutralization of toxic nucleosides (especially 2´-dAdo) in the extracellular medium, as reflected in the observed higher affinity of the substrates for LADA.

Table 5. pH parameters of the ADA and DPP activities of SADA and LADA.
Enzyme | Activity | pH optimum | pK a | pK b
SADA from lung | ADA | 6.5-8.1 | 5.9 ± 0.08 | 9.0 ± 0.11
LADA from blood serum | ADA | 6.5-7.8 | 5.7 ± 0.12 | 9.0 ± 0.12
LADA from kidney | ADA | 6.5-7.6 | 5.6 ± 0.12 | 8.4 ± 0.11
LADA from kidney | DPP | 7.0~9.0 | 5.9 ± 0.11 | -*
*The spontaneous depletion of the substrate restricted the investigation of this reaction in the pH region above 9.0.
Figure 1. Gel filtration on Sephadex G-200 med of the kidney cortex homogenate fraction eluted from DEAE-cellulose. The protein fractions eluted from DEAE-cellulose by buffer A containing 0.3 M KCl were pooled and subjected to gel filtration as described in Materials and Methods. The obtained fractions were analysed for protein content and for ADA and DPP activities. The diagrams represent: 1, protein concentration, absorbance at 280 nm; 2, adenosine deaminase activity, absorbance of the assay mixture at 625 nm; 3, dipeptidyl peptidase activity, absorbance of the released nitroaniline chromogenic group at 390 nm.
pK a and pK b values on the acidic and basic sides, respectively, were evaluated using GraFit software. Statistical analyses were performed with InStat software using Student's t-test. Values are means ± S.D. of three independent experiments performed in triplicate.
Table 2. Kinetic constants of LADA and SADA from different tissues.
Kinetic constants for the Ado and 2´-dAdo deamination reactions catalysed by LADA and SADA from different tissues were calculated with GraFit software using the Michaelis-Menten equation with the Lineweaver-Burk transformation. Statistical analyses were performed with InStat software. Significance was determined by Student's t-test. Data represent means of five independent experiments ± S.D.
*The significance of difference between mean values for two SADA and three LADA preparations
Table 3. Inhibition constants for adenosine- and EHNA-derivatives in the reaction of adenosine deamination catalysed by LADA and SADA from different tissues.
K i was determined using the graphical method of Dixon: the constant is equal to the abscissa of the intercept of linear plots of reciprocal velocity against inhibitor concentration at two concentrations of substrate. The plots were obtained with GraFit software using a linear regression equation. Statistical analyses were performed with InStat software. Significance was determined by Student's t-test. Values are means of five independent experiments ± S.D. The differences between K i values for 1-dEHNA and 3-dEHNA for SADA and LADA are statistically significant, P < 0.005. The differences between K i values for 1-dAdo, 3-dAdo and EHNA for the SADA and LADA isoenzymes are not statistically significant.
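The Dixon construction described above reduces to intersecting two straight lines in the (1/v, [I]) plane. A minimal sketch follows; the data points are invented for illustration.

```python
import numpy as np

# 1/v versus inhibitor concentration at two substrate concentrations;
# for a competitive inhibitor the lines intersect at [I] = -Ki.
I = np.array([0.0, 2.0, 4.0, 8.0])            # inhibitor, uM
inv_v_lowS = np.array([1.0, 1.6, 2.2, 3.4])   # 1/v at the lower [S]
inv_v_highS = np.array([0.6, 0.9, 1.2, 1.8])  # 1/v at the higher [S]

m1, c1 = np.polyfit(I, inv_v_lowS, 1)
m2, c2 = np.polyfit(I, inv_v_highS, 1)

I_cross = (c2 - c1) / (m1 - m2)               # abscissa of the intersection
print(f"Ki = {-I_cross:.2f} uM")              # -> about 2.67 uM here
```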
Table 4. Inhibition constants for adenosine- and EHNA-derivatives in the reaction of 2´-dAdo deamination catalysed by LADA and SADA from different tissues.
K i was determined as above (Table 3). Values are means of three independent experiments ± S.D. The differences between K i values for SADA and LADA are not statistically significant. | 2014-10-01T00:00:00.000Z | 2006-08-21T00:00:00.000 | {
"year": 2006,
"sha1": "e07b371227e4f68fa31b35043b6396be7c8ff290",
"oa_license": "CCBYSA",
"oa_url": "https://ojs.ptbioch.edu.pl/index.php/abp/article/download/3325/2383",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "cfc03d47b8fc4311cf5f6aa5679e2da48a5a8857",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
92810301 | pes2o/s2orc | v3-fos-license | Evaluation of a High Concentrate Omega-3 for Correcting the Omega-3 Fatty Acid Nutritional Deficiency in Non-alcoholic Fatty Liver Disease and Effects on Hepatic Steatosis (CONDIN)
This RCT investigated the safety and efficacy of MF4637, a high concentrate omega-3 fatty acid preparation, in correcting the omega-3 fatty acid nutritional deficiency in non-alcoholic fatty liver disease (NAFLD). Whether MF4637 could lower liver fat was evaluated in a subset of patients. 176 subjects with NAFLD were randomised to receive MF4637 (n=87) or placebo (n=89) for 24 weeks, in addition to following standard-of-care dietary guidelines. The omega-3 index, omega-6:omega-3 fatty acid ratio and quantitative measurements of red blood cell (RBC) eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) were determined at baseline and study completion. Magnetic resonance imaging of liver fat was conducted in a subset of patients. Administration of MF4637 for 24 weeks significantly increased the omega-3 index and absolute values of RBC EPA and DHA, and decreased the omega-6:omega-3 fatty acid ratio (p<0.0001). A significant reduction in liver fat content was reported in both groups. An inverse relationship between change in absolute RBC EPA+DHA and change in liver fat, AST and ALT was observed. Post-hoc analysis demonstrated a significant liver fat-lowering effect of MF4637 in a subset of patients with baseline fatty liver index score ≥ 40. In conclusion, MF4637 corrected the omega-3 fatty acid nutritional deficiency in NAFLD patients.
Introduction
Non-alcoholic fatty liver disease (NAFLD) is the presence of hepatic steatosis (> 5% liver fat) that is not related to significant alcohol consumption, hereditary disorders, viral infection or steatogenic medication [1]. Early NAFLD is typically reversible, but can develop in some 30% of cases into non-alcoholic steatohepatitis (NASH), presenting as hepatic steatosis with inflammation, ballooning and evidence of hepatocellular injury with or without fibrosis [1][2][3]. NAFLD is associated with metabolic risk factors such as obesity, diabetes and dyslipidaemia, and its prevalence has risen sharply in line with the rising rates of obesity and diabetes [1,4,5]. In Western countries, NAFLD is the leading cause of liver disease [6]. NAFLD is estimated to affect 20-30% of the general population, with the prevalence increasing to approximately 75% of patients with obesity or diabetes, and 90-95% in the morbidly obese [6][7][8][9]. The estimated prevalence of NASH is lower, but significant, at 2-3% of the general population and one-third of the morbidly obese [1,7].
Identification of NAFLD patients in a clinical setting is commonly prompted by raised liver enzymes, with hepatic steatosis then confirmed by ultrasound. More recently, more advanced techniques such as Fibroscan (Echosens) and lipidomic analysis (OWL) have become available at the specialist and general practitioner level. A validated algorithm for steatosis risk, called the fatty liver index (FLI), uses clinically available measurements to predict steatosis and to identify populations at risk of developing further liver-related morbidities [10].
NAFLD is associated with increased morbidity and mortality, particularly from cardiovascular disease (CVD) [1,2,11,12]. This is due to the fatty liver becoming insulin resistant and increasing its production of glucose and very low density lipoproteins (VLDLs) [13]. The resulting hyperglycaemia, hypertriglyceridaemia and lowered HDL-cholesterol are all risk factors for the development of CVD [1,2,11,13]. NASH also has the potential to develop into liver cirrhosis, from which 30-40% of patients will die of liver-related causes such as liver failure or hepatocellular carcinoma within a ten-year period [2,3,14]. Despite the increasing prevalence of NAFLD and its associated morbidity and mortality, there is currently no approved drug therapy for its treatment. The World Gastroenterology Organisation (WGO) guidelines state that in addition to pharmacological management of comorbidities such as diabetes and dyslipidaemia, weight loss and increased physical exercise are the most effective ways to reduce liver fat [6]. However, such lifestyle changes are typically difficult to sustain in the long term, creating a significant unmet need for this condition.
There is mounting evidence that long-chain polyunsaturated fatty acids (PUFAs), especially the marine omega-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), are depleted in patients with NAFLD [15][16][17][18][19]. This may be due to several factors, including impairment of the hepatic metabolic pathways responsible for the synthesis of EPA and DHA from their precursors, increased utilisation due to lipid peroxidation caused by raised oxidative stress in NAFLD, as well as reduced dietary intake [15,16,20,21]. Increased levels of omega-3 PUFAs and reduction of the omega-6:omega-3 ratio enable a shift in hepatic fat metabolism away from de novo lipogenesis and towards fatty acid oxidation and secretion, thereby potentially reducing steatosis in NAFLD [17,[22][23][24][25][26]. In support of this, a recently published systematic review and meta-analysis of omega-3 fatty acids in NAFLD patients demonstrates a statistically and clinically significant, consistent reduction in steatosis with approximately 3 g EPA plus DHA daily [27]. This level of EPA and DHA is difficult to obtain from a normal diet, where a daily fatty fish meal would be required. Daily intake of fatty fish would increase the risk of exposure to pollutants (such as mercury and polychlorinated biphenyls (PCBs)) above the recommended tolerable daily intake [28]. The pollutant content of omega-3 PUFA supplements should likewise be considered. Overall, existing data demonstrate that NAFLD patients have reduced levels of EPA and DHA compared to healthy individuals and that there are beneficial effects on liver steatosis from an increased intake of omega-3 PUFAs at approximately 3 g/day.
The purpose of this study was to investigate the safety and efficacy of MF4637, a medical food comprising concentrated long-chain omega-3 fatty acids, in correcting the omega-3 fatty acid nutritional deficiency present in NAFLD. The hypotheses being tested are that MF4637 will significantly improve the omega-3 index (EPA+DHA in red blood cells (RBCs)) and lower the RBC omega-6:omega-3 fatty acid ratio in patients with NAFLD. The potential for MF4637 to reduce hepatic fat content was evaluated using MRI-PDFF in a subset of patients. Additional post-hoc stratification was performed using the FLI.
Study Design
This was a randomized double-blind placebo-controlled repeated-dose study conducted at 21 investigative sites across the U.S. All procedures involving human participants were approved by Quorum Review IRB, Seattle. All participants provided written informed consent. The trial is registered with ID NCT02923804 at the U.S. National Library of Medicine's ClinicalTrials.gov website.
Participants were recruited based on a suspected diagnosis of NAFLD, confirmed either by diagnostic imaging performed within the previous year, or by abdominal ultrasound performed at screening. Eligibility was determined, after the informed consent process, at screening, which included review of medical history and current medications, measurement of vital signs (height, weight, blood pressure, heart rate and BMI), haemoglobin A1c (HbA1c), thyroid-stimulating hormone (TSH) and liver function testing. Following written informed consent, each participant was centrally randomised 1:1, stratifying by site, marine omega-3 fatty acid intake (≥ 250 mg/day and < 250 mg/day), diabetes and statin use, to receive either MF4637 or a placebo (olive oil) for 24 weeks. Randomisation numbers corresponding to the predetermined intervention were assigned in a sequential manner to each subject via an Interactive Voice/Web Response System. One hundred and seventy-six subjects were subsequently randomised to receive either MF4637 (n=87) or placebo (n=89) (Figure 1). The medical food MF4637 was supplied by BASF AS as soft gel capsules, with each 1 g capsule containing marine-sourced EPA and DHA as ethyl esters (460 mg and 380 mg, respectively). Placebo capsules were identical in size and appearance to MF4637 and contained 1 g of olive oil. The investigational products were administered in a double-blinded fashion. Study participants were required to take three capsules per day of either MF4637 or placebo with food for 24 weeks. Thus, daily intakes of EPA and DHA in the MF4637 group were 1.38 g and 1.14 g, respectively. Compliance was measured via subject interview and unused capsule counts.
In addition to the investigational product, study participants were advised to reduce normal caloric intake as recommended by the American Association for the Study of Liver Disease (AASLD) standard-of-care guidelines for NAFLD [1], and to maintain stable physical activity levels throughout the study. To provide the American Heart Association (AHA) recommended dietary intake of omega-3 fatty acids [29], participants were required to consume two meals of omega-3 rich fish per week (from a choice of salmon, herring, whitefish, sardines, bluefish and trout) and to reduce foods rich in trans- and omega-6 fatty acids (fried foods and snacks, fast foods, bacon, turkey bacon, hams, nuts, peanut butter, sesame seeds, sunflower seeds, pumpkin seeds, vegetable oils and margarine (including soybean oil and corn oil), mayonnaise and salad dressing). Dietary intake was monitored regularly throughout the study via participants' food diaries.
At baseline (week 0), week 12 and study completion (week 24), weight, blood pressure, heart rate and BMI were recorded and blood samples were collected to assess efficacy (omega-3 index, RBC omega-6:omega-3 ratio and quantitative measurements of RBC EPA and DHA) and safety (standard clinical biochemistry and haematology panels including liver function tests). Adverse events were monitored throughout the study. MRI assessments of liver fat were performed at baseline (week 0) and study completion (week 24).
The primary endpoint of the trial was to test the effect of administration of concentrated EPA and DHA on the omega-3 index (RBC EPA + DHA). Secondary endpoints included quantitative measurement of RBC EPA and DHA and assessment of the RBC omega-6:omega-3 ratio. The potential for MF4637 to reduce hepatic fat content was evaluated as an exploratory outcome.
Inclusion and Exclusion Criteria
Selection of the NAFLD study population aimed to include subjects with hepatic steatosis, excluding those with a previous diagnosis of NASH indicating more advanced liver disease. Inclusion criteria included age ≥ 18 years and a recent (< 1 year) suspected clinical diagnosis of NAFLD including an imaging modality (e.g., ultrasound). If the diagnosis was > 1 year old or an imaging test was absent, an abdominal ultrasound was performed at screening to confirm the diagnosis of NAFLD. Other inclusion criteria included not smoking, BMI between 18 and 39.9 kg/m², and, if on statin medication, a history of > 1 month on a stable dose. Exclusion criteria included a diagnosis of NASH; bilirubin > 2 times the upper limit of normal; other causes of liver inflammation, i.e. hepatitis A, B or C, HIV, cirrhosis, Wilson's disease, autoimmune hepatitis, haemochromatosis, alcoholic steatohepatitis, pancreatitis, or prescription medications known to cause liver toxicity or damage; history of bariatric surgery; significant weight loss (> 5% body weight) or rapid weight loss (> 1.6 kg/week) within six months of screening; cancer; significant cardiovascular disease including untreated hypertension; and significant gastrointestinal, renal, pulmonary, hepatic, biliary or endocrine disease. Furthermore, subjects were excluded if there was significant alcohol consumption; use of any medicine or dietary supplement that may affect NAFLD or lipid metabolism (including omega-3 supplements); use of anti-coagulants; pregnancy/breastfeeding; or sensitivity to any of the study medications or excipients.
Quantitative Measurement of RBC EPA and DHA
Concentrations of total RBC EPA and DHA were measured quantitatively using UPLC-MS/MS. Blood samples were collected into EDTA vacutainer tubes and centrifuged, and plasma and white blood cells (buffy coat) were removed. The remaining RBCs were washed three times with saline, and 0.5 mL of the washed packed RBCs was added to 1 mL of distilled water, to which 150 µL of EDTA/ascorbic acid was added. The sample was mixed well and stored at -80 ºC until analysed.
For the quantitative analytical methodology, a specific amount of standard curve solutions, matrix blanks, quality control samples and thawed study samples was acidified with HCl, and internal standard was added to all tubes except for blanks. Samples were mixed well, incubated at 100 ºC for 45 minutes, and then cooled to room temperature. Extraction solvent (hexane:dichloromethane:2-propanol, in a 20:10:1 ratio) was added to each tube, which was mixed well and centrifuged. Capped tubes were submerged in a dry ice-acetone bath to freeze the aqueous layer, and the organic layer in each tube was transferred to another tube. This was evaporated to dryness at 45 ºC, then reconstitution solution was added and mixed thoroughly, and the reconstituted samples were transferred into LC-MS vials for injection. The UPLC-MS/MS system consisted of an Acquity Tandem Quadrupole detector, auto-sample manager, binary solvent manager, column manager and Empower 3 data acquisition system. The UPLC column for optimum chromatographic conditions was an Acquity UPLC BEH, C18, 2.1 × 50 mm, 1.7 µm, assembled with a Waters in-line pre-column filter. The mobile phase was a 20:80 mixture of 5 mM ammonium acetate in water and acetonitrile. The injection volume was 5.0 mL, the flow rate was 0.30 mL/min, the run time was approximately 2.5 minutes, the column temperature was ambient and the sample temperature was 5 ºC ± 2 ºC. From the resulting chromatograms, EPA and DHA in each sample were calculated by calibration curve using the peak area response ratio as the response function. The quantitative method provided a range of 1 to 500 µg/mL for EPA, and 5 to 500 µg/mL for DHA.
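The calibration-curve quantitation described above can be summarised in a few lines: the analyte/internal-standard peak-area response ratio is regressed on the standard concentrations, and sample ratios are then inverted through the fit. All numbers below are invented for illustration.

```python
import numpy as np

std_conc = np.array([1.0, 5.0, 25.0, 100.0, 250.0, 500.0])     # ug/mL standards
std_ratio = np.array([0.012, 0.061, 0.304, 1.21, 3.04, 6.05])  # area ratios

slope, intercept = np.polyfit(std_conc, std_ratio, 1)  # linear calibration

sample_ratio = np.array([0.35, 0.52])                  # measured area ratios
sample_conc = (sample_ratio - intercept) / slope
print(np.round(sample_conc, 1))                        # estimated ug/mL
```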
Qualitative Measurement of RBC EPA and DHA
Qualitative measurement of EPA and DHA involved measuring the fatty acid profile of RBCs (consisting of a total of 30 fatty acids) using a gas chromatograph system with an autosampler and FID detector. Blood sample collection and processing were identical to those for the quantitative analysis of RBC EPA and DHA. A specified amount (2 mL) of BF3-MeOH was added to thawed RBC samples, mixed, flushed with N2 gas and incubated for 10 minutes at 100 ºC. After cooling, 250 µL of purified water and 750 µL of heptane were added to each tube and mixed well. Tubes were centrifuged at 4,000 rpm for 5 minutes and the top heptane layer was transferred to another tube and washed with purified water. The top (heptane) layer was transferred to another tube and evaporated to dryness under a stream of N2 gas at 50 ºC. Each tube was reconstituted with 10 µL of heptane, transferred to a GC vial and flushed with N2 in preparation for injection. For this methodology, the column was a DB Wax, 30 m × 0.25 mm ID, 0.15 µm film, or equivalent. The chromatographic conditions were a GC (Varian 3900) with FID detector and helium carrier gas, with an initial oven temperature of 170 ºC, increased at 3 ºC/min to 200 ºC, held for 3 min, increased at 2.5 ºC/min to 225 ºC, held for 5 min, then increased at 20 ºC/min to 245 ºC and held for 12 min. An external standard was injected three times, then reinjected after every 10 sample injections. From the three consecutive standard injections, an average response factor (RF) for each individual fatty acid was calculated, using the peak area of each individual fatty acid detected.
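The response-factor logic amounts to averaging area-per-amount over the three standard injections and using it to convert sample peak areas into amounts, from which each fatty acid's share of the profile follows. The sketch below illustrates this with invented numbers for a three-fatty-acid toy profile.

```python
import numpy as np

std_areas = np.array([[120.0, 80.0, 40.0],   # standard injection 1: FA1-FA3
                      [118.0, 82.0, 41.0],   # standard injection 2
                      [122.0, 79.0, 39.0]])  # standard injection 3
std_amounts = np.array([10.0, 10.0, 10.0])   # known amount of each FA

rf = (std_areas / std_amounts).mean(axis=0)  # average response factor per FA

sample_areas = np.array([90.0, 30.0, 12.0])
amounts = sample_areas / rf
percent = 100.0 * amounts / amounts.sum()
print(np.round(percent, 1))                  # % of total fatty acids
```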
Assessment of Change in Liver Fat
Assessment of the change in hepatic fat fraction was performed via magnetic resonance imaging (MRI). For each subject, the MRI protocol included a localization sequence and a 2-dimensional six-echo spoiled gradient-recalled-echo breath-hold sequence. A three-plane localizer followed by a coronal breath-hold localizer was recommended for accurate axial slice prescription. If the scanner was not capable of acquiring six echoes simultaneously, multiple acquisitions with single-echo sequences were performed. From either the six-echo or six single-echo MRI series, the radiologist identified a circular region of interest (ROI) within each of the nine Couinaud segments of the liver using the first echo of the series. The radiologist then identified regions with an approximately 2.5 cm diameter in each of the nine Couinaud segments, except for segment 1 (the caudate), in which a region with a diameter of approximately 1.5 cm was identified. The ROI in the caudate was smaller since the caudate is generally too small to identify a region larger than 1.5 cm. The radiologist excluded blood vessels and the periphery of the liver when identifying the ROIs. A fat fraction map was calculated from the six-echo sequence using a multi-interference technique, which took into account the contribution from the individual resonances in the fat spectrum to the observed MRI signal to obtain an accurate estimate of fat. The whole-liver hepatic fat fraction (HFF) was expressed as the mean fat fraction across all nine user-defined ROIs in the liver. The fatty liver index (FLI), an algorithm used to predict the presence of hepatic steatosis based on measured values for serum triglycerides (in mg/dL), serum GGT (in IU/L), BMI (in kg/m²) and waist circumference (in cm), was calculated using the following equation [10]: FLI = e^L / (1 + e^L) × 100, where L = 0.953 × loge(triglycerides) + 0.139 × BMI + 0.718 × loge(GGT) + 0.053 × waist circumference - 15.745.
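The FLI equation above translates directly into code. The sketch below is a straightforward transcription of the published formula [10]; the example input values are invented.

```python
from math import exp, log

def fatty_liver_index(tg_mg_dl, bmi, ggt_iu_l, waist_cm):
    """Fatty liver index of Bedogni et al. [10]."""
    L = (0.953 * log(tg_mg_dl) + 0.139 * bmi
         + 0.718 * log(ggt_iu_l) + 0.053 * waist_cm - 15.745)
    return 100.0 * exp(L) / (1.0 + exp(L))

# Example: an obese subject with raised triglycerides and GGT.
# Scores below 30 are predictive of a liver without steatosis [10].
print(round(fatty_liver_index(tg_mg_dl=150, bmi=30, ggt_iu_l=40, waist_cm=100), 1))
```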
Sample Size Calculations
It has been reported that the omega-3 index is 0.5% higher in healthy subjects compared to those with some form of liver dysfunction, leading to the assumption that a minimum increase of 0.5% in RBC EPA+DHA may be necessary to achieve nutritional sufficiency in NAFLD patients [30,31]. A conservative between-intervention difference for RBC EPA+DHA, measured as a change-from-baseline score between standard of care and standard of care plus MF4637, was set at 1.0% [32], with a standard deviation of 2.0, a correlation of 0.5 and equal allocation of subjects across the two intervention groups, yielding 64 subjects per intervention arm for a total of 128. To address the uncertainty of the estimates of intervention effectiveness from the emerging literature, an adaptive blinded mid-course sample size re-estimation procedure was originally planned for the point at which approximately 30% of the subjects had completed one post-baseline visit (i.e. to Week 12) and had provided the RBC EPA+DHA results (for baseline and Week 12). The sample size re-estimation was performed by one unblinded study statistician. When 30% of the subjects had provided the Week 24 RBC DHA and EPA data, and the data were considered "lockable" by data management, the data file was exported to a limited-access subdirectory, the effect size (change from baseline in plasma level) was estimated and the conditional power (CP) was calculated. Because the CP was between 41% and 90%, the number of subjects per intervention arm was increased, in order to recover the targeted power of 90%. Given that the interim analysis was performed at 30% of the initial sample size, and the targeted power was 90%, the minimum conditional power cut-off value (CP min) was set at 41%. The procedure was performed, as per the Charter, and the recommendation was to increase the sample size to 75 subjects per intervention arm (i.e. 150 subjects). The actual number of participants recruited to the study was 176.
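For orientation, the initial 64-per-arm figure is consistent with a standard two-sample normal-approximation calculation once the correlation of 0.5 is folded into the change-score standard deviation. Assuming a two-sided α of 0.05 and 80% power (neither is stated explicitly in the text), the sketch below reproduces a value of about 63-64 per arm.

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_arm(delta, sd_change, alpha=0.05, power=0.80):
    """Two-sample sample size per arm, normal approximation."""
    z_a = norm.ppf(1.0 - alpha / 2.0)
    z_b = norm.ppf(power)
    return ceil(2.0 * (z_a + z_b) ** 2 * sd_change ** 2 / delta ** 2)

# SD of the change score when the raw SD is 2.0 and the
# baseline/follow-up correlation is 0.5: sd * sqrt(2 * (1 - rho)) = 2.0.
sd_change = 2.0 * sqrt(2.0 * (1.0 - 0.5))
print(n_per_arm(delta=1.0, sd_change=sd_change))  # -> 63
```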
Statistical Analysis
The primary outcome (RBC EPA + DHA) was analysed using a repeated analysis of covariance (ANCOVA) with the stratification factors as covariates, to compare the changes in the combined EPA + DHA outcome between the two intervention groups (MF4637 group and placebo) across the study. Additional outcomes were analysed using the same ANCOVA model applied to the primary outcome: RBC EPA, RBC DHA and the omega-6:omega-3 ratio.
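A minimal sketch of such an ANCOVA, using the statsmodels formula interface, is shown below. The data frame and column names are hypothetical stand-ins for the trial database; the stratification factors mirror those listed in the Study Design section.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-subject table: change from baseline in RBC EPA+DHA,
# intervention arm, and the randomisation stratification factors.
df = pd.read_csv("condin_outcomes.csv")  # hypothetical file name

model = smf.ols(
    "delta_epa_dha ~ C(group) + C(site) + C(omega3_intake)"
    " + C(diabetes) + C(statin_use)",
    data=df,
).fit()
print(model.summary())  # the C(group) coefficient is the adjusted group effect
```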
Results
Of the 176 participants that underwent randomisation, 154 completed the study (75 in the MF4637 group and 79 in the placebo group) (Figure 1). Of those participants randomised, 167 (81 in the MF4637 group and 86 in the placebo group) were included in the modified ITT primary outcome analysis. Reasons for exclusion from the modified ITT population included refusal to continue intervention or observations (n=7) and significant non-compliance assessed by capsule counts and diaries (n=2).
Baseline anthropometric and biochemical variables of participants randomised to the placebo and MF4637 groups are detailed in Table 1. Of note is the higher mean fasting insulin concentration in the placebo group, which, together with the comparable mean fasting glucose concentration, suggests that the placebo group was likely more insulin resistant than the MF4637 group at study entry.
Footnotes to Table 1: 1 Data for n=87 participants; 2 Data for n=85 participants; 3 Data for n=86 participants; 4 Data for n=81 participants.
Table 2 details the main anthropometric and biochemical variables for participants randomised to the placebo and MF4637 groups at baseline and after 24 weeks of intervention.
Compliance regarding the investigational products was 89% in the MF4637 group and 91% in the placebo group. There were no serious adverse events related to study interventions reported during the 24-week study. Mild incidences of eructation (n=1), dysgeusia (n=1), abdominal bloating (n=1) and increased blood triglycerides (n=1), together with a moderate case of diarrhoea (n=1), were reported in the MF4637 group and suspected to be related. One participant in the MF4637 group and two participants in the placebo group discontinued the study due to adverse events.
Footnotes to Table 2: 1 Data for n=87 participants; 2 Data for n=85 participants; 3 Data for n=86 participants; 4 Data for n=81 participants.
Effect of Intervention on Omega-3 Index
The baseline omega-3 index was similar for the placebo and intervention groups. Compared to placebo, the mean omega-3 index increased significantly from 4.8% to 8.0% at study completion in the MF4637 group, representing a mean 3.2% change from baseline (P<0.0001) (Table 3). In the placebo group, the omega-3 index increased slightly from 4.9% at baseline to 5.3% at study completion, representing a mean change of 0.4%. Regression analysis of the data for participants in the MF4637 group suggests that the change in omega-3 index was inversely related to the baseline omega-3 index, with lower baseline values resulting in greater increases by the end of the 24-week intervention.
Footnote to Table 3: 1 P value is for the mean percentage change from baseline to 24 weeks between placebo and MF4637 groups using ANCOVA. Values are expressed as mean (SD). Abbreviations: RBC, red blood cell; DHA, docosahexaenoic acid; EPA, eicosapentaenoic acid.
Effect of Intervention on RBC EPA + DHA, EPA and DHA Values
Absolute RBC EPA + DHA increased on average from 29.6 µg/mL at baseline to 52.9 µg/mL at study completion (representing a significant increase of 21.2 µg/mL) in the MF4637 group, compared to a 1.2 µg/mL increase from baseline in the placebo group (significance between groups, P<0.0001) (Table 3). In terms of the absolute values of EPA and DHA separately, RBC EPA increased by a significant 7.1 µg/mL to 10.6 µg/mL at study completion in the MF4637 group versus a 0.4 µg/mL increase in the placebo group (P<0.0001) (Table 3). RBC DHA increased by more than EPA, with a mean 14.1 µg/mL increase to 42.4 µg/mL in the MF4637 group versus a 0.7 µg/mL increase in the placebo group (P<0.0001) (Table 3).
Regarding the percentages of individual EPA and DHA as a proportion of total RBC fatty acids, both parameters increased significantly in the MF4637 group compared to placebo (Table 3). Specifically, RBC EPA as a percentage of total fatty acids increased by a significant 0.9% to 1.4% in the MF4637 group compared to a 0.002% increase in the placebo group (P<0.0001). RBC DHA as a percentage of total fatty acids increased by a greater proportion than EPA, resulting in a 2.3% increase to 6.6% in the MF4637 group versus a 0.4% increase in the placebo group (P<0.0001).
Effect of Intervention on RBC Omega-6:Omega-3 Ratio
The RBC omega-6:omega-3 ratio was not different between the two groups at baseline. Following administration of MF4637 for 24 weeks, the RBC omega-6:omega-3 ratio decreased by a mean of 1.6, from 4.9 at baseline to 3.3 at study completion, compared to a 0.2 decrease to 4.7 in the placebo group (P<0.0001) (Table 3).
Effect of Intervention on Liver Fat
In the modified ITT analysis of liver fat content, 120 participants (60 in each trial arm) completed both the baseline and end-of-study MRI assessment. In this population, both the MF4637 and placebo groups demonstrated a decrease in liver fat percentage (26% and 28%, respectively) (Table 4). As such, there was no statistically significant difference in the decrease in liver fat between the groups.
Relationship Between RBC EPA + DHA Enrichment and Liver Fat Content
Regression analysis of the data by intervention group suggests that the change from baseline in liver fat percentage was inversely related to the change in absolute RBC EPA + DHA values in the MF4637 group. Thus, the largest decreases in liver fat were observed in participants with the greatest increases in absolute RBC EPA + DHA (Figure 2). Hence, whilst there was no significant difference between MF4637 and placebo with regard to the overall reduction of liver fat, there was an association between increasing RBC EPA+DHA enrichment and decreasing percentage liver fat content.
Relationship Between Baseline Fatty Liver Index (FLI) and Change in Liver Fat Content
Post-hoc analysis of the MF4637 group utilising ANCOVA with baseline fatty liver index (FLI) as a covariate found that in those patients with higher baseline FLI scores (indicative of more probable fatty liver), there was a greater reduction in liver fat compared to placebo (Table 5). Following 24 weeks of intervention with MF4637, patients with baseline FLI ≥ 40 (n=17) had a placebo-corrected, statistically significant 44% relative decrease in liver fat content (P=0.009). This equates to a 7.45% absolute decrease in placebo-corrected liver fat content for the MF4637 group.
Footnotes to Table 5: Data for n=89 participants (n=43 placebo; n=46 MF4637); 3 Data for n=28 participants (n=16 placebo; n=12 MF4637); 4 Data for n=17 participants (n=12 placebo; n=5 MF4637). Values expressed as mean (SD).
Relationship Between RBC EPA + DHA Enrichment and Liver Enzymes AST and ALT
At study entry, the mean baseline concentrations of the liver enzymes AST and ALT were within the normal range for both the placebo and MF4637 groups. Similar to the relationship between RBC EPA + DHA enrichment and change in liver fat discussed above, an inverse association (not reaching statistical significance) was also found between the change in absolute RBC EPA + DHA and the change in the concentrations of the liver enzymes AST and ALT in the MF4637 group. Thus, with an increasing change in absolute RBC EPA + DHA, there were greater decreases in both AST and ALT concentrations. These associations were seen despite the low levels of liver enzymes.
Effect of Intervention on Plasma Triglycerides
At study completion, plasma triglycerides (Table 2) had decreased by a statistically significant 18% from baseline values in the MF4637 group (P=0.0008), compared to a 7% reduction in triglycerides from baseline values in the placebo group (P=0.52; for the placebo-adjusted effect of MF4637, P=0.053). The baseline triglyceride levels were only moderately elevated relative to normal values and would clinically be defined as "borderline high".
Discussion
This study demonstrates that intervention with MF4637 for 24 weeks significantly raises the omega-3 index and decreases the omega-6:omega-3 fatty acid ratio in adults with NAFLD. Furthermore, the EPA and DHA enrichment achieved with MF4637 was significantly greater than that obtained by dietary recommendation alone. This is of importance, considering the depleted omega-3 status of NAFLD patients [15][16][17][18][19] and the current lack of therapeutic options for the treatment of NAFLD other than lifestyle recommendation [6]. Furthermore, the metabolic efficacy of MF4637 was confirmed through its significant lowering of plasma triglyceride levels compared to baseline levels in the MF4637 group.
When assessing the mITT population from whom baseline and post-intervention MRI-PDFF data were available, intervention with both MF4637 and placebo caused a reduction in hepatic steatosis that was significant within each of the groups. There may be several reasons for the liver fat-lowering effect observed in the placebo group. All study participants were required to follow the standard-of-care dietary recommendations for the management of NAFLD. This included adherence to a diet with reduced caloric intake, increased omega-3 and reduced omega-6 and trans-fatty acid consumption. Hence, participants in the placebo group may have achieved a decrease in liver fat percentage from the effects of these dietary recommendations alone, particularly from increased omega-3 fatty acid intake from the diet. However, it should be remembered that the MF4637 group had a greater increase in omega-3 index than placebo, suggesting a minimal influence from dietary changes. Similar findings of liver fat improvement in the placebo group have been reported in several other studies. These studies propose that MRI-PDFF volatility in early NAFLD subjects may contribute to data variability [33].
Of general note in the current study is the relatively low baseline liver fat by MRI-PDFF (mean 17% and 14% in the placebo and intervention groups, respectively), which together with relatively low baseline AST and ALT levels indicates an early stage of NAFLD in this study population. Early stages of NAFLD are characterised by changeable liver fat content, which can be affected by factors such as high-fat meals. This is in contrast to advanced NAFLD, in which the liver fat is likely to be more stable and less influenced by such factors.
A further confounding factor may be the high number (over one-third) of diabetic participants, and the number of subjects taking metformin and thiazolidinediones during the trial. From the mean baseline fasting insulin and glucose concentrations, the placebo group is also likely to have been more insulin resistant than the MF4637 group at study entry. These factors may have had some effect on liver fat metabolism. Indeed, on stratification of the data by diabetes status, those with diabetes had a greater reduction in liver fat from baseline (mean decrease of 4.9% in the MF4637 group versus a 6.3% decrease in the placebo group) compared to those without diabetes (mean decrease of 1.6% in the MF4637 group versus a 3.3% decrease in the placebo group). To date, very few trials have been conducted in the diabetic NAFLD population.
A number of individual studies and several meta-analyses have reported favourable outcomes with omega-3 fatty acid intervention in patients with NAFLD [27,43,44]. Despite a high degree of heterogeneity in patient population, study duration, and dose and form of omega-3 fatty acids, a recent meta-analysis concluded that omega-3 fatty acids are associated with significant improvements in liver fat content and the liver enzymes ALT and GGT when taking approximately 3 g/d of EPA and DHA [27]. The positive effect of omega-3 fatty acids on liver fat was also confirmed in an earlier meta-analysis [44]. Surprisingly, only four of the eight trials performed to date included some form of measurement of EPA and DHA enrichment following intervention [36,38,39,42]. A strength of the current study is the measurement of both the omega-3 index and the omega-6:omega-3 fatty acid ratio, as well as the quantification of individual EPA and DHA in RBCs at baseline and study completion. This has enabled additional regression analyses to be performed, which suggest an inverse relationship between change in absolute RBC EPA+DHA and change in liver fat content and liver enzyme concentrations. Similar findings were reported in a study of high-dose omega-3 in NAFLD patients, where beneficial effects on liver fat content correlated with the DHA content of RBCs [39]. Another strength of the current study was the use of MRI-PDFF to accurately assess change in liver fat content, which is the most accurate assessment method besides highly invasive liver biopsy [45].
A limitation of this study was the finding of a relatively low level of hepatic steatosis in study participants, which restricted the potential for more significant effects to be observed on liver-related outcomes. Additionally, it may be speculated that the time window of up to 1 year allowed for a prior diagnosis may have permitted significant development or regression of the disease by the time of recruitment.
Post-hoc use of the FLI to stratify patients showed an association between higher FLI scores and a greater decrease in hepatic fat in the MF4637 group. FLI scores below 30 are predictive of a liver without steatosis [10]. The high number of subjects with FLI < 30 confirms that this study recruited a relatively healthy population. However, highly statistically significant improvements in hepatic fat content were seen in those with a baseline FLI > 40, suggesting that this patient group can receive beneficial effects of intervention compared to placebo. Such a use of the FLI is in accordance with the aims of its developers, who propose that the "potential clinical uses of FLI include the selection of subjects to be referred for ultrasonography and the identification of [NAFLD] patients for intensified lifestyle counselling" [10]. The FLI score is composed, in part, of measurements of triglycerides and the liver enzyme GGT. An increased triglyceride content of the liver is the cause of steatosis, and raised liver enzymes are a consequence of liver damage. Plasma triglycerides are sensitive to omega-3 fatty acid intervention. In a meta-analysis, omega-3 fatty acids were shown to decrease liver enzymes (in particular GGT) in NAFLD patients, providing evidence that omega-3 intervention has a beneficial effect on liver cell physiology [27]. This places the FLI as a potentially valuable tool for identifying patients responsive to MF4637 management.
Figure 1: CONSORT flow chart of participant flow.
Figure 2: Relationship between change in absolute RBC EPA + DHA and change in liver fat.
Table 1: Baseline Anthropometric and Biochemical Variables of Participants Randomised to Placebo and MF4637 Groups
Table 2: Anthropometric and Biochemical Variables of Participants Randomised to Placebo and MF4637 Groups at Baseline and Study Completion
Table 3: RBC Fatty Acid Content at Baseline and After 12- and 24-Week Intervention with Placebo or MF4637
Table 4: MRI Liver Fat Percentage at Baseline and After 24-Week Intervention with Placebo or MF4637
Table 5: Change in MRI Liver Fat Percentage After 24-Week Intervention with MF4637 Stratified by Baseline FLI Score. 1 P value is for the mean percentage change from baseline to 24 weeks (placebo corrected) using ANCOVA. | 2018-07-25T21:17:06.917Z | 2018-07-13T00:00:00.000 | {
"year": 2018,
"sha1": "998caf5df2c1068c767c72d8942bad6edf0cd3cb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/10/8/1126/pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a816f8321fb677e3e386e3a6181252b6a0f5e11b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
264146851 | pes2o/s2orc | v3-fos-license | Two-loop master integrals for a planar topology contributing to $pp \rightarrow t\bar{t}j$
We report on recent progress on the QCD corrections to top quark pair plus jet production. In particular, we discuss a recent computation of the two-loop master integrals associated with a two-loop five-point pentagon-box integral configuration with one internal massive propagator, which contributes to top quark pair production in association with a jet in the QCD planar limit.
Introduction
The Large Hadron Collider (LHC) is entering the high-precision era with the High-Luminosity upgrade (HL-LHC). This project will enable the experimental collaborations to measure many interesting observables at percent-level precision. In order to be able to compare the experimental measurements with theoretical predictions, it is mandatory to achieve a theoretical uncertainty at the same level as the experimental one. One of the ingredients needed to achieve this goal is next-to-next-to-leading order (NNLO) QCD corrections. While a lot of progress has been made recently in this framework, NNLO QCD corrections are still not available for all of the most interesting observables at the LHC. One of the observables for which NNLO QCD corrections are yet to be obtained is top quark pair production in association with a jet. As the top quark is the heaviest particle in the Standard Model (SM) of particle physics, it has many important implications for the nature of the fundamental forces. In particular, many properties of the SM are sensitive to the value of the top quark mass, such as, for example, the stability of the SM vacuum, which makes its precision measurement a high priority at the LHC. The standard process exploited to measure the top quark mass at the LHC is top quark pair production. This process is known with very high precision both theoretically and experimentally [1,2]. However, it has recently been argued that top quark pair production in association with a jet is even more sensitive to the top quark mass [3][4][5]. The state-of-the-art for the theoretical predictions of this process is represented by the next-to-leading order (NLO) QCD corrections [6,7], along with complete decay information and interfaces with a parton shower [8][9][10][11][12]. However, in order to match the experimental precision (see for example [13,14]), next-to-next-to-leading order (NNLO) corrections are required.
In order to be able to perform a full NNLO prediction for this observable, several computational difficulties have to be overcome. One of the major obstacles is the computation of the required two-loop scattering amplitudes. Recently, great progress has been made in the calculation of scattering amplitudes for 2 → 3 processes [15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34], which has led to a number of NNLO QCD theoretical predictions [35][36][37][38][39][40]. Yet, the amplitudes necessary to perform an NNLO theoretical prediction for top quark pair plus jet production at the LHC represent a substantial step forward with respect to the current state-of-the-art. Indeed, the top quark mass, which appears in the internal propagators, is responsible for a significant growth in the complexity of the computation. This feature affects both the algebraic complexity of the amplitude reconstruction and the analytic complexity of the Feynman integrals.
In this context, I report on the recent progress made in the computation of two-loop Feynman integrals relevant for the NNLO QCD corrections to $pp \rightarrow t\bar{t}j$ [41]. This project builds upon previous work where the authors computed the one-loop helicity amplitudes expanded up to $O(\epsilon^2)$ in the dimensional regulator [42], which are needed for the NNLO corrections. In [41] the authors studied the master integrals associated with a five-point pentagon-box topology with one internal massive propagator, which contributes to top-quark pair production in association with a jet in the leading color QCD planar limit. The computation represents a step forward in complexity with respect to the five-point massless [15,17,[43][44][45][46] and one off-shell external leg cases [47][48][49][50][51].
The master integrals have been computed exploiting the differential equation method [52,53]. The system of differential equations has been written with respect to a canonical basis of master integrals [54]. A major bottleneck for this computation is the solution of a large system of integration-by-parts (IBP) relations [55,56]. In order to overcome this issue, finite fields arithmetic [57][58][59], as implemented in the FiniteFlow library [59], has been employed. We obtained a semi-analytic solution for the master integrals through the generalised power series method [60][61][62], as described in [62] and implemented in the Mathematica package DiffExp [63]. In order to solve the system of differential equations semi-analytically, we used high-precision numerical boundary conditions obtained by means of the Mathematica package AMFlow [64], which implements the auxiliary mass flow method [65][66][67]. Finally, we also derived the analytic representation of the alphabet for the system of differential equations. Interestingly, the alphabet has the same analytic structure as in the five-point massless [15] and one-mass [47] cases.
The outcome of the work presented in [41], and summarised in the present proceedings, is two-fold. First, we obtained a solution for the master integrals under study which has the potential for phenomenological applications, as has been done for other processes [42,47,50,[68][69][70][71][72][73][74]. Moreover, the study of the analytic structure of the alphabet is a fundamental step towards achieving a complete analytic representation. As a consequence, the work presented in [41] represents a first step toward an analytic computation of the NNLO QCD corrections to top quark pair production in association with a jet in the QCD leading color planar limit.
Summary of the computation
We considered the pentagon-box Feynman integral topology in $d = 4 - 2\epsilon$ dimensions shown in figure 1. The associated integral family is defined by a set of propagators $D_1, \dots, D_8$ and irreducible numerators $D_9$, $D_{10}$, $D_{11}$ (see [41] for their explicit definitions and for the integration measure), with the integrals labelled by integer exponents $a_1, \dots, a_{11} \geq 0$, where $D_9$, $D_{10}$ and $D_{11}$ enter only as numerators. The momenta are considered outgoing from the graph and the external particles are on-shell, with the massless momenta squared vanishing and the top-quark momenta squared equal to $m_t^2$, the top quark squared mass. After performing IBP reduction [56,75], as implemented in the software LiteRed [76,77] and FiniteFlow [59], we found a total number of 88 MIs (see [41] for the complete list).
Figure 1: Black lines denote massless particles and red double-lines denote massive particles.
We wrote a system of differential equations for the MIs $\vec{I}(\vec{x}, \epsilon)$ in canonical form [54]: $d\,\vec{I}(\vec{x},\epsilon) = \epsilon\, dA(\vec{x})\, \vec{I}(\vec{x},\epsilon)$, where $d$ is the total differential with respect to the kinematic invariants, and the matrix $A(\vec{x})$ is a linear combination of logarithms, $A(\vec{x}) = \sum_{\ell} a_{\ell} \log w_{\ell}(\vec{x})$. The $a_{\ell}$ are matrices of rational numbers, and the alphabet $\{w_{\ell}(\vec{x})\}$ is made of algebraic functions of the kinematic invariants $\vec{x}$.
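For orientation, the canonical form makes the $\epsilon$-expansion of the solution immediate. Although not spelled out here, the standard representation is in terms of Chen iterated integrals, order by order in $\epsilon$:

$$\vec{I}(\vec{x},\epsilon) = \sum_{k \geq 0} \epsilon^{k}\, \vec{I}^{(k)}(\vec{x}), \qquad \vec{I}^{(k)}(\vec{x}) = \vec{I}^{(k)}(\vec{x}_0) + \int_{\vec{x}_0}^{\vec{x}} dA\; \vec{I}^{(k-1)},$$

so that each order is obtained by integrating the d-logarithms of the letters $w_{\ell}$ against the previous order, which makes the alphabet the central analytic object.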
Canonical Basis
The canonical basis of MIs $\vec{I}$ has been constructed starting from the observation of an emerging pattern for 2 → 3 scattering amplitudes [15,17,18,43,45,47,48,50,[78][79][80]. This feature implies that we are able to rely on a good set of uniform transcendental (UT) candidate MIs as a starting point for the basis construction. Specifically, one can test candidates from the MI bases for the massless and one-mass five-point cases [44,45,47,50] (e.g. $pp \rightarrow 3j$ and $pp \rightarrow W + 2j$), as well as other integral topologies which involve internal massive propagators 1. Guided by this initial set of data, our approach relies on the possibility to perform IBP reduction and evaluate the differential equations matrix over finite fields. Due to the presence of square roots, we do not attempt to construct the canonical form directly. Instead, we search for a linear form with respect to $\epsilon$ which contains only rational matrices in the kinematic invariants. Indeed, the square roots appearing in the UT basis can be absorbed in the normalisation of the integral basis 2, and therefore we can neglect them while evaluating the differential equations over finite fields. The strategy adopted in [41] can then be summarised as follows:
1 For example, a large number of MIs for the two-mass four-point $t\bar{t}$ topologies [81] appear as subtopologies in our 88-integral system. This feature allowed us to reduce the number of completely unknown MIs in UT form to 40.
2 This approach is discussed in Ref. [59].
• Given a starting set of UT candidate MIs, we study the structure of the differential equations from a univariate slice reconstruction. Specifically, we search for a linear form in $\epsilon$, $d\,\vec{J}(\vec{x},\epsilon) = \left( A^{(0)}(\vec{x}) + \epsilon\, A^{(1)}(\vec{x}) \right) \vec{J}(\vec{x},\epsilon)$ (7), where $A^{(0)}$ is a diagonal matrix;
• We study the homogeneous part of the system of differential equations sector-by-sector, in order to determine the correct normalisation for the MIs;
• If the starting choice of integral basis, for a given sector, does not satisfy a differential equation of the form in Eq. (7), we make a different ansatz based on the criteria described below.
Once the whole system of differential equations is in the form of Eq. (7), we can rotate it into canonical form through a transformation $\vec{J} = T(\vec{x})\, \vec{I}$, where $T(\vec{x})$ is a diagonal matrix which contains the square roots of the kinematic invariants. Such a matrix satisfies the differential equation $d\,T(\vec{x}) = A^{(0)}(\vec{x})\, T(\vec{x})$. The canonical form of the differential equations can then be written as $d\,\vec{I}(\vec{x},\epsilon) = \epsilon\, T^{-1}(\vec{x})\, A^{(1)}(\vec{x})\, T(\vec{x})\, \vec{I}(\vec{x},\epsilon)$. As anticipated, if the starting integral basis does not satisfy a differential equation of the form in Eq. (7), we change the starting ansatz. This is done according to a set of criteria inspired by patterns observed in previously studied cases:
• For two- and three-external-leg MIs, the choice of candidates can involve scalar integrals with dotted denominators;
• For four-external-leg MIs, the choice of candidates can involve scalar integrals with dotted denominators or with the numerators $D_9$, $D_{10}$, $D_{11}$;
• For five external legs, canonical MI candidates can involve scalar integrals with the numerators $D_9$, $D_{10}$, $D_{11}$ and local integrand insertions $\mu_{ij}$, where the $\mu_{ij}$ are defined from the splitting of the loop momenta into four-dimensional and $(-2\epsilon)$-dimensional components, $k_i = k_i^{[4]} + k_i^{[-2\epsilon]}$, as $\mu_{ij} = k_i^{[-2\epsilon]} \cdot k_j^{[-2\epsilon]}$.
I conclude the present discussion with some remarks. First, given the high number of kinematic invariants and the large size of the IBP systems to solve, it is important to ensure that the maximum numerator rank and the number of dotted propagators are minimised. As a second remark, I mention that the method exploited to build a canonical basis might still require some work on the sub-topology contributions to the differential equations for a given sector. Indeed, we found that some sectors required additional rotations in sub-sectors. However, this step was particularly simple in our case. Interestingly, such a feature did not appear in any of the most complicated five-point topologies, where the UT integrals can be constructed exploiting just local numerator insertions.
Analytic structure
Even though the system of differential equations has been integrated semi-analytically exploiting the generalised series expansion method, we studied the alphabet structure of the solution. This aspect is crucial for understanding the analytic solution, and it is the first step towards constructing a well-defined special function basis for the set of MIs under consideration.
The system of differential equations can be written in terms of d-logarithmic forms using an alphabet made of 71 letters $W_i$,
$$\mathrm{d}\vec{J}_{\mathrm{can}} = \epsilon \left( \sum_{i=1}^{71} a_i\, \mathrm{d}\log W_i \right) \vec{J}_{\mathrm{can}}\,,$$
where the $a_i$ are constant rational matrices. In order to identify the alphabet we adopted a strategy along the lines of the one described in Refs. [82][83][84], which we briefly summarise. As a first step we identify the set of rational letters inside the alphabet. This can be done by looking at the denominators in the differential equation system. The remaining letters are, therefore, algebraic in the kinematic invariants (i.e. they contain square roots). To obtain this set of letters we proceed as follows. We determine the linear relations in the total derivative matrix and we find a minimal set of independent one-forms. Then, for each independent entry of the derivative matrix, one determines which square roots appear in the denominators. Finally, it is possible to construct an ansatz using free polynomials in the variables, whose form depends on the square roots appearing in the one-form under study. The form of the ansatz depends on the number of square roots: e.g., if there is just one square root $\sqrt{r}$, we can use an ansatz of the kind
$$\Omega(p,q) := \frac{p + q\,\sqrt{r}}{p - q\,\sqrt{r}}\,, \qquad (13)$$
and, in the case of two square roots $\sqrt{r_1}$ and $\sqrt{r_2}$,
$$\Omega(p,q,t) := \frac{p + q\,\sqrt{r_1} + t\,\sqrt{r_2}}{p - q\,\sqrt{r_1} - t\,\sqrt{r_2}}\,. \qquad (14)$$
While it is always possible to expand the form of Eq. (14) into one similar to Eq. (13), the structure in Eq. (14) is preferable. Indeed, the polynomial degree of the unknown variables is lower, as noted in Ref. [47].
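As a side note (a standard consequence of the d-log form above, spelled out here for clarity rather than taken from the text), the canonical system can be solved order by order in $\epsilon$ as Chen iterated integrals over the letters:
% expand in epsilon: d J^{(n)} = (sum_i a_i dlog W_i) J^{(n-1)}, then integrate along a path gamma
\begin{align*}
  \vec{J}_{\mathrm{can}}(\vec{x};\epsilon) &= \sum_{n \ge 0} \epsilon^{\,n}\, \vec{J}^{(n)}(\vec{x})\,, \\
  \vec{J}^{(n)}(\vec{x}) &= \vec{J}^{(n)}(\vec{x}_0)
     + \int_{\gamma} \Big( \sum_{i=1}^{71} a_i\, \mathrm{d}\log W_i \Big)\, \vec{J}^{(n-1)}\,,
\end{align*}
with $\gamma$ a path from a base point $\vec{x}_0$ to $\vec{x}$. This makes explicit why identifying the 71 letters is the first step towards a special function basis: the letters fix the space of iterated integrals in which the solution lives.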
Following this strategy we have identified an alphabet which can be split into two subsets, $W_{\mathrm{rat}}$ and $W_{\mathrm{alg}}$, which are, respectively, rational and algebraic in the kinematic invariants. The rational set of letters $W_{\mathrm{rat}}$ can furthermore be divided into three subsets. The first subset is made of letters which are linear combinations of the Mandelstam variables $s_{ij} = (p_i + p_j)^2$. The letters in the second subset can be written as traces over $\gamma$-matrices. Finally, the rational letters in the third subset are related to the roots that appear in the differential equation system; these roots are given in terms of Gram determinants, with the Gram matrix defined as $G(\vec{v}_1, \ldots, \vec{v}_n)_{ij} = v_i \cdot v_j$.
Similarly to the rational subset of letters, the algebraic one $W_{\mathrm{alg}}$ can also be split into three subsets. The first one contains letters which can be written in terms of the quantity $\Omega$ as defined above in Eq. (13). The letters associated to the second subset contain dependence on the Dirac $\gamma_5$ matrix. Therefore, they can be written as ratios of $\mathrm{tr}_\pm(\cdots)$ objects, defined as
$$\mathrm{tr}_\pm(abcd) := \tfrac{1}{2}\,\mathrm{tr}\!\left[ (1 \pm \gamma_5)\, \slashed{a}\, \slashed{b}\, \slashed{c}\, \slashed{d} \right].$$
The final subset is made of letters written in terms of $\Omega$ as defined above in Eq. (14). I finish this discussion with the following consideration. The alphabet structure just presented shows a pattern similar to the ones observed in other five-particle kinematic configurations [44,47,50,85]. This feature suggests that there might exist a general alphabet structure for all polylogarithmic two-loop integrals with five or fewer legs.
Numerical Evaluation
In order to validate our work we exploited the package DiffExp [63] to evaluate the MIs numerically. This package implements the generalised power series method [62], which gives a semi-analytical solution to the system of differential equations as an expansion around its singular points, schematically of the form
$$\vec{J}(t) = \sum_{i_1, i_2} c(k, i_1, i_2)\, (t - t_k)^{i_1/2}\, \log^{i_2}(t - t_k)\,, \qquad |t - t_k| < r_k\,.$$
In the previous equation $t$ is a variable that parametrises the integration path in the space of kinematic invariants, the $t_k$ are singular points of the system of differential equations, $r_k$ is the radius of convergence of the series solution around $t_k$, and the $c(k, i_1, i_2)$ are matrices which depend on the system of differential equations and on the boundary conditions. Since we were interested in a numerical evaluation of the MIs, the system of differential equations has been integrated using high-precision numerical boundary conditions obtained with the package AMFlow [64], which implements the auxiliary mass flow method [65][66][67]. The numerical solution obtained with DiffExp has been checked at several points against an independent evaluation performed with AMFlow, finding full agreement for all the points under study.
The solution for the MIs presented in Ref. [41] has not been optimised for a realistic phase-space integration. However, the successful applications of the generalised power series method to phenomenological studies in Refs. [42,47,50,[68][69][70][71][72][73][74] offer hope that a phenomenologically oriented improvement of the implementation previously discussed may be achievable in the near future.
Figure 1: The pentagon-box topology contributing to $pp \to t\bar{t}j$ in the QCD leading color planar limit. | 2023-10-17T06:42:50.376Z | 2023-10-15T00:00:00.000 | {
"year": 2023,
"sha1": "e0ad3eb97f152320aebde06373dae84753037025",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e0ad3eb97f152320aebde06373dae84753037025",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
2639766 | pes2o/s2orc | v3-fos-license | A review of analgesic and emotive breathing: a multidisciplinary approach
The diaphragm is the primary muscle involved in breathing and other non-primarily respiratory functions such as the maintenance of correct posture and lumbar and sacroiliac movement. It intervenes to facilitate cleaning of the upper airways through coughing, facilitates the evacuation of the intestines, and promotes the redistribution of the body’s blood. The diaphragm also has the ability to affect the perception of pain and the emotional state of the patient, functions that are the subject of this article. The aim of this article is to gather for the first time, within a single text, information on the nonrespiratory functions of the diaphragm muscle and its analgesic and emotional response functions. It also aims to highlight and reflect on the fact that when the diaphragm is treated manually, a daily occurrence for manual operators, it is not just an area of musculature that is treated but the entire body, including the psyche. This reflection allows for a multidisciplinary approach to the diaphragm and the collaboration of various medical and nonmedical practitioners, with the ultimate goal of regaining or improving the patient’s physical and mental well-being.
Introduction
The diaphragm is the main respiratory muscle and is able to influence, through its contractions, the act of breathing. 1 It provides good mechanical function of the spine and sacroiliac joint, as well as affecting the working of the pelvic and buccal floor muscles. 1,2 The diaphragm also facilitates venous and lymphatic return and works to maintain a balanced posture during changes of body position, allowing the viscera above and below the diaphragm to work properly. [2][3][4][5] The diaphragm participates in various processes such as expectoration through coughing and the actions of vomiting, defecation, and swallowing. 2,5 The diaphragm muscle plays a key role in health and in the many activities of the human body.
The respiratory diaphragm muscle is innervated by the phrenic nerve (C3-C5) and the vagus nerve (cranial nerve X); the first receives impulses from groups of medullary neurons of the pre-Bötzinger complex and neurons of the parafacial retrotrapezoid complex, which in turn receive input via the nucleus retroambiguus of the medulla oblongata, although the mechanisms that underlie these links are not completely clear. 2 The vagus nerve is part of the parasympathetic autonomous system, originating from the medulla oblongata and terminating at the nucleus ambiguus. 6 The phrenic nerve and the vagus nerve anastomose at the level of the neck. 2 The areas of the brain involved in the control of breathing are different, and their activation weight changes depending on the type of breathing, metabolic conditions, and interoceptive and exteroceptive information. [7][8][9][10] The aim of this article is to gather for the first time, within a single text, information on the nonrespiratory functions of the diaphragm muscle: its analgesic and emotional response functions. It also aims to highlight and reflect on the fact that when the diaphragm is treated manually, a daily occurrence for manual operators, it is not just an area of musculature that is treated but the entire body, including the psyche. This reflection allows for a multidisciplinary approach to the diaphragm and the collaboration of various medical and nonmedical practitioners, with the ultimate goal of regaining or improving the patient's physical and mental well-being.
Analgesic respiration
The perception of pain is diminished if the breath is held after a deep breath, a condition in which the diaphragm is lowered. 11 This event appears to reflect the involvement of baroreceptors. With this action, the systolic blood pressure increases, with a decrease in cardiac frequency. 11 We know that when the baroreceptors situated in the carotid body and the area of the aortic arch in the adventitia of the vessels are naturally stimulated by the cardiac cycle, in particular by the systole, the nociceptive stimulus is attenuated by the activation of the baroreceptors. 12 The intervention of the baroreceptors affects muscle tone, as it decreases the activity of the sympathetic nervous system, reducing the contractile tone. 12 The reduction of pain perception is greater if the subject is aware of the pain itself. 12 Acute and chronic pain can alter baroreceptor function and consequently damage the regulatory function of the cardiovascular system; this will lead, in the long run, to an increased risk of morbidity and mortality. 13 The baroreceptors are structures that are activated if the vessel is stretched by passing blood. 14 Their afferents are collected by the nucleus of the solitary tract (NTS), which regulates the efferent intervention of the vagal system and the inhibitory sympathetic efferents in the spinal cord near the nucleus ambiguus, the dorsal motor nucleus, and the rostral ventrolateral area of the medulla oblongata. 14 Baroreceptor afferents affect different areas of the central nervous system, with a generalized inhibitory effect. 14 The NTS interconnects with the reticular formation, from which information is sent to the anterior, lateral, and medial prefrontal insula and the anterior cingulate cortex; the thalamus, hypothalamus, and periaqueductal gray area also receive signals from NTS baroreceptor pathways. 14 There is a close relationship between emotion, respiration, and the intervention of baroreceptors. 14 Emotional experience influences the response to pain, because the pain response is not simply a neural process started by nociceptive afferents. 12 Emotional states, such as anxiety or depression, and psychiatric disorders are able to negatively alter the baroreceptor response. 15 Stress can lead to anxiety and/or depression, resulting in an alteration of the proper functioning of the diaphragm. 16 Modifications in the emotional state cause a perception of greater pain. 17 We can state that the diaphragm has an influence on baroreceptors and the perception of pain, and vice versa.
Diaphragm movement changes body pressures, as it facilitates venous return and lymphatic flow upward. 2 This modulation of pressure influences the redistribution of blood. 18 It is this action that probably determines the baroreceptor response and the reduced perception of pain, but there are no scientific texts, as yet, to substantiate this claim.
Recent scientific evidence highlights the ability of the vagus nerve to carry pain afferents, especially for visceral pain. It is generally believed that pain arising from the viscera is mediated exclusively by spinal afferents, because vagal afferents primarily convey interoceptive information but do not contribute to the perception of pain. 19 Studies have shown that vagal afferents respond to nociceptive mechanical and chemical stimulation from the visceral area, and this leads to brain stem representation of nociceptive signals. 19 We know that the NTS stimulates the vagus nerve. We can assume that physiological functioning of the diaphragm muscle can somehow reduce afferent nociceptive stimulation from the vagus nerve, for example through adequate visceral pressure and/or proper function of the viscera during the lowering of the diaphragm. 20 There is no scientific evidence to confirm this reflection at this time.
An incorrect positioning of the diaphragm due to systemic or local pathological reasons, as in chronic obstructive pulmonary disease, previous laparoscopic surgery, cerebral ischemia, and somatic disorders of the lumbar and sacroiliac joint and cervical spine, can be a source of pain, both locally and in associated visceral functions. [21][22][23][24][25][26][27][28] The causes of localized somatic, visceral, or systemic pain may be varied. There are no studies that correlate these diseases and pain with exhaustive explanations directly related to the actions and functions of the diaphragm muscle. We can assume that the baroreceptors are not adequately stimulated due to an alteration of diaphragm mobility; this will lead to a greater feeling of pain. The diaphragm itself can be a source of afferent pain, probably because the phrenic nerve, a mixed nerve that carries motor and sensory information, shares information with the spinal trigeminal nucleus. 2,29,30 The latter has a connection with the NTS, and there is speculation that this connection is the cause of pain from the diaphragm. 31 The diaphragm has a phrenic center, consisting of a strong "V"-shaped connective component with a variable percentage of contractile tissue. 32 The fascial system is richly innervated by proprioceptors, which may become a source of afferent pain, transforming in turn into nociceptors. 33 The crural and connective area is populated by proprioceptors, and we can assume that alteration of the position and function of the respiratory muscle creates a condition of irritability of these proprioceptors and a consequent occurrence of painful afferents. 2 The right phrenic nerve penetrates the diaphragm at the level of the connective tissue of the phrenic center, while the left phrenic nerve penetrates the muscular portion of the diaphragm; the right nerve has faster electrical conductivity than the left. 32,34 We might suppose that if the position of the diaphragm is not physiological, the phrenic nerve will be stretched or irritated in different ways, causing nociceptive afferents, in the same way as irritation of a peripheral nerve by the surrounding tissues. 35,36
Emotional respiration
The diaphragm has many functions, including maintenance of the systemic biochemical and emotive equilibrium. 37 The most important stimulus for the generation of respiration is provided by chemoreceptors that manage the biochemical equilibrium of the organism. 37 Breathing is also affected by environmental conditions inside and outside the body, and it is thought to have other routes of neural stimulation in addition to chemoreceptor stimulation. 37 The action of the diaphragm is not controlled solely by metabolic demands, but also by emotional states, such as sadness, fear, anxiety, and anger. 37 The interaction between respiration and emotion involves a complex interplay between the brain stem and brain centers such as the limbic area and cortex. 37,38 The life of the person and his/her personality influence the behavior of the diaphragm. 37 The amygdala, which is part of the limbic system, is reciprocally connected to each of the respiratory areas, such as the medulla oblongata, and is considered the most important area managing emotive breathing. 37,39 The amygdala is divided into three areas, the basolateral, cortical, and central areas; the basolateral amygdala sends efferents to the central area, which in direct and indirect mode sends signals to the hypothalamus and the brainstem. 39 The amygdala is stimulated by dopamine production from the tegmental area of the midbrain, and recent studies on animal models show that the dopamine that reaches the amygdala manages emotive respiration. 39 The amygdala efferents pass through the areas connected to respiration, such as the NTS and correlated structures. 39 The sensations of breathing are the result of two cortical and subcortical processes: a discriminative process that assesses spatial awareness, timing, and intensity of breath; and an affective process that assesses the emotional feelings of the respiratory components. 9 Breathing stimulates the mechanoreceptors of the diaphragm and the visceroceptors of the organs moved by inhalation-exhalation, which constitute the interoception mechanism. 9,40 The latter constitutes the awareness of the condition of the body based on information derived directly from the body. 40 The ascent and descent of the diaphragm also stimulates the skin and organs of the mediastinum, and the complexity of the afferent structures determines the different central representations of respiration. 2,41 The afferent pathways of interoception project to the autonomic medullary centers and the brain stem, from where they are relayed to the anterior cingulate cortex and the dorsal posterior insula via thalamocortical extensions. 42 Interoception can modulate the exteroceptive representation of the body, as well as the tolerance to pain; dysregulation of the pathways that manage or stimulate interoception could cause a distortion of body image, affecting emotion. 42,43 An anxious state itself alters afferent pathways related to breathing (rapidly and slowly adapting receptors, type C bronchial and lung receptors, high-threshold type Aδ receptors, cough receptors, and neuroepithelial receptors), amplifying one or more receptor pathways related to respiration. 41 We can strongly suspect that an altered function of the diaphragm can adversely affect the patient's emotional state, probably because the interoceptive pathways stimulated by breathing are managed as motivational information, as these pathways of information are bidirectional. 41
The request for challenging breathing, as in physical exercise, could cause a strong emotional reaction in anxious people, making them relive symptoms and psychological disorders and worsening respiratory function. 41 We know that patients with respiratory ailments or chronic pain suffer from anxiety disorders, and this condition implies that the function of the diaphragm can worsen with physical exertion if the emotional aspect of the patient is not also taken into account. 44,45 Interoception is also linked to the visceral movement caused by respiration, and we know that people more susceptible to visceral afferents show more intense emotions. 46 A probable cause could be related to neurogenic neuroinflammation spreading inflammatory substances in the spinal cord, involving more areas and making them more likely to respond to minimal stimuli, making the afferent paths related to interoception a further cause of anxiety and pain. 46 This event could lead to a pleiotropic effect of functional impairment of the connective tissue, further destabilizing the functioning of the diaphragm. 46 This neurogenic neuroinflammation is found in respiratory diseases, such as allergies of the upper airway and chronic diseases (such as chronic obstructive pulmonary disease), as well as in conditions of chronic pain. [47][48][49] One could speak of "emotional allodynia breathing" when a breath that stimulates interoceptive afferent pathways causes symptoms linked to psychological aspects.
The innervation of the diaphragm muscle may be responsible, directly and indirectly, for the emotional state of the person. Afferent stimulation of the NTS by the phrenic nerve could affect the emotional response, because the NTS handles visceral afferents and has a close relationship with this nerve. 50,51 Confirmation of this assumption is awaited. The phrenic nerve has subdiaphragmatic ganglia with a relationship to the adrenal gland. 52,53 The significance of this relationship is unknown, but we know that the adrenal gland and the hypothalamus-pituitary axis (HPA) affect the feeling of pain and the intensity of emotional aspects. 54,55 The possible existence of a cause-effect relationship between the HPA axis and the direct intervention of the phrenic nerve in the emotional aspect and the intensity of the perceived pain threshold needs further study and research. The vagus nerve affects the emotional spectrum and the respiratory rate, probably always through the NTS. We do not have exhaustive knowledge of whether there is also a bidirectional response to emotional conditions by the diaphragm through the vagus nerve. [56][57][58] It is undeniable that a respiratory disorder can alter the emotional picture, with conditions such as depression and anxiety, and it is equally true that an altered emotional state worsens respiratory function. 59,60 Considering the emotional side as well during manual treatment, such as manual therapy or an osteopathic treatment, would probably be more helpful for the patient.
Conclusion
The diaphragm influences the intensity of pain, and there is an indisputable association with the emotions and experiences acknowledged by the patient. Every professional, such as the doctor, the manual operator, and the psychologist, ends their sphere of competence where the other professional's competence begins. This reflection allows for a multidisciplinary approach to the diaphragm, and collaboration of various medical and nonmedical figures, with the ultimate goal of regaining or improving the patient's physical and mental well-being.
Disclosure
The authors report no conflicts of interest in this work. | 2018-04-03T04:44:35.074Z | 2016-02-29T00:00:00.000 | {
"year": 2016,
"sha1": "0defc0fb6ba97ced15bbda03ebc94f8c231dd349",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=29187",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9fcdc1cfbc6ee183d9f3ce4e1dda203a1103b37b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
27205093 | pes2o/s2orc | v3-fos-license | The Synergistic Enhancement of Cloning Efficiency in Individualized Human Pluripotent Stem Cells by Peroxisome Proliferative-activated Receptor-γ (PPARγ) Activation and Rho-associated Kinase (ROCK) Inhibition*
Background: hPSC cloning efficiency is still low. Results: Pioglitazone, a PPARγ agonist, along with the Rho kinase inhibitor Y-27632, increased cloning efficiency (2–3-fold versus Y-27632) through enhanced membrane localization of β-catenin and E-cadherin. Conclusion: Cloning efficiency in individualized hPSCs was enhanced synergistically by PPARγ activation and Rho kinase inhibition. Significance: This offers a new approach to hPSC expansion for biomedical applications. Although human pluripotent stem cells (hPSCs) provide valuable sources for regenerative medicine, their applicability is dependent on obtaining both suitable up-scaled and cost-effective cultures. The Rho-associated kinase (ROCK) inhibitor Y-27632 permits hPSC survival upon dissociation; however, cloning efficiency is often still low. Here we have shown that pioglitazone, a selective peroxisome proliferative-activated receptor-γ agonist, along with Y-27632 synergistically diminished dissociation-induced apoptosis and increased cloning efficiency (2–3-fold versus Y-27632) without affecting pluripotency of hPSCs. Pioglitazone exerted its positive effect by inhibition of glycogen synthase kinase (GSK3) activity and enhancement of membranous β-catenin and E-cadherin proteins. These effects were reversed by GW9662, an irreversible peroxisome proliferative-activated receptor-γ antagonist. This novel setting provided a step toward hPSC manipulation and its biomedical applications.
signaling (9). Additionally, it was shown that PPARγ activation significantly reduced apoptosis of isolated rat cardiomyocytes that were subjected to hypoxia/reoxygenation, at least in part by facilitation of Akt rephosphorylation (10). We reported that a PPARγ agonist enhanced the proliferation and survival rate of mouse embryonic stem cells (11). Therefore, we hypothesized that the PPARγ agonist pioglitazone might positively affect survival of dissociated single hPSCs and increase colony formation.
The ROCK inhibitor Y-27632 (Calbiochem, 688000) was added to the culture medium at a final concentration of 10 μM (6). Pioglitazone (Cayman, 18570) and GW9662 (Sigma, M6191) were dissolved in dimethyl sulfoxide (DMSO). To find an effective dose of pioglitazone, we treated the cells with 2, 4, 8, and 16 μM pioglitazone. Y-27632 (6) and GW9662 (11) were prepared at a final concentration of 10 μM. All small molecules were added to the culture medium for the first 24 h after the cells were replated. Subsequently, the cell cultures were continued in the absence of small molecules. To induce differentiation, hPSCs were grown in suspension as embryoid bodies in hPSC medium without basic FGF and small molecules for 2 weeks.
The CHO-K1 cell line (Pasteur Institute, Tehran, Iran) was also used for transfection experiments. CHO cells were cultured and maintained as previously described (15).
Colony Formation of Dissociated Single hPSCs-We evaluated the effect of PPARγ activation on the cloning efficiency of dissociated single hPSCs [(number of alkaline phosphatase-positive colonies/number of seeded cells) × 100] by analyzing the numbers of feeder-independent colonies. For this purpose single cells were plated into Matrigel-coated tissue culture dishes at a density of 60 × 10³ hPSCs/well of a 6-well plate in hPSC medium. The cloning efficiency was calculated by ImageJ software version 1.4 (8).
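As a purely illustrative arithmetic check of the cloning-efficiency formula above (the colony count used here is hypothetical, not a result from this study): seeding $60 \times 10^3$ cells per well and counting 300 AP-positive colonies would give
$$\text{cloning efficiency} = \frac{300}{60\,000} \times 100 = 0.5\%\,.$$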
Plasmids and Co-transfection-We used the following plasmids in this study: PPARγ-EGFP expression plasmid (16), PDSred-N1 (Clontech), RhoA V14, and PIP5K1α (kindly provided by Dr. Nicolai E. Savaskan, Friedrich Alexander University of Erlangen-Nuremberg, Germany). Co-transfection of plasmids into CHO cells was performed using Lipofectamine LTX reagent (Invitrogen, 15338-100). The cell numbers and amount of plasmids for each transfection were determined based on the manufacturer's instructions. Two days post-transfection, we used the cells for further analyses.
Gene Expression Analysis-Total RNA was extracted using the RNeasy Kit (Qiagen, 74004), and cDNA was synthesized starting with 1 μg of total RNA using reverse transcriptase and a hexamer primer (TaKaRa). Real-time (SYBR Green) PCR was performed in a thermal cycler Rotor gene 6000 (Corbett) according to the manufacturer's protocol (TaKaRa). The PCR mixture contained 10 μl of Rotor-Gene SYBR Green PCR Master Mix (TaKaRa), 3 pmol of each primer, and 25 ng of cDNA for each reaction in a final volume of 20 μl. All samples were assessed in relation to the levels of GAPDH expression as an internal control.
All measurements were performed in triplicate. Real-time specific primer pairs were designed by Beacon Designer software (version 7.2) as obtained from Metabion (Planegg/Steinkirchen, Germany). The primer sequences are listed in Table 1. Real-time data were assessed and reported according to the ΔΔCt method.
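For reference, a minimal sketch of the ΔΔCt calculation referred to above (the Ct values are hypothetical; GAPDH is the internal control named in the text):
\begin{align*}
  \Delta C_t &= C_t^{\text{target}} - C_t^{\text{GAPDH}}\,, \\
  \Delta\Delta C_t &= \Delta C_t^{\text{treated}} - \Delta C_t^{\text{control}}\,, \\
  \text{fold change} &= 2^{-\Delta\Delta C_t}\,.
\end{align*}
For example, $\Delta C_t^{\text{treated}} = 24.0 - 18.0 = 6.0$ and $\Delta C_t^{\text{control}} = 25.0 - 18.0 = 7.0$ give $\Delta\Delta C_t = -1.0$, i.e. a 2-fold up-regulation relative to control.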
Subcellular Fractionation-The discontinuous sucrose gradient approach was used to isolate nuclear and plasma membrane fractions. At the initial step, we added a homogenization buffer (0.25 M sucrose, 10 mM HEPES, pH 7.5) that contained protease inhibitor mixture (Calbiochem, 539134) to freshly harvested cells. The cells were incubated on ice for 10 min. After sonication and homogenization of the pellet by a tight glass homogenizer in homogenization buffer that contained protease inhibitor, the suspension was centrifuged at 3000 × g for 15 min. At this step the pellet included the nucleus and plasma membrane. Next, the suspension was centrifuged on a sucrose buffer gradient (buffer A (0.3 M sucrose, 50 mM Tris, pH 7.5, 1 mM MgCl2) and buffer B (1.8 M sucrose, 50 mM Tris, pH 7.5, 1 mM MgCl2)) at 110,000 × g for 90 min. Both buffers contained protease inhibitor mixture. Finally, the nucleus fraction (pellet of the previous step) was washed with buffer A at 15,000 × g for 15 min.
Chromatin Immunoprecipitation (ChIP)-We used the Pierce™ Agarose ChIP kit (Life Technologies, Inc., 26156) according to the manufacturer's protocol to investigate PPARγ response elements within the β-catenin and E-cadherin promoters.
Co-immunoprecipitation-Pierce co-immunoprecipitation (Co-IP, Life Technologies, 26149) was performed to analyze β-catenin and E-cadherin interaction according to the manufacturer's instructions.
Alkaline Phosphatase and Immunofluorescence Staining-The colony formation assay was performed with an alkaline phosphatase kit (Sigma, 86R) according to the manufacturer's instructions.
For immunostaining, colonies were fixed with 4% paraformaldehyde (Sigma, P6148) for 30 min at 4°C followed by permeabilization with 0.4% Triton X-100 (Sigma, T8532) in PBS, blocked with secondary antibody-related host serum for 1 h, treated with the primary antibody for 1 h, and incubated with secondary antibody for 1 h. Primary antibodies used in this study were anti-Oct4 (1:100, Santa Cruz Biotechnology, SC-5279) and anti-Nanog.
Flow Cytometry Analysis of Cell Cycle, Proliferation, and Apoptosis-For cell-cycle analysis, hPSCs seeded for 24 h were fixed in 70% ethanol. After washing, the cells were suspended in PBS that included RNase A and propidium iodide (1 mg/ml) solution. For identifying and examining proliferating cells, we incubated the cycling cells with 5-bromo-2′-deoxyuridine (BrdU) for 1 h. After DNA denaturation, the cells were stained with monoclonal anti-BrdU (Sigma, B2531) as the primary antibody and IgM-FITC (Millipore, AP124F) as the secondary antibody. Apoptosis analysis was conducted at 24 h after cell seeding by using the following three protocols: annexin V, terminal transferase dUTP nick end labeling (TUNEL), and caspase-3 activity. For annexin V analysis, cells were labeled with propidium iodide and Annexin V-FITC (IQ Products, IQP-120F) according to the manufacturer's protocol.
For the TUNEL assay, cells were stained to detect apoptotic nuclei by the DeadEnd Fluorometric TUNEL System (Promega, G3250) according to the manufacturer's instructions, then analyzed by flow cytometry. Improvement in cellular viability was further confirmed by the caspase-3/7 activation assay as a cellular marker of apoptosis, using a commercially available kit (APT403, Millipore) according to the manufacturer's instructions. Cells were analyzed by a FACSCalibur flow cytometer (BD Biosciences), and the data were processed with the ModFit LT™ version 4.0 program.
Statistical Analysis-Data were expressed as the means ± S.E. Statistical analyses of RT-qPCR and Western blotting with three independent cultures were performed by GraphPad Prism software version 6 and ImageJ software, respectively. The results were subsequently compared using one-way analysis of variance followed by Tukey's post hoc test, or the t test when two independent groups were compared. The mean difference was considered significant at the p < 0.05 level.
Results
Elevated Colony Formation of Dissociated Single hPSCs in the Presence of Pioglitazone and Y-27632-We took into consideration our previous findings (11) of the positive effect of PPARγ activation on mouse embryonic stem cell proliferation to determine if a potent agonist of PPARγ could serve as a potential factor to improve hPSC viability along with Y-27632 (ROCK inhibitor). Pioglitazone, a highly specific PPARγ agonist, was added to the culture medium at various concentrations (2-16 μM) along with Y-27632 (Fig. 1, A and B) for the first 24 h after plating of dissociated single hPSCs. Subsequently the cells were cultured for 7 days. Pioglitazone alone did not result in alkaline phosphatase-positive colonies, in the same manner as DMSO alone (data not shown). According to the results, the 8 μM concentration of pioglitazone was the most efficient in terms of colony formation, as detected by the numbers of alkaline phosphatase (AP)-positive colonies (Fig. 1, A and B). Therefore, this concentration was used for additional experimentation. To confirm that PPARγ activation led to enhanced cloning efficiency, we treated the cells simultaneously with Y-27632 and a PPARγ antagonist (GW9662). In this condition, a drop in colony formation was observed (p < 0.05, Fig. 1C). Of note, in two other hPSC lines we observed that single cells co-treated with Y-27632 plus pioglitazone had the best cloning efficiency, which was significantly reduced by the PPARγ antagonist GW9662 (Fig. 1C, at least p < 0.05). Therefore, treatment with Y-27632 plus pioglitazone of dissociated single hPSCs had a synergistic effect on colony formation compared with Y-27632 alone. Furthermore, due to the universality of the PPARγ effect on cloning efficiency, we continued the other experiments with an hESC line, RH5.
Pioglitazone Did Not Modulate hPSC Apoptosis and Proliferation-We sought to determine if the increase in cloning efficiency with Y-27632 plus pioglitazone could be attributed to a decreased rate of apoptosis or enhancement of proliferation. We used three distinct assays (annexin V, TUNEL, and caspase-3 activation) to evaluate apoptosis. The results showed a clearly evident significant decrease in the rate of apoptotic cells after Y-27632 treatment compared with untreated cells (at least p < 0.01; Fig. 2, A-C). However, co-treatment of the cells with Y-27632 and pioglitazone or GW9662 did not significantly change the rate of apoptotic cells.
To validate if co-treatment of Y-27632 and pioglitazone increased the proliferation rate, we repeated the previous experiments and assessed the numbers of cells that were in S phase. Flow cytometry data showed no significant difference in S phase cell numbers co-treated with Y-27632 and pioglitazone compared with only Y-27632 (Fig. 2D, p > 0.05). The BrdU proliferation assay confirmed the same trend in proliferation rate (Fig. 2E).
Therefore, cell preservation under this circumstance was possibly not due to reduced apoptosis or an increased proliferation rate. We proposed that adhesion alteration resulted in colony formation enhancement.
Down-regulation of PPARγ, β-Catenin, and E-cadherin Proteins in Dissociated Single hPSCs-Cytoskeletal components play a major role in cell-ECM/cell interactions, which result in increased viability. During cell dissociation, cytoskeletal phosphorylation leads to dissociation-induced apoptosis due to disruption of cytoskeletal components (4,17). In this experiment we measured the transcript and protein levels of β-catenin and E-cadherin as cell-ECM/cell components, and of PPARγ, in dissociated single hPSCs after 4 h. We detected no significant changes in mRNA expression in dissociated single cells and colonies (Fig. 3A). Surprisingly, dissociation of hPSCs resulted in down-regulation of the protein contents of E-cadherin, β-catenin, and PPARγ (Fig. 3, B and C). This was also demonstrated by immunostaining (Fig. 3D). Therefore, it seems that PPARγ was involved in repair of cell-ECM/cell interaction disruption.
Augmentative Role of Pioglitazone in Colony Formation through β-Catenin and E-cadherin Escalation-The role of E-cadherin and its associated molecule β-catenin in cell-cell interaction is critical for the survival and differentiation of hPSCs (18). Therefore, changes in E-cadherin and β-catenin expression under pioglitazone treatment were determined by protein level analysis and co-immunoprecipitation at 4 h post-treatment of dissociated single hPSCs. Co-treatment of Y-27632 and pioglitazone up-regulated E-cadherin and β-catenin proteins compared with Y-27632-treated cells (Fig. 4, A and B). Interestingly, the PPARγ antagonist (GW9662) reversed the conditions generated by pioglitazone.
It has been reported that β-catenin performance in hPSCs is dependent on its subcellular localization (19). Additionally, colony formation is a consequence of β-catenin localization in the membrane, whereas nuclear localization of β-catenin results in nuclear gene expression (20,21). Therefore, we conducted immunofluorescence staining to study β-catenin subcellular localization after treatment of hPSCs with pioglitazone. Immunostaining data showed membrane localization of β-catenin in hPSCs (Fig. 4C).
Next, we sought to determine whether plasma membrane localization of β-catenin increased upon pioglitazone treatment. Thus, Western blotting of plasma membrane and nuclear fractions at 4 h post-treatment of dissociated single hPSCs was performed. The subcellular plasma membrane fraction of β-catenin increased significantly with Y-27632 plus pioglitazone; however, the nuclear fraction showed no significant change (Fig. 4D). c-Myc and Tau proteins were used as positive controls for nuclear and plasma membrane proteins, respectively (Fig. 4D). We performed co-immunoprecipitation for β-catenin and subsequent Western blotting for E-cadherin to show if pioglitazone could also influence the interactions of E-cadherin and β-catenin. The result revealed that pioglitazone synergistically affected the assembly of E-cadherin and β-catenin (Fig. 4E).
Furthermore, to evaluate whether Wnt signaling was involved in membrane-tethered β-catenin, we analyzed hPSC extracts for phospho-GSK3β (p-GSK3β) by using a phospho-specific antibody that reacted with GSK3β phosphorylated at Ser-9 (p-GSK3β-Ser-9). The activity of GSK3β is inhibited via phosphorylation of Ser-9 (22). We observed that the level of p-GSK3β-Ser-9 increased upon pioglitazone co-treatment with Y-27632 (Fig. 5A). As already depicted, there was an increased accumulation of β-catenin and E-cadherin proteins in the Y-27632 plus pioglitazone cell extracts (Fig. 4, A and B). However, β-catenin and E-cadherin transcripts were not significantly induced (Fig. 5B), which suggested that both alterations occurred at the protein level. For additional confirmation, we examined the recruitment of PPARγ on the E-cadherin and β-catenin promoters. Chromatin was isolated from hPSCs maintained in hPSC medium that contained Y-27632 and/or Y-27632 plus pioglitazone, using the PPARγ antibody. ChIP analysis indicated that recruitment of PPARγ on E-cadherin and β-catenin promoters in Y-27632 plus pioglitazone-treated cells was similar to that in Y-27632-treated cells (Fig. 5C). Collectively, these data show that pioglitazone induces accumulation of the membrane-tethered β-catenin and E-cadherin protein complex.
PPARγ Expression Regulated by the Rho/ROCK Signaling Pathway during hPSC Dissociation-To determine whether modulation of PPARγ expression after hPSC dissociation (Fig. 3) resulted from Rho/ROCK activation during dissociation, we treated dissociated single hPSCs with Y-27632 as an inhibitor of the ROCK signaling pathway and assessed expression levels of PPARγ, β-catenin, and E-cadherin. There was a significant increase in expression of E-cadherin and β-catenin in Y-27632-treated cells within 4 h after treatment, whereas increased PPARγ expression occurred within the second hour after treatment of dissociated single hPSCs (Fig. 6A). These findings suggested earlier up-regulation of PPARγ compared with β-catenin and E-cadherin transcripts in Y-27632-treated cells.
Next, we sought to determine whether the Rho/ROCK pathway directly affected PPARγ expression. We chose two factors from the beginning and end of this pathway, RhoA and PIP5K, respectively. These factors were separately co-transfected with a PPARγ expression plasmid under the regulation of a CMV promoter in a CHO cell line. Co-transfection results showed a considerable decrease in PPARγ protein expression that was affected by PIP5K as one of the final factors of the Rho/ROCK pathway (Fig. 6, B and C). According to these data, the Rho/ROCK signaling pathway exerted its regulatory role on PPARγ by a direct inhibitory effect on its expression.
Next, we sought to determine whether the Rho/ROCK pathway directly affected PPAR␥ expression. We chose two factors from the beginning and end of this pathway, RhoA and PIP5K, respectively. These factors were separately co-transfected with a PPAR␥ expression plasmid under the regulation of a CMV promoter in a CHO cell line. Co-transfection results showed a considerable decrease in PPAR␥ protein expression that was affected by PIP5K as one of the final factors of the Rho/ROCK pathway (Fig. 6, B and C). According to the data the Rho/ROCK signaling pathway exerted its regulatory role on PPAR␥ by a direct inhibitory effect on its expression. There is no significant difference between DMSO and GW9662 treatments. C, immunostaining showed localization of -catenin in the cell membrane at day 3. Scale bar: 100 m. D, Western blotting for plasma membrane (PM) and nuclear (N) fractions 4 h post-treatment of dissociated single hPSCs. We observed enhancement of -catenin in the plasma membrane fragment. c-Myc and Tau proteins were used as positive controls for nuclear and plasma membrane proteins, respectively. E, co-immunoprecipitation (IP) for -catenin and subsequent Western blotting (WB) for E-cadherin. Pioglitazone synergistically regulated the assembly of E-cadherin and -catenin. ***, p Ͻ 0.001.
Pioglitazone and ROCK Inhibitor Y-27632 Did Not Affect hPSC Pluripotency-We assessed hPSC colony growth to determine the presence of a possible effect of pioglitazone on their morphological quality. Colonies grown for 31 passages in vitro retained predominantly undifferentiated morphological features such as well defined borders and small cells with a high nucleus:cytoplasm ratio (Fig. 7A). They expressed standard markers of the undifferentiated state (ALP, Oct4, SSEA3, SSEA4, TRA-1-60, and TRA-1-81; Fig. 7A). The effect of pioglitazone on the undifferentiated hPSC state was assessed by analyzing the expression levels of stemness factors (NANOG, OCT4, and SOX2; Fig. 7B) by RT-qPCR at passage 31 (Fig. 7C). Pioglitazone enhanced the expression level of the stemness factor NANOG (Fig. 7B) in the undifferentiated state. Additionally, the differentiation potential of hPSCs was evaluated by spontaneous differentiation and the expression of PAX6, Nestin, and SOX1 at the RNA level (Fig. 7C) and PAX6 (ectodermal), SOX7 (endodermal), and EOMES (mesodermal) at the protein level (Fig. 7D). Collectively, the data showed that co-treatment of pioglitazone with Y-27632 did not negatively affect hPSC self-renewal.
Discussion
To our knowledge this is the first report in which co-implementation of pioglitazone, a highly selective PPARγ agonist, with the ROCK inhibitor Y-27632 has increased survival and cloning efficiency (2-3-fold versus Y-27632 alone) of individualized hPSCs under feeder-free culture conditions. However, pioglitazone alone did not enhance cloning efficiency.
We conducted a search for a possible mechanism for the positive effect of pioglitazone. Our cell cycle, proliferation, and apoptosis analyses showed no significant alteration in the evaluated parameters in dissociated single hPSCs after treatment with pioglitazone plus Y-27632 compared with Y-27632. We demonstrated that the addition of the ROCK inhibitor Y-27632 to Matrigel as an ECM for expansion of hPSCs increased cloning efficiency compared with its presence solely in culture medium, through up-regulation of adhesion integrins (8). Therefore, we proposed that alteration in cell adhesion cytoskeletal elements resulted in colony formation enhancement (Fig. 8). It was demonstrated that activation of the Rho/ROCK pathway after the loss of E-cadherin-dependent intercellular adhesion played a pivotal role in the apoptosis of dissociated single hPSCs (4). Inappropriate destabilization of β-catenin in the induction of apoptosis has been shown in tumor cells (23). We observed that the protein levels of E-cadherin, β-catenin, and PPARγ were down-regulated in individualized hPSCs. In contrast, these protein levels were up-regulated after application of PPARγ activation and ROCK inhibition. Immunostaining, co-immunoprecipitation of E-cadherin and β-catenin, and plasma membrane and nuclear fractionation of the cells showed more plasma membrane localization of β-catenin and its direct interaction with E-cadherin after pioglitazone treatment. This result was consistent with a previous study which reported that ligand-activated PPARγ directly interacted with β-catenin and resulted in retention of this component in the cytosol (24). The up-regulation of β-catenin in individualized hPSCs occurred through GSK3β inactivation by ligand-activated PPARγ. This escalation of membranous β-catenin along with E-cadherin led to intensified colony formation in dissociated single hPSCs by pioglitazone and inhibition of β-catenin-mediated transcriptional pathways involved in promoting cell proliferation.
[Figure 5 legend] Induction of β-catenin by pioglitazone. A, Western blot analysis for phospho-GSK3β (p-GSK3β-Ser-9) in dissociated single hPSCs; data quantification showed enhanced p-GSK3β in cell extracts of Y-27632 plus pioglitazone (Pio). ***, p < 0.001. B, RT-qPCR analysis: β-catenin and E-cadherin transcripts were not significantly induced in Y-27632 plus pioglitazone. C, ChIP for β-catenin- and E-cadherin-associated PPARγ binding sites, performed using related antibodies; there was no significant difference (NS) between the groups, and no response elements for PPARγ were detected on the E-cadherin and β-catenin promoters.
[Figure 6 legend, partial] At the fourth hour, expressions of PPARγ, E-cadherin, and β-catenin increased significantly. B and C, we sought to determine if the Rho/ROCK pathway directly affected PPARγ expression; RhoA and PIP5K were separately co-transfected with a PPARγ expression plasmid under the regulation of a CMV promoter in a CHO cell line. Analysis of Western blots showed that PPARγ protein expression was influenced by PIP5K. *, p < 0.05. Therefore, the Rho/ROCK signaling pathway exerted its regulatory role on PPARγ by a direct inhibitory effect on its expression.
The mechanism of pioglitazone and Y-27632 action in enhancing E-cadherin is an intriguing question that awaits future investigation. A recent report has suggested that β-catenin transcriptional activity, through modulation of Tcf3 activity, plays a role in preventing exit from the pluripotent state (25). In contrast, it has been shown that transcriptional activity of β-catenin is negligible during self-renewal, which is due to the tight association of β-catenin with the plasma membrane, where it is in a complex with E-cadherin (26). On the other hand, E-cadherin can also recruit β-catenin to the cell membrane and prevent its nuclear localization and transactivation (27). The positive role of Wnt pathway activation by GSK3 inhibition in maintenance of undifferentiated hPSCs has been presented previously (28), although this is controversial (29).
Of interest, we observed that the expression of PPARγ transcripts increased after ROCK inhibition. On the other hand, overexpression of Rho/ROCK pathway components significantly decreased the amount of PPARγ protein. In the case of RhoA, there were no significant changes in the PPARγ level due to the absence of ROCK (a mediating factor in this pathway) to transfer this signaling. However, the level of PPARγ decreased significantly with PIP5K transfection. The inhibitory role of PPARγ agonists on the Rho/ROCK pathway in cultured rat aortic smooth muscle cells has previously been demonstrated (30). It was reported that Rho/ROCK activation inhibited expression of PPARγ, which thereby caused reduced adipogenesis (31) and increased cell proliferation in pulmonary artery smooth muscle cells of sheep (32). Those two pathways likely regulate each other in hPSCs.
[Figure 7 legend, partial] RT-qPCR analysis for stemness genes (B) in the undifferentiated state is shown. Shown are RT-qPCR (C) and Western blot (D) analyses after spontaneous differentiation by embryoid body formation in hPSC medium without basic FGF for 2 weeks. *, p < 0.05.
Co-treatment of pioglitazone and Y-27632 also markedly increased the cloning efficiency of hPSCs without affecting their pluripotency. However, we observed up-regulation of NANOG that could be related to interaction with β-catenin. It was demonstrated that increased β-catenin or the addition of Wnt3A to the culture medium promoted pluripotency and led to NANOG expression (33).
Taken together, the addition of the PPARγ agonist, pioglitazone, and Y-27632 to culture medium synergistically increased the cloning efficiency of both individualized hESCs and hiPSCs compared with Y-27632 alone in feeder-free culture conditions upon passaging. This might be related to adhesion, through enhanced up-regulation and accumulation of membranous β-catenin and its interaction with E-cadherin, as well as augmentation of signal transduction from the ECM, external environment, and the cell membrane into the cytoplasm, which resulted in changes to cellular dynamics and further downstream targets that regulated gene expression. These results provided a more favorable condition for hPSC manipulation and its biomedical applications. | 2018-04-03T00:55:39.881Z | 2015-09-02T00:00:00.000 | {
"year": 2015,
"sha1": "8bfe7e41a37cda679cedad6bd9cfdc040d98d89c",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/290/43/26303.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "4c083c97ecc85d5f7e311c0dd42ede1d7ec26521",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
219948349 | pes2o/s2orc | v3-fos-license | New Therapeutic Targets in Antineutrophil Cytoplasm Antibody–Associated Vasculitis
Antineutrophil cytoplasm antibody (ANCA)–associated vasculitis (AAV) is a rare systemic autoimmune disease that is characterized by necrotizing inflammation of predominantly the small blood vessels and the presence of circulating ANCAs directed against myeloperoxidase or proteinase 3. Current treatment strategies for severe disease, supported by the findings of several well‐coordinated randomized controlled trials, aim to induce remission with high‐dose glucocorticoids and either rituximab or cyclophosphamide, followed by relapse prevention with a period of sustained low‐dose treatment. This approach has dramatically improved outcomes in AAV; however, a significant proportion of patients develop serious treatment‐related side effects or experience relapse. Recent advances in our understanding of the pathogenesis of AAV have led to the identification of novel therapeutic targets that may address these problems, including strategies directed at the aberrant adaptive autoimmune response (B and T cell–directed treatments) and those targeting innate immune elements (complement, monocytes, and neutrophils). It is anticipated that these novel treatments, used alone or in combination, will lead to more effective and less toxic treatment regimens for patients with AAV.
Introduction
The antineutrophil cytoplasm antibody (ANCA)-associated vasculitides (AAVs) are a group of rare systemic inflammatory diseases which include granulomatosis with polyangiitis (GPA), microscopic polyangiitis (MPA), and eosinophilic GPA (EGPA). They are associated with the presence of autoantibodies to myeloperoxidase (MPO) or proteinase 3 (PR3), which are thought to play a pathogenic role, although ~10% of patients with AAV are ANCA negative and have clinical features similar to those who are ANCA positive (1). The AAVs are multisystem diseases that share common characteristics such as necrotizing inflammation of predominantly the small blood vessels, though the clinical presentation can vary widely, both in disease severity and in the spectrum of organ involvement.
Historically, outcomes in patients with AAV were poor, with a mortality rate of up to 80% at 1 year prior to the use of immunosuppressive treatment. This outlook was transformed with the introduction of cyclophosphamide, used in combination with glucocorticoids, in the 1970s.
Since then, there has been decade upon decade of improvements in patient outcomes (2,3), reflecting a combination of improved general medical care, earlier diagnosis, and more refined immunosuppressive regimens that have reduced toxicity from long-term cyclophosphamide use. These therapeutic strategies have been informed by a number of well-designed and collaborative clinical trials, and based on their findings, current guidelines stratify treatment depending on the severity and phase of the disease, usually with an initial period of intense immunosuppression to induce remission (often with cyclophosphamide or rituximab), followed by prevention of relapse with a period of sustained low-dose maintenance treatment (with drugs such as azathioprine, mycophenolate mofetil, or rituximab) (4).
With this approach, disease remission is attained in most patients during the first 6 months of treatment, and survival is estimated to be >90% at 1 year. However, unmet treatment needs remain: most deaths in the first year are now attributed to side effects of treatment, particularly infection. During long-term follow-up, therapy-related adverse events, including malignancy and cardiovascular disease, remain problematic, thus underscoring the need to refine treatment regimens further (5,6). In addition, approaches that can induce more sustained remission, thus avoiding the accrual of damage caused by disease relapse and its retreatment, are badly needed in this patient population. Advances in our understanding of disease pathogenesis, related to the aberrant adaptive T and B cell responses that underlie this autoimmune disease and to the role of innate immune components (including neutrophils, monocytes, and complement) as mediators of vascular damage, provide an opportunity to identify more specific, less toxic treatments for AAV. Many novel therapeutic agents are being investigated in preclinical and early-phase clinical studies (selected trials are listed in Table 1). In this review, we summarize the possible future therapeutic options for AAV (7-10) (Figure 1), although EGPA is not included herein because it has a distinct pathogenesis and different therapeutic approaches.
Targeting B cells
The emergence of rituximab as an effective induction and remission-maintenance treatment is arguably the most significant development in the management of AAV since the introduction of cytotoxic therapy almost a half century ago. B cells are clearly central to the pathogenesis of the disease, as they produce ANCAs. Moreover, studies have shown that the number of activated B cells correlates with disease activity (11), and B cell repopulation, and possibly the phenotype of B cells, after treatment with rituximab may be predictive of relapse (12).
Several second-generation anti-CD20 drugs are in development; these differ from rituximab in their epitope specificity, pharmacokinetics, and ability to induce either complement or antibody-dependent cytotoxicity and apoptosis, which in turn may impact the rapidity, depth, and duration of their depleting effect on the circulating and tissue B cell pools. Ofatumumab, a fully humanized anti-CD20 monoclonal antibody (mAb), has been tested in one small case series of patients with AAV, with the results showing its therapeutic benefit. However, none of these second-generation agents has been tested in randomized controlled trials (RCTs) in patients with AAV (13).
Obinutuzumab, which has increased antibody-dependent cytotoxicity and a greater capacity for direct B cell killing compared to rituximab, has shown promise in a phase II study in patients with lupus nephritis (ClinicalTrials.gov identifier: NCT02550652) (14). Conversely, an early study of another humanized anti-CD20 antibody, ocrelizumab, in patients with lupus was terminated early due to a higher-than-expected rate of infections (15,16). Whether these second-generation anti-CD20 drugs might provide incremental benefit over rituximab in AAV, without increasing the incidence and frequency of short-or long-term toxic effects (e.g., hypogammaglobulinemia, impaired vaccine responses), will require more detailed study.
The inhibition of B cell cytokines and survival factors is an alternative approach to direct targeting of B cells. B lymphocyte stimulator (BLyS), for example, plays an important role in B cell survival, and circulating BLyS levels are higher in patients with AAV than in healthy individuals. After treatment with rituximab (in AAV and other autoimmune diseases), serum BLyS levels rise (17,18), which may herald the occurrence of a relapse. In vitro, BLyS is released from neutrophils stimulated with ANCAs, suggesting that it has a specific role in AAV (19).
Belimumab is an anti-BLyS mAb licensed for the treatment of lupus and under investigation in AAV. The Belimumab in Remission of Vasculitis (BREVAS) trial examined the addition of belimumab to azathioprine and glucocorticoids for remission maintenance in patients who received either rituximab or cyclophosphamide for induction (20). Regrettably, the trial was terminated early due to under-recruitment, and no benefit for the primary end point (improvement in the relapse rate) was observed. However, in the subgroup of patients treated with rituximab for induction, fewer relapses were seen with belimumab (0 of 14 patients experiencing relapse in the belimumab group compared to 3 of 13 patients in the placebo group). Although the numbers of patients and relapses were both small, this might suggest a potential benefit of belimumab after treatment with rituximab (20).
The combination of rituximab and belimumab will be investigated further in the Rituximab and Belimumab Combination Therapy in PR3-AAV trial (COMBIVAS) (ClinicalTrials.gov identifier: NCT03967925), in which patients will be treated with rituximab and glucocorticoids for remission induction, and will be randomized to receive either belimumab or placebo for 1 year. It is hypothesized that the addition of belimumab will potentiate the effect of rituximab on B cell depletion and prevent the return of autoreactive cells, or suppress a broader repertoire of B cells (including those not expressing CD20) than can be achieved with rituximab alone, thus inducing more rapid and sustained remission (21).
Bortezomib, a proteasome inhibitor that drives plasma cells with high immunoglobulin synthesis to apoptosis, is an alternative approach for targeting the CD20-negative cell population. In a mouse model of MPO-AAV, treatment with bortezomib depleted MPO-specific plasma cells and decreased the severity of glomerulonephritis (22). There is a single report of its successful use in a patient with treatment-resistant PR3-AAV (23). The routine use of bortezomib is likely to be limited because of its side effect profile, as >30% of patients develop painful peripheral neuropathy. However, several novel and potentially less toxic proteasome inhibitors are currently in development (24,25).
Novel cell-based approaches may also be used to target autoreactive B cells. Chimeric antigen receptor (CAR) T cells are autologous cells that can be engineered to specifically target CD19+ B cells, an approach that has shown efficacy in some hematologic malignancies. It is also possible that an extension of this technology, chimeric autoantibody receptor (CAAR) T cells, may target autoreactive B cells through their antigen-specific B cell receptor. CAAR T cells have been tested in a model of pemphigus in humanized mice, in which it was observed that CAAR T cells induced lysis of pathogenic B cells (26). Although still in the early stages of development, CAAR T cells may provide a curative approach in AAV and other autoimmune diseases for which the target autoantigens have been defined.
Targeting T cells
The importance of aberrant T cell responses in AAV is increasingly recognized, and studies in experimental models of MPO-AAV have been particularly informative (27). In mice, disease can be attenuated by depletion of either CD4 or CD8 T cells, and adoptive transfer of T cells can initiate glomerular injury independently of ANCAs. In patients with AAV, circulating ANCAs are predominantly class-switched IgG1 and IgG4, implying that helper T cells play a role in this process. Immunostaining has identified T cells in the glomeruli and tubulointerstitium in renal biopsy tissue from patients with renal AAV, and several abnormalities in circulating T cell phenotype or function have been reported in patients with active disease (28,29). An exhausted T cell phenotype has been shown to correlate with a reduced risk of disease relapse (30). Activation of circulating T cells is reported to persist, despite treatment, during periods of remission, and therefore there may be a particular role for anti-T cell therapies in preventing relapse (31).
There are several small studies using T cell-directed therapies for remission induction, typically in patients with refractory disease. In an open-label cohort study of 15 patients with relapsing or refractory AAV, treatment with anti-thymocyte globulin led to a favorable response in 13 patients (32). Alemtuzumab is a humanized anti-CD52 mAb that depletes all lymphocytes, with a particularly long-lasting effect on T cells; CD4+ T cell counts take ~60 months to recover (33). When CD4+ T cells do eventually repopulate after treatment with alemtuzumab, some reports in patients with multiple sclerosis show a skew toward a Treg cell phenotype, which could contribute to long-lasting immunomodulatory effects (34). Long-term follow-up of 71 patients with refractory AAV treated with alemtuzumab showed that remission was achieved in 80% of patients, although relapse and severe adverse events were common (35). A subsequent phase II RCT, the Alemtuzumab for ANCA Associated Refractory Vasculitis (ALEVIATE) trial, compared high- and low-dose alemtuzumab in a mixed cohort of patients with refractory AAV or Behçet's disease. A preliminary report from the study indicated that 6 months after treatment, remission was achieved in 65% of patients and, although relapse was common, 35% had sustained remission at 1 year, by which point 26% of patients had experienced an adverse event (36). Thus, there may be a role for alemtuzumab in patients with difficult-to-treat disease, although the potential for adverse events is high compared to standard treatment strategies, likely reflecting both drug toxicity and susceptibility to infection due to disease-related factors. There are also reports of autoimmune phenomena occurring after alemtuzumab treatment in other diseases, such as multiple sclerosis, which are thought to be driven by expansion of T cells that have escaped deletion and become chronically activated; whether patients with AAV are at similar risk should also be considered (37).
In patients with nonsevere disease, abatacept, a fusion protein comprising the Fc region of IgG1 fused to CTLA-4, has been tested. Abatacept prevents the costimulatory signaling via CD80 and CD86 that is needed for the activation of T cells by antigen-presenting cells (38). In an open-label study of 20 patients with relapsing, nonsevere GPA who received abatacept in addition to methotrexate, mycophenolate mofetil, or azathioprine, remission rates of 80% were observed, and >70% of patients were able to wean off glucocorticoid treatment (39). An RCT that is currently recruiting patients, the Abatacept for the Treatment of Relapsing Non-Severe GPA (ABROGATE) trial (ClinicalTrials.gov identifier: NCT02108860), is evaluating this approach using a glucocorticoid-free regimen.
The Th17/interleukin-17 (IL-17)/IL-23 axis is also known to play a role in the pathogenesis of AAV. Serum levels of IL-23 and IL-17 are raised in patients with active disease, and stimulation of neutrophils with ANCAs has been shown to induce production of IL-17 (40,41). In one study, IL-17-deficient mice were protected from developing MPO-AAV (42). Monoclonal antibodies against IL-17 (secukinumab) and IL-23 (ustekinumab) have been tested in patients with psoriasis and those with rheumatoid arthritis, but there are no reports of their use in patients with AAV to date.
Targeting cytokines
Levels of circulating cytokines such as IL-6 and tumor necrosis factor (TNF) are elevated in patients with active AAV (43,44). A number of open-label studies and case series have demonstrated the successful use of anti-TNF therapies, although these results were not confirmed when tested in RCTs, the largest of which, the Wegener's Granulomatosis Etanercept Trial (WGET), recruited 174 patients with GPA. In this trial, the patients were randomized to receive either etanercept or placebo, in addition to standard treatment with glucocorticoids and methotrexate or cyclophosphamide, for remission maintenance (45). There was no benefit from etanercept on the rate of sustained remission, and this negative trial outcome means that anti-TNF agents have largely been discounted as a potential therapeutic option in AAV, though they may yet have a niche role in the treatment of specific disease manifestations such as ocular inflammation (46). IL-6 promotes B cell differentiation, activates macrophages, and induces production of other cytokines; serum IL-6 levels are elevated in patients with AAV, and IL-6 is expressed at sites of tissue inflammation (43). There are several case reports describing patients with AAV treated with tocilizumab, a humanized anti-IL-6 receptor mAb, showing that complete and sustained remission was achieved in many of the patients with otherwise refractory disease (43,47). Others have reported less favorable outcomes with tocilizumab, including treatment failure or infectious complications (47). Given the successful use of tocilizumab in other systemic autoimmune rheumatic diseases, controlled studies may be warranted to define the role of anti-IL-6 therapy in AAV.
Targeting complement
While historically regarded as a "pauci-immune" vasculitis, with few immunoglobulin or complement deposits in tissue, the past decade has seen the important role of complement in disease pathogenesis come to light (8). In patients with AAV, careful examination has identified complement deposition at sites of tissue inflammation, and altered levels of plasma and urinary complement components have been shown to correlate with disease severity (48,49). Convincing evidence of complement involvement came from experimental mouse models, in which a series of elegant studies dissected a role for alternative pathway activation, and for C5aR, the receptor for C5a (a potent anaphylatoxin and chemoattractant), in disease pathogenesis (50). In vitro, C5a can prime neutrophils to respond to ANCA stimulation, and both ANCA-stimulated neutrophils and neutrophil extracellular traps (NETs) can activate the alternative complement cascade, leading to a positive feedback loop (51,52). Ultimately, it was shown that in mice transgenic for the human C5aR, a small molecule antagonist of C5aR1 (avacopan, CCX168) was an effective treatment for MPO-AAV in a passive transfer model (53).
This compound then showed promising results in an early-phase clinical study of patients with AAV, in which it was found to be a noninferior substitute for prednisolone during remission induction (54). However, that study was small (n = 67 patients), was of short duration (12 weeks), and included only patients with nonsevere disease. A subsequent phase III trial, the Avacopan in Patients With ANCA-Associated Vasculitis (ADVOCATE) trial (ClinicalTrials.gov identifier: NCT02994927), completed recruitment of 300 patients in 2018. The patients were randomized to receive either avacopan or glucocorticoids during remission induction with either cyclophosphamide or rituximab, and top-line data released in late 2019 suggested noninferiority of avacopan at 26 weeks and superiority over glucocorticoids at 52 weeks, with an acceptable safety profile. However, we still await full publication of the study results, and it should be highlighted that the most severe cases (those with an estimated glomerular filtration rate of <15 ml/minute, requiring dialysis or plasma exchange) were still excluded.
An alternative anti-C5a treatment, IFX-1, a mAb that targets C5a rather than C5aR, and which may therefore have differing biologic effects from those of avacopan, is also being evaluated in phase II studies. Patients will be randomized to receive standard glucocorticoids, a combination of IFX-1 and reduced-dose glucocorticoids, or IFX-1 and no glucocorticoids during the remission-induction phase (European study, ClinicalTrials.gov identifier: NCT03895801) or randomized to receive standard of care plus IFX-1 or placebo (North American study, ClinicalTrials.gov identifier: NCT03712345). Recruitment is ongoing and completion is estimated by July 2021. Blockade of C5 cleavage is another potential treatment, though descriptions of eculizumab use in patients with AAV are limited to individual case reports (55).
Targeting neutrophils and monocytes
There is extensive evidence demonstrating the pathogenic role of neutrophils in AAV. Studies first published nearly 30 years ago have shown that ANCAs bind to and activate neutrophils, leading to degranulation and production of reactive oxygen species (56). ANCA stimulation of neutrophils has also been shown to activate intracellular signaling cascades, leading to increased neutrophil adhesion and transmigration at the vascular endothelium (57). ANCA stimulation can induce NETosis, a specialized form of cell death with release of NETs (extracellular meshes of decondensed chromatin and granular proteins). NETs are pathogenic in AAV: they can activate dendritic cells, autoreactive B cells, and complement; they are directly injurious to endothelium; and they may play a role in loss of tolerance to ANCA antigens (52,58). While many studies have focused on the role of neutrophils, monocytes also express the ANCA autoantigens and respond similarly to ANCA stimulation in vitro (59), and thus may contribute to tissue injury. A number of agents that target these neutrophil-and monocyte-mediated functions in AAV are in preclinical development.
Inhibiting NETosis may attenuate both vascular damage and potentiation of the autoimmune response by limiting aberrant extracellular expression of ANCA autoantigens. Peptidylarginine deiminase 4 (PAD-4) is essential for NET formation, as it plays a role in citrullination of histones, and PAD-4 inhibition decreases NET formation in vitro (60). In a mouse model of MPO-AAV, PAD-4 deficiency or use of a selective inhibitor decreased NETosis, MPO deposition, glomerular injury, and cell infiltration (61). PAD-4 inhibition has also been tested in mouse models of lupus, but as yet there have been no studies in humans.
Cathepsin C is a lysosomal peptidase that acts in the bone marrow to cleave neutrophil serine proteinases (NSPs), including neutrophil elastase and PR3, to their mature, active forms (62). Activated neutrophils release large amounts of these NSPs into the extracellular space, where they may initiate tissue inflammation and injury as a constituent of NETs. In a mouse model of MPO-AAV, knockout of cathepsin C protected against disease and decreased MPO-ANCA-induced IL-1β production in vitro (63). Cleaved NSP may also remain bound to the neutrophil cell surface and, in PR3-AAV, this translocation of PR3 to the cell surface may perpetuate disease; there is evidence in GPA that patients with higher levels of membrane-bound PR3 have more severe disease features (64,65). Thus, reducing cell surface expression of PR3 by preventing its activation by cathepsin C has been suggested as a potential therapeutic strategy. A recently developed pharmacologic inhibitor of cathepsin C was found to decrease the levels of membrane-bound PR3 on neutrophils, with no effect on neutrophil differentiation. In vitro, the inhibitor diminished PR3-ANCA-mediated neutrophil activation, and the compound also showed pharmacologic activity in mice, although it was not tested in an in vivo model of AAV (66).
Directly targeting MPO, the other ANCA autoantigen, is another approach that has been assessed in animal models. Like PR3, MPO is released from neutrophils and monocytes following activation, and may cause injury and activate autoreactive B and T cells. Extracellular MPO can deposit in glomeruli in AAV, and the amount of deposition correlates with the severity of disease (67). In vivo, treatment of mice with an MPO inhibitor decreased the severity of crescentic glomerulonephritis (67).
Spleen tyrosine kinase (Syk) is a cytoplasmic protein tyrosine kinase that is highly expressed in neutrophils and monocytes, and it plays a role in signaling for activatory Fc receptors and some integrins (68). Syk is activated in neutrophils following ANCA stimulation (69), and can be detected in leukocytes within glomerular lesions in patients with ANCA-associated renal disease (70,71). Fostamatinib, a small molecule inhibitor with selectivity for Syk, inhibits ANCA-induced neutrophil responses in vitro, and is an effective treatment for MPO-AAV in a rat model (70,72). Syk is also critical for B cell receptor signaling, and treatment with fostamatinib reduced autoantibody responses in an experimental model of anti-glomerular basement membrane disease, suggesting a potential dual therapeutic effect in AAV (73).
There may be concerns that targeting these innate immune responses may leave patients vulnerable to infection, though it is reassuring that congenital deficiencies of cathepsin C and MPO, for example, have relatively mild clinical phenotypes. In addition, clinical studies of Syk inhibition in other diseases, including rheumatoid arthritis, IgA nephropathy, and idiopathic thrombocytopenic purpura, did not show an increased risk of severe infections, and thus future trials testing these approaches in patients with AAV are warranted.
Combination drug therapy
Drugs targeting the innate immune response may be an effective substitute for glucocorticoids during acute disease, though they may be less effective at suppressing the underlying adaptive response, the control of which is needed to secure long-term remission. Conversely, specific targeting of B and T lymphocytes may not provide sufficiently rapid responses during acute flares to prevent accrual of organ damage. As an increasing number of potential therapeutic agents are identified, future studies will need to address how they are best used in sequence or in combination, either with each other or with existing therapies, and during different phases of the disease, to improve outcomes and reduce toxicity. The negative results in some RCTs in patients with AAV, such as in those treated with etanercept in the WGET study, may relate to their use as add-on therapy to conventional treatment, such that potential signals of biologic activity were lost, whereas the recent enthusiasm for complement inhibition arose following a successful trial designed to demonstrate that avacopan could wholly replace glucocorticoids during remission induction.
Conversely, "multi-target" therapy has recently emerged as an effective approach in patients with lupus nephritis, and combination approaches that target multiple aspects of the immune and inflammatory response may likewise be an effective way to provide rapid and sustained disease control in AAV, while avoiding toxicities caused by excessive exposure to individual drugs. One such combination approach using low-dose intravenous cyclophosphamide and rituximab, along with a rapid oral glucocorticoid taper, has been described in an open-label cohort study. Rates of remission at 6 months, mortality, long-term relapse rates, and renal outcomes were favorable when compared to those in matched historical controls from European Vasculitis Study Group studies (74,75). A similar approach, using a short course of oral cyclophosphamide and rituximab, has been reported in a single-center, retrospective case series of 129 patients, again showing favorable rates of remission induction and relapse (76). It is suggested that the early combination of cyclophosphamide and rituximab may allow reduction in glucocorticoid exposure during acute disease, while inducing sustained remission, perhaps through potentiation of B cell depletion.
However, concern for increased toxicity remains. A clinical trial of this combination treatment regimen-Exploring Durable Remission With Rituximab in Antineutrophil Cytoplasmic Antibody-Associated Vasculitis (ENDURRANCE) (ClinicalTrials.gov identifier: NCT03942887)-is currently recruiting patients, and will assess both immunologic responses (as its primary outcome) and clinical outcomes (including adverse events) following combination induction treatment with rituximab, low-dose intravenous cyclophosphamide, and low-dose glucocorticoids.
Conclusions
Modern immunosuppressive regimens have transformed outcomes in AAV, and several evidence-based treatment guidelines are now available, informed by the findings of high-quality RCTs. There are, however, unmet needs related to the potential for drug side effects and the unpredictable nature of disease relapse. Advances have been made in understanding the pathogenesis of AAV, which have identified many potential new targets for therapy directed at various aspects of the adaptive and innate immune responses underlying the disease. Some of these investigations have already progressed to clinical studies, such as targeting of the alternative complement cascade, and others remain in the preclinical stages of development. Future clinical trials of these novel therapeutic agents will need to establish their efficacy and, as an increasing number of potential treatments becomes available, will need to indicate how they can be used to complement or replace existing approaches. Moreover, with more agents at our disposal, future studies will need to incorporate biomarkers, predictors of flare, and stratification by patient factors that might influence treatment response, including age, comorbidities, and patterns of organ involvement (e.g., presence of granulomatous lesions, ANCA serotype, and potentially genotype), so that subgroups of patients likely to benefit from a given therapy can be identified. This should allow for the development of more tailored treatment protocols that maximize response while minimizing the side effects from unnecessary drug exposure, and thus would improve outcomes in patients with AAV.
AUTHOR CONTRIBUTIONS
Drs. Prendecki and McAdoo drafted the article, revised it critically for important intellectual content, and approved the final version to be published.
"year": 2020,
"sha1": "8e6ceb91ce95ffdea30ec3e6a262eed46e1cbc31",
"oa_license": "CCBY",
"oa_url": "http://hdl.handle.net/10044/1/80628",
"oa_status": "GREEN",
"pdf_src": "Wiley",
"pdf_hash": "05da00ab9d04dc9e2be7f7847076c72a6e193385",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Asking the right questions: The role of reflection for learning in and between projects
Learning is crucial for project-based organisations to improve and survive. As reflection is essential for learning in and between projects, this article examines how reflection unfolds and under which conditions it can act as a project learning mechanism. Using five gate reviews at a Dutch contractor as an embedded single case study, we show that reflection is necessary for, but cannot guarantee, learning in and between projects. Reflection emerges from and is embedded in the specific context of interpersonal project work. This reflection-for-action stimulates learning for the ongoing project, incorporates experience made in previous projects, and draws implications for future projects. However, for reflection to become a project learning mechanism, the reflection process needs to proceed to later phases and higher intensities, which depends on the relevance of the project issue at hand, the motivation of project team members to discuss this issue, and the reflection support they receive.
Introduction
Projects have become a prevalent mode of organising in many industries. They allow organisations to flexibly address complexities and dynamics of contemporary economic, societal, and environmental challenges. The one-off nature of projects represents both a source for creating new knowledge and a barrier to the continuous improvement of organisational routines (Ayas & Zeniuk, 2001; Wiewiora, 2023). The temporary configuration of task-dependent resources appears to stimulate learning within projects but limits the dissemination of this learning between projects and the wider organisation. It is this paradoxical learning potential of projects that has attracted much attention in the literature.
Reflection is seen as an essential ingredient for project-based learning (Sense, 2007; Söderlund et al., 2008; Duryan, 2023). Without reflection, experiences gained in projects cannot be transformed and shared; thus, organisational learning will not occur (Sergeeva & Duryan, 2021). It has been shown that project setting and environment can promote reflective habits that go beyond single projects (Ayas & Zeniuk, 2001). If organisations "systematically incorporate reflective practices into their project management processes" (DeFillippi, 2001, p. 6), projects can contribute to the long-term success of these organisations.
Despite the importance of reflective practices for learning in project-based organisations, an in-depth understanding of how these practices unfold in project work and can act as a learning mechanism is still missing. Such understanding is important since establishing reflection practices in projects creates a learning dilemma from a managerial perspective. On the one hand, it is argued that reflection is only effective for learning if tightly knotted with the day-to-day actions of project leaders and project teams and embedded in their direct interaction (Edmondson et al., 2001; Söderlund et al., 2008; Oeij et al., 2017). Yet, stimulating learning-effective reflective practices as an integral part of project activities remains difficult (Oeij et al., 2017; Kowalski & Russel, 2020). On the other hand, too strong a formalisation of reflection practices, for example, in the form of post-project reviews, often decouples reflection, and thus learning, from the actual project work and casts doubts on its usefulness since the concrete application of possible learning outcomes is left uncertain (Newell et al., 2006; Hartmann & Dorée, 2015; Guinness & Heathcote, 2022). It is this dilemma between reflection as a hardly manageable practice in project work and reflection as a formally organised but detached project activity that asks for further enquiry into the role of reflection as a learning mechanism in projects.
In this research, we thus aim at further disentangling reflection practices in projects and answering the questions: How does the reflection of project team members unfold to enable learning in and between projects, and under which conditions can reflection become a learning mechanism in projects? We develop a conceptual framework and apply it to five gate reviews at a Dutch contractor as an embedded single case study. As gate reviews are formally organised review moments within ongoing projects (Sethi & Iqbal, 2008), they are particularly suited for reflection and hence used here as the unit of analysis.
Based on the investigated gate reviews, we will show that reflection emerges from and is embedded in the specific context of interpersonal project work. It creates the potential for learning within ongoing projects but can also contribute to the learning between projects. However, reflection does not guarantee that learning effectively takes place and will only unfold if project team members are motivated and supported to rethink relevant project issues for the adjustment and change of the ongoing project.
The potential of reflection for project-based learning
Reflection is a constitutive element for learning in organisations since it relates the thinking about and the assessment of ongoing or past events and situations experienced at the workplace to future behaviour and actions (Edmondson, 2002; Faller et al., 2020). It is a process through which individuals enquire into their own personally relevant experience of a situation to make sense of it and to potentially generate a different understanding of it. This exploratory and transformative mode of creating meaning about work experiences makes reflection the propellant for organisational learning (Hilden & Tikkamäki, 2013; Reese, 2021).
If individuals become aware of the consequences of their actions, they can create new perspectives on and insights into their thinking and behaviour and eventually change them. This holds particularly for projects which, due to their temporary and idiosyncratic nature, often provide the organisational space to deal with new challenges and confront individuals with unexpected and surprising situations. Hence, projects are seen as the ideal setting for developing reflective practices and learning capabilities in organisations (Ayas & Zeniuk, 2001). At the same time, reflective practices are promoted as an integral part of project management to overcome contradictions between short-term project goals and long-term organisational learning strategies (DeFillippi, 2001; Lee-Kelley & Blackman, 2012; Duryan, 2023). They enable project managers to become improvement agents (Sundqvist, 2019), to cope with complexity, dynamism, and uncertainty in project planning (Rotimi & Ramanayaka, 2015), to tackle critical and unforeseen incidents in projects (Oeij et al., 2017), to span vertical and horizontal boundaries in project-based organisations (Duryan, 2023), and to encourage project team members to recognise their mental models (Chang et al., 2021). The latter indicates that projects are also conducive to reflective practices because they represent temporary social arenas where individuals interact and work together towards specific organisational goals.
Collaborative reflection in projects
Although, in the first place, reflection enables individuals to cognitively and introspectively transform their own experiences, the explication and sharing of these individual experiences in the organisational context of projects can initiate collaborative reflection. The diversity of knowledge that project team members bring to projects can, for instance, lead to differently experienced situations. Such variations in experiences and perceptions can create tensions and stimulate negotiations on the meaning of what happens, why it happens, and how to respond to it (Scarbrough et al., 2004; Tan, 2021). Collaborative and individual reflections become intertwined in a reciprocal process of discursive and relational practices and enrich each other (Knipfer et al., 2013).
Engaging in collaborative reflection then not only enables team members to validate and challenge personal ideas, actions, and plans through the views of others and to develop a common understanding of project-related problems (Gil & Mataveli, 2018; Wiese & Burke, 2019). It also allows project teams to scrutinise taken-for-granted cultural and organisational beliefs and assumptions underlying their work practices. Collaborative reflection is an important mechanism for learning in projects and developing project competencies in organisations (Söderlund et al., 2008; Duryan, 2023).
Reflection modes and learning
Learning from reflection can be initiated through two basic reflection modes, which tend to occur at different points in time: reflection-on-action and reflection-in-action (Schön, 1987). The retrospective reflection-on-action takes place after a task is finished. Project team members evaluate the accomplished task, create meaning from the experiences gained during task accomplishment, and draw conclusions for future tasks. This reflection mode heavily relies on the possibility of surfacing knowledge gained during a project and capturing it as implications to be transferred to subsequent projects (Sergeeva & Duryan, 2021). It has gained much attention in practice since it appears to be easily organised as a separate, controllable, and repeatable task, preferably at the end of projects (Inkermann et al., 2020). The danger is that reflection becomes a ritualised practice detached from the context of the actual project work, with rather vague, general, and uncritical outcomes and thus without learning emerging from it (Boud & Walker, 1998; Guinness & Heathcote, 2022). The situated reflection-in-action takes place while performing a task. As a response to their immediate work experiences, project team members interpret the current working situation and directly adjust their way of working to accomplish their ongoing task (Sergeeva & Duryan, 2021). Because of its embeddedness in ongoing practices, reflection-in-action is less controllable and repeatable outside these practices (Oeij et al., 2017). With reflection-for-action, a third reflection mode has been proposed (Killion & Todnem, 1991; Thompson & Thompson, 2023). It stresses the particular purpose of rethinking a practice in a forward-looking manner to anticipate future events and plan future actions. In the project context, this can mean that project team members examine their past actions in an ongoing project to inform and change their upcoming project work (Hartmann & Dorée, 2015).
Reflection process and intensity
In this paper, we conceptualise reflection in projects as a discursive process of articulating, sharing, and negotiating individual experiences of project issues within project teams to reach a collective understanding of the experienced issues and draw conclusions for further actions (cf. Knipfer et al., 2013; Duryan, 2023). This collaborative process can occur separately from or be intertwined with the actual project work. We analytically divide reflection processes into four phases (cf. Prilla et al., 2015; Oeij et al., 2017; Franken et al., 2018; Inkermann et al., 2020):

1. Articulating experience. Project team members articulate and communicate their pre-understanding of an individually and/or collectively experienced issue and describe their feelings attached to this experience. This can include negative issues creating discomfort or positive issues eliciting contentedness, and this cognitive dissonance often triggers reflection processes (Chang et al., 2021). Team members also elaborate on the contextual factors that, from their perspective, shaped the experience.

2. Developing an understanding of experience. Team members discuss their individual experiences to justify what happened and why it happened. Ideally, this leads to a shared understanding of the experienced issue. If such a shared understanding is reached, this sets the collective frame for evaluating the experience.

3. Re-evaluating understanding of experience. Team members evaluate their understanding of the experienced issue by linking relevant prior project or organisational knowledge or experiences to the experience they reflect on (Daudelin, 1996). This allows them to detect patterns of cause-effect relationships for the experienced issue (Boud et al., 2013) but also to challenge existing interpretations and groupthink, pose searching questions for alternative explanations, and explore different interpretations (van Woerkom, 2003).

4. Drawing reflection outcome. Team members agree on what a satisfactory outcome of the re-evaluation is. This can include an improved or different understanding of or a new perspective on the experienced issue, advice on behavioural changes, or plans for action related to the project and/or organisation (Boud et al., 2013).
Previous research on workplace learning has shown that the intensity of reflection can vary, i.e. the extent to which the issue is scrutinised in the reflection process (Høyrup, 2004; Fleck & Fitzpatrick, 2010). Therefore, we distinguish four levels of reflection intensity in projects:

Level 0 - Revisiting. Project team members only articulate their experiences of an issue without further elaboration. Although experiences are made explicit, the outcome remains rather unproductive for project learning (cf. Davis, 2006).

Level 1 - Descriptive reflection. The experienced issue is described and justified based on prior knowledge but without exploring alternative explanations and searching for new perspectives to understand it (Lee, 2005; Ward & McCotter, 2004). It mainly will lead to what Argyris (1999) calls single-loop learning: "an error is detected and corrected without questioning or altering the underlying values of the system" (p. 68).

Level 2 - Dialogic reflection. Team members deliberately step back from the experience to ponder the experienced issue from multiple perspectives and seek alternative explanations and new relationships with prior knowledge to re-organise or change project practices. Within the project context, assumptions on how project work must be done are often challenged, leading to double-loop learning (Argyris, 1999).

Level 3 - Critical reflection. At this level, team members also question assumptions about their project practices but go beyond the actual project context, scrutinising important taken-for-granted beliefs and values on which project work in an organisation builds (Mezirow, 1990; Reynolds, 1998; Matsuo, 2019). A fundamental critique of the organisation's ability to improve project practices is brought forward, which can become the origin of what has been coined triple-loop learning (Tosey et al., 2012).
A higher reflection intensity is not necessarily better than a lower intensity level. The intensity will depend on the nature of the experienced issue. For example, descriptive reflection might be sufficient to understand project deviations and identify appropriate actions to bring the project back on track. On the other hand, critical reflection might be appropriate if project deviations systematically reoccur as the likely result of specific organisational rules underlying project practices.
Reflection conditions
The extent to which reflection in projects occurs is subject to several conditions pertaining to the project environment and team members. They can be categorised into three main groups: opportunity, ability, and motivation to reflect (cf. Kelloway & Barling, 2000).
Opportunity to reflect
Opportunity concerns conditions posed by the project environment in which the participants collect their experiences and engage in collaborative reflection. Reflection may take time. Particularly for reaching higher reflection intensities, sufficient time should be available to explore an experienced issue from multiple perspectives and search for alternative explanations (Moon, 1999; Wallman et al., 2009; Groen, 2015). This remains a significant struggle in projects (Chronéer & Backlund, 2015; Wiewiora et al., 2020). Engaging in reflection processes often requires a specific reason to reflect, and that reason is often found in ongoing projects (Hartmann & Dorée, 2015). It may also need encouragement, support, and guidance by single project team members who initiate reflection processes, structure them efficiently, and increase their quality (Fleck & Fitzpatrick, 2010; Koole et al., 2011; Chang et al., 2021). Whether a project environment provides challenges is another condition for reflection since it allows for opportunities to create experiences outside someone's comfort zone (Eraut & Hirsh, 2010). In combination with flexibility and creativity, it lays a fertile ground to learn by reflection (Kump et al., 2011).
Ability to reflect
Ability relates to the personal skills of the reflecting team members. This includes the mental capability of abstract thinking to create distance from the experience, take a helicopter view, explore causes and effects, and draw conclusions from experience (Groen, 2015). Negotiating and re-evaluating experiences in a collaborative setting also requires communication skills: team members must elaborate on their experiences, and others must listen (Knipfer et al., 2013). Moreover, openness about mistakes is essential for reflection to be genuine and valuable for learning (de Groot et al., 2014; van Woerkom & Croon, 2008). It is the prerequisite for reaching a collective understanding of the mistake that can prevent it in the future. Reflecting itself is a skill that can be trained through repeated practice. Hence, reflection experience contributes to the ability to reflect (Ayas & Zeniuk, 2001; Fergusson, 2022).
Motivation to reflect
Motivation to reflect includes both intrinsic and extrinsic motivation of project team members. Intrinsic motivation concerns the willingness and inclination of team members to engage in individual or collaborative reflection, to share experiences, and to scrutinise the experience to learn from it (Knipfer et al., 2013; Nolan & Sim, 2011). Reflection practices of intrinsically motivated team members are triggered by discrepancies between the experienced issue and their mental models that create a particular curiosity to explore and understand the experience (Høyrup & Elkjaer, 2006; Chang et al., 2021). Extrinsic motivation relates to an external stimulus that encourages team members to participate actively in reflection (Fleck & Fitzpatrick, 2010). Related to motivation is trust. Without trust in collaborative reflection, participants will be reluctant to openly share their experiences and mistakes for fear of retaliation (Groen, 2015; Raelin, 2002). Høyrup & Elkjaer (2006) note that reflection in an organisation is not easy because management may not value the outcome, and employees might be afraid to reveal the shortcomings of the organisation or their superiors. Thus, trust is essential to question the organisation's values and assumptions in collective reflection.
Fig. 1 depicts our conceptual framework of project-based reflection processes. It conceptualises reflection as a multi-dimensional collaborative process. This process is initiated by experienced project issues, affected by project-related conditions, uses experiences and knowledge from past projects and known organisational practices, and potentially yields learning outcomes for the current project, future projects, and the wider organisation.
Research method
As collaborative reflection practices are embedded in projects and involve sharing and negotiating experiences amongst project team members, we adopt a qualitative, embedded single-case study approach (Yin, 2009). This qualitative approach allows us to investigate how and to what extent project teams can make sense of experienced issues and draw implications for their project work. By comparing multiple project contexts within the same organisation, we can better understand the role of reflection for learning in and between projects and how conditions that vary across projects influence this.
Embedded case study approach
The investigated case is a medium-sized Dutch civil contractor. It is a subsidiary of the largest contractor in the Netherlands, with the core business of building concrete infrastructure (e.g., viaducts, locks, and tunnels) and executing the associated project management tasks. Recently, the contractor has started implementing gate review procedures for projects of 1 million euros and more. Five gate reviews are our embedded units of analysis.
Purpose and procedure of gate reviews
The purpose of the gate reviews is to provide management support to project teams from tender to closure based on assessed project performance. Moreover, the gate reviews are meant to provide the management board with early signs of deviant project trajectories and enable them to coordinate resources across projects. The gate review procedure comprises eight fixed evaluation moments at different project phases, at which several performance aspects (e.g., finances, organisation, contract management, risk management) are assessed. Based on the assessment, it is decided whether the project can proceed to the next phase. All gates are mandatory except gates five, seven, and eight. Whether the project manager includes these gates depends on the project risks. For the case study, we selected the fourth, sixth, and seventh gate of five projects, which occur after the tender has been submitted. The fourth gate review is about preparing the project to get started. This can be followed by an optional fifth gate to review whether all aspects of the preliminary design have been deliberately thought through. Likewise, the final design is assessed in the sixth gate. The seventh and eighth gates are optional again. They concern whether the project team is ready for the start of the physical execution of the project and the transfer of the project to the client, respectively.
Organisational set-up of the gate reviews
While gate reviews are common for large-scale projects, how they are organised in the selected case is unique in the sense that the gate reviews are guided by facilitators who are independent and not involved in the reviewed project. Their role is to ask questions related to the assessed criteria and give a final verdict on the project's performance. The facilitator team consists of one permanent senior employee responsible for correctly utilising the gate review procedure, the lead gate reviewer. They are assisted in the reviewing process by another employee, known as the gate reviewer. The staffing of the facilitator team depends on the criteria being reviewed. The facilitators often conduct multiple gate reviews on different projects. The reviewed project is represented by the team consisting of members in different roles (i.e. the participants). However, not all project team members are involved in the gate reviews, and reviews often only include those seen as essential to inform about the project's performance. The facilitators collect information in two ways. First, project documentation is studied and compared with organisational standards. Second, group interviews are held with project team members to understand the actions taken and decisions made. After the gate review, the facilitators present their assessment results. These are binding and can be either: (1) green, the project performance is sufficient, and the project can proceed to the next phase; (2) orange, the project does not fully meet all assessment criteria but can proceed with the precondition that recommendations are followed; or (3) red, the project does not meet the assessment criteria and cannot move to the next phase until critical issues are solved. After the review, significant findings are documented and shared with the project team and management board.
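To make the three-colour verdict concrete, the following minimal Python sketch illustrates the decision rule described above. The function name and its two boolean inputs are our own simplification for illustration; they are not part of the contractor's actual procedure.

```python
from enum import Enum

class Verdict(Enum):
    GREEN = "sufficient performance; proceed to the next phase"
    ORANGE = "proceed, provided the recommendations are followed"
    RED = "hold until critical issues are solved"

def gate_verdict(all_criteria_met: bool, has_critical_issues: bool) -> Verdict:
    """Illustrative mapping of a gate assessment to the binding verdict.

    Both boolean inputs are hypothetical simplifications of the
    facilitators' judgement, not fields of the actual procedure.
    """
    if has_critical_issues:
        return Verdict.RED      # project cannot move to the next phase
    if all_criteria_met:
        return Verdict.GREEN    # project performance is sufficient
    return Verdict.ORANGE       # conditional pass with recommendations
```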
The gate reviews represent formally organised evaluation moments while being integrated into ongoing projects. They are set up to facilitate the discussion between participating project team members and facilitators on the project's current performance. This discussion is expected to be fed by the articulation, sharing, and negotiation of the experiences gained by project team members. The gate reviews are, thus, very well suited to address the reflection dilemma in projects and explore the role of reflection for project-based learning. The selection of gate reviews as our unit of analysis was driven by a theoretical purpose. To allow for explanations of the role and extent of reflection, we selected five gate reviews that show many similarities in the organisational setup (gates, number of participants, and facilitator) but also some differences, mainly in the criticality of experienced problems before the gate reviews and thus the evaluation results. Table 1 provides the main characteristics of the five gate reviews.
Data collection
Multiple data collection methods were used (Table 2): document analysis, nonparticipant observations, and interviews with participants and facilitators of the gate reviews.
The document analysis provided an understanding of the project context and the verification of gate review outputs. The documents studied included the general project documentation, the project management plan, the project planning, and the minutes of the observed gate review. The context concerned the type and size of the project, project goals, project client, project stage, and the issues at play before the gate review. Gate review outputs included topics discussed, points of attention for the project, and implications. Gate review participants captured them, and we compared them with the outputs we derived from observations and interviews.

(Table 3, excerpt of reflection phase indicators: "Articulating experience": participants describe the experienced issue, how they understand and feel about it, and its context; "Articulating negatively experienced issues".)
Nonparticipant observations of the gate review sessions were used to gather data about reflection phases, reflection intensity, reflection conditions, and the role of reflection for project-based learning. The gate review sessions were audio recorded. Field notes were taken to register behaviour, interaction, and discussion amongst participants based on the operationalisation of the main concepts. Tables 3, 4, and 5 present the indicators used to identify reflection phases, intensity, and conditions. Not presented in one of the tables are the indicators for the role of reflection for project-based learning. This was assessed regarding the links that gate review participants made to other project experiences or organisational knowledge (e.g., strategy, standards) and the extent to which they drew implications from the review for the project organisation.
Semi-structured interviews with participants of the gate reviews complemented the observations. During these interviews, questions were asked related to the context of the project evaluated and the gate review process as experienced by the interviewee. In total, 13 interviews were conducted, lasting between 25 and 50 min: 12 interviews were held with project members, A (1), B (3), C (2), D (2), E (4), and 1 interview with two facilitators (Gate Review E). For Gate Reviews A, B, C, and D, a brief evaluation of the gate review was held with the facilitators at the end of the review. All interviews and brief evaluations were recorded and transcribed for analysis.
After the five gate reviews, a panel meeting with three facilitators and a senior manager was organised. The meeting lasted 90 min and was meant to validate the findings from observations and interviews and identify practices for stimulating and supporting reflection in gate reviews. The meeting was held online and recorded for analysis.
Data analysis
Data were analysed within the individual gate reviews and across gate reviews. For this purpose, project documents and gate review minutes, observation field notes, audio recordings of gate reviews, and transcribed interviews were coded. Directed qualitative content analysis was applied (Hsieh & Shannon, 2005). Initial codes were derived from the presented operationalisation of the reflection process (Table 3), reflection intensity (Table 4), reflection conditions (Table 5), and the role of reflection for learning (see 3.2). Then, we took four steps to code the recordings of the gate reviews. We also coded interviews, field notes, and documents in a fifth step. All coding was performed in ATLAS.ti 8.
Coding procedure
The first step aimed to identify all discussed topics in the recordings. Here, we differentiated between discussed topics that included reflection, coded as reflection topics (between 18 and 42 per gate review), and those that did not, coded as control topics. A topic was coded as a control topic when, for example, facilitators asked whether the project's schedule was on track, and the participants' responses did not lead to any discussion. Reflection topics included a reflection process covering a particular experience of participants and thus had a beginning and an end. Hence, a discussion topic was coded as a reflection topic when a single reflection activity indicated a specific reflection phase. Here, at least the elaboration of an experience by one participant and the response of another participant was needed to count as collaborative reflection (Fleck & Fitzpatrick, 2010). The second step was to determine the performed reflection phase for all identified reflection topics. This was done by coding the occurring reflection activities (see Table 3). For achieving a specific reflection phase, participants conducted at least one reflection activity corresponding to this phase. The third step comprised the coding of reflection intensity for each reflection topic (see Table 4). When multiple intensity indicators were present, the intensity was set at the level with the most indicators. For example, when a reflection topic had two indicators for descriptive reflection and one for dialogic reflection, the intensity level was set at descriptive reflection. In the case of an equal number of indicators, the highest intensity was leading. In the fourth step, the discussed reflection topics were coded for the role of reflection for project-based learning. More specifically, if participants explicitly mentioned other projects to make sense of the experience they reflected on, this was coded as linking to other project experiences. If participants referred to organisational procedures and standards to create an understanding of the experienced event, this was coded as linking to organisational knowledge. The drawing of implications was coded if participants explicitly expressed a cognitive or behavioural change or a required action for the project or the organisation. The fifth and final step included coding the reflection conditions based on the operationalised indicators (see Table 5). For this step, observational data were triangulated with interview data. For example, if participants had the impression that only some issues were sufficiently discussed, this was coded as insufficient available time.
Conditions are not specific to a particular reflection topic but are similar for all reflection topics of a single gate review. The researchers repeated all the above steps in a separate round to increase the coding stability.
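The intensity rule in the third coding step is essentially a small algorithm. The following Python sketch is our own illustration of that rule, assuming the indicators per intensity level have already been tallied for a reflection topic; the level names and the tie-break in favour of the higher level follow the description above.

```python
# Intensity levels in ascending order, following the framework above.
LEVELS = ["revisiting", "descriptive", "dialogic", "critical"]

def assign_intensity(indicator_counts: dict) -> str:
    """Set the intensity at the level with the most indicators; on a tie,
    the higher level prevails (coding step three). `indicator_counts`
    maps level names to tallied indicator counts for one reflection topic.
    """
    best_level, best_count = "revisiting", 0
    for level in LEVELS:  # ascending order, so higher levels win ties
        count = indicator_counts.get(level, 0)
        if count >= best_count and count > 0:
            best_level, best_count = level, count
    return best_level

# Example from the text: two descriptive indicators and one dialogic
# indicator yield a descriptive intensity.
assert assign_intensity({"descriptive": 2, "dialogic": 1}) == "descriptive"
# With an equal number of indicators, the highest intensity is leading.
assert assign_intensity({"descriptive": 1, "dialogic": 1}) == "dialogic"
```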
Gate review analysis
After the coding of data, each gate review was separately analysed. We first quantitatively expressed the extent (absolute and relative) of reflection phases and intensity reached over all reflection topics addressed in the gate review. This was followed by analysing how the identified reflection conditions could explain the attained reflection phases and intensities. We then analysed the relationship between the reflection phase and intensity and the role of reflection for project-based learning. We were particularly interested in the reflection phases and intensity levels at which participants use experiences from previous projects and organisational knowledge and draw implications for the project and organisation. In the next step, we compared the individual results of the gate reviews. By analysing the differences and similarities across gate reviews, patterns could be discovered to build empirical evidence. This provided insights into the relationship between the extent of reflection and the reflection conditions in projects and how this relationship can enable learning within and between projects.
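The first analysis step, expressing the absolute and relative extent of reflection phases and intensities, amounts to a simple aggregation over the coded reflection topics. A minimal sketch, assuming hypothetical topic records that carry their achieved phases and final intensity:

```python
from collections import Counter

def extent(topics: list) -> dict:
    """Absolute and relative frequencies of achieved reflection phases
    and final intensity levels over the reflection topics of one gate
    review. Each topic record is a hypothetical dict with the keys
    'phases' (list of achieved phases) and 'intensity' (final level).
    """
    n = len(topics)
    if n == 0:
        return {"phases": {}, "intensity": {}}
    phase_counts = Counter(p for t in topics for p in t["phases"])
    intensity_counts = Counter(t["intensity"] for t in topics)
    return {
        "phases": {p: (c, c / n) for p, c in phase_counts.items()},
        "intensity": {i: (c, c / n) for i, c in intensity_counts.items()},
    }

# Two illustrative topic records for a single gate review.
topics = [
    {"phases": ["articulating", "understanding"], "intensity": "descriptive"},
    {"phases": ["articulating"], "intensity": "revisiting"},
]
print(extent(topics))
```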
Findings
In the following, we present our findings in line with our framework. It should be noted that 'participants' refers to all project team members and facilitators participating in the gate reviews.
Discussed topics
Although the gate reviews were primarily meant to control and assess the progress of the projects, reflection played an essential role in all reviews. Of the discussed topics, between 61% (Gate Review D) and 78% (Gate Review E) involved reflection (Table 6).
Across the five gate reviews, 59% of the reflection topics related to negatively perceived project performance, whereas only 7% had a positive connotation. According to facilitators, discussing topics that are going well is time-consuming and not motivating for project team members, as it does not contribute to the immediate improvement of a project. It also does not fit the nature of gate reviews, which focus on project performance and on correcting any deviations.
The facilitators initiated many reflection processes by asking how specific project tasks were executed based on predetermined assessment criteria. The reflection processes evolved from an initial control aspect when participants reported challenges and difficulties. In three cases, we identified notable exceptions. In Gate Review B, three reflection processes were initiated by project team members themselves. In Gate Review D, project team members started the reflection three times based on their experiences. In Gate Review E, this happened five times. Another noticeable finding is that in Gate Review D, 6 of the 26 reflection processes occurred during the feedback moment at the end.
According to project team members, these reflection processes were initiated by the differing views of facilitators and project team members on the gate review.
Reflection phases
Although all reflection phases were covered in the identified reflection processes, not all were considered to the same extent (Table 7). All gate reviews show that later phases in the reflection process are less likely to be achieved. As the initiating phase, articulating experience was present in all reflection processes. The next phase, developing a shared understanding, only occurred in 76% of the reflection processes. The last two phases occurred even less, with 67% for collaborative re-evaluation and 43% for drawing collective outcomes.
Particularly in Gate Reviews B, C, and D, attention to the last two reflection phases, collaborative re-evaluation and drawing collective outcome, was limited. Participants focused on describing issues without evaluating whether an issue was seen as a challenge, problem, or positive experience. At the beginning of Gate Review D, the dialogue even became unstructured, and experiences were not placed as central discussion topics. Remaining descriptive implied an emphasis on checking the project's performance instead of learning from the issue at hand. The focus was more on understanding what happened than on making sense of the experiences and improving the situation. Only a few reflection processes covered all phases in Gate Reviews B, C and D. Even though reflection outcomes were concluded in these processes, participants did not plan for actions or translate the outcomes into changed behaviour. Two weeks after Gate Review B, no actions had yet been taken based on the evaluation results.
Participants performed many reflection activities in Gate Reviews A and E and achieved more reflection phases. Since both projects performed unsatisfactorily, challenges and problems were the main focus. Eventually, 61% of the reflection processes in Gate Review A and 50% of the reflection processes in Gate Review E achieved all reflection phases, with much attention to the phases collaborative re-evaluation and drawing collective outcome. In both gate reviews, participants aimed to understand and learn from the experienced issues. In Gate Review A, the intention of the facilitators to understand and resolve these issues often led to the progression of the reflection process to the conclusion phase, in which the facilitators also gave much advice on how to improve. In four instances, the advice was built upon previous project experience. For example, for the problem of immature knowledge about the changes in the contract, one of the facilitators said: "In prior projects, we have invested in many lunch lectures about specific topics like contractual awareness and changes in the contract." In Gate Review E, the participants held a constructive dialogue in the collaborative re-evaluation phase by questioning each other's interpretations, adding perspectives, and determining the causes and effects of an experience. During the last reflection phase, they planned for action and explicitly stated implications for the organisation.
Reflection intensity
About 31% of all reflection processes could be characterised as revisiting reflection, 40% as descriptive reflection, 24% as dialogic reflection, and 5% as critical reflection. Most reflection processes concluded with the first two intensity levels (Table 8). Critical reflection is absent or seldom achieved. Noteworthy here is that critical reflection always resulted from reflection activities in the collaborative re-evaluation and drawing collective outcome phases. Most of the processes with a dialogic reflection also covered these two reflection phases. Reflection processes with a revisiting intensity remained within the first two reflection phases. When the number of achieved reflection phases increased, the intensity also increased: the reflection phases corresponded with the reflection intensities. This also means that gate reviews with more reflection phases achieved higher intensities. While in Gate Review E, 29% of the reflection processes finished with dialogic reflection and 14% with critical reflection, in Gate Review D, dialogic reflection was the highest intensity, achieved in only 12% of the reflection processes. An explanation for the low intensities of Gate Review D is that participants often only explained an issue without exploring the underlying reasons for why it happened. This is in contrast with the participants of Gate Review E, who took multiple perspectives, questioned each other, and explicitly mentioned the broader implications of experiences.
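The claimed correspondence between phases and intensity can be inspected with a simple cross-tabulation of the coded processes; again, the records below are illustrative placeholders rather than the study's data.

```python
from collections import Counter

# Hypothetical coded processes: (deepest phase reached, intensity attained).
processes = [
    ("articulating experience", "revisiting"),
    ("developing a shared understanding", "descriptive"),
    ("collaborative re-evaluation", "dialogic"),
    ("drawing collective outcome", "dialogic"),
    ("drawing collective outcome", "critical"),
]

crosstab = Counter(processes)
for (phase, intensity), count in sorted(crosstab.items()):
    print(f"{phase:34s} {intensity:12s} {count}")
```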
Reflection role for learning
In all gate reviews, participants explicitly drew reflection outcomes for the project and the organisation (Table 9). Overall, 40% of all reflection processes finished with implications for the current project. In Gate Reviews A and E, the largest shares of reflection processes concluded with implications for the project, 67% and 50%, respectively. Both gate reviews dealt with challenges and problems in the project, and participants mainly focused on improving the projects. In Gate Review D, implications for the project were drawn in only 15% of the reflection processes. Across all gate reviews, implications for the project mainly regarded the planning for action to change working practices in the ongoing project.
The number of implications drawn for the organisation varies less across gate reviews, and only in 14% of all reflection processes were such implications the outcome. In Gate Review B, the relatively high number of organisational implications compared to project implications can be attributed to the well-performing project, through which the discussion focused more on what other projects may learn. For example, project team members explained that the client and the project team assess each other's work, and a facilitator concluded: "I think this is a best practice which we need to implement further within the organisation". However, in all gate reviews, the organisational implications were often not concrete actions but proposals for taking up specific issues at the organisational level and for using well-experienced practices in other projects. For example, in Gate Review A, the lack of tender assumptions was discussed, and one of the facilitators stated: "We should really learn as an organisation to determine the target quantities and monitor the targets during the design process."

Table 7
The achieved reflection phases in the reflection processes.

There is a general tendency in all gate reviews that the more the reflection progressed, the more implications were formulated. While implications for the project were drawn in 40% of the reflection processes that achieved the articulating experience phase, this was the case in 67% of the reflection processes reaching the drawing collective outcome phase. Likewise, implications for the organisation resulted from only 14% of all reflection processes with the articulating experience phase, whereas 33% of the processes with the drawing collective outcome phase finished with such implications. However, not all processes with many achieved reflection phases and a high intensity had implications for the organisation or the project. In Gate Review C, one reflection process involved only a few reflection activities but nevertheless finished at a dialogic reflection intensity and with implications for the organisation. Here, the topic regarded the use of 3D designs, which had already been discussed in another gate review with the same facilitators and one of the project team members. Consequently, as they referred to the other gate review, the participants needed only a little discussion.
The analysis of the five gate reviews also revealed that with higher reflection intensities and more reflection phases achieved, experiences from other projects and organisational knowledge are more likely to be mobilised in the reflection process. In 26% of the reflection processes, the participants referred to experiences made in other projects, and in 8% of the processes, organisational knowledge was activated. Particularly in the phases collaborative re-evaluation and drawing collective outcome, project team members explored a project issue by comparing it to their existing cognitive frames built upon prior project experiences and accumulated organisational knowledge in the form of standards and procedures. Across all gate reviews, this was done to emphasise that similar issues were encountered in other projects, to stress the relevance, to understand the causes and effects of the current issue, to propose solutions, and to provide advice. In Gate Reviews A, B, and C, it was mainly the facilitators who made links to other project experiences and resorted to organisational knowledge. In Gate Review B, project team members explicitly asked how other projects deal with the issue. The facilitators then explained the procedures followed in other projects based on their experience and knowledge.
Notably, 10 of the 13 organisational implications resulted from discussions in which references were made to previous projects. These discussions revealed the relevance of a project issue for the broader organisation, since the issue had already been experienced in other projects in similar ways.
Reflection conditions
The emergence of reflection in the gate reviews was subject to several conditions relating to the opportunity, the ability, and the motivation to reflect (Table 10).
Opportunity to reflect
The facilitators played a critical role in shaping the reflection opportunity in the gate reviews. They did so by asking open questions, providing feedback, articulating their opinion, referring to previous experiences and knowledge, and giving advice. On the one hand, these activities contributed to the progression of reflection processes and to gaining higher reflection intensities. On the other hand, the extent of engagement of the facilitators, particularly in Gate Reviews A, B, and C, limited their attention to the opinions and perceptions of the project team members. The facilitators often dominated the conversation, making it a question-and-answer session rather than an open discussion.
Gate Review D additionally underscores the critical role of the facilitators. Here, the support for reflection was lacking. At the beginning of the gate review, the discussion was unstructured because the facilitators did not divide the tasks of taking minutes and guiding the dialogue. One facilitator did both tasks, making it difficult for him to focus on the discussion. The other facilitator did not actively participate. He had little experience with gate reviews and struggled with guiding the dialogue, asking critical questions, spurring reflection, and tapping the learning potential. For example, one reflection process started very promisingly, with a project team member elaborating that the way of organising projects needs to fit the time pressure associated with wind turbine projects. However, the facilitators did not pick up this chance to explore the organisational systems' assumptions and instead focused on how the project team coped with the situation. The potential implications for the organisation were not exploited.
Table 9
The role of reflection for learning in the gate reviews.

Table 10
The reflection conditions in the gate reviews. (+) positive influence on reflection; (-) negative influence on reflection.

In Gate Review E, the facilitators chaired the gate review and strongly emphasised making the gate review a dialogue between participants. They were critical during the review and asked searching questions to get to the bottom of the experience, letting participants reflect more intensely. There was a dialogue between the project team members and the facilitators and amongst the project team members. They questioned each other's interpretations, added relevant information if needed, and provided new perspectives on the experience, resulting in higher reflection intensities. In Gate Reviews A, C, and D, facilitators repeatedly mentioned that the gate review should progress due to time constraints. For example, in Gate Review A, the reflection stopped in three processes, and the facilitators pushed the discussion to the following topic: "We have to move to that topic considering the time left". Although the time constraint limited the reflection opportunity, interviewed participants mentioned that all topics were sufficiently discussed. In Gate Review C, however, project team members felt the dialogue was rushed, and some issues were insufficiently discussed. This resulted in fewer reflection phases and lower reflection intensity per reflection process than in other gate reviews. Here, the facilitators also stated that they were tired from conducting two gate reviews one after the other and were therefore less focused.
Ability to reflect
The role of the facilitators already points to differences in the communication pattern between gate reviews. In Gate Reviews A, B, C, and D, the dialogue was mainly driven by the facilitators and took place between them and the project team members, and less amongst the project team members. The facilitators also mainly provided the conclusions, with little involvement of the project team members. As a result, particularly in Gate Review D, project team members had a different understanding of project issues and did not agree with all conclusions. In Gate Review B, conclusions were not accounted for and did not lead to changes in the project. In Gate Review D, the facilitators also paid much attention to the project documents and thus lost focus on the dialogue. In Gate Review C, communication was additionally hampered as the facilitators were distracted due to tiredness. However, in this gate review, visual aids such as drawings and examples from other projects helped to reach a common understanding between project team members and facilitators. In Gate Review E, the experienced facilitators participated less directly in the discussion, stimulating and guiding the dialogue amongst project team members. Project team members entered a mutual discussion, challenged each other, and added multiple perspectives, leading to a greater extent of reflection.
There are also differences in the openness about mistakes, particularly between Gate Reviews A and B. In Gate Review A, project team members openly spoke about their mistakes, which contributed to the progress of the reflection process, since project team members were willing to understand and solve critical issues. In Gate Review B, the openness to talk about problems was limited. A project team member contained himself during the discussion and awaited the implicit approval of the project leader to elaborate on problems. From the perspective of the interviewed project team member, around 5% of the topics were sugar-coated, making the reflection less genuine and thus of less value.
Motivation to reflect
Project team members of Gate Reviews A and E were highly motivated, as they saw the gate review as an opportunity to discuss critical project issues and receive feedback on improving the project. Their willingness to delve deeper into what went wrong and how to change current practices positively affected the covered reflection phases and the reflection intensity. In Gate Review A, project team members were considerably open to the facilitators' feedback and suggestions for improvement. In Gate Review E, the project team participants even prepared the gate review in advance and predetermined the topics for discussion. The reflection became more relevant for them as they saw direct benefits from the reflection outcomes. The participants showed awareness of the organisational context, allowing them to question assumptions and actively draw implications for the organisation. In the other gate reviews, project team members actively participated but without being critical of their own actions or showing an innate drive to change the ongoing work. One project team member in Gate Review C explicitly stated that conducting a gate review on a well-performing project is less relevant.
Discussion
While previous studies emphasise the importance of reflection practices for project-based learning (Söderlund et al., 2008; Wiewiora, 2023; Duryan, 2023), an in-depth understanding of how reflection processes unfold and under which conditions reflection can become a learning mechanism is lacking. To address this knowledge gap, we disentangled reflection practices and investigated reflection processes, intensities, and conditions in five gate reviews of a Dutch contractor. Our findings suggest that reflection is a situated practice (Lave & Wenger, 1991) in projects and resembles the reflection-for-action mode, since the main driver for reflection is the ongoing project. This reflection-for-action primarily creates the potential for learning within the current project. However, while reflecting on relevant project issues, links to previous projects are made, and implications for subsequent projects and the broader organisation are generated. Reflection-for-action in projects thus also acts as a mechanism for the learning between projects. To become a mechanism for learning within and between projects, reflection processes need to reach later phases and higher intensities. The chance of reaching them increases if project team members are motivated and supported to rethink relevant project issues for the adjustment and change of the ongoing project. We elaborate on our main findings in the following sections.
Reflection-for-action as a learning mechanism in projects
In our research, gate reviews were the organisational settings in which reflection was triggered. The gate reviews exposed challenging and problematic issues that project team members experienced and perceived as relevant to the ongoing project. Hence, they were generally motivated to discuss them further. Our research suggests that reflection as a collaborative process of making sense of an experienced project issue is initiated when the issue is considered relevant to the project's performance. This reflection mode is neither fully retrospective and detached from the actual project work (reflection-on-action) nor thoroughly entwined with ongoing activities and immediately responsive (reflection-in-action). It instead addresses a relevant issue as part of ongoing project work so that the project team makes sense of the issue and eventually draws conclusions for further actions dealing with it. It thus resembles the reflection-for-action mode (Thompson & Thompson, 2023) situated in the context of an ongoing project. Our findings suggest that reflection-for-action can resolve the reflection dilemma in projects by being close enough to the daily work of the project team and, at the same time, creating a greater sense of managerial control over learning processes.
The potential to learn in and between projects through reflection
The desire of project team members to improve their understanding of a relevant project issue and to plan for further actions on this issue is the reason that in 40% of the reflection processes, implications for the current project were identified as to how the working practices could be changed. However, implications were not restricted to ongoing projects. Albeit to a lesser extent, in 14% of all reflection processes, implications for the broader organisation were drawn. These implications often emerged from an issue encountered in other projects as well. They were an initial impetus to address this issue at the organisational level rather than a plan for concrete actions. Our findings resonate with the project-based learning literature that has pointed to the paradoxical nature of projects for learning (Swan et al., 2010; Bakker et al., 2011). Projects represent working environments that, in the first place, can stimulate reflection for learning within projects but provide fewer incentives to rethink work practices for learning beyond project boundaries. Our findings even show that reflection processes can take place without resulting in any implications or concrete actions for the project or organisation. Reflection is an essential ingredient of interpersonal project work to create the potential for project-based learning (Söderlund et al., 2008) but cannot guarantee that this potential is used.
Our analysis also shows that in 26% of the reflection processes, the participants mobilised experiences made in other projects, and in 8% of the processes, organisational knowledge was activated. Project team members discussed their understanding and framing of issues fed by these prior project experiences and organisational knowledge. This allowed them to interpret their understanding and enrich or adapt their cognitive frames (Crossan et al., 1999; Chang et al., 2021). Activating previous project experiences and organisational knowledge also helped identify and justify implications for the project or organisation. It was driven by the aim to improve the performance of the ongoing project (Zhao et al., 2022). In line with Hartmann and Dorée (2015), our research stresses the role of reflection as a learning mechanism through which knowledge from previous projects is enacted to create meaning and understanding and institutionalised in project work practices. Through reflection-for-action, the learning in projects can be interlinked, constituting the learning between projects.
The influence of reflection process and intensity on the potential to learn
Our research suggests that the chance for reflection-for-action to become a mechanism for the learning in and between projects increases if the reflection process progresses to later phases and attains higher reflection intensities.
While implications for the project were drawn in 40% of the reflection processes that achieved the articulating experience phase, this was the case in 67% of the reflection processes reaching the drawing collective outcome phase. Likewise, implications for the organisation resulted from only 14% of all reflection processes with the articulating experience phase, whereas 33% of the processes with the drawing collective outcome phase finished with such implications. With the progress of the reflection process, it is also more likely that experiences from other projects and organisational knowledge are mobilised. This particularly holds for the phases collaborative re-evaluation and drawing collective outcome.
The chance of drawing implications for the project or organisation and of activating prior experiences also increases with the reflection intensity. Here, project implications relate to descriptive and dialogic reflection intensities, whereas organisational implications are mainly linked to a critical reflection intensity. This is not surprising, since critical reflection is characterised by scrutinising the underlying assumptions and beliefs of project work (Matsuo, 2019). This in-depth enquiry into a project issue and giving sense to it can also explain that participants referred to prior project experience in 60% of the reflection cycles with critical reflection.
These findings indicate that the intensity of the reflection correlates with the performed reflection phases (Jung & Wise, 2020). Reflection processes that at least covered the phase collaborative re-evaluation showed higher reflection intensities than processes with fewer phases. This also implies that there were fewer processes with higher intensities than with lower ones, since processes including all phases occurred to a lesser extent. The correlation between the reflection phase and intensity can mainly be traced back to the reflection activities performed in the phase collaborative re-evaluation. Exploring causes and effects, adding different perspectives, and challenging existing interpretations are all essential activities for a deeper consideration of an experienced issue (Fleck & Fitzpatrick, 2010).
Reflection context
Being a situated practice embedded in localised, variegated, and interpersonal project work (Edmondson, 2002; Swan et al., 2010), reflection does not naturally lead to learning. The differences in the extent of reflection between the investigated gate reviews indicate this. The investigated reflection practices were contextually embedded in an interplay of issue relevance, project team motivation, facilitator role, and time.
Relevance and motivation as a key driver for reflection
Our findings show that the perceived relevance of an issue and the project team's motivation to explore this issue are key drivers for initiating and propelling reflection-for-action processes in projects. This combined effect of relevance and motivation also accounted for the achieved reflection phases. If project team members did not see the relevance of exploring an issue further, they mainly remained within the first two phases. The reflection processes in Gate Reviews A and E more often went through all phases than in the other three gate reviews. Both reviews related to projects that were perceived as difficult and problematic. Project team members were not only interested in sharing their experience of an issue. They also tried to identify possible causes for the occurrence of the issue, develop different perspectives on and alternative explanations for the issue, and formulate actions and advice for improving it. Particularly in Gate Review E, the project team prepared for the review by determining the issues they wanted to discuss. It is also unsurprising that most of the implications were drawn in Gate Reviews A and E. Because of the difficulties and problems they faced, participants focused more on potential project changes or improvements than in other gate reviews.
The role of the facilitator for reflection
Conducting complete reflection processes and achieving high reflection intensities are not only a matter of relevance and motivation. Reflection-for-action as a collaborative practice in projects also needs to be facilitated. This is in line with Chang et al. (2021), who posit that project leadership is essential for making the mental models of project team members explicit and resolving conflicting models. However, the facilitating role in reflection-for-action processes goes beyond merely discussing and evaluating mental models. As our findings suggest, facilitators need the capability to guide the dialogue between project team members and support them in opening up about experienced project issues, revealing their own experiences with these issues, mobilising prior experiences with similar issues, referring to inconsistencies and assumptions in the reasoning of others, and proposing ways of dealing with the issues (Hilden & Tikkamäki, 2013). Fulfilling this facilitating role directly helps project team members enhance the work practices on which reflection occurs (Helyer, 2015). Here, the role does not need to be taken by the leader of a project but can be assumed by other project team members and by persons external to the project as well. In the five gate reviews, this role was taken by contractor employees working for central departments and involved in other projects of the organisation. Although project leaders might be predisposed to broker learning within project teams (Wiewiora et al., 2020), our results indicate that for reflection to emerge, it is not so much the position of the facilitating person that matters but rather their capability of asking the right questions.
Time for reflection is necessary but not sufficient
The need for time for learning and reflection recurs often in the literature (Keegan & Turner, 2001; Swan et al., 2010; Hartmann & Dorée, 2015). Time also played a role in the investigated gate reviews. However, stopping the discussion did not always mean the reflection process was prematurely interrupted. Issues were often sufficiently discussed before the time constraint was mentioned. Our results indicate that time is a necessary but insufficient condition for reflection-for-action in projects.
When time is lacking, project team members cannot reflect satisfactorily (Sense, 2004). Nevertheless, when ample time is available, reflection does not necessarily increase. Whether the available time is sufficient depends on the number of relevant issues to be discussed and the extent of this discussion. The more reflection phases and the higher the reflection intensity, the more time is needed. Here, it is again the role of the facilitator to allow for an effective reflection process by sensing the relevance of issues, guiding the reflection process, and highlighting when issues have been sufficiently discussed to move on to others.
Conclusions
Although there is consensus amongst scholars that reflection is essential for learning in and between projects, prior research has not further expounded the role of reflection as a learning mechanism in projects. Our study of five gate reviews provides deeper insights into how reflective practices unfold in projects to enable learning and under which conditions reflection can become a learning mechanism. Our research particularly suggests reflection-for-action as the reflection mode that can tap into the learning potential of projects and resolve the learning dilemma in projects. Reflection-for-action keeps close contact with the immediate work of project teams while being a manageable practice. However, our research also demonstrates that reflection does not guarantee that learning effectively takes place. For reflection to become a project learning mechanism, the reflection process needs to proceed to later phases and higher reflection intensities. Achieving such a greater extent of reflection strongly depends on the relevance of the issue at hand, the motivation of project team members to discuss this issue, and the reflection support they receive during the discussion.
Practical implications
Our findings have two managerial implications. First, project-based organisations should give reflection-for-action a place in project work to stimulate learning in and between projects. This does not mean decoupling reflection from daily practices but rather paying more attention to reflection as an ingredient of these practices. The investigated gate reviews show how reflection-for-action can be triggered by discussing project-relevant issues. In general, project meetings and team sessions are organisational settings in ongoing project work in which reflection can be facilitated to increase the understanding of project issues and the planning for change or improvement. Here, asking the right questions to guide project teams through the reflection process and achieve higher reflection intensities will be essential. Such questions should create awareness of project issues, bring together different perspectives on these issues, incorporate experiences made in previous projects, and scrutinise taken-for-granted assumptions.
Our second managerial implication then refers to the role of the facilitator in encouraging project team members to consider project meetings and team sessions as opportunities to reflect. In this role, the facilitators should ask the right (critical and searching) questions about relevant project issues to elicit the experience of project team members with the issues and their underlying causes, and whether issues and causes are shared between them. They should also stimulate and guide the discussion amongst project team members and help them make sense of experienced issues by referring to similar experiences in other projects and to organisational knowledge, and by pointing to other possible perspectives on the issue. On a practical note, our conceptualisation and operationalisation of reflection phases and intensities (Table 3 and Table 4) could support facilitators in this respect.
A challenge for project-based organisations is to decide whether a person should be appointed to the facilitating role or whether the role should emerge naturally. The first option might be favourable in settings that more formally check the progress of projects, such as gate reviews and milestone sessions. The second option might be suitable for settings that do not represent designated project checkpoints, such as regular team meetings. In both cases, the participation of project team members in reflection training can help build the required reflection capabilities and create awareness of the benefits of reflection.
Limitations and future research
In the presented research, gate reviews were the organisational settings in which reflection-for-action occurred. This can be seen as a limitation of our study, since the "checkpoint" character of gate reviews may induce a strong focus on project performance, so that reflection-for-action becomes the predominant reflection mode. Other reflection modes could prevail in project settings with a less formal character. Future research could investigate the extent and mode of reflection in these settings and their role for project-based learning. Here, it would be particularly interesting to study regular project meetings, the opportunities they offer for reflection, and the manageability of the reflection process in these settings. The latter points to the role of the facilitator, and future research may examine the extent to which project team members take up this role.
The single case of a Dutch contractor limits the generalisability of our research. In other industries, business processes are often less organised through projects, with organisations using projects strategically to develop and implement new services and products. Our findings would benefit from further research on reflection in these industries and on its role for the learning from projects for organisational practices rather than the learning between projects.
Another limitation of our study is its cross-sectional nature. Our research only explicates how reflection practices create the potential for learning in and between projects by enacting knowledge and experiences gained in previous projects and drawing implications for project work beyond the current project's boundaries. A worthwhile avenue for future research is the more longitudinal exploration of reflection as a practice linking different projects and, by doing so, forming learning trajectories across projects. The extent to which such implications are taken up in the context of subsequent projects and enriched through contextually embedded reflective practices may then further advance our understanding of the effectiveness of reflection as a project learning mechanism.
Table 3 (continued)
Operationalization of the concept 'reflection process'.

Developing a shared understanding: Participants discuss the experienced issue and reach a shared understanding of it.
- Discussing and asking questions about what happened (Bittner and Leimeister, 2013; Knipfer et al., 2013)
- Justifying the experienced issue and why actions taken were reasonable (Krogstie et al., 2013)
- Reaching an agreement or convergence on what the experienced issue was (Bittner and Leimeister, 2013; Knipfer et al., 2013; Krogstie et al., 2013)

Collaborative re-evaluation: Participants critically evaluate the experienced issue by referring to prior experiences and knowledge, detecting patterns, challenging groupthink, and interpreting the meaning of it.
- Challenging existing interpretations of the experienced issue (Prilla et al., 2015; van Woerkom, 2003)
- Adding perspectives for the evaluation of the experienced issue (Jung and Wise, 2020; Prilla et al., 2015)
- Considering alternatives of what could have been done (Jung and Wise, 2020; Prilla et al., 2015)
- Exploring the causes and effects of the experienced issue (Boud et al., 2013; Jung and Wise, 2020)
- Linking an experienced issue to other experiences (Boud et al., 2013; Prilla et al., 2015; Tsingos et al., 2015)
- Linking an experienced issue to existing knowledge, rules, or values (Boud et al., 2013; Prilla et al., 2015; Tsingos et al., 2015)
- Posing searching questions to identify underlying reasons (Koole et al., 2011)

Drawing collective outcome: Participants agree on if and what the satisfactory outcome of the re-evaluation is and its implication.
- Showing convergence in the understanding of the reflection outcome (Daudelin, 1996; Prilla et al., 2015)
- Giving advice or proposing solutions (Daudelin, 1996; Prilla et al., 2015)
- Planning for action (Koole et al., 2011; Korthagen et al., 2002)
- Summarising findings and implications (Prilla et al., 2015)
- Translating insights into changed behaviour (Koole et al., 2011; Korthagen et al., 2002)
Table 10 (fragment): ability conditions such as openness about mistakes (+/-), little discussion within the project team (-), use of graphical material (+), lack of understanding (-), and effective dialogue (+); motivation conditions such as willingness to improve the project (+), value of the gate review not seen (-), active participation of the project team (+), and discussion set by the project team (+).
Table 1
The characteristics of the gate reviews.
Table 2
Data collection methods.
Table 3
Operationalization of the concept 'reflection process'.
Table 4
Operationalization of the concept 'reflection intensity'.
Table 5
Operationalization of the concept 'reflection condition'.
Table 6
The discussed topics in the gate reviews.
Table 8
The achieved reflection intensity in reflection processes.
"year": 2023,
"sha1": "1027e58de79ac248157b5360a4de054f196508b6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.ijproman.2023.102494",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8eb0133d4522d6d07beb6c6e9cff2333e53d6391",
"s2fieldsofstudy": [
"Education",
"Business"
],
"extfieldsofstudy": []
} |
A domain free of the zeros of the partial theta function
We prove that for $q\in (0,1)$, the partial theta function $\theta (q,x):=\sum _{j=0}^{\infty }q^{j(j+1)/2}x^j$ has no zeros in the closed domain $\{|x|\leq 3\}\cap \{\mathrm{Re}\,x\leq 0\}\cap \{|\mathrm{Im}\,x|\leq 3/\sqrt{2}\}\subset \mathbb{C}$ and no real zeros $\geq -5$.
Introduction
The present paper deals with analytic properties of the partial theta function $\theta(q,x):=\sum_{j=0}^{\infty}q^{j(j+1)/2}x^j$. It owes its name to the resemblance between the function $\theta(q^2,x/q)=\sum_{j=0}^{\infty}q^{j^2}x^j$ and the Jacobi theta function $\Theta(q,x):=\sum_{j=-\infty}^{\infty}q^{j^2}x^j$; "partial" refers to the fact that summation in the case of $\theta$ takes place only from 0 to $\infty$.
We consider the situation when the variable x and the parameter q are real, more precisely, when (q, x) ∈ (0, 1) × R. This function has been studied also for (q, x) ∈ (−1, 0) × R and (q, x) ∈ $D_1$ × C; here $D_1$ stands for the open unit disk. For any fixed non-zero value of the parameter q (|q| < 1), the function θ(q, .) is an entire function in x of order 0.
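The order-0 claim is not derived here, but it can be checked with the standard coefficient formula for the order of an entire function $\sum_n a_n x^n$; the short computation below is our own sketch of that check:

$$\rho=\limsup_{n\to\infty}\frac{n\ln n}{\ln(1/|a_n|)},\qquad a_n=q^{n(n+1)/2},\qquad \ln\frac{1}{|a_n|}=\frac{n(n+1)}{2}\ln\frac{1}{|q|},$$

so that $\rho=\lim_{n\to\infty}\frac{2\ln n}{(n+1)\ln(1/|q|)}=0$.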
The partial theta function finds various applications, from Ramanujan-type q-series ([29]) to the theory of (mock) modular forms ([4]), and from asymptotic analysis ([2]) to statistical physics and combinatorics ([28]). How θ can be applied to problems dealing with asymptotics and modularity of partial and false theta functions and their relationship to representation theory and conformal field theory is made clear in [5] and [3]. The place which this function finds in Ramanujan's lost notebook is explained in [1] and [29]. Its Padé approximants are studied in [25].
A recent interest in the partial theta function is connected with the study of section-hyperbolic polynomials, i. e. real polynomials with positive coefficients, with all roots real negative, and all whose finite sections (i.e. truncations) also have this property; see [24], [8] and [26]. The cited papers use results of Hardy, Petrovitch and Hutchinson (see [6], [27] and [7]). Various analytic properties of the partial theta function are proved in [12]-[23] and other papers of the author.
The analytic properties of θ known up to now, in particular the behaviour of its zeros, are discussed in the next section. One of them is the fact that for any q ∈ (0, 1), all complex conjugate pairs of zeros of θ(q, .) remain within the domain {Re x ∈ (−5792.7, 0), |Im x| < 132} ∪ {|x| < 18}.
For q ∈ (−1, 0), this is true for the domain {|Re x| < 364.2, |Im x| < 132}, see [20] and [18]. In this sense the complex conjugate zeros of θ never go too far from the origin. It is also true that they never enter the unit disk, see [23] (but this property fails if q and x are complex, see the next section). In the present paper we exhibit a convex domain which contains the left unit half-disk, which is more than 7 times larger than the latter and which is free of zeros of θ for any q ∈ (0, 1):

Theorem 1. For any fixed q ∈ (0, 1), the partial theta function has no zeros in the domain $D:=\{|x|\leq 3\}\cap \{\mathrm{Re}\,x\leq 0\}\cap \{|\mathrm{Im}\,x|\leq 3/\sqrt{2}\}$.

When only the real zeros of θ are dealt with, one can improve the above theorem:

Proposition 2. For any q ∈ (0, 1) fixed, the function θ(q, .) has no real zeros ≥ −5.
Before giving comments on these results in the next section, we explain the structure of the paper. Section 3 recalls certain analytic properties of θ. Proposition 2 is proved in Section 4. In Section 5 we prove some lemmas which are used to prove Theorem 1; their proofs can be skipped at first reading. Section 6 contains a plan of the proof of Theorem 1. The proofs of the proposition and lemmas formulated in Section 6 can be found in Section 7.
One can make the following observations with regard to Theorem 1 and Proposition 1: (1) It is not clear whether Theorem 1 should hold true for the whole of the left half-disk of radius 3, because |θ(0.71, e 0.5188451144πi )| = 0.0141 . . ., i. e. one obtains a very small value of |θ| for a point of the arc C 1 . This might mean that a zero of θ crosses the arc C 1 for q close to 0.71.
(2) The difficulty in proving results like those of Theorem 1 and Proposition 2 resides in the fact that the rate of convergence of the series of θ decreases as q tends to $1^-$, and for q = 1, one obtains as the limit of θ the rational (not entire) function 1/(1 − x). It is true that the series of θ converges to the function 1/(1 − x) (which has no zeros at all) on a domain larger than the unit disk and containing the domain D, see [9]. Yet no concrete estimates for this convergence are available, so one cannot deduce from it the absence of zeros of θ in the domain D for all q ∈ (0, 1).
We explain by examples why analogs of the property of the zeros of θ to avoid the domain D cannot be found in cases other than q ∈ (0, 1), x ≤ 0: (i) If q is complex, then some of the zeros of θ can be of modulus < 1. Indeed, for $q = \rho e^{3\pi i/4}$, where ρ ∈ (0, 1) is close to 1, the function θ has a zero close to 0.33... + 0.44...i, whose modulus is 0.56... < 1. Similar examples can be given for any q of the form $\rho e^{k\pi i/\ell}$, k, ℓ ∈ Z*, see [23]. It is true however that θ has no zeros for |x| ≤ 1/2|q|, see Proposition 7 in [10].
(ii) If q ∈ (0, 1), the function θ has no positive zeros, but θ(0.98, .) is likely to have a zero close to 1.209... + 0.511...i (i. e. of modulus 1.312...), see [23]. Conjecture: as q → $1^-$, one can find complex zeros of θ(q, .) as close to 1 as possible. One can check numerically that for q close to 0.726475, θ has a complex conjugate couple of zeros close to ±2.9083...i (a numerical sketch of such a check is given after this list), which by the way corroborates the idea that the statement of Theorem 1 cannot be extended to the whole of the left half-disk of radius 3. Thus a convex domain free of zeros of θ should belong to the rectangle {Re x ∈ (0, 1), |Im x| < 2.9083...}.
(iii) For q ∈ (−1, 0), it is true that the leftmost of the positive zeros of θ tends to $1^+$ as q tends to $-1^+$, see part (2) of Theorem 3 in [22]. The function θ(−0.96, .) is supposed to have a couple of conjugate zeros close to the zeros $z^{\pm} := 0.824\ldots \pm 1.226\ldots i$ (of modulus 1.478...) of its truncation $\theta^{\bullet}_{100}(-0.96, .)$; when truncating, the first two skipped terms are of modulus $6.57\ldots\times 10^{-75}$ and $1.51\ldots\times 10^{-76}$. As q → $-1^+$, the limit of θ equals $(1-x)/(1+x^2)$. One can suppose that the zeros, which equal $z^{\pm}$ for q = −0.96, tend to ±i as q → $-1^+$. One knows that for q ∈ (−1, 0), complex zeros do not cross the imaginary axis, see Theorem 8 in [22]. Hence these zeros of θ should remain in the right half-plane and close to ±i. This means that it is hard to imagine a convex domain in the right half-plane much larger than the right unit half-disk and free of zeros of θ.
As for the left half-plane, the truncation $\theta^{\bullet}_{100}(-0.96, .)$ of θ(−0.96, .) has conjugate zeros $-0.769\ldots \pm 1.255\ldots i$ (of modulus 1.473...) about which, as about $z^{\pm}$ above, one can suggest that they tend to ±i as q → $-1^+$. This could make one think that if one wants to find a domain in the left half-plane containing the left unit half-disk and free of zeros of θ, then in this domain the modulus of the imaginary part should not be larger than 1. On the other hand, θ(−0.7, .) has a zero close to $w_0 := -2.69998\ldots$, so the width of the desired domain should be $< |w_0|$.
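The numerical check mentioned in item (ii) can be reproduced with a few lines of code; the sketch below is ours, with the truncation length and the starting point chosen for convenience rather than taken from the paper:

```python
import mpmath

def partial_theta(q, x, n_terms=200):
    # Truncation of theta(q, x) = sum_{j>=0} q^{j(j+1)/2} x^j; for |q| < 1
    # the terms decay super-exponentially, so 200 terms are ample here.
    return mpmath.fsum(mpmath.mpf(q) ** (j * (j + 1) // 2) * mpmath.mpc(x) ** j
                       for j in range(n_terms))

q = 0.726475
# Search for a zero near the imaginary axis, starting from 2.9i.
root = mpmath.findroot(lambda x: partial_theta(q, x), mpmath.mpc(0, 2.9))
print(root)  # expected close to 2.9083i; its conjugate is then also a zero
```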
Known properties of the partial theta function
In this section we first recall that the Jacobi theta function satisfies the Jacobi triple product

$$\Theta(q,x)=\prod_{j=1}^{\infty}(1-q^{2j})(1+xq^{2j-1})(1+x^{-1}q^{2j-1}),$$

from which we deduce the equalities (3). It is clear that

Notation 4. (1) When treating the function G we often change the variable x to X := 1/x. To distinguish the truncations of the function θ in the variable x from the ones in the variable t (see Notation 3) we write $\theta=\theta^{\bullet}_k+\theta^{\bullet}_*$, where $\theta^{\bullet}_k:=\sum_{j=0}^{k}q^{j(j+1)/2}x^j$ and $\theta^{\bullet}_*:=\sum_{j=k+1}^{\infty}q^{j(j+1)/2}x^j$, i. e. we use the superscript "bullet" when in the variable x (no index k is added to $\theta^{\bullet}_*$). No superscript is used for the truncations of θ(q, −t + wi) and of G.
(2) We set

Remark 5. In the proofs we use the convergence of the series (1) when the parameter q belongs to an interval of the form [0, a], a ∈ (0, 1). When we need to deal with intervals of the form [a, 1], we use the equalities (3), in which the modulus of the term $\Theta^*$ tends to 0 as q tends to $1^-$, while the series of G converges uniformly for |x| ∈ [c, ∞) for any fixed c > 1. When in the proof of a lemma or a proposition we use the fact that a certain function in one variable (mainly a polynomial) is increasing or decreasing, we do not give a detailed proof of this, because in all such cases the proof can be given using elementary methods (computation of derivatives and numerical computation of their real roots). We do not give details when proving the absence of critical points of polynomials in two variables in given rectangles. In this text their degree is never too high and the necessary computations are easily performed using MAPLE.
For q ∈ (0, 1), the real zeros of θ (which are all negative) and of any of its derivatives w.r.t. the variable x form a sequence tending to −∞ and behaving asymptotically as a geometric progression with ratio 1/q, see Theorem 4 in [10].
There exists an increasing sequence of spectral values $\tilde{q}_j$ of q, tending to $1^-$, such that $\theta(\tilde{q}_j, .)$ has a multiple (more exactly, double) real zero, see [24]. The 6-digit truncations of the first 6 spectral values are:

When q passes from $\tilde{q}_j^-$ to $\tilde{q}_j^+$, the rightmost two of the real zeros of θ coalesce and then form a complex conjugate pair. All other real zeros of θ remain negative and distinct, see Theorem 1 in [10]. The inverse (complex couples becoming double and then two distinct real zeros) never happens. No zeros are born at ∞.
Thus for q fixed, the function θ belongs to the Laguerre-Pólya class L-PI exactly if q ∈ (0, $\tilde{q}_1$]. For q ∈ ($\tilde{q}_j$, $\tilde{q}_{j+1}$], the function θ is the product of a real polynomial of degree 2j without real zeros and a function of the class L-PI. See the details in [11]. Spectral values exist also for q ∈ (−1, 0), see [15]. The existence of spectral values for complex values of q is proved in [15], see Proposition 8 therein.
Using the above notation and (4) one can write

To prove the latter inequality one has to observe that the numbers $q_m$ corresponding to the factors $|K(q_m)|$ from the set Σ (see the equalities and inequalities (5) and (4)), and one concludes that $\ell_3 \leq m_3 + 6$. The factors $|K(q_m)|$ which have not been mentioned up to now are all of modulus < 1; the corresponding numbers $q_m$ belong to the intervals (0, d′) and (t″, u″). Thus $\prod_{m=1}^{\infty}|K(q_m)| < 2^6$. At the same time

This shows that $|\Theta^*(q,-5)| < 10^{-4} < 4/25 < |-G(q,-5)|$, from which the proposition follows.
For q fixed, these quantities are decreasing in φ, because such is cos φ. Set $\cos\varphi := -\sqrt{2}/2$, |x| := 3. The displayed formulas show that

from which one deduces the last two claims of the lemma.
In the proofs we need some properties of the functions M := |(1 + qx)(1 + q/x)| and $M_0$ := (1 − q)M. We remind that we set x = −t + wi, t ∈ [0, w], $w = 3/\sqrt{2}$.

Proof. It suffices to prove the claims of the lemma about the function M. One checks directly for the square of M that

One verifies straightforwardly that

The discriminant of the trinomial $36(qt)^2 - 149qt + 198$ is negative, so this trinomial is positive-valued. For the remaining terms of P, for t ∈ [0, w] (hence $t^2 \leq 9/2$), one obtains $-18qt^3 + 198q^2 + 36t^2 \geq -81qt + 198q^2 + 36t^2$, which is again a trinomial with negative discriminant. Thus P > 0 and $M^2 - M^2|_{t=0} \leq 0$, with equality only for t = 0, which proves the first claim of the lemma. To prove its second statement we consider the difference

The polynomial $V_2q^2 + V_1q + V_0$ has no critical points for (q, t) ∈ [0.6, 0.75] × [1, w]. Its restrictions to each of the sides of this rectangle (i. e. its restrictions obtained for q = 0.6, q = 0.75, t = 1 and t = w) are positive-valued. Hence the difference $M^2 - M^2|_{t=1}$ is negative in the given rectangle, which proves the second claim of the lemma.
Remark 10. For x = −t + wi, we represent in Fig. 2 the graph of the function for two fixed values of t, namely t = 0 (in solid line) and t = 1 (in dashed line). The two functions

The zeros of θ depend continuously on q, and no zeros are born at ∞. We prove that for q ∈ (0, 1), there is no zero of θ on the border ∂D of the domain D. For q ∈ (0, 0.5], this follows from the proposition below. We remind that (see Notation 3)

Proposition 11. For q ∈ (0, 0.5], the function θ(q, .) has no zeros in the closed rectangle ∆.

The proof of this proposition and of all lemmas formulated in this section are given in Section 7. The rectangle ∆ contains the domain D, so for q ∈ (0, 0.5], there are no zeros of θ on ∂D. One can observe that for q ∈ (0, $\tilde{q}_1$], $\tilde{q}_1 = 0.3092\ldots$, there are no complex conjugate pairs of zeros of θ (see Section 3), and for q ∈ ($\tilde{q}_1$, 0.5], there is exactly one such pair. From now on we assume that q ∈ [0.5, 1). The next lemma explains why no zeros of θ can be found on the arc $C_2$, hence none on the arc $C_3$ either.
, the function θ has no zeros.
The remaining case q ∈ [0.6, 0.75] will be subdivided into two cases:
We use the fact that $|\theta^{\bullet}_4| \geq \max(|\mathrm{Re}(\theta^{\bullet}_4)|, |\mathrm{Im}(\theta^{\bullet}_4)|) =: \mu$. Neither of the functions $G_R$ and $G_I$ has a critical point with q ∈ $I_0$, so $G_R$ (resp. $G_I$) attains its maximal and its minimal value when one of the following conditions takes place: t = 0, t = 3, q = 0 or q = 0.5.
"year": 2022,
"sha1": "4421b713186af617e0df602e76d8e5ddd64d4e4b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4421b713186af617e0df602e76d8e5ddd64d4e4b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
p53 deficiency triggers dysregulation of diverse cellular processes in physiological oxygen
Using oncogene-expressing cells to interrogate p53 function under physiological oxygen conditions, Valente et al. show that p53 deficiency drives concurrent dysregulation of a range of cellular processes. These findings highlight the pleiotropic effects of p53 inactivation.
Introduction
The transcription factor p53 is a critical barrier to the development of cancer, as evidenced by three key observations. First, more than half of all human cancers are associated with direct inactivating mutations in TP53 (Hollstein et al., 1991; Levine, 2018). Second, Li-Fraumeni patients, who inherit inactivating mutations in TP53, are predisposed to early-onset cancers, including breast cancers and sarcomas (Hollstein et al., 1991). Finally, p53-null mice succumb to cancer with 100% penetrance (Donehower et al., 1992; Jacks et al., 1994; Kaiser and Attardi, 2018). Intriguingly, despite 40 yr of research on p53, the specific mechanisms by which p53 suppresses tumorigenesis remain incompletely understood.
The best-characterized function of p53 is as a transcription factor. In this role, p53 can elicit widespread transcriptional changes in response to various cellular stresses, such as DNA damage, oncogene activation, and hypoxia, to promote specific cellular responses (Mello and Attardi, 2018; Vousden and Prives, 2009). The most studied p53 cellular responses are apoptosis and cell cycle arrest. Apoptosis is triggered by p53 in response to stressors such as acute DNA damage, via transcriptional up-regulation of proapoptotic genes such as the Bcl-2 family members Puma and Noxa. p53 can also limit cellular proliferation by inducing G1 or G2 arrest.
For example, at the G1/S boundary, p53 promotes transient cell cycle arrest in response to genotoxic damage by up-regulating the p21 CDK inhibitor, a measure that is thought to facilitate DNA repair (Karimian et al., 2016). It is through these canonical responses that p53 was proposed to act as a "guardian of the genome," ensuring that upon acquisition of DNA damage, cells would transiently arrest to repair this damage or, alternatively, undergo apoptosis to remove damaged cells from the organism, thereby suppressing oncogenic transformation (Lane, 1992).
Several studies have suggested that p53 suppresses tumorigenesis through responses other than acute DNA damage-induced cell cycle arrest or apoptosis. First, a few reports suggested that the pathological response to acute DNA damage by p53, involving widespread induction of apoptosis, is dispensable for p53-mediated tumor suppression (Christophorou et al., 2006; Efeyan et al., 2007; Hinkal et al., 2009). Expanding on this idea were seminal studies using either knock-in mice bearing alterations to p53 that partially impaired its transcriptional capacity (p53 25,26 [Jiang et al., 2011] and p53 3KR [Li et al., 2012b]) or gene-targeted mice lacking the canonical p53 target genes p21, Puma, and Noxa (Valente et al., 2013). Cells from these mice displayed defective p53-dependent induction of apoptosis and cell cycle arrest upon acute DNA damage, but the mice were nonetheless resistant to spontaneously arising tumors (Li et al., 2012b; Valente et al., 2013) and cancer in oncogene-driven mouse cancer models (Jiang et al., 2011). Together, these studies suggest that processes other than p53 responses to acute DNA damage are critical for tumor suppression or, alternatively, that in the absence of these responses, other processes can compensate to impede tumor development (Mello and Attardi, 2018).
A natural result of this revised view of p53-mediated tumor suppression has been increasing emphasis on understanding noncanonical p53 functions. Specifically, beyond regulating proliferation and apoptosis, p53 has been reported in specific settings to regulate additional cellular processes such as metabolism and stemness (Charni et al., 2017; Kaiser and Attardi, 2018). However, it is unclear whether the ability of p53 to regulate each cellular behavior is context dependent, with a specific p53-regulated process being fundamental for tumor suppression only in a particular tumor type. For example, p53-mediated tumor suppression in large T antigen-driven choroid plexus tumors is associated with oncogene-triggered apoptosis, whereas p53-induced apoptosis and DNA repair programs are critical for the suppression of Eµ-Myc-driven B cell lymphomas (Eischen et al., 2001; Garrison et al., 2008; Hemann et al., 2004; Janic et al., 2018; Michalak et al., 2009; Yin et al., 1997). Although tumor suppression has been ascribed to one or two specific p53 cell biological functions in such cancer models, a systematic evaluation of a wide range of p53 functions has not been performed within one specific cellular context. Determining how broadly p53 regulates diverse cellular processes in a particular context is a first step toward understanding the cellular basis of p53-mediated tumor suppression. Indeed, the complexity of the p53 transcriptional program, with alterations in the expression of hundreds of target genes, suggests that p53 might simultaneously regulate a variety of cellular functions during transformation suppression (Andrysik et al., 2017; Fischer, 2019).
Here, we examined to what extent the effects of p53 loss are pleiotropic, using an in vitro oncogene-expressing primary mouse embryonic fibroblast (MEF) model system in which p53 plays a critical role in suppressing transformation. Although transformation suppression by p53 in this setting has been associated predominantly with the induction of apoptosis (Lowe et al., 1993; Soengas et al., 1999), we sought to interrogate the role of p53 in regulating a range of other cell biological functions in this context. Importantly, we performed these experiments in physiological (5%) oxygen tensions to more closely model in vivo conditions. By leveraging a spectrum of assays for different cellular processes, we revealed that p53 regulates an array of diverse cellular processes in this context, several of which were apparent only under physiological oxygen conditions. These findings support the intriguing notion that p53-mediated tumor suppression is a complex coordinated process reliant on modulation of numerous cellular programs.
Results
Establishing a platform to interrogate global p53 functions
To interrogate the capacity of p53 to globally regulate a variety of cellular processes during transformation suppression, we examined E1A;Hras G12V oncogene-expressing MEFs, as they provide a tractable model in which different cell biological assays can readily be performed and in which p53 plays a potent role as a transformation suppressor. Moreover, unlike many human cancer cell lines, these are primary, early-passage cells into which oncogenes are retrovirally transduced, and they therefore retain intact p53 signaling pathways. Of note, although previous studies have suggested that transformation suppression in these cells is due to apoptosis in response to cellular stress signals (Lowe et al., 1994; Soengas et al., 1999), we sought to determine whether other cellular processes are also regulated simultaneously by p53. To best model the oxygen tensions that most cells encounter in vivo, which range between 2% and 8%, we cultured cells in physiological oxygen (5% O 2 ) rather than the standard atmospheric 21% O 2 conditions.
To establish isogenic WT and p53-deficient cell lines, we generated early-passage MEFs from H11 Cas9 mice, in which Cas9 is constitutively expressed from the H11 promoter (Chiou et al., 2015). Three independent MEF lines were transduced with E1A- and HRas G12V -expressing retroviruses, and then infected with lentiviruses expressing one of two sgRNAs targeting exon 4 of Trp53 (sgp53) or a nontargeting control (sgNTC) sgRNA (Fig. 1, A and B; and Fig. S1 A). We thus established a panel of early-passage, isogenic, polyclonal p53 WT (sgNTC) and p53-deficient (sgp53) E1A- and HRas G12V -expressing cell lines (three sgNTC and six sgp53 cell lines). We confirmed p53 deficiency by DNA sequencing of the p53 locus (Fig. S1 B), immunoblotting and immunofluorescence analyses of p53 protein levels (Fig. 1, C and D; and Fig. S1 C), and Western blot or quantitative RT-PCR (qRT-PCR) analysis of p53 target gene expression (Fig. 1, C and E; and Fig. S1, C and D).
To confirm known p53-dependent phenotypes in this model system, we first assayed transformation suppression using soft agar assays, a robust in vitro surrogate of in vivo tumorigenicity (Lin et al., 1998). In 5% O 2 , E1A;Hras G12V MEFs targeted with sgp53 exhibited significantly greater cell colony formation than sgNTC controls, confirming that p53 inactivation in these cells enhanced their tumorigenic capacity (Fig. 1, F and G; and Fig. S1 E). We next examined apoptosis in response to two distinct stresses: acute DNA damage and serum starvation. Whereas sgNTC E1A;Hras G12V MEFs exhibited significant apoptosis in response to acute DNA damage or serum starvation, sgp53-targeted cells were protected from cell death induced by these stimuli, validating the p53-dependent cell death response to different stressors in E1A;Hras G12V MEFs (Fig. 1, H and I; and Fig. S1 F). We next sought to understand which additional p53 downstream pathways might also be regulated during transformation suppression by performing a panel of phenotypic assays examining various cellular processes.
p53 does not dampen cellular proliferation in oncogene-expressing MEFs
The canonical responses of p53 to stress stimuli include not only induction of apoptosis, but also inhibition of cellular proliferation. Although the best-characterized p53 function in this regard is in driving cell cycle arrest or senescence in response to a specific stress signal, p53 can also simply dampen proliferation rates (Tyner et al., 1999). We thus sought to determine whether p53 can regulate oncogene-driven cell proliferation in this cellular model. We found no significant difference in the proliferative rate between sgNTC- and sgp53-targeted E1A;Hras G12V -expressing MEFs (Fig. 1, J and K). Although p53 is classically considered a regulator of the G 1 -S transition, expression of E1A in this cellular model inactivates retinoblastoma (Rb) family member-mediated inhibition of cell cycle progression, the primary mechanism by which p53 is thought to inhibit cell proliferation (Deng et al., 2005; Narita et al., 2003). Thus, while p53 is capable of triggering apoptosis in E1A;Hras G12V -expressing MEFs, it does not clearly inhibit cell proliferation in this model.
p53 regulates ploidy in E1A;Hras G12V MEFs
A hallmark of dysregulated G 1 -S progression triggered by p53 deficiency is genomic instability, an observation that earned p53 the title of "guardian of the genome" (Lane, 1992). p53 is thought to preserve genomic integrity by inducing G 1 arrest when cells have sustained DNA damage or have undergone aberrant mitosis with consequent ploidy anomalies, known as the tetraploidy checkpoint (Fujiwara et al., 2005; Ganem et al., 2014; Lanni and Jacks, 1998; Liu et al., 2004). Given the lack of cell cycle phenotype with p53 inactivation in E1A;Hras G12V MEFs, we sought to determine whether p53 deficiency still promotes genome destabilization. To this end, we first performed metaphase spreads on cells of each genotype grown in 5% O 2 and quantitated chromosome number (Fig. 2, A-C). Cells targeted with sgp53 exhibited a significantly higher proportion of cells with >40 chromosomes than sgNTC controls, including both tetraploid and polyploid cells (Fig. 2, A-C). To confirm this finding in cells undergoing cell division (not metaphase arrested), we used the FUCCI (fluorescent ubiquitination-based cell cycle indicator) cell cycle marker system, which differentially marks cells in G 1 and G 2 /M phase (Sakaue-Sawano et al., 2008). Together with DNA content analysis, the proportion of normal diploid cells in G 2 /M can be distinguished from abnormal tetraploid cells and polyploid cells in G 1 (Fig. 2 D). We observed a significantly higher percentage of both G 1 tetraploid and polyploid cells in sgp53-targeted MEFs than in sgNTC MEFs, demonstrating that p53 maintains normal ploidy in E1A;Hras G12V MEFs, even without regulating proliferation rates (Fig. 2 E).
Figure 1. Generation of WT and p53-null E1A;Hras G12V ;H11 Cas9 MEFs as a platform to dissect p53 function. (A and B) Schematic illustrating the strategy (A) and chronology (B) for the generation of three isogenic WT (sgNTC) and six p53-deficient (sgp53) E1A;Hras G12V MEF lines. (C) Immunoblot of p53 and its targets, p21 and Mdm2, after 8-h doxorubicin (dox; 0.2 µg/ml) treatment. n = 3 cell lines/sgRNA. Gapdh is a loading control. (D) Representative immunofluorescence image of p53 in sgRNA-targeted E1A;Hras G12V MEFs treated with 0.2 µg/ml dox for 8 h. DAPI marks nuclei. Scale bar, 30 μm. (E) qRT-PCR analysis of p53 target gene expression relative to β-actin in untreated sgRNA-targeted E1A;Hras G12V MEFs. n = 3 cell lines/sgRNA, in triplicate. Data are mean ± SD; ***, P < 0.0001, two-way ANOVA, Dunnett's multiple comparison test. (F) Representative images of soft agar assay of sgRNA-targeted E1A;Hras G12V MEFs. Scale bar, 3.5 mm. (G) Average colony number ± SD in soft agar assay. n = 3 cell lines/sgRNA, in triplicate, three to five independent experiments. *, P < 0.05, one-way ANOVA, Bonferroni's multiple comparison posttest. (H and I) Mean viable (AnnexinV/PI-negative) E1A;Hras G12V MEFs ± SD after dox treatment (0.2 µg/ml; H) or serum starvation (I) for 24 h. n = 3 cell lines/sgRNA, three to five independent experiments. *, P < 0.05, one-way ANOVA, Bonferroni's multiple comparison posttest. (J) LUNA cell counter proliferation analysis of sgRNA-targeted E1A;Hras G12V MEFs starting 24 h after plating. n = 3 cell lines/sgRNA. Data are mean fold change in cell number ± SD. (K) Doubling time from nonlinear regression analysis of growth curves. CI, confidence interval; n = 3 cell lines/sgRNA. For A-K, all experiments were performed in physiological (5%) oxygen.
Next, to investigate the types of abnormal mitotic events in sgp53-targeted cells that might drive altered ploidy, we performed time-lapse, live-cell imaging analysis of mitosis. E1A;Hras G12V sgp53-targeted MEFs exhibited an increased proportion of abnormal mitotic events (defined here as multipolar spindle formation or failed cytokinesis) relative to sgNTC cells (Video 1 and Fig. 2, F and G). One outcome arising from these mitotic defects is the generation of cells with more than one nucleus (Fig. 2 H). Specifically, while any cell with more than one nucleus in sgNTC MEFs tended to be only binucleate, p53-deficient cells exhibited both binucleate and multinucleated cells, with a small proportion of these multinucleated cells displaying >10 apparent nuclei (Fig. 2 I). Interestingly, after pulsing with BrdU, ∼70% of these p53-deficient bi- and multinucleate cells displayed BrdU positivity, compared with only ∼40% of the binucleate cells in sgNTC cells (Fig. 2, J and K). This finding suggests that in the absence of p53, these highly abnormal cells are more likely to attempt to continually divide, supporting the idea of a failed tetraploidy checkpoint in these cells. Together, these results demonstrate that, despite being unable to dampen proliferation, p53 remains capable of regulating mitotic fidelity in E1A;Hras G12V MEFs, perhaps by direct regulation of mitotic events and/or clearance of cells that have undergone abnormal mitosis. These studies thus underscore the importance of p53 in faithful genome propagation.
Figure 2 legend (fragment): Data are mean ± SD, n = 3 cell lines/sgRNA; *, P < 0.05, two-way ANOVA, Dunnett's multiple comparison test. For A-K, all experiments were performed in physiological (5%) oxygen.
p53 promotes DNA repair in response to acute DNA damage in E1A;Hras G12V MEFs
Beyond maintaining genomic integrity by regulating ploidy, p53 has also been shown to promote different types of DNA repair (Williams and Schumacher, 2016). Interestingly, the capacity of p53 to regulate DNA repair has recently been suggested to be critical for p53-mediated tumor suppression in lymphoid cells (Janic et al., 2018; Valente et al., 2013). To determine whether p53 function in E1A;Hras G12V MEFs is also linked to the repair of damaged DNA, we harvested WT and p53-deficient E1A;Hras G12V MEFs at various time points after treatment with 2 Gy γ-irradiation (IR) and immunostained cells with γH2AX antibodies to detect foci that form at sites of double-strand breaks (Fig. 3 A). We found no significant difference in the percentages of cells exhibiting high nuclear γH2AX fluorescence in either the basal state or the early response to acute DNA damage (1 h after IR) in cells with different p53 statuses. In contrast, sgp53-targeted E1A;Hras G12V MEFs exhibited significantly higher γH2AX staining 6 h after IR than sgNTC cells, suggesting delayed induction of DNA repair (Fig. 3, B and C). Both sgNTC- and sgp53-targeted cells showed little evidence of cell death 6 h after IR (not depicted), suggesting that differences in γH2AX levels did not arise from differences in apoptosis after irradiation. By 24 h after IR, γH2AX levels in both sgNTC- and sgp53-targeted MEFs returned to baseline levels (Fig. 3, B and C), indicating that factors other than p53 may contribute to late DNA repair responses. These results indicate that p53 plays a critical role in enhancing early DNA repair responses to acute DNA damage in E1A;Hras G12V MEFs.
p53 loss does not significantly impact induction of ferroptosis in E1A;Hras G12V MEFs
Recently, the ability of p53 to promote ferroptosis, an iron-dependent form of cell death that culminates in toxic lipid peroxidation, has been suggested to be critical for p53-mediated tumor suppression (Jiang et al., 2015b). Ferroptosis can be induced by inhibiting cystine import via the cystine/glutamate antiporter system x c − using the small molecule erastin2 and can be blocked by small-molecule inhibitors of lipid peroxidation, such as ferrostatin-1 (Dixon et al., 2012). We examined whether p53 deficiency in E1A;Hras G12V MEFs altered sensitivity to erastin2-induced ferroptosis. Initially, we performed FACS-based cell viability assays in 5% O 2 to determine the sensitivity of these cells to erastin2-induced ferroptosis. After 16 h of treatment, sgNTC E1A;Hras G12V MEFs exhibited decreased viability relative to sgp53-targeted cells (Fig. 4 A). However, while cotreatment of cells with ferrostatin-1 reduced erastin2-induced lipid peroxidation, it failed to rescue erastin2-induced cell death in sgNTC cells (Fig. 4 A). These findings suggest that erastin2 induces nonferroptotic cell death in E1A;Hras G12V MEFs in 5% O 2 .
To further probe the role of p53 in ferroptosis, we shifted our studies to atmospheric (21%) O 2 , under which erastin2-induced ferroptosis has been well characterized. We monitored ferroptosis in erastin2-treated E1A;Hras G12V MEFs expressing the nuclear-localized mKate2 live-cell marker and incubated with the dead cell marker SYTOX Green (Fig. 4 B). Erastin2 induced robust cell death in a dose-dependent manner in both sgNTC- and sgp53-targeted E1A;Hras G12V MEFs (Fig. 4, C and D), and cell death was rescued by cotreatment with ferrostatin-1, consistent with the induction of ferroptosis (Fig. 4 E). Comparison of the dose-response curves over time (Fig. 4 F) and EC 50 values (Fig. 4 G) revealed no significant difference between sgp53 and sgNTC cells, suggesting that p53 does not regulate sensitivity to erastin2-induced ferroptosis in E1A;Hras G12V MEFs.
To probe why E1A;Hras G12V MEFs grown in 5% O 2 displayed different sensitivity to erastin2-induced ferroptosis from that of cells cultured in 21% O 2 , we analyzed basal lipid peroxidation. Interestingly, both p53-expressing and p53-deficient cells in 5% O 2 exhibited less basal lipid peroxidation than in 21% O 2 (Fig. 4 H). Thus, lower basal peroxidation in 5% O 2 may present a barrier for the induction of ferroptosis, leading cells to engage another cell death pathway upon cystine deprivation, a notion consistent with erastin2 triggering apoptosis in some conditions where ferroptosis is blocked (Huang et al., 2018a; Huo et al., 2016). These findings also underscore the importance of examining cellular phenotypes in physiological oxygen conditions.
p53 restrains glycolysis and regulates nucleotide metabolism in E1A;Hras G12V MEFs
The capacity of p53 to inhibit the Warburg effect, by dampening glycolysis and promoting oxidative phosphorylation, has been proposed as a mechanism by which p53 suppresses cellular transformation (Vousden and Ryan, 2009). Most studies investigating the role of p53 in metabolism have been performed in atmospheric 21% O 2 , whereas oxygen availability in vivo ranges between 2% and 8% O 2 for most cells. Given our findings highlighting the importance of oxygen tension for the induction of ferroptosis, we sought to determine the role of p53 in metabolic regulation in both atmospheric and physiological oxygen tensions.
To examine the role of p53 in glucose metabolism in E1A;Hras G12V MEFs, we performed [U-13 C]glucose tracing experiments in either 5% or 21% O 2 followed by liquid chromatography (LC)/mass spectrometry (MS) to determine labeled metabolite levels (Fig. 5 A). In 5% O 2 , sgNTC E1A;Hras G12V MEFs displayed enhanced glycolysis and decreased TCA cycle activity relative to sgNTC cells grown in 21% O 2 , with significantly higher abundance of total and 13 C-labeled lactate (Fig. 5 B), decreased percentage of (m+2) ion abundance for several TCA intermediates like fumarate and malate (Fig. 5 C), and lower (m+2) citrate:(m+3) lactate ion ratios (Fig. 5, D and E; sgNTC [5%] versus sgNTC [21%]; P < 0.05), indicating a relative suppression of pyruvate oxidation at 5% O 2 . Notably, p53 was able to enhance pyruvate oxidation to produce citrate in 5% O 2 , evidenced by a significantly higher (m+2) citrate:(m+3) lactate ratio in sgNTC cells than in sgp53(2) cells, whereas p53-dependent differences in 21% O 2 were not significant (Fig. 5, D and E). Together, these findings suggest that p53 demonstrates a more pronounced regulatory role in pyruvate oxidation and tumor cell metabolism in physiological oxygen and highlight the importance of using appropriate oxygen tensions during cell culture in vitro to accurately model in vivo cellular function.
The effects of p53 on cellular metabolism extend beyond glycolysis and oxidative phosphorylation (Vousden and Ryan, 2009). To gain a more comprehensive view of how p53 modulates metabolism in 5% O 2 , we performed targeted LC-MS/MS metabolomics analysis on sgNTC- and sgp53-targeted E1A;Hras G12V MEFs (Table S1). Metabolite set enrichment analysis of metabolites with abundance changes of at least twofold upon p53 loss indicated a strong nucleotide signature, involving both purine- and pyrimidine-related metabolites (Fig. 5, G and H). Notably, in addition to decreased nucleoside/nucleotide levels, sgp53-targeted cells exhibited significant increases in levels of glutamine and glycine, both of which are precursors for nucleotide metabolism but are also involved in other metabolic pathways such as serine and one-carbon metabolism (Fig. 5 G). Collectively, these results confirm that p53 restrains alterations in glucose metabolism typical of the Warburg effect, primarily through a rebalancing of glycolytic and oxidative metabolism. Moreover, p53 influences other aspects of metabolism, such as nucleotide availability, in E1A;Hras G12V MEFs.
p53 restrains migration and invasion in E1A;Hras G12V MEFs
Although roles for p53 in inducing apoptosis, maintaining genomic stability, and regulating metabolism may be central for restraining initial tumor growth, p53 may also modulate tumor progression. The ability of WT p53 to modulate behaviors paramount for metastasis, including migration and invasion, has been shown only in limited contexts (Gadea et al., 2007; Muller et al., 2011). Interestingly, in live-cell imaging experiments of E1A;Hras G12V MEFs grown in 5% O 2 (Fig. 2 F), we observed that sgp53-targeted cells exhibit more pronounced lamellipodia (membrane protrusions at the leading edges of motile cells thought to be critical for driving cellular migration) than sgNTC cells (Video 2; Krause and Gautreau, 2014). Hence, we analyzed cellular migration by performing Boyden chamber Transwell assays on sgNTC- and sgp53-targeted cells grown in 5% O 2 . Although both sgNTC- and sgp53-targeted E1A;Hras G12V MEFs migrated through the Transwell filter, sgp53-targeted cells exhibited approximately threefold greater migration than sgNTC cells over 24 h (Fig. 6, A and B). Notably, these experiments were performed without any growth factor or nutrient gradient, suggesting that even without a specific cue, WT p53 restrained cellular migration in this context. In contrast, while a similar trend was observed in cells cultured in 21% O 2 , differences between sgNTC- and sgp53-targeted cell migration were not significant (see Fig. S2 A, no nutrient gradient), suggesting that the capacity of p53 to regulate migration is dampened under atmospheric oxygen conditions. Next, we sought to determine whether WT p53 also modulates invasiveness in this model system. We performed 3D-collagen matrix invasion assays on sgNTC- and sgp53-targeted cells grown in 5% and 21% O 2 . In 5% O 2 , both sgNTC- and sgp53-targeted cells formed small colonies within the collagen matrix. However, while sgNTC cells remained as small spheroids, sgp53-targeted cells penetrated into the surrounding collagen matrix, forming star-like, invasive structures (Fig. 6 C). Quantitation of invading colonies indicated there was a dramatic increase in the invasive capacity of sgp53-targeted cells (∼70% invasive colonies) relative to sgNTC cells (<5% invasive colonies; Fig. 6 D). Together, these results demonstrate that WT p53 plays a critical role in dampening both the migratory and invasive potential of E1A;Hras G12V MEFs in physiological oxygen conditions. In striking contrast, p53-deficient cells grown in 21% O 2 failed to display any invasive behavior (Fig. 6, E and F), again reinforcing the need for appropriate oxygen tensions during cell culture to accurately model p53 function in vitro. Collectively, our results demonstrate that p53 deficiency in neoplastic cells can cause pleiotropic dysregulation of a variety of cellular processes.
Figure 4 legend (fragment): ...G12V MEFs treated with erastin2 (era2) ± ferrostatin-1 (fer-1) in 5% O 2 . n = 2 cell lines/sgRNA. Viability data are mean ± SD; *, P < 0.05, one-way ANOVA, Bonferroni's multiple comparison test. Peroxidation data are mean percent FL1 + FL2 + cells ± SD in 2-3 cell lines/sgRNA. *, P < 0.05, Student's t test. (H) Basal lipid peroxidation in 5% and 21% O 2 . Data are the mean percentage of FL1 + FL2 + cells ± SD of 3 cell lines/sgRNA; **, P < 0.05, Student's t test. For D-G, data are mean ± SD of 3 cell lines/sgRNA in three independent replicates. For EC 50 doses, the dose parameter was log transformed, and the data were fitted with a sigmoidal four-point curve with Hill slope constrained to 1 using Prism 7 (GraphPad).
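For illustration, the EC 50 estimation described in the Fig. 4 legend (log-transformed doses, sigmoidal four-point curve with Hill slope constrained to 1, performed in Prism 7) can be approximated with the minimal Python sketch below; the dose and response values are placeholders, not measured data.

```python
# Minimal sketch: fit a four-parameter sigmoidal curve with Hill slope fixed
# to 1, approximating the Prism 7 EC50 analysis described in the Fig. 4 legend.
# `doses_nM` and `response` are placeholder values, not measured data.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_dose, bottom, top, log_ec50):
    # response = bottom + (top - bottom) / (1 + 10**(log_ec50 - log_dose)),
    # i.e., the four-parameter logistic with Hill slope constrained to 1.
    return bottom + (top - bottom) / (1.0 + 10 ** (log_ec50 - log_dose))

doses_nM = np.array([0.1, 1.0, 10.0, 100.0])   # 4-point, 10-fold dose series
response = np.array([0.05, 0.12, 0.55, 0.90])  # e.g., lethal fraction at 24 h

log_dose = np.log10(doses_nM)                  # log-transform the dose parameter
p0 = [response.min(), response.max(), np.median(log_dose)]
(bottom, top, log_ec50), _ = curve_fit(sigmoid, log_dose, response, p0=p0)
print(f"EC50 = {10 ** log_ec50:.1f} nM")
```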
The effect of oxygen tension on classic p53-dependent phenotypes
Although the ability of p53 to regulate metabolism and invasion was markedly different in 5% and 21% O 2 , it remained unclear whether classic p53 pathways were similarly impacted by altering oxygen tension. We therefore performed apoptosis and proliferation assays on E1A;Hras G12V MEFs cultured in 5% or 21% O 2 . We found that apoptosis induced by acute DNA damage and serum deprivation was similar in magnitude and p53 dependence in both 5% and 21% O 2 (Fig. S2 B). In contrast, whereas proliferation was accelerated in physiological conditions, p53 loss had no significant impact in either oxygen tension (Fig. S2 C). Thus, the consequences of altering oxygen tension are phenotype specific, with p53-dependent noncanonical responses appearing more affected. It also remained unclear how oxygen tension might affect tumorigenicity in vitro. To address this question, we assessed the impact of altering environmental oxygen on tumorigenic potential using soft agar assays. Although sgp53(2) cells formed significantly more colonies than sgNTC cells in both 5% and 21% O 2 (Fig. S2, D and E), sgp53(2) cells grown in 21% O 2 formed fewer colonies than cells in 5% O 2 . Thus, p53-deficient E1A;Hras G12V MEFs grown in 5% O 2 are significantly more tumorigenic than the same cells grown in 21% O 2 , suggesting that studies performed under atmospheric oxygen conditions may underestimate the contribution of p53 action in transformation suppression. Together, these findings further highlight the need for use of appropriate oxygen tension during cell culture in vitro to accurately model in vivo cellular function.
To extend our analysis, we next examined noncanonical phenotypes in isogenic WT and p53-deficient cells in another model under 5% O 2 conditions. We used the human HCT116 colon carcinoma cell line, with the dual purpose of examining p53 action in a carcinoma context as well as in human cells (Bunz et al., 1998). We found first that HCT116 cells lacking p53 underwent significantly less apoptosis in response to serum starvation, but not DNA damage, than p53-proficient HCT116 cells (Fig. S3 A). As in the E1A;Hras G12V MEFs (Fig. 1 J), TP53 deletion did not significantly impact the proliferative rate of HCT116 cells (Fig. S3 B). We then assessed p53 noncanonical functions and observed that TP53 deletion significantly increased the number of polyploid HCT116 cells relative to TP53-proficient HCT116 cells (Fig. S3 C). Moreover, in [U-13 C]glucose tracing experiments, HCT116 cells lacking p53 exhibited increased abundance of total and labeled pools of the glycolytic intermediates glucose-6-phosphate and phosphoenolpyruvate (Fig. S3 D), suggesting an induction of glycolysis upon TP53 loss. Interestingly, TP53-deficient cells had lower lactate ion counts (Fig. S3 D), higher total and (m+2) ion counts of early TCA cycle intermediates (Fig. S3 E), and an increased (m+2) citrate:(m+3) lactate ion ratio (Fig. S3 F) relative to TP53-proficient cells, all suggestive of increased pyruvate oxidation. This is surprising given the described role for p53 in promoting oxidative respiration and could suggest that under physiological oxygen conditions, p53 inhibits TCA cycle activity in this setting. Finally, we assessed the impact of p53 loss on the invasive capacity of HCT116 cells and observed that TP53 loss promoted a small but nonsignificant increase in the percentage of invading colonies in 3D-collagen matrix assays (Fig. S3, G and H). Together, these findings demonstrate that TP53 deficiency in HCT116 cells promotes multiple alterations in cell behavior, supporting the idea that pleiotropy of p53 action is conserved in carcinomas and in human cells.
p53 induces diverse transcriptional programs under physiological oxygen conditions
Collectively, our findings demonstrate an expansive role for p53 in regulating varied cellular processes in oncogene-expressing cells under physiological oxygen conditions (Fig. 7 A). To develop a broader understanding of the cellular pathways transcriptionally regulated by p53 in this setting, we performed gene expression analysis on early-passage sgNTC- and sgp53-targeted E1A;Hras G12V MEFs grown in 5% O 2 (Fig. 7 B). RNA sequencing (RNA-seq) analysis revealed that p53 elicits widespread transcriptional changes in E1A;Hras G12V MEFs, with 2,520 genes differentially expressed between sgNTC- and sgp53-targeted cells (Fig. 7, B and C; and Table S2). Given that our RNA-seq dataset was generated in 5% O 2 , while most studies on p53 have been performed at 21% O 2 , we first sought to determine how well our dataset aligned with the classic p53 transcriptional program (Fig. 7 D). We indeed observed a strong p53 dependence for classic p53 target genes, such as Cdkn1a, Bax, and Mdm2. Next, we sought to determine if our dataset could provide new insight into how p53 directly regulates the phenotypes observed in E1A;Hras G12V MEFs. By overlapping this RNA-seq dataset with a p53 chromatin immunoprecipitation sequencing (ChIP-seq) dataset we previously generated in MEFs (Kenzelmann Broz et al., 2013), we identified 569 p53-dependent genes that are also bound by p53. Of these likely direct targets, 226 are induced or repressed by p53 >1.5-fold (Fig. 7 E and Table S2). Literature analysis of these 226 genes uncovered strong p53-dependent transcriptional signatures relating to p53-dependent phenotypes in E1A;Hras G12V MEFs, including apoptosis, genomic fidelity, metabolism, and migration (Fig. 7, F-I). Some categories, such as apoptosis, largely comprised classic p53 target genes (Fig. 7 F), suggesting that these pathways are well characterized. For genome fidelity, we identified genes whose characterized functions relate to processes that p53 is thought only to impact indirectly, such as mitosis (e.g., Psrc1, Mapre3, and Dyrk3; Fig. 7 G; Ban et al., 2009; Jang and Fang, 2011; Rai et al., 2018). For other processes, such as metabolism and migration/invasion, we identified various genes that might contribute to the phenotypes we observed but not previously annotated as direct p53 targets (Fig. 7, H and I; Fischer, 2019). The E1A;Hras G12V MEF platform can therefore serve as a tool to identify novel p53-regulated genes in diverse p53-dependent processes.
Figure 6 legend (fragment): ...MEFs migrating through Boyden chamber filter and stained with Giemsa. Scale bar, 100 µm. (B) Mean fold change in migrating E1A;Hras G12V MEFs ± SD, expressed relative to counterpart sgNTC control cell line. n = 3 cell lines/sgRNA; *, P < 0.05; **, P < 0.005, one-way ANOVA, Bonferroni's multiple comparisons test. (C and E) Representative images of sgNTC and sgp53(2) E1A;Hras G12V ;H11 Cas9 cultured in a 3D-collagen matrix in 5% (C) or 21% (E) O 2 . WGA marks cell membranes, and DAPI marks nuclei. Scale bar, 0.2 mm. (D) Percentage invading colonies in 5% O 2 . Data are mean ± SD, n = 3 cell lines/sgRNA; ***, P < 0.0001, one-way ANOVA, Bonferroni's multiple comparisons test. (F) Percentage invading colonies in 21% O 2 . Data are mean ± SD, n = 3 cell lines/sgRNA; ns, not significant (P > 0.05), Mann-Whitney unpaired t test.
p53 regulates actin dynamics through RhoD
We next sought to determine whether our platform could reveal underappreciated aspects of p53 biology using functional annotation of the 569 p53-bound and regulated genes. Using Enrichr, we identified canonical p53 signatures, as well as noncanonical signatures, such as chromatin/nucleosome remodeling and endosome formation (Fig. 8 A; Chen et al., 2013; Kuleshov et al., 2016). Of relevance to our findings demonstrating a role for p53 in inhibiting cell migration and invasion, we observed numerous signatures relating to actin dynamics, stress fibers, and the cytoskeleton, and identified a list of 57 p53-bound and regulated genes implicated in actin-related processes (Fig. 8, A and B). To determine whether these signatures reflected a clear functional outcome in these cells, we examined F-actin in sgNTC- and sgp53-targeted E1A;Hras G12V MEFs. We observed gross alterations in F-actin structure, with a significantly increased percentage of cells with more than five stress fibers per cell in sgp53-targeted cells (40-50%) than in sgNTC cells (<5%; Fig. 8, C and D). Notably, no significant difference was observed in the overall level of F-actin fluorescence between sgNTC- and sgp53-targeted cells, suggesting that differences in stress fiber number cannot be explained by altered levels of F-actin (Fig. 8 E). Instead, functional p53 is associated with reduced stress fibers.
To identify p53 target genes that might be involved in regulating this stress fiber phenotype, we inspected the list of actin-related genes (Fig. 8 B). On this list, we identified three members of the Rho GTPase family known to modulate stress fiber formation: RhoD, RhoV (up-regulated), and RhoE (down-regulated). To home in on the most relevant family member, we performed meta-analysis of five mouse and five human published RNA-seq and ChIP-seq datasets and observed that RhoD was the Rho GTPase most consistently bound and regulated by p53 across cell types and species (Fig. S4). Inspection of mouse and human p53 ChIP-seq datasets from doxorubicin-treated MEFs and human keratinocytes (Kenzelmann Broz et al., 2013; McDade et al., 2014) revealed p53-binding peaks containing p53 consensus sites within 10 kb of both the mouse and human RhoD loci (mouse, in the promoter; human, within intron 1; Fig. 9, A and B). Moreover, we validated p53-dependent expression of RhoD in E1A;Hras G12V MEFs by qRT-PCR analysis (Fig. 9 C). Together, these results suggest that RhoD is a bona fide p53 target gene in both human and mouse cells. To determine whether p53-mediated expression of RhoD can indeed inhibit stress fiber formation, we overexpressed RhoD in p53-deficient E1A;Hras G12V MEFs. Overexpression of FLAG-hRHOD significantly decreased the number of stress fibers relative to cells overexpressing HA-GFP (Fig. 9, D and E). These studies thus highlight the regulation of actin dynamics, specifically inhibition of stress fiber formation mediated by RhoD, as an underappreciated aspect of p53 function in cells undergoing cellular transformation. Collectively, our results suggest that p53 governs a complex network of cellular processes in E1A;Hras G12V MEFs, with p53 loss impacting apoptosis, DNA repair, genomic stability, multiple aspects of metabolism, migration, invasion, and actin dynamics (Fig. 9 F). The ability of p53 to coordinately regulate these processes may therefore be integral to its capacity to suppress oncogenic transformation.
Discussion
Here, to gain insight into global p53 function, we examined the ability of p53 to govern a host of cellular processes during transformation suppression. Previous studies characterizing p53 cellular functions have typically described one particular cellular function for p53 in a given context, without a comprehensive analysis to determine whether p53 can act broadly to modulate numerous cellular behaviors in that particular setting. Thus, it remained unclear whether each cellular function that p53 regulates is relevant in a specific context or whether p53 might regulate a range of cellular processes in a particular tumor suppression setting. To test the latter model, we used oncogene-expressing MEFs, expressing or lacking p53, as an in vitro platform to perform a systematic analysis of the impact of p53 loss. Importantly, by using this primary cell-based system, we mitigated any effects of accrued mutations typical of cancer cell lines that might confound analyses, and we helped to ensure that p53 signaling pathways remain intact. Consistent with previous studies, we found that p53 loss enhanced anchorage-independent cell growth and protected cells from apoptosis induced by different stressors, validating this model as a tractable system to investigate p53 functions associated with transformation suppression. Beyond these established phenotypes, we now reveal dramatic pleiotropy with p53 deficiency, with widespread alterations in cellular behavior, supporting the idea that in tumor suppression, p53 acts through coordinate regulation of many processes.
An important facet of our study was the analysis of phenotypes at physiological oxygen. With the exception of p53-driven senescence, which is rescued in physiological oxygen (Parrinello et al., 2003), p53 cellular responses in vitro in physiological oxygen have not been well described. Remarkably, some dramatic phenotypes were revealed specifically in low oxygen conditions, including the migration and invasion phenotypes observed with p53 deficiency. Thus, by cataloging phenotypes in 5% O 2 , which more closely models in vivo conditions, we observe that the contribution of p53 to tumor suppression may be more complex than previously thought.
Among the cellular phenotypes we examined were those described in recent studies implicating specific noncanonical p53 functions in tumor suppression. For example, ferroptosis was proposed as a central mechanism in p53-mediated tumor suppression (Jiang et al., 2015b; Li et al., 2012b). This notion is, however, controversial, with studies showing that p53 is capable of both potentiating and suppressing ferroptosis (Stockwell et al., 2017). Interestingly, in our model, p53 expression did not affect the induction of ferroptosis at 21% O 2 . Our observations therefore reinforce the idea that the role of p53 in ferroptosis is cell type specific and suggest further that ferroptosis does not universally explain p53 tumor suppression. Similarly, the capacity to promote DNA repair was proposed to be a critical mechanism by which p53 suppresses tumorigenesis (Janic et al., 2018; Valente et al., 2013). Supporting the idea that p53 promotes DNA repair, we observed delayed DNA double-strand break repair in response to γ-IR in p53-deficient cells. While not necessarily affecting cellular proliferation or survival per se, defective DNA repair upon p53 loss could fuel malignancy by facilitating the accrual of additional mutations during tumor progression.
Beyond contributing to DNA double-strand break repair in our system, p53 exhibits an important role in maintaining chromosomal stability, as we observed significant increases in the numbers of tetraploid, polyploid, and aneuploid cells upon p53 inactivation. The capacity of p53 to regulate mitotic fidelity is thought to largely rely on p21-mediated cell cycle arrest (Kuffer et al., 2013; Lanni and Jacks, 1998). However, we observed no clear effect of p53 loss on proliferation in our system, presumably because expression of E1A blocks Rb-mediated arrest, the pathway through which p21 acts (Narita et al., 2003). Our findings therefore suggest that the regulation of polyploidy in E1A;Hras G12V MEFs occurs through Rb- and cell cycle arrest-independent mechanisms. Intriguingly, our RNA-seq analysis identified several highly up-regulated genes whose encoded proteins have reported roles in mitosis, suggesting that p53 may influence mitotic events directly in these cells. Alternatively, p53 might regulate ploidy through elimination of polyploid cells by apoptosis. Indeed, the Hippo pathway component LATS2 promotes cell death in polyploid E1A- and Hras G12V -expressing human fibroblasts by enhancing p53 activity (Aylon et al., 2010; Ganem et al., 2014). Together, these results highlight an important cell cycle arrest-independent contribution of p53 to the maintenance of genomic fidelity during transformation suppression, another means of impeding malignant progression.
Our study has revealed p53-regulated metabolic processes that may contribute to limiting malignancy under physiological conditions. Previous reports in 21% O 2 have shown that p53 regulates glucose metabolism primarily by inhibiting the Warburg effect, a reprogramming from oxidative respiration to glycolysis characteristic of cancer cells (Vousden and Ryan, 2009). Similarly, we observed that sgp53-targeted E1A;Hras G12V MEFs in 5% O 2 exhibited enhanced lactate accumulation and a decreased propensity to convert pyruvate to citrate relative to sgNTC cells. Notably, we observed marked differences in glucose metabolism between 5% and 21% O 2 tensions, with lactate production from glucose being highly favored under physiological oxygen conditions and citrate production favored in atmospheric oxygen. Moreover, the capacity of p53 to inhibit metabolic changes associated with the Warburg effect was greatly dampened in 21% O 2 . Thus, previous studies in 21% O 2 may have underestimated p53's contribution to inhibition of the Warburg effect, again highlighting the value of using physiological oxygen tensions during cell culture to accurately model in vivo cell behavior.
Figure 9 legend (fragment): Transcription start sites are marked by arrows. Red triangles mark significant "called" peaks. Predicted p53 response elements in each peak are indicated, with red denoting nucleotides in the conserved C(A/T)(A/T)G core. Spacers between two half sites and number of mismatches (denoted in lower case) are indicated. R = purines A or G; W = A or T; Y = pyrimidines C or T. (C) qRT-PCR analysis of RhoD and p21 gene expression in sgNTC and sgp53(2)-targeted E1A;Hras G12V MEFs. n = 2 cell lines/sgRNA. Data are presented as mean ± SD; ****, P < 0.0001, two-way ANOVA, Sidak's multiple comparison test. (D) Representative images of phalloidin-stained stress fibers in p53-deficient E1A;Hras G12V MEFs overexpressing HA-tagged GFP or FLAG-hRHOD in 5% O 2 . DAPI marks nuclei. Scale bar, 20 µm. (E) Percentage of cells with more than five stress fibers in sgp53(2)-targeted E1A;Hras G12V MEFs overexpressing HA-GFP or FLAG-hRHOD in 5% O 2 . Data represent mean ± SD of n = 3 cell lines. **, P < 0.01, unpaired t test with Welch's correction. (F) The transcription factor p53 modulates a variety of diverse cellular processes in oncogene-expressing MEFs, which may be critical for its capacity to suppress transformation in these cells.
In addition to glucose metabolism, p53 regulates other metabolic pathways such as fatty acid oxidation and serine and one-carbon metabolism (Jiang et al., 2015a; Vousden and Ryan, 2009). Metabolomics analysis on E1A;Hras G12V MEFs in 5% O 2 revealed that pyrimidine and purine metabolism were severely impacted by p53 loss, supporting several studies documenting altered nucleotide levels upon p53 loss (Huang et al., 2018b; Maddocks et al., 2013). The decreased pyrimidine and purine levels we observe with p53 loss may reflect impaired nucleotide synthesis, defective salvage and uptake pathways, or enhanced utilization of nucleotides, an observation consistent with the need to synthesize an entire genome's worth of extra DNA in tetraploid cells. Together, our results underscore the complexity of p53's regulation of metabolism, with p53 rewiring multiple aspects of glucose and nucleotide metabolism in E1A;Hras G12V MEFs.
Although dysregulation of the aforementioned cell processes might contribute to the growth of tumors, we also identified functions regulated by p53 that could be more relevant for metastatic spread. Mutation of TP53 in human cancers has been correlated with enhanced aggressiveness and metastatic capacity, but the role of p53 in modulating cell migration and invasion has more consistently been linked to p53 gain-of-function rather than loss-of-function (Muller et al., 2011). However, some reports suggest a role for WT p53 in dampening both cell migration and invasion, through multiple mechanisms, including inhibition of podosome and filopodia formation and regulation of cell spreading (Kawauchi and Wolf, 2014; Muller et al., 2011). Of note, in our RNA-seq and ChIP-seq analyses, where actin-related functions were among the top molecular signatures, the previously described p53 target genes Rad and Rap2B (Di et al., 2015; Hsiao et al., 2011) encoding actin regulators were either absent or only mildly enriched. Instead, we identified the Rho GTPase RhoD as a key component of p53-dependent regulation of the actin cytoskeleton, with p53-dependent induction conserved between mouse and human cells and RhoD overexpression potently suppressing stress fiber formation in p53-deficient E1A;Hras G12V MEFs. Regulation of different types of actin filament assemblies is critical for multiple cell processes, such as mitosis and cytokinesis, establishing polarity, and intracellular trafficking (Pollard and Cooper, 2009; Tojkander et al., 2012). Altered expression of actin-binding proteins, including those regulating stress fiber formation, has been associated with enhanced proliferation, invasion, and metastasis (Liang et al., 2011; Stevenson et al., 2012; Tavares et al., 2017). Transcriptional regulation of RhoD and genes encoding other actin-binding proteins by p53, as identified in our RNA-seq analyses, could thus form a critical node in tumor suppression, with the complex interplay between these processes promoting an antitumor program.
It will be important in the future to determine how broadly p53 deficiency drives global phenotypic changes in different cellular contexts. As a proof-of-concept, we show that HCT116 human colon carcinoma cells exhibit dysregulation of multiple cellular processes upon p53 loss, including apoptosis, maintenance of ploidy, and glucose metabolism. However, the spectrum of cell processes impacted by p53 deficiency was substantially dampened in HCT116 cells, suggesting selection against p53 signaling during long-term culture. These observations therefore reinforce the value of using a primary cell-based transformation model system, such as E1A;Hras G12V MEFs, where the immediate effects of p53 deficiency can be directly assessed. Our HCT116 studies nonetheless support the notion that pleiotropy in p53 function is conserved between species and differentiation states.
Collectively, our findings provide clear evidence that, beyond merely regulating cellular expansion, in this case through apoptosis, p53 can simultaneously regulate numerous other processes, including genome stabilization, DNA repair, metabolism, migration, invasion, and actin dynamics. These observations suggest that during tumor suppression, p53 may govern myriad processes to maintain tissue homeostasis. It will be important to determine whether cooperative regulation of all or only some of these cellular processes is essential for p53 to block tumorigenesis. Such concerted effects may explain why p53, rather than its target genes, is the most frequently mutated gene in human cancer: a single missense mutation in p53 can trigger an array of pro-tumorigenic changes in nascent cancer cells. It is interesting to consider the interplay between defects in different processes that could amplify the negative outcomes of p53 loss. For example, loss of p53 increases ploidy and decreases nucleotide availability, which could affect DNA replication by inducing replication fork stalling, leading to increased DNA damage (Bester et al., 2011). These effects would then be further amplified due to impaired DNA repair in cells lacking p53, leading to enhanced mutational rates and promoting more aggressive disease. In addition, although our study uncovers the potential of p53 to regulate many diverse aspects of cellular behavior, the relevance of loss of these functions with p53 mutation in cancer may depend on tumor stage or microenvironmental cues. Understanding the complex spectrum of cellular activities regulated by p53 in each cell type during transformation suppression will provide insight into the cellular pathways that are best targeted in drug development for different cancer types, either alone or in a synthetic lethal approach.
To confirm CRISPR/Cas9 targeting of p53, forward and reverse primers were designed to amplify the region flanking both sgRNAs' targeting sites in Exon 4 (p53 forward 5′-GGACTGCAGGGTCTCAGAAG-3′; p53 reverse 5′-CTGAAGAGGACCCCCCAAAT-3′), and Sanger sequencing was performed. Sequencing reads were analyzed using the interference of CRISPR edits (ICE) algorithm (Synthego) to determine the success of targeting. All sgp53-targeted cell lines exhibited knockout scores of >80 (Fig. S1 B). One sgp53(1)-targeted cell line (line 2, embryo 1,588.5) exhibited expression of a protein migrating with p53 (Fig. S1 C) but was unable to induce expression of p21 and Mdm2 (Fig. S1 D), indicating it was functionally null for p53. Moreover, the cell line did not significantly differ from the two other completely null sgp53(1) cell lines in any assay performed (Fig. S1, A-F). To confirm on a broader level that this cell line was functionally p53 null, we chose this sgp53(1) guide for our RNA-seq analyses and observed no or minimal differences from the two completely null sgp53(1) cell lines when assessing p53 target gene expression (Fig. 7 D), again confirming the functionally null status of this line.
If prioritization of a single sgRNA was required (e.g., for metabolomics, glucose tracing, and actin and oxygen tension analyses), we used the sgp53(2) guide, given that three of three cell lines were completely p53 null, thus providing the cleanest system to assess the impacts of p53 loss (Fig. S1 C). Early passages of targeted cells (passages 2-3) were grown in bulk and frozen for future experiments. All phenotypic analyses were performed within 3 wk of thawing cells (<passage 15) to limit acquisition of phenotypes that were secondary to p53 deficiency.
Physiological versus atmospheric oxygen tension analyses
Frozen vials of sgNTC and sgp53(2) E1A;Hras G12V MEFs (generated under 5% O 2 conditions) were thawed and pelleted. Cells were resuspended in medium, and each vial was split into two separate plates. Cells were then incubated in 5% or 21% O 2 incubators (5% CO 2 , 37°C) and allowed to equilibrate for a minimum of 72 h (at least one passage) before phenotypic analyses.
Immunofluorescence
Cells were grown on coverslips for 24 h before the beginning of the experiment. Coverslips were harvested and fixed with 4% PFA, permeabilized with 0.02% Triton X-100 in PBS, blocked in 5% BSA, and then incubated with the primary and secondary antibodies listed in the relevant methods sections. Coverslips were then mounted using ProLongGold antifade reagent with DAPI (P36931; Invitrogen), unless stated otherwise. For detection of p53 protein expression, primary rabbit anti-p53 (CM5; Novogene, 1:2,000) and secondary Alexa Fluor 546 goat anti-rabbit IgG (A-11035; Invitrogen, 1:250) antibodies were used.
Soft agar assay
Anchorage-independent growth was assessed using soft agar assays. Briefly, 1.5 ml of phenol-free complete DMEM supplemented with 0.5% low melt agarose (BP160; Fisher) and 10 µg/ml gentamicin was aliquoted into six-well plates (in triplicate) and allowed to set at RT. Next, trypsinized, washed, and pelleted cells were resuspended in phenol-free complete DMEM supplemented with 0.3% noble agar and 10 µg/ml gentamicin at a final density of ∼5,000 cells/1.5 ml. 1.5 ml of cells was then overlaid, in triplicate, onto prepared wells and allowed to set at RT before being overlaid with 1 ml complete DMEM. Cells were incubated at 37°C and 5% CO 2 and either 5% or 21% O 2 for 2 wk, and medium was refreshed once a week. Colonies were visualized by incubation with Giemsa stain (48900; Sigma-Aldrich) for 15 min at RT, then incubation overnight at 4°C. Plates were scanned using an Epson Perfection 3490/3590 scanner, and images were captured at 600-dpi resolution using EpsonScan software. To quantitate colonies, a 400 × 400-pixel square was isolated from the center of each well, and the number of colonies was counted using ImageJ (National Institutes of Health).
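For illustration, the colony quantitation (a 400 × 400-pixel square isolated from the well center, objects counted in ImageJ) could be approximated in Python as sketched below; the synthetic image, Otsu thresholding, and minimum object size are assumptions standing in for the actual ImageJ workflow.

```python
# Rough analogue of the ImageJ colony count: isolate a 400x400-pixel square
# from the well center and count dark (Giemsa-stained) objects. A synthetic
# image stands in for the 600-dpi scan; threshold and size cutoffs are assumed.
import numpy as np
from skimage import draw, filters, measure, morphology

rng = np.random.default_rng(3)
img = np.ones((800, 800))                          # white background "scan"
for _ in range(30):                                # paint mock colonies
    r, c = rng.integers(100, 700, 2)
    rr, cc = draw.disk((int(r), int(c)), radius=6, shape=img.shape)
    img[rr, cc] = 0.2                              # dark, Giemsa-like spots

cy, cx = np.array(img.shape) // 2
crop = img[cy - 200:cy + 200, cx - 200:cx + 200]   # central 400x400-pixel square

mask = crop < filters.threshold_otsu(crop)         # stained colonies are darker
mask = morphology.remove_small_objects(mask, min_size=10)
print(f"{measure.label(mask).max()} colonies in crop")
```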
Cell viability assays
Cells were treated with doxorubicin (0.2 µg/ml) or low-FBS DMEM (0.1% FBS) for 24 h. For 0.1% FBS experiments, cells were washed 3× with warm PBS to remove any residual FBS before addition of 0.1% FBS medium. Cells were harvested by trypsinization and stained with Annexin-V-FITC (BioLegend, 1:40) and propidium iodide (PI; Promocell, 1 µg/ml). Flow cytometric analysis on a BD LSR Fortessa flow cytometer was used to assess cell viability, with events captured using BD FACSDiva software and sample data analyzed using FlowJo 10.5.3 analysis software (FlowJo). Cell viability is expressed relative to DMSO- or 10% FBS medium-treated controls.
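As a rough sketch of this readout, viable cells are AnnexinV/PI-negative events expressed relative to the matched vehicle control; the synthetic event table and gate positions below are illustrative assumptions, not exported FlowJo data.

```python
# Sketch of the viability readout: viable cells are AnnexinV-/PI- events,
# expressed relative to the matched vehicle control. Events and gate
# positions are synthetic, illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
events = pd.DataFrame({
    "sample": ["DMSO"] * 5000 + ["dox"] * 5000,
    "annexin": rng.lognormal(5, 1, 10000),
    "pi": rng.lognormal(5, 1, 10000),
})
gate = 1_000.0                                      # assumed fluorescence gates
events["viable"] = (events["annexin"] < gate) & (events["pi"] < gate)

frac = events.groupby("sample")["viable"].mean()    # viable fraction per sample
rel_viability = frac / frac["DMSO"]                 # relative to vehicle control
print(rel_viability.round(2))
```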
Proliferation analysis
50,000 cells were plated in six-well plates in duplicate. At 24, 48, 72, and 96 h, duplicate wells were trypsinized, pelleted, resuspended in 1 ml of PBS, and counted using a LUNA II automated cell counter. The t = 24 h time point served as a starting cell count to control for any differences in seeding potential between cell lines. Fold change in cell number was calculated by dividing the average cell count at each time point (t = 48, 72, and 96 h) by the t = 24 h cell count.
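A minimal sketch of the fold-change calculation and the doubling-time estimate from nonlinear regression of growth curves (cf. Fig. 1, J and K) is shown below; the counts are placeholders, and the exponential model and scipy-based fit stand in for the regression actually used.

```python
# Sketch of the proliferation analysis: fold change relative to the 24-h
# count, then doubling time from an exponential-growth fit (cf. Fig. 1 K).
# Counts below are placeholders, not measured values.
import numpy as np
from scipy.optimize import curve_fit

t_h = np.array([24.0, 48.0, 72.0, 96.0])           # time points (h)
count = np.array([6.1e4, 1.3e5, 2.9e5, 6.0e5])     # mean LUNA cell counts

fold_change = count / count[0]                     # t = 24 h is the baseline

def growth(t, n0, td):
    # Exponential growth: N(t) = N0 * 2^((t - 24) / Td), Td = doubling time
    return n0 * 2 ** ((t - 24.0) / td)

(n0, td), _ = curve_fit(growth, t_h, count, p0=[count[0], 24.0])
print("fold change:", fold_change.round(2))
print(f"doubling time ≈ {td:.1f} h")
```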
Metaphase spreads
Cells were treated with 10 µg/ml KaryoMAX Colcemid Solution (Gibco) for 2 h. Trypsinized cells were then incubated in 0.56% KCl at RT for 20 min. Cells were fixed in ice-cold 3:1 methanol:glacial acetic acid solution (5 min) and then pelleted at low speed (500 rpm) three times. Cells were dropped onto glass slides from ∼1-m height, dried, and stained with 3% Giemsa stain (Promega), and coverslips were mounted. Metaphase spreads were captured using a Leica DM6000B bright-field microscope using a Leica 100×/1.4 oil-immersion objective. Images were captured using a Hamamatsu C11440-42U digital camera and Leica Application Suite X (LASX) software. Chromosome number was quantitated in >20 cells per sgRNA per cell line.
Analysis of ploidy using the FUCCI cell cycle marker system
To generate sgNTC- and sgp53-targeted E1A;Hras G12V MEF lines expressing the FUCCI system, 293AH cells were transfected with 2 µg retroviral packaging vectors and 4 µg hCDT1-mKO2 or hGeminin-mAG retroviral constructs (Sakaue-Sawano et al., 2008) in 500 µl Opti-MEM (Gibco) mixed with 20 µl Lipofectamine 2000 reagent (Invitrogen) in 480 µl Opti-MEM. A 1:1 hCDT1/hGEM ratio of viral supernatant was used to transduce sgNTC- and sgp53-targeted E1A;Hras G12V cells. Transduced cells were cultured for 1 wk before analysis. 100,000 cells were plated into six-well plates, and after 24 h cells were trypsinized, pelleted, resuspended in 1 ml complete DMEM containing DyeCycleViolet reagent (Genesee, 1:1,000), and incubated at 37°C for 30 min before flow cytometric analysis on a BD LSRFortessa X-20 flow cytometer using BD FACSDiva software to capture events. To quantify G 1 tetraploid and polyploid cells, samples were analyzed using FlowJo 10.5.3 software. Live cell populations were gated on forward/side scatter scatterplots; live cells were then plotted on a mKO 2 /mAG scatterplot, and cells positive for either marker were gated (FUCCI + cells). DNA content of FUCCI + cells was then plotted on a histogram using DyeCycleViolet fluorescence, and gates were drawn around cells exhibiting ≥2N DNA content (>2N cells). Finally, >2N cells were plotted on an mKO 2 /mAG scatterplot. G 1 tetraploid cells were defined as cells having >2N DNA expressing hCDT1-mKO 2 , and polyploid cells were defined as having >4N DNA content.
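The gating hierarchy above can be expressed as boolean masks, as in the sketch below; the events are simulated, and all fluorescence and DNA-content thresholds are illustrative assumptions (the actual gates were drawn in FlowJo 10.5.3).

```python
# Boolean-mask sketch of the FUCCI/DNA-content gating described above.
# Events are simulated and all gate thresholds are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
events = pd.DataFrame({
    "mKO2": rng.lognormal(2, 1, 10_000),  # hCDT1-mKO2 (G1 marker)
    "mAG": rng.lognormal(2, 1, 10_000),   # hGeminin-mAG (S/G2/M marker)
    "dna": rng.normal(2.0, 0.8, 10_000),  # DyeCycleViolet signal, in N units
})

fucci_pos = (events["mKO2"] > 10) | (events["mAG"] > 10)  # FUCCI+ gate
gt2n = events["dna"] > 2.2                                # >2N DNA content gate

# G1 tetraploid: >2N DNA, mKO2+ (and not mAG+); polyploid: >4N DNA content
g1_tetraploid = fucci_pos & gt2n & (events["mKO2"] > 10) & (events["mAG"] <= 10)
polyploid = fucci_pos & (events["dna"] > 4.2)

print(f"G1 tetraploid: {100 * g1_tetraploid.mean():.2f}% of events")
print(f"polyploid:     {100 * polyploid.mean():.2f}% of events")
```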
Live-cell imaging of mitosis and migration
Cells in complete DMEM minus phenol red were plated in Ibidi µ-slide eight-chamber glass-bottom culture slides precoated with poly-D-lysine (5 µg/cm 2 ). Cells were imaged by phase contrast at intervals of 5 min for 16 h on a Leica DMi8 inverted microscope with a 63×/1.4 oil objective using a Leica DFC9000 GT digital camera and LASX software. 10 regions of the culture chamber were imaged and analyzed per cell line, with >45 cells analyzed for all lines. Normal mitotic events were defined by cells balling up and then splitting into two daughter cells, whereas abnormal mitotic events were determined by the following criteria: multipolar mitosis (cell splits or attempts to split into >2 cells); mitotic slippage and failed cytokinesis (cell balls up but then flattens without splitting into daughter cells, or attempts to split but then fails); bi/multinucleation (cells undergo mitosis and daughter cells have more than one nucleus); or bi/multinucleate recovery (cells with more than one nucleus ball up and split into two or more cells, with daughter cells having one or more nuclei).
Quantitation of multinucleated cells
Cells were grown on coverslips at low density in complete DMEM for 24 h. Cells were then fixed, permeabilized, and stained with Alexa Fluor 488 wheat germ agglutinin (WGA; W11261; Fisher, 1:1,000) to define cell boundaries, and then mounted on slides using ProLongGold antifade reagent with DAPI (P36931; Invitrogen) to visualize DNA. Cells were imaged on a Leica DMi8 inverted fluorescence microscope using a Leica 63×/1.4 oil objective, with an 8 × 8 tile scan captured using a Leica DFC9000GT digital camera and LASX acquisition software. The number of nuclei per cell was then manually quantified.
Quantitation of DNA repair by γH2AX foci resolution assay
Cells were irradiated with 2 Gy IR using a 137Cs source. Unirradiated cells served as controls. At 1, 6, and 24 h after IR, cells were subjected to immunofluorescence analysis (see above) using antibodies against phosphorylated histone H2AX-Ser139 (JBW-301; Millipore, 1:1,000) and fluorescein goat anti-mouse IgG (FI-2000; Invitrogen, 1:250). Cells were visualized using a Leica DM6000B fluorescent microscope with a 40×/0.85 dry objective. Images were captured using a Hamamatsu C11440-42U digital camera and LASX software. Five images per time point were captured and analyzed using ImageJ. Quantification of total γH2AX fluorescence per nucleus was performed rather than counting individual γH2AX foci to account for differences in the total number, size, and intensity of γH2AX foci between different cells. To control for potential differences in the total size of nuclei between WT and p53-deficient cells, γH2AX fluorescence values were normalized to the nuclear area by region of interest gating on DAPI fluorescence, and the percentage of cells exhibiting high γH2AX fluorescence was quantitated. To set the threshold cutoff for γH2AX high cells, we quantitated the γH2AX fluorescence of sgNTC-targeted cells 1 h after 2 Gy IR and then performed statistical analysis (descriptive statistics algorithm, Prism 8) to determine the 75th percentile. Any γH2AX fluorescence value above the 75th percentile was then classified as γH2AX high . This threshold (sgNTC 1 h after treatment) was then applied to all cell lines and time points. Data represent three independent cell lines/sgRNA, and >50 cells were analyzed per cell line and time point.
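A minimal sketch of this quantitation, assuming the per-nucleus ImageJ measurements were exported to a table, is shown below; the table layout and values are synthetic, and only the thresholding logic (75th percentile of the sgNTC 1-h distribution applied to all conditions) follows the description above.

```python
# Sketch of the gammaH2AX-high quantitation: per-nucleus fluorescence is
# normalized to nuclear area, the cutoff is the 75th percentile of the
# sgNTC 1-h post-IR distribution, and that single cutoff is applied to all
# lines and time points. The table below is synthetic, for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "line": ["sgNTC"] * 100 + ["sgp53"] * 100,
    "timepoint": (["1h"] * 50 + ["6h"] * 50) * 2,
    "total_fluor": rng.lognormal(6, 0.5, 200),
    "nuc_area": rng.normal(300, 30, 200),
})
df["norm_fluor"] = df["total_fluor"] / df["nuc_area"]  # area-normalized signal

ref = df[(df["line"] == "sgNTC") & (df["timepoint"] == "1h")]
threshold = np.percentile(ref["norm_fluor"], 75)       # gammaH2AX-high cutoff

pct_high = (
    df.assign(high=df["norm_fluor"] > threshold)
      .groupby(["line", "timepoint"])["high"]
      .mean() * 100
)
print(pct_high.round(1))
```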
Ferroptosis analysis
Erastin2 (compound 35MEW28, reported in Dixon et al., 2014) was synthesized by Acme Bioscience (Palo Alto, CA), and ferrostatin-1 was obtained from Cayman Chemicals. Both drugs were resuspended in DMSO and stored at −20°C before use. For ferroptosis analysis using PI at 5% O 2 tension, cells were treated with 10 or 100 nM erastin2 ± 1 µM ferrostatin-1 for 16 h. Cells were harvested by trypsinization and then stained with PI (Promocell, 1 µg/ml). Flow cytometric analysis of cell viability was performed using a BD LSR Fortessa flow cytometer with events captured using BD FACSDiva software. Sample data were analyzed using FlowJo 10.5.3 analysis software (FlowJo). Ferroptosis susceptibility at 21% O 2 tension was assayed using STACK (scalable time-lapse analysis of cell death kinetics; Forcina et al., 2017). E1A;Hras G12V sgNTC- or sgp53-targeted MEFs were transduced with a nuclear-localized NucLight Red lentiviral construct (NLR; IncuCyte) encoding a nuclear-localized mKate2 protein, and positively transduced cells were selected using puromycin selection. mKate2 + E1A;Hras G12V sgNTC- or sgp53-targeted MEFs were seeded in replicate in 96-well plates at a density of 15,000 (sgNTC) or 10,000 (sgp53(1) and sgp53(2)) cells per well. Lower cell densities for sgp53(1) and sgp53(2) cells were used to ensure equal cell density given that sgp53-targeted cells exhibit a higher total area and seeding efficiency than sgNTC-targeted cells. The next day, each replicate plate was treated with erastin2 or a vehicle control in a 4-point 10-fold series of doses (100 nM down to 0.1 nM). Each replicate plate was then cotreated with either 1 µM ferrostatin-1 or vehicle. SYTOX Green viability dye (Life Technologies) at a final concentration of 22 nM was also added to each well of all plates. Cells were imaged at t = 0, 4, 8, and 24 h using an IncuCyte live cell analyzer (Essen BioScience). mKate2 + , SYTOX Green + , and double-positive mKate2 + /SYTOX Green + objects were counted using IncuCyte ZOOM Live-Cell Analysis System software. Lethal fraction scores were calculated for each time point (Forcina et al., 2017) with the following modification: double-positive (mKate2 + /SYTOX Green + ) cells were subtracted from the counts of live mKate2 + cells at all time points.
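The modified lethal fraction calculation can be sketched as below, taking the lethal fraction as dead objects divided by total (dead plus live) objects per Forcina et al. (2017), with the stated modification applied first; the object counts are placeholders standing in for IncuCyte ZOOM output.

```python
# Sketch of the modified lethal fraction score (Forcina et al., 2017):
# double-positive (mKate2+/SYTOX Green+) objects are subtracted from the
# live mKate2+ counts, then lethal fraction = dead / (dead + live).
# Object counts below are placeholders for IncuCyte ZOOM output.
import numpy as np

mkate2_pos = np.array([950.0, 900.0, 700.0, 400.0])  # live-marker counts, t = 0,4,8,24 h
sytox_pos = np.array([30.0, 120.0, 380.0, 720.0])    # dead-marker counts
double_pos = np.array([10.0, 60.0, 150.0, 250.0])    # mKate2+/SYTOX Green+ counts

live = mkate2_pos - double_pos                       # stated modification
lethal_fraction = sytox_pos / (sytox_pos + live)
print(lethal_fraction.round(3))
```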
Lipid peroxidation flow cytometric analysis
E1A;Hras G12V MEFs were cultured in 5% or 21% O2 for 72 h before harvesting by trypsinization. Cells were then stained with the BODIPY 581/591 C11 lipid peroxidation sensor (D3861; Thermo Fisher Scientific) for 30 min at 37°C (5% or 21% O2) before flow cytometric analysis on a BD LSRFortessa X-20 flow cytometer using BD FACSDiva software to capture events. Live cells were plotted on PE-Texas Red versus FITC dot plots, with PE-Texas Red+/FITC+ cells serving as the positive population. Unstained cells served as a gating control.
[U-¹³C]glucose tracing
Cells were equilibrated in RPMI 1640 with 10% dialyzed FBS (Thermo Fisher Scientific) for 48 h before the assay. For oxygen tension experiments, cells were thawed into this medium and allowed to equilibrate in 5% or 21% O2 for 72 h before plating for analysis. Cells were washed twice with warm sterile PBS before [U-¹³C]glucose RPMI medium (10% dialyzed FBS) lacking glucose, serine, and glycine (TEKnova) and reconstituted with [U-¹³C]glucose (2 g/liter), serine (0.03 g/liter), and glycine (0.01 g/liter) was added to each plate. At 6 h, medium was removed, and plates were washed twice with ice-cold PBS before extraction with 325 µl of 80:20 acetonitrile:water on ice for 15 min. Cells were scraped off plates, sonicated for 30 s with a Biorupter 300 sonicator (Diagenode), and spun down at 1.5 × 10⁴ rpm for 10 min. 200 µl of supernatant was taken for immediate LC/electrospray ionization MS/MS analysis.
Quantitative LC/electrospray ionization MS/MS analysis of [¹³C]glucose-labeled cell extracts was performed using an Agilent 1290 UHPLC system equipped with an Agilent 6545 quadrupole time-of-flight mass spectrometer. A hydrophilic interaction chromatography method with a BEH amide column (100 × 2.1 mm internal diameter, 1.7 µm; Waters) was used for compound separation at 35°C with a flow rate of 0.3 ml/min. Mobile phase A consisted of 25 mM ammonium acetate and 25 mM ammonium hydroxide in water, and mobile phase B was acetonitrile. The gradient elution was 0-1 min, 85% B; 1-12 min, 85% B → 65% B; 12-12.2 min, 65% B → 40% B; 12.2-15 min, 40% B. After the gradient, the column was reequilibrated at 85% B for 5 min. The overall runtime was 20 min, and the injection volume was 5 µl. The Agilent quadrupole time-of-flight was operated in negative mode, and the relevant parameters were: ion spray voltage, 3,500 V; nozzle voltage, 1,000 V; fragmentor voltage, 125 V; drying gas flow, 11 liter/min; capillary temperature, 325°C; drying gas temperature, 350°C; and nebulizer pressure, 40 psi. A full scan range was set at 50 to 1,600 m/z. The reference masses were 119.0363 and 980.0164. The acquisition rate was 2 spectra/s. Isotopologue extraction was performed in Agilent Profinder B.08.00 (Agilent Technologies). The retention time of each metabolite was determined using authentic standards. The mass tolerance was set to ±15 ppm, and the retention time tolerance was ±0.2 min. Natural isotope abundance was corrected using Agilent Profinder software (Agilent Technologies). For normalization of ion counts, cell pellets were vacuum dried, and then protein concentration was determined using the Pierce bicinchoninic acid protein assay kit (Thermo Fisher Scientific), according to the manufacturer's instructions.
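Natural isotope abundance correction was performed in Agilent Profinder; as a rough illustration of what such a correction involves, the sketch below applies a carbon-only binomial correction for a single fragment and recovers the tracer-derived labeling by non-negative least squares. It ignores isotopes of other elements and instrument-resolution effects, so it is a simplified stand-in for the vendor software, and the numbers in the usage comment are invented.

```python
import numpy as np
from math import comb
from scipy.optimize import nnls

P13C = 0.0107  # natural abundance of carbon-13

def carbon_correction_matrix(n_carbons):
    """C[i, j]: probability that a molecule carrying j tracer-derived
    13C atoms is observed as the M+i isotopologue, assuming binomial
    natural 13C incorporation in its remaining (n - j) carbons."""
    n = n_carbons
    C = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        for k in range(n - j + 1):  # k carbons 13C-labeled by chance
            C[j + k, j] = comb(n - j, k) * P13C**k * (1 - P13C)**(n - j - k)
    return C

def correct_natural_abundance(measured_counts, n_carbons):
    """Solve C @ x = measured with x >= 0 (non-negative least squares),
    then renormalize to a fractional labeling distribution."""
    C = carbon_correction_matrix(n_carbons)
    x, _ = nnls(C, np.asarray(measured_counts, dtype=float))
    return x / x.sum()

# e.g., serine (3 carbons), raw ion counts for M+0..M+3:
# print(correct_natural_abundance([8.1e5, 4.0e4, 1.0e3, 2.6e5], 3))
```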
Metabolomics analysis
3 × 10⁶ sgNTC-targeted and 2 × 10⁶ sgp53(2)-targeted E1A;Hras G12V;H11 Cas9 MEFs were plated in 10-cm plates in complete DMEM. Different seeding densities were used to account for differences in seeding efficiency and cell size between sgNTC and sgp53(2) cells, ensuring that cells were analyzed while growing exponentially. Cells were analyzed when 80% confluent. At 24 h, cells were fixed and lysed by incubation in 80% methanol on dry ice. Methanol-extracted samples were then processed by the Children's Medical Center Research Institute Metabolomics Core Facility at UT Southwestern. Targeted LC/MS/MS using an AB QTRAP 5500 liquid chromatograph/triple quadrupole mass spectrometry system (AB SCIEX) and data analysis were performed as previously described (Kim et al., 2017; Mullen et al., 2014). Relative metabolite abundances were determined by normalizing peak areas to total ion current. Unpaired Student's t tests were used to determine statistically significant differences in metabolite concentrations. Pathway enrichment analysis on significant hits was performed using MetaboAnalyst 3.0 (Chong et al., 2018).
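A minimal sketch of the normalization and per-metabolite testing described above (total-ion-current scaling followed by unpaired t tests) might look as follows; the matrix layout is a hypothetical convention, and no multiple-testing correction is shown.

```python
import numpy as np
from scipy import stats

def tic_normalize(peak_areas):
    """Scale each sample (column) to its total ion current.
    peak_areas: array of shape (n_metabolites, n_samples)."""
    areas = np.asarray(peak_areas, dtype=float)
    return areas / areas.sum(axis=0, keepdims=True)

def differential_metabolites(ctrl, ko, alpha=0.05):
    """Per-metabolite unpaired two-sided t test between genotypes.
    ctrl, ko: TIC-normalized arrays of shape (n_metabolites, n_reps).
    Returns indices of metabolites with p < alpha and all p values."""
    _, p = stats.ttest_ind(ctrl, ko, axis=1)
    return np.where(p < alpha)[0], p
```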
Transwell Boyden chamber migration assay
Boyden chamber assays were performed with complete DMEM (10% FBS) in the lower compartment of the Transwell plate and 25,000 cells in 500 µl of complete DMEM (10% FBS) in the top compartment (Corning 12-well control inserts, 8-µm pores). Cells were incubated for 24 h before nonmigrating cells were removed, and inserts were fixed in 4% PFA (15 min), washed three times with PBS, and stained with 0.1% crystal violet (Sigma-Aldrich) for 30 min. Washed (H2O) and dried membranes were then covered with coverslips, and cells were visualized by bright-field microscopy on a Leica DM6000B microscope using a 20×/0.8 dry objective. Images were captured using a Hamamatsu C11440-42U digital camera and LASX acquisition software. Five images in random positions were captured per insert (excluding the regions closest to the insert edge). Cells per 20× objective field were then quantified.
Collagen 3D matrix invasion assay
A single-cell suspension of 5,000 cells in 100 µl DMEM with 20% FBS was mixed with 100 µl rat tail collagen I (Corning) on ice. 50 µl of the cell/collagen suspension was then plated into the wells of a 96-well plate in triplicate. Collagen was allowed to polymerize for 30 min at RT before plates were incubated at 37°C, 5% CO2, and 5% O2 for 1 h. Collagen plugs were overlaid with 100 µl complete DMEM and incubated for 5 d. Excess medium was removed, and collagen plugs were fixed in 4% PFA for 1 h before plugs were permeabilized in 0.02% Triton X-100 in PBS for 30 min. Plugs were washed with PBS three times before being stained with WGA and DAPI (BioLegend, 1:1,000) in PBS overnight at 4°C. Collagen plugs were washed three times in PBS before being plated onto glass slides on which paraffin wax was used to create a hydration barrier; excess PBS was then added to maintain hydration of the collagen plug during imaging. Collagen plugs were overlaid with a coverslip and sealed with nail polish. Quantitation of invading/noninvading colonies was performed by manually scanning the entire collagen plug by eye using a Leica DMi8 inverted fluorescence microscope with a 40×/0.85 dry objective. To create representative images, collagen plugs were imaged on a DM6000B microscope using a 40×/0.85 dry objective, with a Z-stack of the entire colony captured. Images were captured using a Leica DFC9000 GT digital camera and processed using LASX software and the 3D-deconvolution and maximum-projection algorithms.
RNA-seq expression analysis
For RNA-seq, 10⁶ cells (three sgNTC and three sgp53(1) cell lines) were plated in complete DMEM and incubated for 24 h at 37°C, 5% CO2, and 5% O2. The sgp53(1) sgRNA was used for this RNA-seq analysis to determine the functionality of the protein expressed in embryonic line 2, which migrated at the same size as p53 on SDS-PAGE. Analysis focusing on canonical p53 target genes indicated that this line did not significantly differ from the other two completely null sgp53(1) lines. STAR (Dobin et al., 2013) was used to align the reads to the mouse genome (mm10), and DESeq2 (Love et al., 2014) was used for differential expression analysis. A volcano plot showing highly significant genes (q value ≤0.05, fold change ≥1.5) was generated. The q value refers to the P value adjusted for false discovery rate. Genes with a q value of ≤0.05 were used for all subsequent analysis. All RNA-seq data are available in the GEO database, accession no. GSE136355. Previously published p53 ChIP-seq results from primary MEFs treated with 0.2 µg/ml doxorubicin (Kenzelmann Broz et al., 2013; available in the GEO database, accession no. GSE46240) were used to define p53-bound genes, defined as genes that display p53 binding within 10 kb of the gene. To identify novel direct p53 target genes, we cross-referenced our RNA-seq dataset with the TargetGeneReg database, derived from a meta-analysis of human and mouse p53 expression profiling datasets (Fischer, 2019). Heatmaps were generated using Heatmapper (Babicki et al., 2016). To determine the cell processes in which the 226 genes up- or down-regulated by p53 >1.5-fold had previously been implicated, annotations on GeneCards, the human gene database (Stelzer et al., 2016), were used to bin genes into functional categories. Categorization was then refined by literature analysis on PubMed. Pathway enrichment analysis was performed using Enrichr (Chen et al., 2013; Kuleshov et al., 2016) with the KEGG 2019 Human, KEGG 2019 Mouse, and GO Cell Component databases. Meta-analysis of RhoGTPase expression in human and mouse RNA-seq and ChIP-seq datasets was derived from the following studies: Allen et al., 2014; Kenzelmann Broz et al., 2013; Lee et al., 2010; Li et al., 2012a; McDade et al., 2014; Menendez et al., 2013; Nikulenkov et al., 2012; Tanikawa et al., 2017; Tonelli et al., 2015; Wang et al., 2014.
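The gene-filtering and ChIP cross-referencing steps can be summarized in a short fragment. The sketch below assumes hypothetical table layouts (a DESeq2 results table and a peak list with one coordinate per peak) and uses the gene TSS as a simple proxy for "within 10 kb of the gene"; the published analysis may have used full gene bodies.

```python
import numpy as np
import pandas as pd

def significant_genes(deseq2_results, q_max=0.05, fc_min=1.5):
    """Filter a DESeq2 results table (hypothetical columns: 'gene',
    'log2FoldChange', 'padj') with the thresholds stated above."""
    df = deseq2_results.dropna(subset=["log2FoldChange", "padj"])
    keep = (df["padj"] <= q_max) & \
           (df["log2FoldChange"].abs() >= np.log2(fc_min))
    return df.loc[keep]

def p53_bound_genes(gene_tss, peaks, window=10_000):
    """Flag genes with a p53 ChIP peak within `window` bp of the gene,
    using the TSS as a proxy for gene position.
    gene_tss: DataFrame with 'gene', 'chrom', 'tss';
    peaks: DataFrame with 'chrom', 'pos' (peak summits)."""
    bound = set()
    for _, g in gene_tss.iterrows():
        near = peaks.loc[peaks["chrom"] == g["chrom"], "pos"]
        if (near - g["tss"]).abs().le(window).any():
            bound.add(g["gene"])
    return bound
```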
Actin structure analysis
For actin structure analysis, cells were grown on coverslips for 24 h, fixed, and permeabilized as described above. Cells were stained with Alexa Fluor 488 Phalloidin (a kind gift from Matt Footer, Stanford University, Stanford, CA; 1:1,000) for 1 h at 37°C in a humidified chamber. Cells were visualized using a Leica DMi8 inverted microscope with a 63×/1.4 oil-immersion objective. Z-stack images were captured and processed using a Leica DFC9000 GT digital camera, LASX acquisition software, and 3D-deconvolution and maximum-projection algorithms. Total cell phalloidin fluorescence was quantified using ImageJ. Briefly, using the Alexa Fluor 488 channel, region-of-interest gates were drawn around individual cells, and total cell fluorescence and cell area were measured. To account for differences in cell size between sgNTC- and sgp53-targeted cells, phalloidin fluorescence values were normalized to total cell area. Stress fiber number was analyzed manually. Total cell phalloidin fluorescence and stress fiber analyses represent the combined analysis of >30 cells per cell line.
Statistical analysis
Unless otherwise stated, all statistical analyses were performed using GraphPad Prism 8.
Online supplemental material
Fig. S1 shows sgp53 CRISPR targeting sites and validation studies on the nine cell lines used in this study. Fig. S2 shows comparative analysis of cell behavior under physiological (5%) or atmospheric (21%) oxygen conditions. Fig. S3 shows phenotypic analysis of the consequences of p53 loss in human HCT116 colon carcinoma cell lines. Fig. S4 shows meta-analysis of RhoGTPase expression in mouse and human RNA-seq and ChIP-seq datasets. Video 1 shows normal and abnormal mitotic events in sgNTC- and sgp53-targeted E1A;Hras G12V MEFs in 5% O2. Video 2 shows 2D migration of sgNTC- and sgp53-targeted E1A;Hras G12V MEFs in 5% O2. Table S1 shows metabolomics analysis of sgNTC- and sgp53(2)-targeted E1A;Hras G12V MEFs generated in 5% O2. Table S2 shows the differentially expressed genes between sgNTC- and sgp53(1)-targeted E1A;Hras G12V MEFs grown in 5% O2. Two tables are provided online in Excel files. Table S1 shows the metabolomics dataset: sgNTC- and sgp53(2)-targeted E1A;Hras G12V MEFs grown in 5% O2 for 24 h. Table S2 shows the RNA-seq dataset from E1A;Hras G12V MEFs in 5% O2, annotated to show genes that had p53 ChIP peaks in the Kenzelmann Broz et al. (2013) ChIP-seq dataset.
Figure S4. Meta-analysis of p53 binding and p53-dependent RhoGTPase expression in human and mouse RNA-seq and ChIP-seq datasets. Meta-analysis of RhoD, RhoV, and RhoE in published in vitro and in vivo mouse and human RNA-seq and ChIP-seq datasets examining p53 binding and p53-regulated expression. UP/DOWN indicates gene expression was enriched/repressed relative to controls; BOUND indicates p53-binding peaks. Gray boxes refer to studies in which the gene was not found.
"year": 2020,
"sha1": "e6808d43bc122924161907807a1b17ee7a5d206d",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/219/11/e201908212/1049434/jcb_201908212.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2a727c352570b7f1ab59d9b225a8df12b8760af",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
SYNTHESIS AND SPECTROSCOPIC PROPERTIES OF 9-SUBSTITUTED BENZ[g]INDOLES
Photolysis of 1,1'-(1,8-naphthylene)-di-1H-1,2,3-triazoles in methanol has given new benz[g]indoles with a triazole ring at the 9-position. Similar photolysis of 1-(8-dimethylamino-1-naphthyl)-1H-1,2,3-triazoles also gives new benz[g]indoles with a dimethylamino group at the 9-position. The spectral properties of these compounds were studied in comparison with those of the corresponding benz[g]indoles obtained from the similar photolysis of 1-(1-naphthyl)-1H-1,2,3-triazoles. Since the substituent at the 9-position and the pyrrole moiety exist in close proximity at the peri-positions of the naphthalene ring, unique properties such as strong intramolecular hydrogen bonding and restricted rotation about the C(sp2)-N(sp3) single bond were observed in the 9-substituted benz[g]indoles.
1,2,3-triazoles (2). According to the X-ray diffraction study of 1, the naphthalene ring is distorted due to the repulsion between the two triazole rings. The photoreaction of 1 has also been studied and proved to give compounds of a new heteroaromatic system, indolo[6,7-g]indoles (3). By comparison with the spectra of the corresponding benz[g]indoles (4), which are generated by the photolysis of 2, an interesting intramolecular hydrogen bonding was observed in 3. In this note, we report on the preparation of 9-substituted benz[g]indoles (5 and 6) by the similar photolyses of 1,8-disubstituted naphthalenes bearing the triazole ring, and on the results of the spectroscopic analyses of these compounds.
RESULTS AND DISCUSSION
Photoreaction of tetraethyl 1,1'-(1,8-naphthylene)-di-1H-1,2,3-triazole-4,5-dicarboxylate (1a) in methanol was followed by HPLC using an ODS column and methanol/water as an eluent. As a result, 15 min of irradiation of 1a afforded the indolo[6,7-g]indole derivative (3a), which was produced by the elimination of nitrogen molecules from the two triazole rings, as reported before. Irradiation of 1a for 2 min, however, gave an intermediate product, diethyl 9-(4,5-diethoxycarbonyl-1H-1,2,3-triazol-1-yl)benz[g]indole-2,3-dicarboxylate (5a), together with 3a and the starting material. Similar photoreactions took place with the isopropyl ester analogue (1b), and 5b was obtained after 2 min of irradiation, as well as the indolo[6,7-g]indole derivative (3b). In the case of 2, similar reactions were observed on photolysis, and 1H-benz[g]indoles (4) were formed almost quantitatively after 15 min of irradiation. Several spectroscopic data showed that diethyl 9-dimethylaminobenz[g]indole-2,3-dicarboxylate (6a) was formed by the photoreaction of 7a. The amount of 6a became maximum after 60 min of irradiation; longer irradiation caused further photolysis, and the amount of 6a gradually decreased. The yield of 6a, 12%, is notably low compared with those of 3a and 4a, 70% and 95%, respectively. The rate of the photolysis of 1a was slower than that of diethyl 1-(1-naphthyl)-1H-1,2,3-triazole-4,5-dicarboxylate (2a), and a much slower rate was observed in the case of diethyl 1-(8-dimethylamino-1-naphthyl)-1H-1,2,3-triazole-4,5-dicarboxylate (7a). It took 2 min, 10 min, and 90 min for 2a, 1a, and 7a, respectively, to disappear completely in the photolysis. In the photolysis of the isopropoxycarbonyl derivative (7b) in methanol, 6b was formed at a similarly slow rate and in low yield (16%). The lower reactivity of 7 is probably caused by the proximity of the adjacent dimethylamino group, which inhibits the cyclization. Spectroscopic data for 5a, 5b, 6a, and 6b are listed in Table 1 together with those for the corresponding 4a and 4b. In the ¹H NMR spectrum of 5a, the N-H signal was observed ca.
2.3 ppm up-field compared with that of 4a. The reason for this extremely up-field shift of the NH signal in 5a is an anisotropic effect of the triazole ring, as previously described. Probably, the triazole ring in 5a is perpendicular to the benz[g]indole ring because of steric repulsion between the triazole ring and the pyrrole moiety. The triazole ring in 5a then faces toward the NH hydrogen at short distance, which produces the large up-field shift described above. The methyl signal at 0.83 ppm in 5a was assigned to the ethoxycarbonyl group at the 5-position of the triazole ring. This signal appeared at very high field compared with the other three methyl signals of 5a; the anisotropic effect of the benz[g]indole ring influences the shift, as the hydrogens are located above the plane of the benz[g]indole ring. Contrary to the case of 5a, the NH signal of 6a at 11.81 ppm appeared at significantly lower field compared with that of 4a (10.52 ppm). The reason for this lower-field shift is considered to be the intramolecular hydrogen bonding between the lone pair of the nitrogen of the dimethylamino group and the NH hydrogen of the pyrrole moiety in 6a. In the IR spectra, the N-H stretching frequencies of 4a and 5a were nearly equal, 3348 cm⁻¹ and 3346 cm⁻¹, respectively, while in the case of 6a the frequency was remarkably high, 3381 cm⁻¹. This indicates the existence of strong intramolecular hydrogen bonding in 6a, in accordance with the NMR results. The longest-wavelength bands of 5a and 6a showed a slight red shift compared with that of 4a in the UV spectra.
Pronounced differences, however, could not be seen in the UV spectra among them. A similar spectroscopic trend to that described above was also obtained for the compounds with the isopropoxycarbonyl group, such as 5b. These differences in the activation parameters between 5b and 2b are caused by the neighbouring rigid pyrrole ring, which restricts the rotation of the triazole ring in 5b.
In order to examine similar dynamic behaviour of 6a and 6b, their ¹H NMR spectra were measured at a temperature as low as 173 K. However, no significant changes could be seen in the spectra. The signal of the dimethylamino group showed one sharp singlet peak and was not split even at this low temperature. The following are supposed to be the reasons for these results: (1) the rotational barrier is very low for the C-N bond between the dimethylamino group and the benz[g]indole ring.
(2) Though the rotation is restricted, the difference between the chemical shifts of the two N-methyl groups is very small. Usually, the rotational barrier about the C-N bond between a dimethylamino group and an aromatic ring is low, but if bulky substituents exist in close proximity, the barrier becomes relatively high. For example, the two N-methyl hydrogens are already non-equivalent at room temperature in 7a, and the activation energy (ΔG‡) for the rotation of the dimethylamino group is 17.5 kcal/mol at 371 K, while in N,N-dimethylaniline the activation energy is 5.1 kcal/mol at 133 K. In 6b, the neighbouring pyrrole ring, as in the case of 5b, may hinder the rotation of the dimethylamino group. On the other hand, intramolecular hydrogen bonding is expected in 6 between the lone pair of the dimethylamino group and the NH hydrogen of the pyrrole ring, as described before in the IR discussion.
In this case, the conformation of the dimethylamino group in 6 is as shown in Figure 2, which is different from that in 7a obtained by the X-ray diffraction study. The magnetic environments of the two methyl groups are nearly the same in this conformation of 6. Even if the rotation of the dimethylamino group is restricted, the differences in the chemical shifts may be too small to be detected in the NMR spectra.
As described above, the novel 9-substituted benz[g]indoles (5, 6) were synthesized by the photolysis of the naphthyltriazoles, and their spectroscopic properties were investigated. As a result, interesting phenomena, especially the existence of intramolecular hydrogen bonding and the restricted rotation about the 9-substituents, were found because of the functional groups in close proximity at the peri-positions of the naphthalene ring. A solution of 1 (0.22 mmol) in 400 mL of methanol under a nitrogen atmosphere was irradiated with a Ushio 450 W high-pressure mercury lamp through a quartz well at 30-35 °C for 2 min. The solvent was then evaporated, and the reaction residue was chromatographed over silica gel (ether and hexane as an eluent) to give 5 together with the starting material (1) and the indoloindole derivative (3).
Dynamic NMR Analyses
Calculation for the complete line-shape analysis of the ¹H NMR spectra was carried out with a FACOM M-380 computer using the computer program EXNMRO, which is based on well-established theoretical models. Theoretical spectra were calculated to get the best fit with the observed spectra by varying the exchange rate constants. The activation parameters were obtained using the Eyring equation.
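For reference, the Eyring relation used to convert the fitted exchange rate constants k into activation parameters is (with the transmission coefficient taken as unity):

```latex
k \;=\; \frac{k_{\mathrm{B}}\,T}{h}\,
        \exp\!\left(-\frac{\Delta G^{\ddagger}}{R\,T}\right),
\qquad
\Delta G^{\ddagger} \;=\; R\,T\,\ln\!\frac{k_{\mathrm{B}}\,T}{h\,k},
\qquad
\Delta G^{\ddagger} \;=\; \Delta H^{\ddagger} - T\,\Delta S^{\ddagger}
```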
Figure 1. Temperature-dependent ¹H NMR spectra of the methyl region in 5b.
Figure 2. Conformation of the dimethylamino group in 6 and 7.
"year": 1999,
"sha1": "5cd0735af49e8d3487c02e0ead761f1abded317c",
"oa_license": null,
"oa_url": "https://doi.org/10.3987/com-98-8467",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ad9d6bc70256c7c1cb87a17785476fc3fdcbf6f5",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Cost-Effectiveness of Thrombolysis within 4.5 Hours of Acute Ischemic Stroke in China
Background Previous economic studies conducted in developed countries showed that intravenous tissue-type plasminogen activator (tPA) is cost-effective for acute ischemic stroke. The present study aimed to determine the cost-effectiveness of tPA treatment in China, the largest developing country. Methods A combination of a decision tree and a Markov model was developed to determine the cost-effectiveness of tPA treatment versus non-tPA treatment within 4.5 hours after stroke onset. Outcome and cost data were derived from the database of the Thrombolysis Implementation and Monitor of acute ischemic Stroke in China (TIMS-China) study. Efficacy data were derived from a pooled analysis of the ECASS, ATLANTIS, NINDS, and EPITHET trials. Costs and quality-adjusted life-years (QALYs) were compared over both the short term (2 years) and the long term (30 years). One-way and probabilistic sensitivity analyses were performed to test the robustness of the results. Results Compared with non-tPA treatment, tPA treatment within 4.5 hours led to a short-term gain of 0.101 QALYs at an additional cost of CNY 9,520 (US$ 1,460), yielding an incremental cost-effectiveness ratio (ICER) of CNY 94,300 (US$ 14,500) per QALY gained over 2 years, and to a long-term gain of 0.422 QALYs at an additional cost of CNY 6,530 (US$ 1,000), yielding an ICER of CNY 15,500 (US$ 2,380) per QALY gained over 30 years. Probabilistic sensitivity analysis showed that tPA treatment is cost-effective in 98.7% of the simulations at a willingness-to-pay threshold of CNY 105,000 (US$ 16,200) per QALY. Conclusions Intravenous tPA treatment within 4.5 hours is highly cost-effective for acute ischemic stroke in China.
Introduction
Stroke accounts for 301 million disability-adjusted life-years, making it the leading cause of death and imposing significant disease and economic burden in China [1]. Randomized controlled trials and large observational studies have demonstrated the effectiveness, with acceptable safety, of intravenous recombinant tissue-type plasminogen activator (tPA) for patients within 4.5 hours after onset of acute ischemic stroke [2-6].
Economic studies conducted in North America, Europe and Australia showed that tPA given within 4.5 hours is cost-effective or even cost-saving in the long term [7-15]. However, all of these studies were conducted in developed countries; their findings may not be relevant to developing countries due to differences in demographics, healthcare systems and payment coverage. Compared with developed countries, low- and middle-income countries suffer a higher mortality burden of stroke [16] and have a lower percentage of patients with ischemic stroke treated with tPA [17]. On the other hand, most developed countries, such as the United States, implement prospective payment systems based on diagnosis-related groups [18,19], while in China, payments are based on each clinical service. In particular, drug costs account for a large part of total costs for Chinese stroke patients, who pay CNY 8,197 (US$ 1,261) for 70 mg of tPA alone [20]. Is the expensive tPA still cost-effective, and should it be widely adopted in developing countries like China? Economic analysis of tPA in developing countries is urgent.
Little is known about the cost-effectiveness of tPA treatment in China. We sought to determine the cost-effectiveness of tPA within 4.5 hours after onset of acute ischemic stroke, using data from the Thrombolysis Implementation and Monitor of acute ischemic Stroke in China (TIMS-China), a nationwide prospective registry of thrombolytic therapy with intravenous tPA in patients with acute ischemic stroke between May 2007 and April 2012. TIMS-China recruited 1,440 consecutive tPA-treated patients from 67 centers [21].
Model Overview
We adhered to the recommendations of the Panel on Cost-Effectiveness in Health and Medicine [22], including (1) components belonging in the numerator and denominator of a cost-effectiveness (C/E) ratio; (2) measuring terms in the numerator of a C/E ratio; (3) valuing health consequences in the denominator of a C/E ratio; (4) estimating effectiveness of interventions; (5) incorporating time preference and discounting; and (6) handling uncertainty. A combination of a decision tree and a Markov model (Figure 1) was developed to simulate the long-term (30 years) cost-effectiveness of tPA treatment versus absence of tPA treatment within 4.5 hours after the onset of stroke. Our study was based on data from 1,128 patients with acute ischemic stroke who received intravenous tPA within 4.5 hours in TIMS-CHINA [21]. The base case of the model was a cohort of 100,000 patients (39% female) with a mean age of 63 years, arriving at hospital within 4.5 hours after stroke onset, whose clinical and demographic characteristics were the same as those of patients enrolled in the TIMS-CHINA study (Table 1). Total costs and quality-adjusted life-years (QALYs) gained with each alternative were estimated for each health state at 90 days from the index events and then estimated annually for the remaining 30 years. This analysis was conducted from the perspective of healthcare payers, including the government, medical insurance and patients.
Input Parameters
The baselines of patients and their outcomes in the three time windows (0-1.5, 1.5-3, 3-4.5 hours after the onset of stroke) of the tPA group were obtained from the observed data directly drawn from the TIMS-CHINA study database (Table 2). Odds ratios of the favorable functional outcome (modified Rankin Scale (mRS) 0-1), death and symptomatic intracerebral hemorrhage (sICH) in each time window were derived from a pooled analysis of the European Cooperative Acute Stroke Study Trial (ECASS), the Alteplase Thrombolysis for Acute Noninterventional Therapy in Ischemic Stroke Trial (ATLANTIS), the National Institute of Neurological Disorders and Stroke (NINDS), and the Echoplanar Imaging Thrombolytic Evaluation Trial (EPITHET) [23]. The odds ratio of sICH within 0-1.5 hours was assumed to be the same as that within 1.5-3 hours. Favorable functional outcome, death and sICH rates in the non-tPA treatment group were obtained from the observed data of the tPA group and the odds ratios from the previous study [23]. The proportional distribution of patients remaining in categories of mRS 2-3 and mRS 4-5 at 90 days in the control group was assumed to be the same as that in the tPA group.
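Backing out the control-arm rates from the treated-arm rates and the published odds ratios is a one-line odds manipulation. The following Python sketch illustrates it; the numeric values in the usage comment are placeholders, not the study's inputs.

```python
def control_rate_from_or(p_treated, odds_ratio):
    """Derive the non-tPA (control) event rate from the observed
    tPA-group rate and a pooled treated-vs-control odds ratio:
        OR = odds(treated) / odds(control)
        => odds(control) = odds(treated) / OR.
    """
    odds_treated = p_treated / (1.0 - p_treated)
    odds_control = odds_treated / odds_ratio
    return odds_control / (1.0 + odds_control)

# Placeholder numbers only, e.g. favorable outcome in the 1.5-3 h window:
# p_ctrl = control_rate_from_or(p_treated=0.40, odds_ratio=1.64)
```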
Recurrent rates of stroke and mortality rates of recurrent strokes in years after the first 90 days were estimated from the China National Stroke Registry (CNSR, a nationwide registry for patients with acute cerebrovascular events in China between September 2007 and August 2008, recruiting 21,902 consecutive patients from 132 hospitals in China) [24]. We further assumed an increase in stroke recurrence rates by 1.019-fold per life year, according to the relative risk estimated from patients of ischemic stroke in CNSR.
Age-specific non-stroke mortality rates were derived from the most recently published census of China and adjusted by the causes of death of 2010 reported in the China Health Statistics Yearbook 2012 [25,26]. Disability status was assumed to affect survival, and therefore the final age-specific non-stroke death rates in the model were adjusted according to the mRS-specific death hazard ratios [7,27]. Patients remaining alive after stroke recurrence were assumed to be reallocated equally among categories of equal and greater disability [12]. For example, patients with minor or moderate disability who had a recurrent stroke and survived were allocated equally between the minor or moderate disability category and the severe disability category.
Costs
The total costs, including both out-of-pocket costs and reimbursements, were converted to 2011 Chinese Yuan Renminbi (CNY) using the medical care component of the consumer price index [26]. The average cost of a single hospitalization after stroke was obtained from CNSR and the China Health Statistics Yearbook 2012 [26]. Annual post-hospitalization costs (such as inpatient and outpatient rehabilitation, ambulatory care and secondary prevention costs) were also obtained from CNSR. We estimated the additional costs of tPA treatment and of occurrence of sICH after thrombolysis using the data from CNSR and TIMS-CHINA. Indirect economic costs such as lost work productivity were not included in this study. All costs and utilities were discounted by 3% per year [12].
Health States
Patients could undergo transitions between four disability states according to functional outcome based on mRS: no disability (mRS 0-1), minor or moderate disability (mRS 2-3), severe disability (mRS 4-5) and dead (mRS 6) [28]. At the end of each cycle, patients could remain in their current health state, transit to a lower-level health state due to recurrent stroke, or die due to a recurrent stroke or a non-stroke cause (see Figure 1).
Outcome Assessment
Health outcomes were measured in QALYs by multiplying years of life by utility scores derived from the literature. Utility estimates were based on published utility values stratified by mRS category [12,29,30]. Economic outcomes were measured as cumulative direct medical costs associated with stroke. The incremental cost-effectiveness ratio (ICER) was calculated by dividing the incremental costs by the incremental QALYs. We modeled outcomes and costs over the short term (2 years) and the long term (30 years). Using the willingness-to-pay threshold recommended by the Commission on Macroeconomics and Health of the World Health Organization [31], the intervention was considered cost-effective if the ICER was less than CNY 105,000 (3x GDP per capita of China in 2011 [26], US$ 16,200) per QALY gained.
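The cohort bookkeeping implied by this structure (annual transitions over four states, with 3%/yr discounting of state-weighted utilities and costs) can be sketched as follows. This is an illustrative reimplementation, not the study's actual model code; the transition matrices, utilities, and costs are left as user inputs, and the separate 90-day decision-tree stage is omitted.

```python
import numpy as np

# Illustrative four-state cohort engine: mRS 0-1, mRS 2-3, mRS 4-5, dead.

def run_cohort(start, transition, utilities, costs, years=30, disc=0.03):
    """start: state distribution at 90 days (sums to 1).
    transition: callable t -> 4x4 row-stochastic matrix for cycle t
                (allows age-dependent mortality and recurrence rates).
    utilities, costs: per-state annual values (dead state = 0 utility).
    Returns total discounted QALYs and costs per patient."""
    dist = np.asarray(start, dtype=float)
    qalys = total_cost = 0.0
    for t in range(years):
        dist = dist @ transition(t)            # one annual transition
        df = (1.0 + disc) ** -(t + 1)          # discount factor
        qalys += df * float(dist @ utilities)
        total_cost += df * float(dist @ costs)
    return qalys, total_cost

def icer(cost_tpa, qaly_tpa, cost_ctrl, qaly_ctrl):
    """Incremental cost-effectiveness ratio (cost per QALY gained)."""
    return (cost_tpa - cost_ctrl) / (qaly_tpa - qaly_ctrl)
```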
Sensitivity Analysis
The robustness of the model was tested by means of one-way sensitivity analyses for all variables across plausible ranges, which were obtained from the literature (Table 2, Table 3). To evaluate the impact of parameter and stochastic uncertainty with all variables varied simultaneously, a probabilistic sensitivity analysis was performed using Monte Carlo simulation in Ersatz v1.3 (a bootstrap add-in for Microsoft Excel for Windows; EpiGear International Pty Ltd, Brisbane, Australia). We assumed the probabilities and utilities followed a beta distribution, and costs followed a lognormal distribution. The simulation was run 10,000 times to capture stability of the results. A scatter plot and a cost-effectiveness acceptability curve were developed to represent uncertainty. Sensitivity analyses were only applied to the long-term (30 years) results.
Results
Table 4 shows the outcomes, costs and ICER calculated in the short term (1, 2 years) and in the long term (30 years). In the base-case scenario, for a 63-year-old patient with acute ischemic stroke, tPA treatment would be cost-ineffective in the first year but become cost-effective from the second year onwards, using the threshold of CNY 105,000 (3x GDP per capita of China in 2011, US$ 16,200) as the willingness-to-pay per QALY. After 2 years, tPA treatment gained 0.101 QALYs at an additional cost of CNY 9,520 (US$ 1,460), yielding an ICER of CNY 94,300 (US$ 14,500) per QALY gained. In the long term (30 years), tPA treatment gained 0.422 QALYs at an additional cost of CNY 6,530 (US$ 1,000), yielding an ICER of CNY 15,500 (US$ 2,380) per QALY gained.
Sensitivity Analysis
One-way sensitivity analysis showed that the long-term results are robust. Tornado diagrams illustrated the effect of varying input parameters on the long-term ICER (Figure 2). Overall, the ICER was most sensitive to the odds ratio of favorable functional outcome at day 90 within 1.5-3 hours and to annual post-hospitalization costs (mRS 2-5). If the odds ratio of favorable functional outcome at day 90 within 1.5-3 hours increased from 1.12 to 2.40, the ICER of tPA treatment would drop from CNY 40,667/QALY to CNY 9,653/QALY. If the annual post-hospitalization cost of disabling stroke (mRS 2-5) increased from CNY 2,592 to CNY 12,959, the ICER would decrease from CNY 37,014/QALY to CNY 8,057/QALY. In contrast, the ICER was relatively insensitive to the odds ratio of sICH at day 90 and to one-time hospitalization costs.
Results of the probabilistic sensitivity analysis in the long term are shown in Figure 3. Among the 10,000 simulation runs, there was a 14.4% chance that tPA treatment turned out to be less costly and more effective than non-tPA treatment. tPA treatment was cost-effective in 98.7% of the simulations at a willingness-to-pay threshold of CNY 105,000 (3x GDP per capita of China in 2011, US$ 16,200) per QALY, and still remained cost-effective in 79.2% of the simulations at a threshold of CNY 35,100 (1x GDP per capita of China in 2011, US$ 5,400) per QALY. The cost-effectiveness acceptability curve of tPA treatment is shown in Figure 4.
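Given the Monte Carlo draws of incremental costs and QALYs, the acceptability curve in Figure 4 is simply the fraction of draws with positive net monetary benefit at each willingness-to-pay value. A minimal sketch (the draw arrays are hypothetical inputs):

```python
import numpy as np

def ceac(delta_cost, delta_qaly, wtp_values):
    """Probability that treatment is cost-effective at each
    willingness-to-pay (WTP) value: the fraction of Monte Carlo draws
    with positive net monetary benefit, NMB = WTP * dQALY - dCost."""
    dc = np.asarray(delta_cost, dtype=float)
    dq = np.asarray(delta_qaly, dtype=float)
    return np.array([np.mean(w * dq - dc > 0.0) for w in wtp_values])

# e.g., at 1x and 3x 2011 GDP per capita (CNY 35,100 and CNY 105,000):
# probs = ceac(dc_draws, dq_draws, [35_100, 105_000])
```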
Discussion
Our study indicated that tPA treatment for acute ischemic stroke within 4.5 hours was cost-effective not only in the short term (2 years) but also highly cost-effective in the long term (30 years) in China. A patient with acute ischemic stroke treated with tPA within 4.5 hours gains an ICER of CNY 15,500 per QALY, which is below 1x GDP per capita of China in 2011 (CNY 35,100). By contrast, cost-effectiveness analyses conducted in some developed countries demonstrated that tPA treatment was cost-saving in the long term [7-10]. We also found that the ICER is less sensitive to the additional costs of tPA treatment but more sensitive to the odds ratio of favorable functional outcome at day 90 within 1.5-3 hours and to annual post-hospitalization costs (Figure 2). The reason might be that tPA treatment is highly effective in improving functional outcome in acute ischemic stroke (OR = 1.64 for 1.5-3 hours [23]) and also dramatically influences long-term utility and costs, which may offset the costs of tPA treatment.
The differences between China and the developed countries may influence the cost-effectiveness of tPA treatment. First, unlike the diagnosis-related payment system in most developed countries [18,19], China implements a payment system according to the clinical services. The stroke-care-related payments in China included medical care covered by government, medical insurance for urban workers, rural cooperative medical care, commercial medical insurance, plus patients' own expense, in the proportions of 7.2%, 53.6%, 16.9%, 0.7% and 21.6%, respectively, according to CNSR (unpublished data). The out-of-pocket costs were only 10% to 20% of the total costs if the payment was covered by government. For urban workers with medical insurance, the average 3-month hospital and medication costs due to stroke were CNY 16,525 (US$ 2,361), while the out-of-pocket costs were CNY 14,478 (US$ 2,068) in 2006 [32]. However, patients without coverage from the government or medical insurance had to pay more out-of-pocket costs, with little support from the state. No matter which payment resources stroke patients had, the total costs depended on the services and medicines prescribed, with huge variation across hospitals and geographical regions in China. In brief, Chinese patients must pay all the costs first and then seek the corresponding reimbursement after the therapy is finished. For patients without tPA treatment, the one-time hospitalization cost was CNY 9,526 if they had mRS 0-1, while if they had tPA treatment, the cost due to tPA alone was CNY 8,197 (US$ 1,261) in China [20]. Therefore, the cost of tPA in China was relatively higher than that in developed countries. Second, several surveys showed a lack of organized rehabilitation and decreased adherence to secondary prevention of ischemic stroke in China after discharge [33-36]. Therefore, the long-term post-hospitalization care costs might be similar between the tPA treatment group and the control group in China. Given the facts above, it is clinically very important to estimate the cost-effectiveness in China, and these facts might partly explain the difference in ICER between China and developed countries.
This study has several limitations. First, the costs used in the model may not be accurate for each component of tPA-related treatment (the tPA drug, extra MRI scanning, extra consultation and nursing services, changes in length of stay, and so on) [10].
However, we think the overall costs were recorded accurately in the total hospitalization cost. We estimated the additional costs of tPA treatment on the basis of the difference between the total hospitalization costs of sICH-free patients with tPA treatment in the TIMS-CHINA study and the hospitalization costs of patients without tPA treatment in CNSR. We calculated the additional costs of sICH applying the same rationale. Second, in China, the family plays an important role in both acute and post-acute care. This informal caregiving can be quite expensive; however, it was not considered in our model because of the lack of corresponding data. Third, our model focused on the influence of tPA on acute ischemic stroke, and functional status and costs resulting from other causes, such as recurrence of intracranial hemorrhage, myocardial infarction, and congestive heart failure, were not included in this model. In addition, improved functional status after rehabilitation or other improvements was not considered in the model, owing to the lack of organized rehabilitation in China and the lack of authentic data available on the efficacy of rehabilitation or other improvements. Fourth, the efficacy of tPA treatment was based on the pooled analyses of studies conducted in developed countries (ECASS, ATLANTIS, NINDS, and EPITHET). Also, the utility values were taken from the Western literature. We applied the efficacy of tPA treatment and the utilities to the model of Chinese patients, but did not know whether these data fit Chinese patients. However, we ran sensitivity analyses and found that the results were robust for these parameters across plausible ranges. Although these limitations may have led to under- or over-estimation of the true cost-effectiveness of tPA treatment in China, they are unlikely to make a significant differential impact on the overall results of our study because our findings were robust, as shown in the sensitivity analyses.
Conclusions
Intravenous tPA within 4.5 hours for patients with acute ischemic stroke is highly cost-effective in China. Our study provides constructive information on medical resource allocation for stroke treatment in China in the future, and may also serve as a reference for other developing countries.
"year": 2014,
"sha1": "36887135f7468f8c1727f51599a2288caffc5017",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0110525&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "36887135f7468f8c1727f51599a2288caffc5017",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
European CO budget and links with synoptic circulation based on GEOS-CHEM model simulations
The European carbon monoxide (CO) budget is studied in relation to the synoptic circulation throughout 2001, using the nested-grid configuration of the GEOS-CHEM global model and CO measurements from 31 rural background stations. To meet the aims of this study, a seasonal circulation type (CT) classification is developed for the Northern Hemisphere based on mean sea-level pressure (SLP) fields, as well as two upper atmospheric levels, over a 60-yr period. The highest contribution to the European surface CO concentrations is attributed to regional anthropogenic sources (up to ~80%), which become more important under the prevalence of anticyclonic circulation conditions. The corresponding contribution of the long-range transport (LRT) from North America (up to 18%) and Asia (up to 20%) is found highest (lowest) in winter and spring (summer and autumn). The transport of the CO towards Europe in winter is more intense under cyclonic circulation, while both cyclonic and anticyclonic patterns favour LRT during other seasons. Occasionally (mainly in winter and spring), LRT contribution is higher than the regional one (up to 45%). In the free troposphere, the LRT contribution increases, with the largest impact originating from Asia. This flow is favoured by the intense easterly circulation in summer, contributing up to 30% in the Eastern Mediterranean during this season. On the other hand, the regional contribution in the upper levels decreases to 22%. The contribution of CO chemical production is significant for the European CO budget at all levels and seasons, exceeding 50% in the free troposphere during summer.
Introduction
The estimation of carbon monoxide (CO) concentrations is a complex problem, depending strongly on fossil fuel, biofuel and biomass burning emissions, the oxidation of methane (CH4) and non-methane volatile organic compounds (NMVOCs), and the hydroxyl radical (OH) (Allen et al., 1996; Kanakidou et al., 1999; Holloway et al., 2000; Duncan and Logan, 2008). The atmospheric circulation also plays a dominant role in the pollutant levels, since several CO pollution events and large-scale horizontal gradients have been associated with the prevailing atmospheric conditions (Chung et al., 1999; Wang et al., 2004a; Liu et al., 2006; Drori et al., 2012).
Due to the CO lifetime in the troposphere (30-90 d), the pollutant can be transported on continental scales through several pathways (Holloway et al., 2000; Li et al., 2002; Liu et al., 2003; Duncan and Bey, 2004; Huntrieser and Schlager, 2004; Liang et al., 2004; Weiss-Penzias et al., 2004; Auvray and Bey, 2005; Drori et al., 2012). The transatlantic long-range transport (LRT) is particularly favoured due to the relatively short distance between North America and Europe. The transport of air from North America towards Europe takes place in the troposphere throughout the year, following the general circulation over the North Atlantic (Wild and Akimoto, 2001; Stohl et al., 2003a, 2003b). In the lower troposphere (LT), these transport paths are determined by the strength and the position of the Azores High, in combination with the Icelandic Low (Li et al., 2002; Auvray and Bey, 2005; Christoudias et al., 2012). In the free troposphere, the transatlantic LRT takes place when air masses from the North American surface are lifted to the upper levels through mid-latitude cyclones and convection; then, the pollutants are transported towards Europe, governed primarily by the westerly circulation and the jet streams (Stohl, 2001; Cooper et al., 2002; Li et al., 2002, 2005; Trickl et al., 2003; Huntrieser and Schlager, 2004). Similarly, the LRT from Asia towards Europe occurs throughout the year when mid-latitude cyclones and deep convection lift the pollutants into the free troposphere (Liu et al., 2003; Auvray and Bey, 2005). Another export pathway from Asia that directly affects Europe is observed from the end of May until the end of August and is related to an upper easterly current extending to the west across South Arabia and North Africa (Barry and Chorley, 2003; Auvray and Bey, 2005). As a result, Asian pollution has frequently been detected not only in the upper troposphere (UT) but also in the LT over the Eastern Mediterranean (Lelieveld et al., 2002; Lawrence et al., 2003; Roelofs et al., 2003; Scheeren et al., 2003; Traub et al., 2003; Drori et al., 2012). On the other hand, transport from the rest of the world towards Europe is not favoured due to the presence of the Inter-tropical Convergence Zone (ITCZ). Some pollution of African origin has been detected mostly in the Mediterranean region, while transport from Australia and South America to Europe has rarely been reported (Stohl et al., 2002; Roelofs et al., 2003; Kallos et al., 2006, 2007). The combustion of fossil fuels and biofuels in Asia, North America and Europe dominates the surface CO distribution in the Northern Hemisphere (NH) (Fig. 1, Table 1). The majority of the anthropogenic sources are located in a mid-latitude belt between 30°N and 65°N, where the population density and the anthropogenic activity are high (Schultz and Bey, 2004). Africa and South America represent the world regions with the highest biomass burning emissions globally.
However, it should be noted that the biomass burning activity is also high in the Mediterranean and Eastern Europe during the summer. Moreover, the CO chemical production by the oxidation of CH4 and NMVOCs is also significant for the pollutant's concentration levels within the troposphere.
The global chemical transport models (CTMs) have proved to be suitable tools to reproduce the observed CO on long scales (Kanakidou and Crutzen, 1999; Holloway et al., 2000; Tanimoto et al., 2009). Regarding the European domain, simulations of the global CTMs MOZART-2 (Pfister et al., 2004) and MATCH-MPIC (Fischer et al., 2006) revealed that the predominant contribution to the surface CO levels over Europe is attributed to the regional emissions.
Fig. 1. Global CO anthropogenic surface emissions (in molec/cm²/s). Black boxes depict the geographical regions considered in the GEOS-CHEM tagged CO analysis; 1: Europe, 2: North America, 3: Asia, 4: Africa, 5: South America, 6: Oceania.
On the contrary, it was found that most of the CO in the European middle troposphere (MT) and UT is transported from Asia and North America. These results are consistent with those of other studies that emphasised the Eastern Mediterranean (Lelieveld et al., 2002; Lawrence et al., 2003; Drori et al., 2012).
In order to track the CO origin, its molecules can be tagged according to the type and the location of its primary emission sources and the chemical production. This tagging technique has been implemented in the global CTM GEOS-CHEM (Bey et al., 2001a, 2001b) and in its nested-grid application as well (Wang et al., 2004a, 2004b). Previous studies have employed the nested-grid configuration of this model in several world regions such as Asia (Wang et al., 2004a, 2004b, 2009b; Chen et al., 2009), North America (Fiore et al., 2005; Li et al., 2005; Park et al., 2006; Wang et al., 2009a; Zhang et al., 2011) and Europe (Protonotariou et al., 2010). The nesting results indicated that the representation of the pollutants improves in comparison to the global model, particularly for certain regions (e.g. high emission intensity, complex terrain) and time periods (e.g. pollution events).
Global simulations of the GEOS-CHEM model have previously been employed to study the transport of O3 and CO towards Europe (Li et al., 2002; Auvray and Bey, 2005; Guerova et al., 2006), but the European CO concentrations have not systematically been studied in relation to the synoptic circulation. In the present study, an analysis of the CO budget within the European troposphere in relation to the atmospheric circulation is carried out for the 1-yr period of 2001, based on the nested-grid application of GEOS-CHEM. To this aim, a recently developed circulation-pattern classification scheme over the NH is presented, and the contributions of direct surface emissions from all continents and of the chemical production are estimated in the LT, the MT and the UT during winter and summer of 2001, based on tagged CO simulations. Furthermore, based on Principal Component Analysis (PCA), the LRT contribution towards Europe is examined in relation to the atmospheric circulation at three station sites, where surface CO measurements are available for the examined period (Air Quality Database of the European Environmental Agency; http://www.eea.europa.eu/data-and-maps/data/airbase-the-european-air-quality-database-1).
The GEOS-CHEM model description
GEOS-CHEM is a three-dimensional global atmospheric CTM (Bey et al., 2001b) developed by the Atmospheric Chemistry Modelling Group of Harvard University (http:// acmg.seas.harvard.edu/). In this study, the nested-grid configuration of the model's version 07-01-02 is applied over Europe (Protonotariou et al., 2010). Assimilated meteorological data from the Goddard Earth Observing System (GEOS) of the NASA Global Modelling and Assimilation Office (http://gmao.gsfc.nasa.gov) are employed in the model, based on a terrain-following sigma coordinate system with 30 vertical levels up to 0.01 hPa. Moreover, natural and anthropogenic emissions with no seasonal variation are included (Bey et al., 2001b), introducing about 14% higher (lower) concentrations than the annual mean in winter (summer) (Duncan and Bey, 2004).
In this work, the tagging technique is applied in the model, considering 16 CO tracers (Table 2). In order to apply the nested-grid configuration, first a 2-yr (2000, 2001) global simulation (4° × 5° grid resolution) is performed. As a 1-yr spin-up is suggested, hourly boundary conditions (BCs) are saved around the nesting domain of Europe (20°W-45°E, 22°N-74°N) during the second run year. The BCs are then implemented around the European domain for the nested-grid simulation (1° × 1° grid resolution).
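Because the tagged tracers partition the full CO field by construction, source contributions reduce to elementwise ratios. The following Python sketch illustrates the bookkeeping on the nested-grid output; the tracer names follow the FFEU/FFNA/FFAS convention used for the tagged tracers in this study, and the input arrays are hypothetical.

```python
import numpy as np

def regional_contributions(tagged, total=None):
    """Percentage contribution of each tagged CO tracer.

    tagged: dict mapping tracer name (e.g. 'FFEU', 'FFNA', 'FFAS',
        'BBEU', 'CH4ox') to a concentration array of identical shape,
        e.g. (lev, lat, lon) daily means from the 1x1 nested run.
    The tagged tracers sum to the full CO field by construction, so
    the total may simply be their sum if not supplied.
    """
    if total is None:
        total = sum(tagged.values())
    total = np.asarray(total, dtype=float)
    return {name: 100.0 * np.asarray(f) / total
            for name, f in tagged.items()}

# Hypothetical usage:
# contrib = regional_contributions({"FFEU": ffeu, "FFNA": ffna, ...})
```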
Classification of circulation types
In the present study, an automated map pattern classification is developed following the methodology by Kostopoulou and Jones (2007). The obtained circulation catalogue describes the main seasonal circulation types (CTs) over the NH at sea level pressure (SLP) and at selected levels (500 hPa, 200 hPa). Distinct synoptic patterns are produced over an extended European area, and the atmospheric circulation is studied on a daily basis at the three atmospheric levels. The 'environment-to-circulation' approach (Yarnal, 1993) is used to study the influence of atmospheric circulation on the CO concentrations over Europe for 1 yr. The classification scheme serves as a tool to provide the prevailing atmospheric circulation for each day of the study year in order to assess the CO levels and the LRT contribution over Europe. Towards this purpose, each representative day of year 2001 is assigned to one of the derived synoptic circulation patterns, and the simulated daily CO concentrations are grouped based on the prevailing CTs. Similarly, the regional and the LRT contributions to the European CO budget are calculated for each day of the year and the results are interpreted based on the prevailing CTs. More specifically, gridded geopotential height reanalysis daily data at the three levels from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR; Kalnay et al., 1996; Kistler et al., 2001) are used as inputs to the synoptic map pattern classifications. The spatial coverage is 2.5° in latitude by 2.5° in longitude, providing a global grid of 144 × 73 points (90°N-90°S, 0°E-357.5°E). As this study emphasises the prevailing NH circulation patterns with a special interest over the European territory, 60-yr daily data from 1947 to 2007 are utilised for a region covering most of the NH (0°N-70°N, 90°W-90°E).
The first step towards the circulation classification is accomplished by employing an eigenvector-based approach. A rotated PCA (Wilks, 1995) is carried out (S-mode, using the correlation matrix), which reduces the original data to a smaller number of principal components (PCs). Each PC includes a positive and a negative phase, both representing atmospheric classification modes. Correspondingly, half of the circulation patterns are associated with a prominent anticyclonic centre (denoted by '+') and the remaining half with a cyclonic centre (denoted by '−'), which govern the atmospheric conditions over the study region. Finally, each day of the original data is assigned to one mode according to the absolute maximum component scores. The methodology adopted to determine the map-pattern classification is described in greater depth in Kostopoulou and Jones (2007).
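A toy version of the day-assignment step can be written as follows; it uses unrotated PCA for brevity, whereas the study applies a varimax rotation to the correlation-matrix solution, and the input matrix layout is a hypothetical convention.

```python
import numpy as np

def classify_days(fields, n_pc):
    """Toy day classification by S-mode PCA.
    fields: (n_days, n_gridpoints) SLP or geopotential heights.
    Each day goes to the PC with the largest |score|; the score's
    sign gives the anticyclonic (+) or cyclonic (-) phase.
    Assumes no grid point is constant in time."""
    z = (fields - fields.mean(axis=0)) / fields.std(axis=0)  # standardize
    u, s, _ = np.linalg.svd(z, full_matrices=False)
    scores = u[:, :n_pc] * s[:n_pc]            # day-by-PC score matrix
    dominant = np.abs(scores).argmax(axis=1)   # dominant PC per day
    phase = np.sign(scores[np.arange(len(dominant)), dominant])
    return dominant + 1, phase                 # 1-based CT index, +/- phase
```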
Observations for the model evaluation
In this study, measurements of CO concentrations during 2001 from 31 rural background stations, located in Austria, France, Germany, Italy, Poland, Switzerland and the Netherlands (Fig. 2a), are employed. Moreover, a PCA is applied to the daily mean modelled CO concentrations from the 31 stations for the study year, in order to classify the stations into groups with common characteristics. As a result, the original data dimension is reduced to a smaller number of PCs (Protonotariou et al., 2010). The PCA yielded three PCs (PC1, PC2, PC3), accounting for 80% of the total variance, as suggested by the literature (Jolliffe, 1993). It is found that each PC component defines a sub-region (Fig. 2b and Appendix S1) with common characteristics in relation to emission intensity, topographical features and geographical position. More specifically, the first sub-region (PC1, capturing 33.7% of the total variance in the original dataset) includes the stations located in the north-western part of the examined area. This station area, characterised by flat terrain and low altitudes, is close to strong anthropogenic sources. As a result, the mean emission rate is significantly high (up to ~50% higher than in the other two sub-regions). In the second sub-region (PC2, which explains 24.8% of the total variance), the stations are located in the southern part of the study region. This station area, located at a long distance from the major anthropogenic sources, is characterised by complex topography and a mean altitude of ~400 m. The third sub-region (PC3, accounting for 22.6% of the total variance) includes stations located in the eastern part of the examined area, with lower background emissions in comparison to the other two groups and at a similar mean altitude to PC2. For each PC region, the LRT contribution is calculated on an annual and a seasonal basis, and the circulation patterns that favour the transport paths are investigated.
Table 2. CO tagged tracers considered in GEOS-CHEM according to emission sources, geographical regions and chemical production
Circulation patterns' analysis
In order to minimise the potential influences of seasonality and to provide a detailed analysis of the atmospheric circulation schemes in each season, the PCA of the circulation patterns is undertaken on a seasonal basis (Table 3). Fourteen CTs represent the atmospheric circulation at SLP over the NH in winter (December, January, February), declared by the acronym W_CTi (Winter Circulation Type, where i = 1-7 denotes the corresponding PC number). Sixteen CTs (i = 1-8) are associated with the main modes of atmospheric circulation in each of the next three seasons, that is, in spring (March, April, May; SP_CTi), summer (June, July, August; SU_CTi) and autumn (September, October, November; A_CTi). Correspondingly, 12 CTs are obtained that represent the atmospheric circulation at 500 hPa in winter (W500_CTi, i = 1-6) and autumn (A500_CTi, i = 1-6), and 16 in spring (SP500_CTi, i = 1-8) and summer (SU500_CTi, i = 1-8). Regarding the 200 hPa field, 12 CTs are recognised as the dominant circulation patterns for winter and summer (W200_CTi and SU200_CTi; i = 1-6), and 10 CTs for spring and autumn (SP200_CTi and A200_CTi; i = 1-5). A schematic representation of the main CTs at the three levels is given in Appendix S2. The reliability of the classification is assessed by comparing the derived CT classes with daily synoptic charts of atmospheric circulation (at SLP and the two upper levels), using statistical analysis and visual comparison with NCAR/NCEP daily mean composites (http://www.esrl.noaa.gov/psd/data/composites/day/).
The CO budget at the surface
The CO surface concentrations over Europe are mainly driven by the regional anthropogenic emissions, which contribute up to 79 and 68% to the European CO levels at mid-latitudes in North Europe during winter and summer, respectively (Table 4 and Appendix S3, Figs. S3-1 and S4-1). Furthermore, the CO accumulation close to the surface is related to the prevalence of frequent anticyclonic circulation patterns over Europe during both seasons (e.g. W_CT1′, SU_CT7′, Fig. 3a). On the contrary, the CO FFEU contribution at the southern latitudes in winter does not exceed 35% (Fig. S4-1a), as the regional emissions in South Europe are lower than those in Northern Europe. Moreover, the prevailing anticyclonic circulation patterns during the cold period (e.g. W_CT1′, W_CT2′, W_CT7′) do not favour transport from the highly polluted North European regions towards the south. However, the European contribution to Southern Europe increases locally during summer, mainly over the sea, exceeding 40% (Fig. S4-1b). These high CO levels are related to the transport of the pollutant from North to South Europe under the prevalence of common summer CTs (e.g. SU_CT7′). The CO concentrations are lower over the land owing to the deep mixing height during this season.
The CO production from the regional biomass burning is evident only during the warm period (not shown), contributing up to 34 and 20% to the pollutant's local concentrations in Eastern and Southern Europe, respectively. Particularly in Greece, the prevailing northerly/northeasterly Etesian winds during summer favour the CO transport from Northern and Eastern Europe under the prevalence of representative summer flow patterns (e.g. SU_CT7′). Through this transport path, enhanced CO concentrations from the burning of agricultural residues in Eastern Europe were transported towards Greece in August 2001 (Lelieveld et al., 2002; Balis et al., 2003; Salisbury et al., 2003; Tombrou et al., 2009). The impact of the North American anthropogenic emissions on the European surface CO concentrations is more pronounced in winter (Fig. S4-2a). The CO FFNA contribution is higher at the western part of the European continent between 35°N and 45°N (up to 18%), where air masses are channelled under the prevalence of most anticyclonic and cyclonic patterns during this season (W_CT1′, W_CT2′, W_CT3′, W_CT4′, W_CT5′, W_CT6′, W_CT7′). Some of these patterns (W_CT4′, W_CT5′, W_CT6′) also favour CO FFNA to be transported towards the Mediterranean region. In summer (Fig. S4-2b), the CO FFNA contribution is weaker (up to ~12%), attributed to the lower anthropogenic emissions in North America during this season. Moreover, the less organised atmospheric circulation in summer does not favour this transport path. In particular, the LRT takes place mainly towards the northern/northwestern parts of Europe, as the westerly winds over the North Atlantic turn to southwesterly for most of the summer CTs (e.g. SU_CT7′). On the other hand, the largest contribution of the Asian anthropogenic tracer is apparent at the eastern borders of the European domain in winter (34%, Fig. S4-3a), as the prevailing circulation patterns (e.g. W_CT1′) do not favour the intrusion further into the European continent. The CO FFAS contribution is also pronounced at the western/southwestern parts of Europe in winter (up to ~20%) and in the Eastern Mediterranean in summer (up to ~15%), attributed to the westerly circulation and the extension of the Asian thermal Low over the Aegean Sea, respectively. The North American and the Asian biomass burning contributions are found to be up to 5% (not shown), due to low fire activity in 2001 (EC, 2002; Kasischke et al., 2005; Yurganov et al., 2005; Huang et al., 2009). The contribution of the anthropogenic sources from the rest of the world to the European CO surface concentrations is negligible (not shown), mainly because the northeasterly trade winds and the ITCZ prevent air intrusion from the Southern Hemisphere (SH) into the NH. Similarly, although the fire intensity in the SH contributes significantly to the global CO budget (Table 1), such signals are not transmitted towards Europe. Very small contributions of the North African anthropogenic and biomass burning emissions (up to 5%) are observed in winter (not shown), when the atmospheric conditions favour this transport path (e.g. W_CT6′).
Methane oxidation is the largest chemical production process of CO. The highest contribution is found at the southern regions, where the solar radiation is intense, reaching 22 and 32% in winter and summer, respectively (not shown). Moreover, CO CH4 is enhanced at the high northern latitudes in summer, attributed to the increased CH4 production in the permanently ice-covered regions during this season. Among the NMVOCs, the isoprene oxidation contributes the highest CO concentrations (up to 18% at the densely vegetated eastern parts of Europe in summer), as isoprene constitutes more than 40% of their total emissions (Miyoshi et al., 1994; Paulot et al., 2009). Similarly, the CO production by other NMVOCs (monoterpenes, methanol, acetone) is highest in summer (up to 11, 6 and 2%, respectively), attributed to the high temperatures and the increased solar radiation that enhance the photochemical activity during this season.
The contribution of the CO anthropogenic emissions (CO chemical production) calculated by the GEOS-CHEM model is up to ~30% (~15%) higher (lower) than that estimated by MOZART-2 (Pfister et al., 2004). These discrepancies are mainly attributed to the different emission inventories included in GEOS-CHEM (Wang et al., 1998; Duncan et al., 2003) and MOZART (Horowitz et al., 2003; Pétron, 2003), as well as to the different horizontal grid resolution (1°×1° versus 2.8°×2.8°) and vertical layers (30 sigma levels up to 0.01 hPa versus 28 sigma levels up to 2 hPa) considered in the two global models.
The CO budget at rural background stations in Europe
In order to further investigate the LRT contribution to the European CO concentrations at SLP, a more detailed examination is performed based on the PC analysis. To this aim, the annual LRT contribution from each continent's anthropogenic emissions (CO FFNA, CO FFAS and CO FFRW), as well as their total sum, is presented for the three PC station regions, together with the regional (CO FFEU) contribution (Table 5, Fig. 4). On an annual basis, the LRT contributions to the three PC regions are found to be comparable. A slightly higher contribution is evident in PC2 and PC3, partly attributed to the fact that these sub-regions, which are characterised by relatively lower regional emissions, are located at high altitudes, where the LRT contribution increases. Moreover, a statistical analysis is also presented for the three PC regions. It is found that the model adequately simulates the observations in all regions. The best performance is achieved in PC1 (Mean Observation, M_O = 217.8 ppbv; Mean Bias, MB = −4.1 ppbv; Mean Error, ME = 32.9 ppbv; correlation coefficient, R² = 0.83), attributed to the fact that this region is characterised by flat terrain, which is well depicted by the model. The largest bias and error are found in PC2 (M_O = 208.4 ppbv, MB = −28.0 ppbv, ME = 49.1 ppbv, R² = 0.57), as the representation of the complex topography in the model's coarse grid is probably insufficient.
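The evaluation statistics quoted above are straightforward to reproduce. The sketch below assumes paired daily arrays of observed and modelled CO for one PC region; it takes the mean bias as model minus observation and the mean error as the mean absolute error, both plausible but unstated conventions.

```python
import numpy as np

def evaluate(obs: np.ndarray, mod: np.ndarray) -> dict:
    """Model-evaluation statistics as quoted above: mean observation,
    mean bias (assumed model - observation), mean absolute error, and
    squared correlation coefficient."""
    resid = mod - obs
    r = np.corrcoef(obs, mod)[0, 1]
    return {
        "M_O (ppbv)": obs.mean(),
        "MB (ppbv)": resid.mean(),
        "ME (ppbv)": np.abs(resid).mean(),
        "R2": r ** 2,
    }

# Example with placeholder data for one PC region:
rng = np.random.default_rng(2)
obs = rng.normal(218.0, 40.0, size=365)
mod = obs + rng.normal(-4.0, 30.0, size=365)
print(evaluate(obs, mod))
```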
In order to define the atmospheric conditions that favour the LRT towards the study regions, the analysis is extended to a seasonal basis. To this aim, the contribution of the anthropogenic emissions of the major continental sources (North America and Asia) to the total CO concentration, as well as their sum (CO FFLRT), is calculated in percentage terms ([CO_x / CO_total] × 100, where x = FFNA or FFAS) at each PC region. This analysis is performed in relation to the prevailing circulation patterns on a seasonal basis, together with their frequency of occurrence during the study year (Tables 6–9). Moreover, in order to assess how well the source contributions are linked to the CTs, the standard deviation of the North American and Asian contributions has been calculated for all CTs (not shown). It should be noted that the circulation patterns depict the general characteristics of the atmospheric circulation and do not represent the real wind direction and velocity (or pressure values), which in turn determine the source contribution to the study area. Therefore, some variability in the results is expected. Nevertheless, in most cases it was found that the LRT contribution is satisfactorily linked with most of the circulation patterns, with the standard deviation in the source contributions estimated between 4 and 11.8% of the contribution itself.
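The percentage-contribution bookkeeping can be illustrated as follows, using a hypothetical table of daily tagged-tracer concentrations labelled by circulation type; the column names and values are invented for illustration.

```python
import pandas as pd

# Hypothetical daily records for one PC region: total CO and the
# tagged-tracer components (ppbv), plus the day's circulation type.
df = pd.DataFrame({
    "ct":       ["W_CT1", "W_CT1", "W_CT5'", "W_CT5'", "W_CT2'", "W_CT2'"],
    "co_total": [210.0, 225.0, 198.0, 205.0, 240.0, 232.0],
    "co_ffna":  [22.0, 25.0, 24.0, 27.0, 30.0, 28.0],
    "co_ffas":  [35.0, 33.0, 34.0, 36.0, 41.0, 38.0],
})

# Percentage contribution of each tracer: [CO_x / CO_total] * 100.
for x in ("co_ffna", "co_ffas"):
    df[x + "_pct"] = 100.0 * df[x] / df["co_total"]
df["co_fflrt_pct"] = df["co_ffna_pct"] + df["co_ffas_pct"]

# Mean contribution per circulation type, with the standard deviation
# used above to check how tightly contributions are tied to each CT.
stats = df.groupby("ct")[["co_ffna_pct", "co_ffas_pct", "co_fflrt_pct"]]
print(stats.agg(["mean", "std"]).round(1))
```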
3.3.1. Winter
The LRT contribution to the surface CO concentrations at the three PC regions during winter reaches ~30%, with the Asian contribution (18.1%) being higher than the North American contribution (12.8%) under all CTs (Table 6). The highest CO FFLRT contribution is found in PC3 (29.6%) under the prevalence of the cyclonic pattern W_CT5′. This CT, with a relatively low frequency of occurrence in 2001 (FO: 5.6%, Table 6), is associated with the extension of the Azores High over the Western Mediterranean and the formation of a deep low eastwards of PC3. Under these conditions, the southwesterly winds over the North Atlantic turn southeastwards before arriving at the study area. Similarly, the LRT contribution in PC1 is highest (28.6%) under the prevalence of the same cyclonic pattern, exceeding the regional contribution by up to ~10% on some days (Fig. 4a). As Table 6 shows, the LRT towards PC1 and PC3 is most intensive mainly when cyclonic circulation prevails over Europe during winter. However, it should be mentioned that an enhanced LRT contribution (28.1%) can also occur under the prevalence of the less frequent anticyclonic type W_CT4′ (FO: 2.2%), formed when the well-organised Azores High extends northeastwards, reaching the study regions.
The highest LRT contribution in PC2 (27.7%) is observed under the prevalence of the cyclonic circulation type W_CT2′. This relatively frequent pattern (FO: 8.9%) is associated with westerly winds over the North Atlantic that channel the air masses towards the study area. This flow is established when the well-organised deep Icelandic Low and the Azores High are formed northwestwards and southwestwards of PC2, respectively. Under the prevalence of this circulation pattern, the LRT contribution in PC2 can exceed the regional one by up to 21.8% (Fig. 4b). Moreover, the LRT contribution exceeds the regional one by up to 45.9% under the cyclonic pattern W_CT4′, when a deep low develops westwards of PC2. Under these conditions, winds over the North Atlantic shift from northwesterly to southwesterly before arriving at the study area.
3.3.2. Spring
The CO FFLRT contribution in spring (31.1%) is at the same level as in winter. Similarly to winter, the Asian contribution (19.2%) is higher than the North American (13.5%) for all CTs (Table 7). In particular, the highest LRT contribution accumulates in PC2 under the prevalence of the anticyclonic pattern SP_CT6′. This less frequent CT in 2001 (FO: 2.2%) is associated with the formation of the deep Azores High southwestwards of PC2. Under these conditions, the prevailing southwesterly winds over the North Atlantic turn to northwesterlies before arriving at the PC2 region. In this case, the CO FFLRT contribution can exceed the CO FFEU contribution by up to 36% (Fig. 4b).
Moreover, the LRT contribution can be up to ~45% higher than the regional one under the prevalence of the more frequent cyclonic patterns SP_CT1′, SP_CT3′ and SP_CT4′ (FO: 6.5, 15.2 and 6.5%, respectively). Under these conditions, the deep Icelandic Low and the Azores High are formed north/northwestwards and southwestwards of PC2, respectively, inducing westerly winds towards the study area.
The largest LRT accumulation in PC1 (27.8%) and PC3 (25.4%) takes place under the prevalence of the cyclonic patterns SP_CT2′ (FO: 7.6%) and SP_CT7′ (FO: 1.1%). These CTs are related to the formation of a low pressure centre over the Mediterranean, which, combined with the high pressures westwards of PC1 and PC3, induces northerly and southwesterly winds, respectively, towards these areas. Moreover, it is noticed that the CO FFLRT contribution in PC3 exceeds the CO FFEU levels by up to 14.6% (Fig. 4c) under the prevalence of SP_CT4′ (FO: 15.2%) and SP_CT6′ (FO: 12%).
3.3.3. Summer
The LRT contribution in summer (up to 22.1%) is lower than in winter and spring. In this case, the Asian contribution (12.9%) is higher than the North American contribution (10.4%) for most CTs (Table 8). The largest contribution is observed in PC1 under the prevalence of the anticyclonic pattern SU_CT3′. This relatively frequent type (FO: 7.6%) is associated with the extension of the deep Azores High over Western Europe, steering the southwesterly winds over the North Atlantic southeastwards before they arrive at the area. Under these atmospheric conditions, the LRT contribution in PC1 and PC2 can exceed the regional one (by up to 8 and 16.8%, respectively, Fig. 4a, b). Similarly, the highest contribution in PC3 (19.2%) is observed under the same anticyclonic pattern. Enhanced LRT contributions in PC3 (18.3%) and PC1 (21%) are also observed under the cyclonic patterns SU_CT1′ and SU_CT8′, respectively. Moreover, the highest LRT contribution in PC2 (20.6%) is found under the latter CT. Under these conditions, the formation of a low pressure centre to the north and a high pressure system in the North Atlantic induces northwesterly winds towards the study area. A contribution of ~20% is also observed in PC2 under the anticyclonic patterns SU_CT3′ and SU_CT4′.
3.3.4. Autumn
The LRT contribution in autumn is evidently lower than in winter and spring (and slightly lower than in summer). Moreover, contrary to the other seasons, the North American contribution is higher than the Asian for most CTs (Table 9). The highest LRT (19.9%) and CO FFNA (10.9%) contributions are found in PC1 under A_CT1′ (FO: 8.8%). This cyclonic pattern is associated with the formation of a deep cyclonic centre over Scandinavia, extending northwards over the study area. The combination of this system with the high pressure southwards establishes westerly winds towards PC1. An enhanced LRT contribution (19.6%) is also observed in PC1 under the highly frequent anticyclonic pattern A_CT3′ (FO: 19.8%). Similarly, the highest LRT contribution in PC3 (18.6%) occurs under the same anticyclonic and cyclonic patterns.
The highest LRT contribution in PC2 (18.8%) is found under A_CT5′ (FO: 7.7%). This anticyclonic circulation pattern is associated with an extended, well-organised high pressure system over Europe. In addition, the LRT contribution in PC2 becomes important (up to 17.8%) under the more frequent A_CT7′ and A_CT8′ patterns (FO: 11%). Only a few LRT exceedances are observed in autumn 2001, with CO FFLRT being higher than CO FFEU by up to 5.4% (Fig. 4b), mainly under the prevalence of A_CT1′ and A_CT8′.
The CO budget in the free troposphere (MT and UT)
The CO concentrations at the upper levels over Europe are significantly lower than those at the surface, decreasing by up to ~70% over highly polluted regions (Fig. S3-1c to f). The CO levels in the MT during winter (Fig. S3-1c) increase from south (up to ~110 ppbv) to north (up to ~130 ppbv). This latitudinal distribution is attributed to the longer CO photochemical lifetime (due to the lower solar radiation) and to a more intense LRT under the prevalence of frequent winter CTs (e.g. W500_CT1′, Fig. 3b). The computed summer levels are in general lower than ~100 ppbv (Fig. S3-1d). However, higher concentrations are calculated in Eastern Europe due to strong convection. This well-known upward flow over this region (Duncan and Bey, 2004) is also evident from the most frequent summer CTs at SLP (Fig. 3a). The CO concentrations at 200 hPa are in general lower than 100 ppbv, except over Eastern Europe where they can be supported by the convective mixing (Fig. S3-1e, f).
The chemical production is the major source of CO in the free troposphere (not shown), exceeding 50% in the UT during summer (Table 4). Moreover, the LRT in the MT/UT is more pronounced than in the LT. The largest contribution originates from Asia, reaching 25% in Western Europe and at higher latitudes in winter (Fig. S6-3a, c). This distribution reflects the transport paths followed by the pollutant under the influence of the prevailing winter CTs in the MT (e.g. W500_CT1′, Fig. 3b) and UT (W200_CT5′, Fig. 3c). In summer, CO FFAS in the MT/UT (Fig. S6-3b, d) is in general lower than in winter.
However, higher contributions are observed in the UT over the Eastern Mediterranean (~30%), when a ridge extends over the greater area (e.g. SU200_CT1′). These conditions develop easterly winds that transfer CO FFAS towards the Eastern Mediterranean and North Africa. The North American contribution is higher at high latitudes in the MT during winter (up to 18%, Fig. S6-2a). Moreover, the CO FFNA contribution in the UT reaches 14% over the Iberian Peninsula and the Western Mediterranean during summer (Fig. S6-2d). This distribution is favoured by the westerly/southwesterly circulation that is established under the influence of the prevailing CTs at these heights (e.g. SU500_CT3′, SU500_CT5′, SU200_CT1′, SU200_CT2′). The anthropogenic and biomass burning emission signals from Africa are detected only at the southern parts (up to 6%, not shown), when a ridge over the Mediterranean Sea and North Africa transfers the air masses towards Southern Europe (W200_CT3′, W200_CT5′). On the contrary, the impact of the regional anthropogenic sources decreases significantly, contributing up to 18 and 10% in the MT and the UT during winter, respectively (Fig. S6-1). However, as already mentioned, there are some regions over Eastern Europe where CO FFEU reaches 22% in the UT due to strong convection in summer.
Similarly to the surface, some differences are found between the GEOS-CHEM and MOZART-2 results at 500 hPa and 200 hPa. Moreover, the contribution of the anthropogenic emissions (and of the chemical production) in GEOS-CHEM is up to ~20% higher (lower) than the MATCH-MPIC results over Western Europe in the MT (Fischer et al., 2006). These discrepancies are attributed to the global models' configurations (i.e. grid resolution, emission inventories, OH concentrations).
Conclusions
In the present study, the European CO budget was examined in relation to the prevailing atmospheric conditions in 2001, based on the nested-grid simulations of the GEOS-CHEM global model. To this aim, a seasonal CT classification was developed for the NH at SLP and at two atmospheric levels in the MT and the UT, over a 60-yr period. It was found that the regional anthropogenic emissions have a significant impact on the European CO levels in the LT, contributing up to ~80% to the surface CO budget, depending on the season and the atmospheric conditions. Particularly in winter, the anticyclonic circulation patterns over Europe favoured the pollutant's accumulation close to the sources. In summer, the prevailing northerly winds favoured the southward transport of pollution of anthropogenic or biomass burning origin from Northern and Eastern Europe, increasing the CO levels in Southern Europe.
The transport of the anthropogenic pollution from North America and Asia towards the European LT was favoured by the westerly circulation, contributing up to 18–20% each to the CO surface concentrations over Europe in winter. On the contrary, the less organised atmospheric circulation in summer, in conjunction with the lower anthropogenic emissions, limited these contributions to 12–15% during this season. The Asian and North American contributions at three regions in Europe, where CO measurements were available at 31 rural background stations for the study year, were found highest (lowest) in winter and spring (summer and autumn). In winter, the LRT at SLP was intense mainly under the prevalence of cyclonic circulation patterns. During the other seasons, the pollutant's transport towards Europe was enhanced for several cyclonic and anticyclonic patterns. The Asian tracer contribution was found to be higher than the North American in winter, spring and summer under most CTs (but lower in autumn). Events where the LRT contribution was higher than the regional one by up to ~45% were detected at all station sites, mainly in winter and spring. The LRT contribution increased in the free troposphere, with the Asian anthropogenic sources' contribution being higher than the North American in most cases. In particular, the Asian tracer reached ~30% over the Eastern Mediterranean in the UT during summer, favoured by the prevailing easterlies. In the MT, this contribution was 25% during winter in Western Europe and at higher latitudes, reflecting the transport pathways followed by the pollutant under the influence of the prevailing winds. Similarly, the North American contribution was highest during winter (summer) in the MT (UT) at the western parts of the continent, reaching 18% (14%). On the other hand, no significant amount of CO originated from the remaining parts of the world. Low biomass burning signals from Africa were detected over Southern Europe in the UT (6%). The regional anthropogenic emissions' contribution in the free troposphere was lower than at the surface, contributing 18% (10%) in the MT (UT). However, a higher regional contribution was found over Eastern Europe in summer (22%) due to strong convection. The contribution of the CO chemical production was high at all levels and seasons, exceeding 50% in the UT during summer. Quantitative differences between GEOS-CHEM and other global models' results were attributed mainly to the different models' configurations. | 2019-04-06T00:42:24.267Z | 2013-04-25T00:00:00.000 | {
"year": 2013,
"sha1": "f954705bd540e23bf5e8fc5e29ec1a830c82ba4e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3402/tellusb.v65i0.18640",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "4d610f323ad5d82143457d2f0499734276f2ecf7",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
251444565 | pes2o/s2orc | v3-fos-license | Renal and Urological Disorders Associated With Inflammatory Bowel Disease
Abstract Renal and urinary tract complications related to inflammatory bowel disease (IBD) have been relatively understudied in the literature compared with other extraintestinal manifestations. Presentation of these renal manifestations can be subtle, and their detection is complicated by a lack of clarity regarding the optimal screening and routine monitoring of renal function in IBD patients. Urolithiasis is the most common manifestation. Penetrating Crohn’s disease involving the genitourinary system as an extraintestinal complication is rare but associated with considerable morbidity. Some biologic agents used to treat IBD have been implicated in progressive renal impairment, although differentiating between drug-related side effects and deteriorating kidney function due to extraintestinal manifestations can be challenging. The most common findings on renal biopsy of IBD patients with renal injury are tubulointerstitial nephritis and IgA nephropathy, the former also being associated with drug-induced nephrotoxicity related to IBD medication. Amyloidosis, albeit rare, must be diagnosed early to reduce the chance of progression to renal failure. In this review, we evaluate the key literature relating to renal and urological involvement in IBD and emphasize the high index of suspicion required for the prompt diagnosis and treatment of these manifestations and complications, considering the potential severity and implications of acute or chronic loss of renal function. We also provide suggestions for future research priorities.
Introduction
Besides the gastrointestinal tract, inflammatory bowel disease (IBD) can also manifest in extraintestinal organs, contributing significantly to morbidity and mortality. 1,2 These extraintestinal symptoms of IBD are divided into extraintestinal complications and extraintestinal manifestations (EIMs). Extraintestinal complications refer to manifestations that are direct or indirect sequelae of intestinal inflammation. 1 In contradistinction, EIMs have been defined as "an inflammatory pathology in a patient with IBD that is located outside the gut and for which the pathogenesis is either dependent on extension/translocation of immune responses from the intestine, or is an independent inflammatory event perpetuated by IBD or that shares a common environmental or genetic predisposition with IBD." 3 They likely represent a composite of systemic inflammation, autoimmune susceptibility, and metabolic and nutritional derangement. Extraintestinal manifestations are due to an inflammatory process occurring outside the gut but are related to the underlying diagnosis of IBD; their clinical spectrum varies from mild, transient disease to severe, disabling complications. The reported frequencies of EIMs in IBD range from 6% to 47%; the heterogeneity in reported prevalence is likely due to the variability in definitions used for EIMs and because patients can be affected by multiple EIMs. 1 Almost any organ can be affected, but involvement of the joints, skin, eyes, liver, and biliary tract are the most commonly described EIMs. 1 Renal complications in IBD (Table 1) have received much less attention despite early studies reporting kidney involvement in nearly 25% of IBD patients. 4,5 In these early reports, nephrolithiasis, obstructive uropathy, and fistula formation between the bowel and urinary tract were the most common occurrences.
Renal parenchymal involvement in IBD has also been described in the form of glomerulonephritis, tubulointerstitial nephritis, and amyloidosis. However, its true prevalence is not clear because systematic analyses are lacking. 6 In recent years, the use of more potent drugs for treating IBD has increased the potential for nephrotoxicity, further highlighting the importance of this topic. 7 Parenchymal renal involvement can affect any or all of the glomerular, tubular, or interstitial compartments. It is the purpose of this review to report and evaluate the key data in the literature on renal involvement in IBD. We emphasize the high index of suspicion required for the prompt diagnosis, treatment, and ideally prevention of these manifestations and complications; we also highlight some practical considerations in clinical practice.
Nephrolithiasis
Renal stone development in the context of IBD is a long-reported association dating back to the 1970s. 5,[8][9][10] Historical series report more than 5 times the prevalence of nephrolithiasis in IBD patients compared with the general population. 9,11 It has been reported that up to 38% of IBD patients may develop asymptomatic nephrolithiasis. 12 Recently, in a prospective cohort of 2323 IBD patients from Switzerland, Fagagnini et al reported a prevalence of 4.6% and 3% for nephrolithiasis on imaging in those with Crohn's disease (CD) and ulcerative colitis (UC), respectively. 13 Multivariate analysis revealed that male sex, disease activity, history of bowel surgery, NSAID intake, and a lack of physical activity were all associated with the development of renal stones. Similarly, in a cohort of 3104 IBD patients from Mississippi, 6% and 6.7% of UC and CD patients developed urolithiasis, respectively. 14 It is typical to find either visible or invisible hematuria in renal stone disease; however, not all such hematuria is reliably stone mediated, as both gross and microscopic hematuria can be a manifestation of other kidney and ureteric pathologies. 15 The association between nephrolithiasis and bowel surgery in IBD patients is well established and may occur in patients following ileostomy formation. 16 Torricelli et al evaluated the impact of extensive surgery (total proctocolectomy and either end-ileostomy or ileal pouch-anal anastomosis) on the urine profile, serum biochemistry, and stone composition in IBD patients. 17 In their case-control study, low urinary volume and hypocitraturia were risk factors associated with nephrolithiasis in IBD patients who underwent total proctocolectomy compared with kidney stone formers without IBD. Calcium oxalate and uric acid stones were most frequent. In the setting of fat malabsorption and subsequent steatorrhea caused by extensive active small bowel inflammation or bowel resection, luminal calcium binds free fatty acids, thereby decreasing the calcium that is available to bind and excrete oxalate in the stool. The resulting increase in intestinal absorption of oxalate leads to so-called "enteric hyperoxaluria" and calcium oxalate stone formation in the kidneys. 18 Other mechanisms may be involved in oxalate stone formation. Oxalobacter formigenes degrades dietary oxalate, and its decolonization in the gut may lead to the hyperabsorption of oxalate. Oral administration of Oxalobacter decreases the urinary oxalate concentration. 18 Low urinary levels of antilithogenic substances such as magnesium and citrate also play a role in renal calculi formation in IBD. 10 Magnesium and citrate replacement should ideally aim to correct urinary rather than serum levels towards normal. 10 Other preventative measures include a diet low in oxalate and fat, and pyridoxine supplementation. 19 Oral cholestyramine increases oxalate and decreases citrate excretion. 20 The toxic effects of oxalate on renal epithelial and tubular cells cause oxalate nephropathy with persistent hyperoxaluria and, together with stone formation, constitute a major but rare contributor to the development of chronic kidney disease (CKD). 19,21 Uric acid supersaturation of the urine, which promotes uric acid stone formation, is aided by low urinary pH resulting from loss of alkali in diarrheal stool and diminished urine volumes (especially after colonic resection) in IBD. 10,18,22,23
Preventative measures include reduction in dietary purine intake, a high fluid intake to maintain a urine output of 2 to 3 litres, and alkalinization of the urine. Xanthine oxidase inhibitors such as allopurinol inhibit uric acid synthesis and uricosuria. Oral potassium citrate also helps prevent uric acid stone recurrence. 24 Varda et al highlight the importance of prompt diagnosis to ensure appropriate treatment of IBD patients with renal stones. 25 Using data from the Nationwide Emergency Department Sample (2006-2009), they studied a cohort of over 3.5 million patients seeking care for urolithiasis at emergency departments in the United States, of whom 14 352 patients had concomitant IBD. Patients with IBD and urolithiasis were more likely to develop urinary tract infections, acute kidney injury, sepsis, and end-organ failure, and to require hospital admission, compared with those without IBD.
Unenhanced computed tomography of the kidneys, ureters, and bladder (CT KUB) is the diagnostic method of choice in the acute setting, benefitting from high sensitivity and specificity while also being a quick and safe examination to perform. 26 Low-dose CT conferring less than 3 millisieverts (mSv) of diagnostic medical radiation is now used universally, affording a sensitivity and specificity of 96% and 95%, respectively. 27 There is increasing interest in adopting ultra-low radiation techniques that may be particularly suitable for following up patients with urolithiasis. 28 This is pertinent in IBD, where the cumulative exposure to diagnostic medical radiation is high. 29 Timely and accurate diagnosis is imperative to ensure that appropriate treatment is initiated. This may involve a conservative approach with lifestyle and dietary modifications, or extracorporeal shock wave lithotripsy (ESWL) and/or surgery for treating larger stones, where there is a heightened risk of significant urinary tract obstruction. The overarching aims are to decrease the risk of stone formation and associated complications.
Practical Considerations
Nephrolithiasis, comprising mainly oxalate and uric acid stones, is associated with IBD, although the wide reported range of prevalence is due to the lack of a consistent and robust reference standard in these studies (Table 2). A thorough clinical history and examination, in conjunction with urinalysis and a low threshold to perform imaging studies, should ensure that clinically relevant nephrolithiasis is diagnosed early (Figure 1). Low-dose unenhanced CT KUB is the modality of choice because of its excellent performance characteristics and low radiation burden. A multidisciplinary team approach that includes a urologist and a nephrologist (if there is evidence of CKD) is essential.
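Purely as an illustration of the work-up sequence sketched in Figure 1, the following function encodes the steps named above; the branching logic and triggers are assumptions for demonstration, not a validated clinical pathway.

```python
def stone_workup(flank_pain: bool, hematuria_on_urinalysis: bool,
                 known_ckd: bool) -> list[str]:
    """Illustrative triage for suspected nephrolithiasis in IBD.

    Encodes the sequence described in the text (history/examination,
    urinalysis, low threshold for imaging); the specific branching is
    an assumption for illustration, not a validated pathway.
    """
    steps = ["Clinical history and examination", "Urinalysis"]
    if flank_pain or hematuria_on_urinalysis:
        # Low-dose unenhanced CT KUB is the modality of choice.
        steps.append("Low-dose unenhanced CT KUB")
        steps.append("Refer to urology (multidisciplinary team)")
    if known_ckd:
        steps.append("Involve nephrology")
    return steps

print(stone_workup(flank_pain=True, hematuria_on_urinalysis=True,
                   known_ckd=False))
```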
Penetrating Crohn's Disease Involving the Renal Tract
Transmural inflammation in CD predisposes to bowel perforation and fistula formation, which occurs in about 10% of patients during long-term follow-up. 30 Adherence of inflamed intestine to the bladder wall may cause erosion and fistulization in the form of colo- and entero-vesical fistulae in 2% to 4% of Crohn's patients. 31,32 Fistula formation may be preceded by subacute small bowel obstruction if there is coexistent intestinal stricturing. 33 Entero-vesical fistulae are often associated with intrapelvic abscess development. Presenting clinical features may be pneumaturia, urinary tract infection, and fecaluria, or a combination thereof. 34,35 Comprehensive evaluation of suspected entero-vesical fistula typically requires the utilization of several imaging modalities to precisely define the fistulous connection, the presence of an abscess, and the exclusion of coexistent bowel stricturing. Historically, plain abdominal x-ray, barium enema, and intravenous urography were used, but these have been superseded by cross-sectional imaging. 36,37 Cystoscopy helps identify a possible fistulous tract, but the findings are often nonspecific, failing to identify a fistula in up to 65% of cases. 36 Ultrasound, which is safe and well-tolerated, is useful in the diagnosis of colo-vesical fistulae, 38 and its yield may be improved by the administration of oral contrast before performing the study. [39][40][41] However, it is highly operator-dependent and may not offer the anatomical detail afforded by other modalities. 29 Computed tomography (CT) and magnetic resonance imaging (MRI), which are now considered the gold standard, benefit from providing a multiplanar 3D representation of the fistula. This is invaluable in planning appropriate management and offers a preoperative road map for surgical intervention.
Table 2. Unanswered clinical and research priorities to better understand the renal and urological complications and manifestations of IBD.
Studies to determine if monitoring certain IBD patients for nephrolithiasis is worthwhile
Dedicated studies to establish the association between IBD and urological malignancy
Development of biomarkers for tubular and glomerular pathology (valuable in all situations where GFR is at risk)
Devise an evidence-based strategy for the monitoring of renal function for IBD patients
Safety trials to understand the nephrotoxic effects of drugs used to treat IBD, focussing on biomarker or genetic clues to susceptibility
Standardised evidence-based approach for monitoring renal function in patients taking 5-ASAs
Computed tomography findings of colo-vesical fistulae include intravesical air in the absence of recent instrumentation (eg, urinary bladder catheterization), focal bladder wall thickening, and the presence of contrast in the urinary bladder that was administered either orally or via the rectum (Figure 2). Contrast-enhanced CT is highly sensitive in detecting fistulae, is fast to perform, and is the first-line investigation in many centers. However, a major drawback is that it confers exposure to diagnostic medical radiation, an important consideration in patients with IBD, who often present at a young age and require repeated abdomino-pelvic imaging over many years, exposing them to high cumulative levels of radiation. 29 Magnetic resonance imaging allows the accurate depiction of fistulous tracts with the advantage of being radiation-free. It offers superior soft-tissue resolution compared with CT and has similarly high sensitivity and specificity. 42 T2-weighted imaging demonstrates high-signal fluid within the fistulous tract and detects associated fluid collections and inflammation within the bladder wall. T1-weighted imaging provides anatomical detail about the adjacent viscera that is useful when a surgical approach is contemplated. Some centers utilize MRI as their first-line investigation for colo-vesical fistulae, although its high cost and lack of widespread availability are limiting factors. 31,42 Endoscopy has a very low sensitivity for detecting a fistulous tract but is used perioperatively if there is concern for a malignant etiology.
Where technically feasible, radiological intervention with percutaneous drainage is usually favored over surgery in the first instance, to mitigate the requirement for stoma formation when the definitive operation to repair the fistula is undertaken. 32 A retrospective cohort study of 97 CD patients with entero-vesical fistula reported that, over a median follow-up time of almost 3 years, only anti-tumour necrosis factor alpha (anti-TNFα) agents were associated with remission without a subsequent requirement for surgery. 31 Overall, around 66% of IBD patients with an entero-vesical fistula ultimately proceed to surgery despite medical therapy. 32
Practical Considerations
Diagnosis of entero-vesical fistulae can be challenging, as the onset can be insidious and nonspecific. Therefore, having a high index of suspicion is important. Presentation may only be with fever and abdominal pain, without the classical features such as pneumaturia and fecaluria. Early multispecialty discussion involving a gastroenterologist, surgeon, radiologist, and pathologist is recommended to devise an optimal, individualized management plan. Multimodality imaging is often required, and management may necessitate both medical and surgical approaches, including anti-TNF treatment (Figure 2).
Cancer
Malignancy originating in the kidneys and urinary tract is over-represented in IBD patients, with a 5-fold increase in the relative risk compared with the general population. 43,44 A strong link has been established between cigarette smoking and urological malignancy in CD patients but not in those with UC. 45 Interestingly, in a recent meta-analysis, IBD was not associated with an increased risk for bladder cancer, but in the CD subgroup there was a trend towards an increased bladder cancer risk of marginal significance. 46 In a study of nearly 19 500 patients with IBD, 16 patients developed urological malignancy. In a multivariate analysis, thiopurine use was associated with a 3-fold increased risk of urinary tract cancers. 47 In a Chinese cohort of 1609 IBD patients, the risk of developing malignancy, including renal and urinary bladder carcinoma, was higher in patients suffering from elderly-onset IBD (60 years and older). 48 This is an area where many questions remain unanswered, and further research is needed to better understand the association between IBD and urinary tract malignancy (Table 2).
Drug-related Nephrotoxicity
Tacrolimus, 5-aminosalicylate (5-ASA), and TNF-α inhibitor use have been implicated in progressive renal impairment, although differentiating between drug-related side effects and deteriorating kidney function due to EIMs can be difficult (Table 3).
5-aminosalicylates
Often, 5-ASAs are used to treat active disease and help maintain remission in UC; even though there is a lack of evidence for their efficacy in CD, they continue to be prescribed widely for it. 49 There has been considerable debate around the entity of 5-ASA nephrotoxicity, with many contradictory series in the literature. [50][51][52][53] A recent retrospective cohort and nested case-control study using primary care data from the United Kingdom, which included 35 601 patients with either UC or CD, found that exposure to 5-ASAs was not associated with a risk of nephrotoxicity. 54 Rather, the study found that active inflammatory disease, duration of disease, coexisting cardiovascular disease, and the use of established nephrotoxic drugs were independently associated with the development of nephrotoxicity. Nephrotoxicity related to 5-ASA in IBD patients is rare and occurs in an idiosyncratic manner independent of 5-ASA dose, making proof of causality difficult. 51 A large international study identified patients with likely 5-ASA-induced nephrotoxicity from 89 centers. 51 Five cases were categorized as definite 5-ASA-induced nephrotoxicity, having had a second episode of acute kidney injury when rechallenged with the drug. A further 146 probable cases were also identified following a rigorous case adjudication process. The authors performed a genome-wide association study which revealed that a human leukocyte antigen (HLA) association, notably HLA-DRB1*03:01, was related to 5-ASA-induced nephrotoxicity. They reported that 5-ASA-induced nephrotoxicity is more common in males and can present at any age; the most common histological finding is chronic tubulointerstitial nephritis. Nephrotoxicity occurred after a median treatment duration of 3 years. Of particular concern, only 30% fully recovered renal function, with 10% requiring permanent renal replacement therapy. Although such toxicity is very rare, annual monitoring of renal function is recommended to detect 5-ASA-related nephrotoxicity early. 7
Practical Considerations
There is currently no evidence to support a specific kidney monitoring strategy to prevent 5-ASA-related nephrotoxicity and no consensus within international guidelines (Table 4). 7,[55][56][57][58] There is a lack of clarity about the optimal approach to monitoring renal function, as this has not been addressed systematically in the literature. Until further data are available and there is agreement on an optimal monitoring strategy, we offer a suggested approach (Figure 3).
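As a purely illustrative aid to such monitoring, the snippet below flags a significant relative decline in eGFR between visits. The 25% cut-off is an assumed example threshold, not a guideline value specific to 5-ASA surveillance; local and KDIGO standards should govern practice.

```python
def flag_egfr_decline(egfr_baseline: float, egfr_current: float,
                      threshold_pct: float = 25.0) -> bool:
    """Return True if eGFR (mL/min/1.73 m2) has fallen by more than
    `threshold_pct` percent from baseline between monitoring visits.
    The 25% default is an illustrative cut-off, not a guideline value
    specific to 5-ASA monitoring."""
    decline_pct = 100.0 * (egfr_baseline - egfr_current) / egfr_baseline
    return decline_pct > threshold_pct

# Example: annual monitoring on 5-ASA picks up a fall from 90 to 62.
if flag_egfr_decline(90.0, 62.0):
    print("Significant eGFR decline: review 5-ASA, consider nephrology referral")
```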
Tacrolimus and Ciclosporin
Tacrolimus and ciclosporin are calcineurin inhibitors that have many roles in medicine, especially in the prevention of solid-organ transplantation rejection episodes. 59 Their use mandates careful attention to dosing and monitoring of trough serum concentrations of the drugs to avoid the risk of acute kidney injury. During prolonged use, careful attention is required to minimize the risk of chronic tubulointerstitial fibrosis, atrophy, and loss of kidney function.
These agents have an established role in treating acute severe UC, but the evidence for their use in CD is less clear. [60][61][62][63] Multiple contradictory studies in the literature have raised the possibility of nephrotoxicity associated with tacrolimus, leading to safety concerns. [64][65][66] However, a recent large series has provided some reassurance. In a retrospective 22-center study from Spain comprising 143 patients with CD or UC receiving tacrolimus, 7% developed acute kidney injury. In all these cases, reversibility was achieved after dose reduction (40%) or discontinuation of the drug (60%). 60 In this patient cohort, the median serum creatinine during the tacrolimus treatment was 186 µmol/L (interquartile range, 159-230; maximum value, 451 µmol/L). It can be difficult to optimize the dose of tacrolimus and maintain safety; in attaining the target blood concentration to maximize efficacy, there can be large individual differences in dosage for a target range of 10 to 15 ng/mL. 67 Yamamoto et al found that the tacrolimus dose required to maintain equivalent blood concentrations was lower in patients carrying the cytochrome (CYP) 3A5*3/*3 genotype than in those carrying the CYP3A5*1 genotype, and the concentration/dose ratio was significantly higher in the former. 68 Ciclosporin is an option for treating acute severe UC, but it has a significant toxicity profile, with nephrotoxicity occurring in 6.3%. 7,69 Rat models have revealed that acute renal damage secondary to ciclosporin is due to vasoconstriction of the afferent arterioles, leading to diminished renal blood flow and glomerular filtration, with a consequent rise in serum creatinine. 70,71 The histopathological changes seen in ciclosporin-induced chronic renal damage are interstitial fibrosis and arteriolar disruption. 72
Table 3. Potential drug-associated nephrotoxic effects in IBD.
Practical Considerations
In the context of solid-organ transplantation, there is abundant evidence of progressive kidney damage with chronic exposure to calcineurin inhibitors and the importance of adjusting drug dose in response to pharmacokinetic profiling of blood drug concentrations. 73 At present, aside from recommendations for target drug concentrations, the international consensus guidelines do not offer specific advice on how frequently to monitor the renal function of patients taking calcineurin inhibitors; we offer a suggested approach in Figure 3.
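The trough-level bookkeeping implied above can be illustrated with a short helper that screens a measured trough against the 10 to 15 ng/mL target range quoted in the text and reports the concentration/dose ratio discussed by Yamamoto et al; the action strings are illustrative reminders, not dosing advice.

```python
def tacrolimus_check(trough_ng_ml: float, daily_dose_mg: float,
                     target: tuple[float, float] = (10.0, 15.0)) -> dict:
    """Screen a tacrolimus trough level against the 10-15 ng/mL target
    range cited above and compute the concentration/dose (C/D) ratio,
    which differs systematically between CYP3A5 genotypes. The action
    strings are illustrative, not dosing advice."""
    lo, hi = target
    if trough_ng_ml < lo:
        action = "below target range: consider dose increase"
    elif trough_ng_ml > hi:
        action = "above target range: reduce dose / recheck renal function"
    else:
        action = "within target range"
    return {
        "trough_ng_ml": trough_ng_ml,
        "cd_ratio": trough_ng_ml / daily_dose_mg,  # (ng/mL) per (mg/day)
        "action": action,
    }

print(tacrolimus_check(trough_ng_ml=17.2, daily_dose_mg=6.0))
```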
TNF-α Inhibitors
Anti-tumour necrosis factor (TNF)-α medications, including infliximab, adalimumab, certolizumab pegol, and golimumab, are increasingly used in the treatment of both CD and UC. 74 Infliximab, adalimumab, and etanercept have been associated with glomerulonephritis in systemic lupus erythematosus, although causality remains unproven. Most cases have been reported in other autoimmune conditions such as rheumatoid arthritis and psoriasis rather than IBD. 70,75 A possible case of infliximab-induced focal segmental glomerulosclerosis presenting as a severe nephrotic syndrome has been described in a patient with UC, 76 and a similar case in the setting of ankylosing spondylitis. 77 More data related to the possible renal side effects of anti-TNF therapy are needed, but the available data suggest that renal complications are uncommon. 74
Emerging Therapies
An increasing number of novel agents are available to the clinician for treating IBD, but their potential to adversely affect kidney function is poorly understood (Table 3). Vedolizumab, a biologic used to treat moderate to severe UC and CD, has been associated with acute tubulointerstitial nephritis in a case report, although this was reversed following the administration of glucocorticoids. 78 Furthermore, it was successfully reintroduced without further kidney injury. 78 Ustekinumab, an anti-interleukin (anti-IL)-23 biologic, is another option for moderate to severe UC and CD, but may be associated with nephrotic syndrome secondary to focal segmental glomerulosclerosis. 79 However, a recent real-world study was reassuring, with no renal complications described. 80 Tofacitinib was the first oral Janus kinase (JAK) inhibitor approved for the treatment of UC. It has been linked with rising serum creatinine, although the specific clinical relevance of this remains undetermined. 81 Other JAK inhibitors like upadacitinib do not appear to have any nephrotoxic effects. 82 Filgotinib, another JAK inhibitor, is an emerging option currently licensed for the treatment of UC in the EU and UK, but not in the United States. Although there is limited clinical experience regarding filgotinib in patients with renal impairment, no specific complications have been reported to date. However, in pharmacokinetic studies an increased drug concentration was observed in patients with an estimated glomerular filtration rate (eGFR) <60 mL/min/1.73 m²; thus dose reduction in these patients is suggested. 83 More data pertaining to the effects on the kidneys of therapeutic agents used for managing IBD are needed (Table 2).
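The eGFR threshold above is easy to operationalise once creatinine is converted to an estimated GFR. The sketch below uses the published 2009 CKD-EPI creatinine equation (one of several validated equations; the text does not specify which equation the pharmacokinetic studies used) and flags the <60 mL/min/1.73 m² dose-reduction criterion.

```python
def egfr_ckd_epi_2009(scr_mg_dl: float, age: float, female: bool,
                      black: bool = False) -> float:
    """Estimated GFR (mL/min/1.73 m2) from the 2009 CKD-EPI
    creatinine equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Flag the eGFR < 60 criterion mentioned above for filgotinib dosing.
egfr = egfr_ckd_epi_2009(scr_mg_dl=1.6, age=58, female=False)
print(f"eGFR = {egfr:.0f} mL/min/1.73 m2")
if egfr < 60.0:
    print("eGFR < 60: dose reduction suggested (see text)")
```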
IBD-related Glomerulonephritis
Glomerular disease as an EIM of IBD was first proposed 4 decades ago. 84,85 However, infrequent reports brought into question whether there was a causal link. Nevertheless, in most cases not only did the onset of glomerulonephritis coincide with acute exacerbation of intestinal inflammation, but renal function also improved in parallel with the treatment of the gastrointestinal disorder. [86][87][88] Although not often practiced, repeat kidney biopsy has shown histological resolution of the inflammatory response following treatment of the acute flare. 85 A wide spectrum of histological patterns of the glomerulonephritides has been described in patients with IBD, including IgA nephropathy, 87,89-94 minimal change disease, 92,94 Immunoglobulin M (IgM) nephropathy, 93,95 membranous nephropathy, 88,92-94 membranoproliferative nephropathy, 96 focal and segmental glomerulosclerosis, [92][93][94] and anti-glomerular basement membrane disease. 97 Proving true causation (as opposed to association) can be difficult to establish with certainty, so close monitoring to ensure that there is a treatment-related change in the trajectory of renal functional decline is important. Seeking continued nephrological input (more than "whether to biopsy") is ideal in these cases. A kidney biopsy series has established the association between IgA glomerulonephritis and IBD. 92 IgA nephropathy was found in 24% of a total of 83 biopsies performed in IBD patients with acute and chronic kidney disease. Moreover, the prevalence of IgA nephropathy was significantly higher in patients with IBD than in patients without IBD. This suggests a shared pathophysiology between intestinal and kidney disease. Plasma cells in the gut mucosa produce large quantities of IgA, which plays an important role in regulating the composition of the gut microbiota and in defense against environmental and pathogenic bacterial antigenic exposure. 98 Mucosal inflammation promotes enteric permeability, which leads to loss of systemic antigenic exclusion and stimulates abnormal IgA production. A frequent observation is of mucosal infection triggering episodes of IgA glomerulonephritis, alongside an increase in local mucosal IgA generation. 99 Furthermore, specific bacteria and proteins found at the interface between the intestinal mucosa and lumen can be used to differentiate and classify IBD and healthy human subjects. 100 Dysregulation of IgA production results in increased serum levels of IgA and IgA-containing immune complexes. 101 The circulating level of immune complexes containing IgA correlates directly with clinical activity and the extent of glomerular crescent formation. 102 T cells are implicated in the pathogenesis of this disorder, but their precise role is unclear, as few T cells can be identified in the glomerular mesangium. Joher et al studied a case series of 24 patients with IBD-associated IgA glomerulonephritis relative to a cohort of 134 patients with primary IgA nephropathy in the absence of IBD. 103 They found that IBD-associated IgA glomerulonephritis has frequent inflammatory lesions at onset and variable long-term outcomes. They reported no association between IBD activity and IgA glomerulonephritis outcome. Larger series with longer follow-up (such as in idiopathic IgA nephropathy, which has no specific therapy and can potentially take decades to lead to severe loss of renal function) will help to better define the aetiopathological link between IBD and the development of IgA glomerulonephritis.
Table 4. Current gastroenterology guidelines available and their suggested monitoring frequency of renal function for patients on 5-ASA therapy.
Guidelines (Ref.) Monitoring Regimen
The link between a dysregulated gut-associated lymphoid tissue and IgA nephropathy was postulated in the 1980s following the observation of the increased association of IgA nephropathy with celiac disease. 104 Data have demonstrated a role for alimentary antigens, particularly gliadin, in the development of IgA nephropathy in BALB/c mice. A reduction in IgA antigliadin antibodies and proteinuria was reported after a gluten-free diet in patients with IgA nephropathy. 105 A genome-wide association study demonstrated that the majority of loci associated with IgA nephropathy are also associated with immune-mediated inflammatory bowel diseases, maintenance of the intestinal barrier, and response to gut pathogens. 106 Transgenic mice that overexpress the B cell-activating factor develop IgA nephropathy modulated by alimentary components and the intestinal microbiota. Mice expressing human IgA1 and a soluble form of the IgA receptor (sCD89) develop IgA nephropathy, which is regulated by dietary gluten. Recent data have established gut-associated lymphoid tissue hyperreactivity in IgA nephropathy patients, with IgA directed against various alimentary components. 107 The NEFIGAN randomised controlled trial utilised an enteric controlled-release formulation of budesonide targeted specifically to Peyer's patches. 108 A reduction in proteinuria was seen after 9 months of treatment, as well as normalisation of renal function, with few reported safety concerns. This promising approach is now being tested more rigorously in the NefIgArd trial. 109,110 The gut-renal connection is an emerging and promising avenue for novel treatment approaches for patients with IgA nephropathy. 107,109,110 Genome-wide association studies of IgA nephropathy have advanced the notion of genetic cross-susceptibility to IBD and glomerular disease. 106 For instance, HLA-DR1 confers increased risk for IgA nephropathy and HLA-DR1/DQw5 for CD; this might explain why these 2 diseases co-occur more often than expected by chance. 92 Conversely, the HLA-Cw*1202-B*5201-DRB1*1502 haplotype increases the risk for UC but reduces that for CD. Among non-HLA loci, an increasing number of IgA nephropathy loci are implicated in risk of IBD (eg, CARD9, HORMAD2) or encode proteins involved in maintaining the intestinal mucosal barrier or regulating the mucosal immune response (eg, DEFA, TNFSF13, VAV3, ITGAM-ITGAX, PSMB8). 106
Acute Tubular Injury
In nephrology, early diagnosis and prompt intervention for acute tubular injury, which has many causes, is encouraged. In the context of IBD, acute-on-chronic loss of circulating volume (salt and water depletion) and disease-modifying drugs can both lead to acute, and even chronic, loss of kidney function, which is not always reversible. A full exposition on this important topic is beyond the scope of this article, but interested readers are directed to a review article by Kellum et al. 111
Tubulointerstitial Nephritis
Tubulointerstitial nephritis (TIN) has many potential etiological associations. The diagnosis of TIN can be challenging and usually warrants a kidney biopsy as part of the diagnostic workup. Frequently, the urinalysis findings may be modest or minimal, but the diagnosis should always be considered in IBD patients in whom kidney function declines, often but not invariably over time and without an obvious causative factor. 112,113 Continued close follow-up with nephrological input is key.
Tubulointerstitial nephritis has been reported in patients with IBD, but it is often difficult to determine whether it should be considered an EIM or an extraintestinal complication secondary to medical treatment with drugs such as 5-ASA, ciclosporin, and TNF-α inhibitors (Figure 4). 93,114,115 For example, in a Finnish series of 819 patients undergoing kidney biopsy, 35 patients (4.3%) proved to have IBD, but among those with TIN the prevalence of IBD was 13.3%. 93 In this cohort, all patients with TIN had an ongoing or previous history of 5-ASA exposure, so the authors were unable to conclude whether this observation was an EIM or medication-related. Nevertheless, multiple studies have demonstrated a link between tubulointerstitial damage and IBD activity by assessing the levels of various proteins excreted in the urine, which are considered to be specific markers of tubular damage in both adult 116,117 and pediatric patients. 118 In health, low molecular weight proteins such as alpha-1-microglobulin (α-1-MG) and cystatin C are filtered by the glomerulus and reabsorbed in the proximal tubule. 119 Their presence in the urine implies diminished reabsorption, and they are considered sensitive markers of tubular damage. In some cases, the presence of a predominantly lymphocytic infiltrate with non-necrotising granulomata has been observed on renal biopsy, lending further support to the view that TIN is an EIM rather than secondary to medication. 118,[120][121][122][123] Once again, continued close follow-up with nephrological input is key.
Renal Amyloidosis
Serum amyloid A protein (SAA) amyloidosis, also known as secondary amyloidosis, involves the extracellular deposition in any organ of insoluble amyloid fibrils derived from the acute-phase reactant SAA, whose overproduction occurs in chronic inflammatory states such as IBD. A systematic review comprising nearly 10 000 patients found that IBD-related amyloidosis is a rare entity with an estimated overall prevalence of 0.53%. Crohn's disease is complicated by amyloidosis in 1.05% of cases compared with just 0.08% in UC. 124 The most common presentation of SAA amyloidosis is renal involvement, presenting as renal impairment in the setting of nephrotic syndrome. In about 15% of cases, neither proteinuria nor elevated serum creatinine is found, and so a high index of suspicion is required to make the diagnosis. 124 The diagnostic gold standard is a biopsy of the target organ (Figure 1). Serum amyloid A protein is associated with an increased incidence of acute tubular necrosis and faster progression to advanced chronic kidney disease and end-stage renal disease; thus, it is likely to be associated with amyloid- and kidney-related pathologies leading to reduced patient survival. 125 Treatment is targeted at the underlying IBD disease activity to reduce the new formation and deposition of SAA protein and address renal amyloidosis. Curative therapies for renal amyloidosis are not yet available, but one approach to contain the disease is a combination of anti-TNF agents and colchicine. 124
Conclusions
The presentation of renal and urinary complications related to IBD can be subtle and requires continued vigilance to ensure prompt diagnosis and treatment. A multidisciplinary approach (urology, nephrology) seems both prudent and valuable. We have identified a number of clinical and research priorities in this field (Table 2) that highlight the compelling need for detailed combined phenotypic and genotypic characterization of large cohorts followed for at least 10, and preferably 20, years. The detection of early renal impairment (loss of excretory function) is of paramount importance because once significant renal functional loss has occurred, it may be both irreversible and progressive (eg, through recurrent episodes of acute kidney injury or overexposure to potential disease-modifying drugs with their own toxicities). The management of all aspects of IBD is considerably more complex if patients become dialysis-dependent, so avoiding this degree of loss of renal function is essential. The cornerstone of preventive nephrology still depends on repeated measurement of plasma creatinine concentrations over time, which are then converted to an estimated GFR. 126 As repeated episodes of acute kidney injury are linked to progressive loss of kidney function, patients should receive appropriate monitoring of eGFR at their follow-up visits, following nationally approved and endorsed standards of care. Control of blood pressure using renin-angiotensin-aldosterone system inhibitors (RAASi) and, increasingly, sodium/glucose co-transporter-2 inhibitors (SGLT-2i) is a cornerstone management tool to arrest progressive loss of renal function, although SGLT-2i have not been much studied in the context of IBD. Timely referral for nephrological advice around diagnosis and management is also important. 126 Until dedicated, high-quality clinical and investigational studies with large cohorts and long-term follow-up are undertaken, the true nature of the relationships between IBD and chronic kidney disease will remain incompletely understood. In the meantime, the most effective strategy for prevention of kidney and urological involvement in IBD is to achieve rapid diagnosis, treatment, and remission of the primary bowel pathology. Increased awareness of the renal manifestations and complications associated with IBD should reduce the risk of both acute and chronic kidney injury, leading to better patient outcomes.

Figure 4. A high-power (100x magnification) haematoxylin and eosin (H&E) stain, derived from a biopsy of the interstitial compartment of the renal parenchyma from a patient with Crohn's disease. There is an intense eosinophil-rich interstitial infiltrate (example encircled in red) comprising polymorphonuclear leucocytes and lymphocytes, in some places exhibiting "tubulitis" (infiltration and blockage of the renal tubular lumina by cells) (see online version for color figure).
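The conversion from plasma creatinine to estimated GFR mentioned above can be made concrete with a short sketch. The following minimal Python example implements the race-free CKD-EPI 2021 creatinine equation purely for illustration; the function name and example values are ours, and any clinical use would require a validated implementation.

def ckd_epi_2021_egfr(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """Estimate GFR (mL/min/1.73 m^2) from serum creatinine using the
    race-free CKD-EPI 2021 creatinine equation (illustrative sketch only)."""
    kappa = 0.7 if female else 0.9        # sex-specific creatinine threshold
    alpha = -0.241 if female else -0.302  # sex-specific exponent
    ratio = scr_mg_dl / kappa
    egfr = (142
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.200
            * 0.9938 ** age_years)
    return egfr * 1.012 if female else egfr

# Example: a 60-year-old woman with serum creatinine 1.1 mg/dL
print(round(ckd_epi_2021_egfr(1.1, 60, female=True), 1))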
"year": 2022,
"sha1": "259a84351e523031c2ad5974fab3e7c017acb9b5",
"oa_license": "CCBYNC",
"oa_url": "https://openaccess.sgul.ac.uk/id/eprint/114664/1/izac140.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "81902fd71c7c3f726e9e994703f0bcde4c751632",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
ASPECTS OF SUPPORTING SUBSOIL SOFTENING AT EXCAVATION OF DEEP FOUNDATION PITS
Igor Shumakov, Aleksandr Pogrebniak, Oksana Grinchuk
UDK: 69.051:624.131
DOI: 10.14415/konferencijaGFS 2016.054

Summary: Using evidence from a construction site in Ukraine with a deep substructure, aspects of supporting-subsoil softening over time are considered. Differential movement of the foundation pit bottom causes uneven settlement, owing to more intensive subsoil softening under the central part of the pit compared with its edges and corners. Laboratory studies of the soil yielded a picture of the change in the soil's physical properties with depth, and the corresponding characteristic curves were obtained. It is proposed to call the described process and its accompanying phenomena soil reconsolidation, and the term "reconsolidation factor" is introduced. The authors propose a typology of soils on the basis of which the occurrence of the soil softening effect may be predicted.
INTRODUCTION
Contemporary construction practice faces the problem of making full use of the available ground area. Underground space is used increasingly for building underground car parks, commercial areas, etc. Although such projects are not yet large-scale, the use of deep pits will in time become routine in building practice, and the depth to which underground space is used will grow. During completion of an underground-space project, at the pit excavation stage builders frequently face the problem of so-called bottom heave (elevation of the pit bottom). The amount and geometry of the heave vary and depend on the projected pit depth, its area, and its geometric form. In the existing literature, the fact of heave is mentioned without analysis of its causes or of the engineering-geological conditions.
This phenomenon was described by L. Molokov on the basis of hydraulic structures construction [1,2]. The fullest description of the phenomenon is given in the works of B. Dalmatov [3,4], who also proposed the term "soil softening settlement" (settlement developing under a load not exceeding the natural value, i.e., a load equal to the weight of all the soil excavated during pit excavation). According to several researchers (e.g., B. Dalmatov [3]), for foundations in pits less than 5 m deep the soil softening does not occur or is insignificant; that is, it starts from a depth of 5 m or more. Other researchers mention a starting softening depth of 8 m. In the basic work [5] this phenomenon is also explained, in the section devoted to swelling soils, as one of the swelling types connected with the "splitting water effect" in overconsolidated soils. S. Kozlovskiy and A. Koshelev [6] mention that "...during a pit excavation and common pressure release, soil softening will happen in the pit bottom at a depth of from 2 to 5 m". Our research confirms this statement.
SOFTENING CHARACTERISTICS RESEARCH
In 2014 the authors performed engineering-geological research for a projected residential area in Kharkov [7]. A one-level car park with a depth of nearly 5.0 m was initially projected at this site. Laboratory examinations were performed for all the engineering-geological elements (EGE). Their results showed high convergence of the individual values and were characterized by the normative factors given in Table 1. After the research results were obtained, the developer changed the technical parameters of the project: a 3-level car park with a depth of 12 m was projected instead. The pit bottom level was projected in the sandy clay (EGE 6), underlain by sands (EGE 7) with a thickness of more than 1 m and, below them, heavy sandy clays (EGE 8). In spring 2014 pit excavation was started, stopping 1.5-2.0 m short of the projected pit bottom mark. In November 2014 the pit was taken down to the projected mark, and a discrepancy between the pit bottom soils and the survey materials was discovered. Instead of the expected sandy clay (EGE 6) with sands beneath (EGE 7), brown sandy clay (EGE 8) was found in the central part of the pit at the projected mark, while the sands (EGE 7) were well represented near the pit walls. This demanded additional examinations. In December 2014 additional examinations were made at the pit bottom to study the new bottom soils [2]. Three boreholes 5 m deep were drilled from the pit bottom, undisturbed samples were taken, and a complex of laboratory tests was performed. Furthermore, the physical characteristics of the soils were compared. As Table 1 shows, between November and December the consistency (flowability factor) of the soils in their natural state did not change, while the softening went on. The dry soil specific weight decreased from 15.28 to 14.62 kN/m³. During this period there were no strong rains, no snow, and no frost. The following can therefore be concluded: sandy clays of hard consistency, which would remain hard in the water-saturated state, transformed into semi-hard ones, and on water saturation became stiff-plastic (November) and then soft-plastic (December).
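The drop in dry unit weight reported above can be translated into a void-ratio increase with a basic phase relation, e = Gs·γw/γd − 1. The following sketch is illustrative only: the assumed specific gravity Gs = 2.70 is a typical value for sandy clays, not a parameter measured at the site.

GAMMA_W = 9.81  # unit weight of water, kN/m^3
G_S = 2.70      # assumed specific gravity of solids (not measured on site)

def void_ratio(gamma_dry_kn_m3: float) -> float:
    """Void ratio from dry unit weight: gamma_d = Gs * gamma_w / (1 + e)."""
    return G_S * GAMMA_W / gamma_dry_kn_m3 - 1.0

e_nov = void_ratio(15.28)   # November survey
e_dec = void_ratio(14.62)   # December survey
print(f"e (Nov) = {e_nov:.3f}, e (Dec) = {e_dec:.3f}, increase = {e_dec - e_nov:.3f}")
# Under these assumptions the void ratio grows from about 0.73 to 0.81,
# i.e. the soil skeleton visibly loosened before any extra water arrived.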
Analysis of the obtained relationships reveals a pronounced tendency: maximum soil softening occurred down to 2 m from the pit surface, and further down, at a depth of 5 m, the examined factors approach the normative values. We note immediately that the changed thickness is about 5 m, while the properties of the deeper soils remained similar to the normative characteristics from the 2013 examinations.
As the data show, the geotechnical conditions of the site changed significantly. The physical-mechanical properties of the soils changed in the direction of degradation and, as a result, additional resources were needed for building at this site. This demanded a scientific-technical assessment. Swelling? No: these soils had been examined for swelling properties. Is this the known process of pit bottom lifting due to hydrostatic pressure release? No: groundwater at this site occurs deeper than 26 m from the surface and has virtually no pressure. The authors concluded that this soil softening occurred due to the release of the common pressure; the "water propping" mechanism is most likely at work. The thickness of the softening layer is no more than 5 m when excavating pits 10-15 m deep. We can assign this process to the swelling group only provisionally, for lack of another established classification. The difference is significant: during swelling, water enters the soil structure, which leads to its volume growth; during softening everything happens in reverse order. First the porosity grows, and only then is the soil ready to receive additional moisture. Moisture uptake is the secondary process.
In their natural occurrence, overconsolidated sandy clays are watertight and cannot receive water into their structure. The problem begins only after release of the natural weight of the overlying soils, the common pressure. Only after that do these specific soils begin to exhibit their special properties.
The authors propose to call the described process and its accompanying phenomena "soil reconsolidation". In the authors' opinion, it is more correct to use the term reconsolidation factor (Krec) than B. Dalmatov's term "softening settlement". The reason is that during the reconsolidation process the "softening settlement" is not equal to the "consolidation settlement" once the time factor is taken into account. The lifetime of any structure is incommensurable with the geological time during which the initial dynamic balance formed (i.e., during which consolidation happened); this means that if the soils are loaded evenly with a technogenic load equal to the initial one, then within the normative lifetime of the structure the soils will not regain their initial density. The reconsolidation process begins when the dynamic balance of the "pore pressure - common pressure" system is disturbed. The reconsolidation factor depends on the soil physical factors (A), the value of the released common pressure (Pcom), and the softening time (t), which is confirmed by the data given in Table 1.
Taking into account that in every specific case of excavating a pit at a given site and to a known depth (A) and (Pcom) are constant, we obtain that the reconsolidation factor is a function of time, f(t). Thus, from the moment reconsolidation begins (the violation of the system's dynamic balance) it will last until the system acquires a new dynamic balance. In addition, the change of the soils' physical factors with depth was examined. The data are presented as graphs in Figures 1-3; the straight line is the normative factor value from the 2013 examinations, and the polylines are the values from the three boreholes (1, 2, 3 are the borehole numbers).
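Since no closed-form expression for Krec is given, the following sketch only illustrates the qualitative behaviour described above, assuming a hypothetical saturating-exponential time law; the functional form, the rate constant tau_days, and the limiting value k_lim are invented for illustration and are not taken from the study.

import math

def k_rec(t_days: float, k_lim: float = 1.08, tau_days: float = 30.0) -> float:
    """Hypothetical reconsolidation factor Krec(t) for fixed soil factors (A)
    and released common pressure (Pcom): rises from 1.0 toward a limit k_lim
    as the system approaches a new dynamic balance. Illustrative only."""
    return 1.0 + (k_lim - 1.0) * (1.0 - math.exp(-t_days / tau_days))

for t in (0, 15, 30, 60, 120):
    print(f"t = {t:3d} days, Krec = {k_rec(t):.3f}")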
"year": 2016,
"sha1": "712bc7a2dbb147bf58c42bfa7e1d8e04ae2a3b5b",
"oa_license": "CCBYSA",
"oa_url": "http://zbornik.gf.uns.ac.rs/doc/NS2016.054.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "712bc7a2dbb147bf58c42bfa7e1d8e04ae2a3b5b",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
Analysis of detergent-free lipid rafts isolated from CD4+ T cell line: interaction with antigen presenting cells promotes coalescing of lipid rafts
Background: Lipid rafts present on the plasma membrane play an important role in spatiotemporal regulation of cell signaling. Physical and chemical characterization of lipid raft size and assessment of their composition before and after cell stimulation will aid in developing a clear understanding of their regulatory role in cell signaling. We have used visual and biochemical methods and approaches for examining individual lipid rafts and lipid raft sub-populations isolated from a mouse CD4+ T cell line in the absence of detergents.

Results: Detergent-free rafts were analyzed before and after their interaction with antigen presenting cells. We provide evidence that the average diameter of lipid rafts isolated from un-stimulated T cells, in the absence of detergents, is less than 100 nm. Lipid rafts on CD4+ T cell membranes coalesce to form larger structures after interacting with antigen presenting cells, even in the absence of a foreign antigen.

Conclusions: Findings presented here indicate that lipid raft coalescence occurs during cellular interactions prior to sensing a foreign antigen.
Background
Signals emanating from the plasma membrane have spatial and temporal components [1-5]. The spatial distribution and accessibility of signaling proteins on the plasma membrane can have profound effects on the outcome of signaling. While knowledge of temporal signaling events has advanced rapidly, the spatial distribution of signaling proteins remains unclear, as does how the spatial distribution of signaling molecules relates to temporal signaling. Recently, however, re-organization of the plasma membrane of quiescent cells after the triggering of membrane signaling has been recognized [6-11].
Lipid rafts assemble to form an immunological synapse, a central structure at the contact site of CD4+ T cells and antigen presenting cells involved in regulating cell signaling [36-45]. These early signaling events are crucial in generating a response by T cells, especially since CD4+ T cells are capable of generating a range of specific cellular responses after engagement of the same antigen receptor, including differentiation to Th1, Th2, or Th17 (T helper cell subsets).
In light of the observation that lipid rafts are compositionally heterogeneous, it remains unclear whether distinct sub-populations of rafts assemble at or around the synapse and thus contribute to signal transduction and distinct cellular responses. Methods allowing enumeration of lipid rafts on a single-raft and sub-population basis in quiescent, activated, and differentiating cells will aid in addressing the role of lipid rafts in signaling. To enumerate lipid rafts in T cells, we have used a published detergent-free isolation procedure [46]. Lipid rafts isolated from a T cell line in the presence and absence of a specific antigen were visualized by transmission electron microscopy. It was surprising to find that lipid rafts isolated from co-cultures of CD4+ T cells and antigen presenting cells in the absence of antigen show raft coalescence/clustering.
Detergent-Free Isolation Protocol
Lipid rafts were isolated using a previously published protocol [46]. Briefly, 6 × 10⁷ total cells, either YH16.33 alone or co-cultured with A20 (1:1 ratio) in the presence or absence of 1 mg/ml chicken ovalbumin (antigen), were cultured for 16-18 hrs. Cells were centrifuged for 5 minutes at 1000 × g at 4°C. The supernatant was decanted and the pellet was re-suspended in 10 ml of base buffer solution consisting of 20 mM Tris-HCl, 250 mM sucrose (pH 7.8), supplemented with 1 mM CaCl₂ and 1 mM MgCl₂, followed by centrifugation for 2 minutes at 250 × g at 4°C. The supernatant was then decanted, and the pellet was re-suspended in 1 ml of the base buffer solution supplemented with CaCl₂ and MgCl₂, a protease inhibitor cocktail set (EMD BioSciences, Darmstadt, Germany), and a calpain inhibitor (Sigma-Aldrich, St. Louis, MO), and then lysed by passage through a ¾ inch 23 gauge needle, 20 times. The lysate was centrifuged at 1000 × g for 10 minutes at 4°C. The supernatant was collected and stored on ice. The pellet was re-suspended in 1 ml of the base buffer solution supplemented with CaCl₂, MgCl₂, and protease inhibitor and lysed again by passage through a ¾ inch 23 gauge needle, 20 times. The lysate was centrifuged at 1000 × g for 10 minutes at 4°C and the supernatant was pooled with the previously collected supernatant. Two ml of the base buffer supplemented with an equal volume of 50% Optiprep solution (Sigma-Aldrich, St. Louis, MO) was transferred to an ultracentrifuge tube (Beckman Instruments, Palo Alto, CA). The solution was then overlaid with 1.6 ml each of 20%, 15%, 10%, 5% and 0% Optiprep solution, respectively, for a total final volume of 12 ml. The gradient was centrifuged for 90 minutes at 52,000 × g at 4°C in an ultracentrifuge (Beckman Instruments, Palo Alto, CA). The sample was then fractionated in 1.3 ml aliquots from the top of the gradient and stored at -20°C. For detergent isolation experiments, lipid rafts were obtained in the presence of 1% Triton X-100 and subjected to sucrose density gradient centrifugation as described previously [23,49].
Western Blot Analysis
Fifteen μl of each fraction was combined with 6.3 μl of lithium dodecyl sulfate (LDS) buffer (Invitrogen, Carlsbad, CA) and 2.3 μl DTT (Invitrogen, Carlsbad, CA). Twenty-two μl of the fraction solution was loaded into 4-15% gels (BioRad, Hercules, CA). The gel was electrophoresed using 2-(N-morpholino)ethanesulfonic acid (MES) buffer (Invitrogen, Carlsbad, CA) at 100 volts for approximately 45 minutes. The gel was then transferred to a polyvinylidene fluoride (PVDF) membrane for 1 hour at 45 volts. The membrane was blocked with 5% non-fat Carnation Instant milk prepared in phosphate buffered saline with Tween-20 (PBST) (Sigma-Aldrich, St. Louis, MO) and incubated with appropriate primary antibodies against Linker of Activated T cells (LAT) or β-COP (Santa Cruz Biotechnology Inc, CA) overnight at 4°C. Species-specific secondary antibodies conjugated to horseradish peroxidase (HRP) (Pierce, Rockford, IL) were added and incubated for 75 minutes at room temperature. The membrane was then exposed to substrate and chromogen solution, a mixture of equal volumes of H₂O₂ and a luminol solution (SuperSignal West Dura) (Pierce, Rockford, IL), for 2 minutes and imaged using an image analyzer (Alpha-Innotech, San Leandro, CA).
Dot Blot Protocol
PVDF membranes were soaked in methanol for two minutes to moisten the membrane. Three μl dots of fraction samples were placed on the PVDF membrane. The samples were allowed to dry on the membrane, and blocked with 5% non-fat Carnation Instant milk prepared in PBST for 60 minutes at room temperature. The membrane was then incubated in cholera toxin β chain conjugated to HRP (BD Biosciences, San Jose, CA) for 60 minutes. The membrane was then exposed to SuperSignal West Dura (Pierce, Rockford, IL) substrate for 2 minutes and then exposed using an image analyzer (Alpha-Innotech, San Leandro, CA).
Raft ELISA Protocol
Lipid rafts were analyzed by raft-ELISA as reported in previous publications [23,49], with one exception: detergent-free rafts were used instead of detergent-resistant rafts. Briefly, 96-well flat-bottom, high-binding enzyme immunoassay/radioimmunoassay (EIA/RIA) plates (Costar, New York, NY) were coated with 50 μl capture antibody (2 μg/ml), covered with saran wrap, and incubated at 4°C overnight. The microwells were then washed 4 times with 100 μl of wash buffer (PBST). Wells were then blocked with blocking buffer, PBST supplemented with 1% (w/v) fraction V bovine serum albumin (BSA) (PBST/BSA) (Fisher Scientific, Pittsburgh, PA), for 30 minutes at room temperature. Excess blocking reagent was removed with washing buffer (PBST); this step was repeated three times. Fifty μl samples (raft fractions diluted 1:5 in PBST/BSA) were added to wells and incubated overnight at 4°C. Unbound lipid rafts were removed by washing with PBST 9 times. Biotinylated detection antibody (1 μg/ml) was added to each microtitre well and incubated for 1 hour at room temperature, followed by washing away unbound antibody 6 times with PBST. Avidin-HRP was added to each well and incubated for 30 minutes at room temperature. Unbound avidin-HRP conjugate was removed by washing 8 times with PBST. A 100 μl volume of a 1:1 mixture of 2,2'-azino-di[3-ethyl-benzthiazoline-6-sulphonate] (solution A) and 0.02% H₂O₂ in citric acid buffer (solution B) was added to the appropriate wells. The absorbance was read at 405 nm with a Spectramax 190 plate reader (Molecular Devices, Sunnyvale, CA).
Formvar Coating EM Grids
Coating of nickel grids with formvar was carried out according to previous publications. Nickel grids (Electron Microscopy Sciences, Fort Washington, PA) were sonicated 3 times in ethanol prior to use. Clean microscope glass slides were dipped for a few seconds into a solution of formvar in ethylene dichloride (Electron Microscopy Sciences, Fort Washington, PA) and chloroform (Fisher Scientific, Pittsburgh, PA) to allow coating of formvar on the slide. The edges of the glass slides were scored, and the slides were tilted to release the formvar onto a clean bowl of double-distilled water. Nickel grids were mounted on top of the floating formvar sheets. Using a different microscope slide wrapped in parafilm, the floating formvar with the grids on top was carefully scooped up from the water bowl, allowed to dry, and stored at RT until further use.
Immunogold labeling for TEM
Lipid rafts were captured and detected by the method we have previously used for detection of detergent-isolated lipid rafts [23,49]. A capture antibody, purified anti-mouse CD90 (Thy-1) (G7) (BD Biosciences, San Jose, CA), was coated on the nickel grid at a 4 μg/ml antibody concentration in carbonate/bicarbonate buffer in a humid chamber. Antibody coating was carried out by placing the formvar-coated side of the grid face down on a drop of carbonate-bicarbonate buffer with capture antibody overnight at 4°C. Nickel grids were washed 4 times with phosphate buffered saline (13.7 mM NaCl, 0.27 mM KCl, 0.43 mM Na₂HPO₄·7H₂O, 0.14 mM KH₂PO₄, pH 7.3) supplemented with 1% BSA-C (Aurion, Costerweg, Netherlands). For each washing step, grids were incubated with the washing buffer for 5 minutes at room temperature in a humid chamber. Non-specific sites on the grids were then blocked with a blocking buffer consisting of 1 × PBS supplemented with 0.05% (w/v) fraction V bovine serum albumin for 20 minutes at room temperature. Grids were then washed twice with incubation buffer, 5 minutes each, at room temperature, followed by incubation with 30 μl drops of lipid raft fraction samples overnight at 4°C. Unbound lipid rafts were removed by washing with PBS/BSA buffer at room temperature; this step was repeated 5 times. Grids were then incubated with biotin-conjugated detection antibody Ly6A/E (Sca-1) (D7) (BD Biosciences, San Jose, CA) at 3 μg/ml in PBS-BSA buffer for 60 minutes at room temperature. Grids were washed 4 times with PBS-BSA buffer at room temperature to remove excess detection antibody. Non-specific sites on the grids were blocked by incubation on top of 30 μl droplets of blocking buffer for 15 minutes at room temperature, followed by incubation with goat anti-biotin antibody conjugated to 10 nm gold particles at a 1:250 dilution of the stock (Aurion, Costerweg, Netherlands) for 60 minutes at room temperature. Grids were washed 2 times with double-distilled water (ddw) for 5 minutes each at room temperature and incubated on 30 μl drops of 1% glutaraldehyde (Electron Microscopy Sciences, Fort Washington, PA) in double-distilled water for 5 minutes at room temperature. After washing with ddw, grids were allowed to dry, preparation side up, on Whatman paper. Lipid rafts on the grid were fixed with 1% osmium tetroxide (Electron Microscopy Sciences, Fort Washington, PA) in double-distilled water for 10 minutes. This was followed by counter-staining with 1% tannic acid (Electron Microscopy Sciences, Fort Washington, PA) and 2% uranyl acetate (Electron Microscopy Sciences, Fort Washington, PA) in double-distilled water for 30 minutes at room temperature, under a cover to prevent light exposure. Grids were washed twice with double-distilled water, 5 minutes each, at room temperature and dried on Whatman paper, specimen side up. Grids were then analyzed on an H-7600 Hitachi Transmission Electron Microscope (Tokyo, Japan). NIH ImageJ software was used to mark the boundaries of the imaged lipid rafts. The longest distance across the boundary of the captured and detected rafts, including the counter-stained part, was used to determine the Feret's diameter.
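For readers wishing to reproduce the sizing step outside ImageJ, the maximum Feret diameter (the longest distance between any two boundary points) can also be computed in Python with scikit-image (version 0.18 or later, which provides feret_diameter_max). The sketch below runs on a synthetic binary mask with a hypothetical pixel calibration; it is not the pipeline used in this study.

import numpy as np
from skimage import draw, measure

# Synthetic binary mask standing in for a thresholded raft micrograph.
mask = np.zeros((200, 200), dtype=np.uint8)
rr, cc = draw.ellipse(100, 100, 30, 55)  # an elongated "raft"
mask[rr, cc] = 1

NM_PER_PIXEL = 2.5  # hypothetical calibration; use the microscope's own value

labels = measure.label(mask)
for region in measure.regionprops(labels):
    # feret_diameter_max is the longest distance between boundary points,
    # matching the sizing rule described in the text above.
    print(f"Feret diameter: {region.feret_diameter_max * NM_PER_PIXEL:.1f} nm")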
Cholesterol Depletion

Cholesterol was depleted from cell-free lipid rafts (lipid rafts previously isolated from cells) by treatment with 10 mmol/L of methyl-beta-cyclodextrin (MβCD) (Sigma-Aldrich, St. Louis, MO, USA) for 18-24 hours at 4°C before their use in the raft ELISA, according to a previously published report [23]. YH16.33 and A20 co-cultured cells, with or without chicken ovalbumin antigen, were treated with 10 mM MβCD for 15 minutes at 37°C as per an earlier published report [50], and the isolated lipid rafts were examined by transmission electron microscopy.
Characterization of detergent-free rafts from a CD4 + T cell line
Detergents promote coalescence of lipid rafts, which may undermine assessment of raft heterogeneity [51,52]. To overcome the problems associated with the use of detergent in isolating lipid rafts from the plasma membrane, we chose to isolate and characterize lipid rafts from a T-T hybrid CD4+ T cell line in the absence of a non-ionic detergent. To achieve this, we adopted an isolation procedure used for detergent-free lipid rafts from an epithelial cell line [46], as shown in Figure 1. To assess the success of the isolation procedure and identify which density gradient fractions were enriched in lipid rafts from YH16.33, a T-T hybrid cell line, following detergent-free isolation, we carried out raft-ELISA using monoclonal antibodies directed to Thy-1 and Ly-6A.2, two GPI-anchored proteins known to localize in lipid rafts [14,15,23]. Anti-Thy-1 mAb was coated on the ELISA plate and used to capture detergent-free lipid rafts, and biotinylated anti-Ly-6A.2 followed by avidin-HRP was used for detection. Figure 2A shows that fractions 5 and 6 contained lipid rafts. Cholesterol is an essential component of lipid rafts, and cholesterol-depleting compounds thus destabilize these membrane structures [50]. To assess the specificity of the captured membrane rafts, we treated the cell-free fractions with such a compound, methyl-beta-cyclodextrin (MβCD), at 10 mM for 18-24 hours. Incubation of lipid raft fractions with MβCD significantly decreased the capture and detection of Thy-1 and Ly-6 lipid raft subsets (Figure 2A). Through our binary approach of capture and detection of rafts, we observed the presence of the antigen receptor, TCRαβ, in fractions 5 and 6 as well (Figure 2B). Enrichment of TCRαβ in rafts has been observed by other investigators [53,54]. However, TCRαβ is also present in the heavy fraction (fraction 9), which perhaps reflects its representation in the non-raft fractions, as previously reported [29]. Alternatively, the presence of TCRαβ in the heavy fraction reflects its presence in cellular organelles (ER/Golgi, etc.), which is expected. To further analyze these fractions, we carried out SDS-PAGE followed by Western blot analysis to detect Linker of Activated T cells (LAT), another known component of lipid rafts [55] (Figure 2C). Ganglioside GM1, another lipid raft marker, was detected in fractions 4-6 when dot blots of isolated detergent-free fractions were probed with HRP-conjugated cholera toxin β chain (Figure 2D). In contrast, β-COP, a Golgi-resident protein and a non-raft marker representing an internal cellular compartment, was absent from these fractions (Figure 2C). Taken together, the raft ELISA and biochemical data show that detergent-free lipid rafts are present in fractions 4, 5 and 6 of the density gradient.
Visualization and determination of size of lipid rafts using electron microscopy
The size of lipid rafts reported in the literature using a variety of biophysical methods has ranged from 10-100 nm in diameter [4,56,57]. Isolating the rafts from the plasma membrane in the absence of detergents and assessing their size will confirm their physical presence on the plasma membrane and help clarify the disconnect between the biochemical and biophysical methods used to study these membrane entities. To examine the heterogeneity in size, we used a clonal cell line, YH16.33, to generate detergent-free lipid rafts. Cell-free rafts from fraction 5 of the gradient were captured on nickel grids with anti-Thy-1 mAb and analyzed for the presence of Ly-6A.2 protein, another raft marker, with an anti-Ly-6A.2 mAb conjugated to biotin followed by anti-biotin antibody conjugated to 10 nm colloidal gold (see Materials and Methods section). Captured and detected rafts averaged 89.7 +/- 38.8 nm (n = 3721) in diameter (Figure 3A & 3B). Isolated lipid rafts are those structures that were both immune-stained (i.e., containing gold particles) and counter-stained with tannic acid and uranyl acetate, which stain lipids. These results highlight the innate heterogeneity of the size of rafts on the plasma membrane of a clonal cell line.
Lipid rafts coalesce in the presence of antigen presenting cells
We next sought to examine alterations in lipid raft size and structure in CD4+ T cells after exposure to a specific antigen. Lipid rafts are known to contribute to the formation of the immunological synapse, which is considered to be a large coalesced raft formed at the contact site of a CD4+ T cell and antigen presenting cell during T cell activation [4,14,27]. Previous studies have also suggested that lipid rafts take on the role of an anchoring platform for a number of signaling proteins [12-14]. To examine changes in the size of lipid rafts on the plasma membrane of T cells after engagement of their signaling receptors, we incubated YH16.33 T cells with the antigen presenting cell line A20 in the presence and absence of a specific antigen, chicken ovalbumin. For each culture, detergent-free lipid rafts were isolated on an OptiPrep gradient after ultracentrifugation, and lipid raft fractions were identified by raft ELISA (without antigen, Figure 4A; with antigen, Figure 4B). Lipid rafts from fraction 5 were captured and detected with anti-Thy-1 and anti-Ly-6A.2 antibodies, respectively, for visualization under the electron microscope. Figure 5 shows that both in the absence (Figure 5A) and the presence (Figure 5C) of antigen, we frequently observed larger membrane entities (> 500 nm) composed of rafts attached to one another that thus appeared coalesced. These qualitatively distinct coalesced large rafts were not observed in the raft preparations from YH16.33 T cells alone. Moreover, the capture and detection of lipid rafts was T cell specific, since we were unable to capture and detect lipid rafts generated from APCs (A20 cell line) alone with anti-T cell specific antibodies (Figure 5E). The average diameter of the rafts captured from T cells co-cultured with APCs in the absence of antigen was 116.327 +/- 52.112 nm (n = 2251) (Figure 5B). The average diameter of the rafts captured from T cells cultured with APCs in the presence of antigen was 114.430 +/- 46.748 nm (n = 2067) (Figure 5D). Both cultures of T cells with APCs, with or without antigen, produced similar sizes, although both were larger than un-stimulated YH16.33 cells (89.7 +/- 38.8 nm). About 72% of the rafts isolated from T cells in the absence of interaction with APCs ranged between 50-100 nm, and 22% ranged from 101-200 nm (Figure 5F). In contrast, the rafts isolated from YH16.33 with APCs, either in the presence or the absence of antigen, showed a shift towards larger sizes, with 38%-46% of total rafts in the 50-100 nm range and 49%-57% in the 101-200 nm range (Figure 5F). When the co-cultures were treated with MβCD prior to lipid raft isolation, there was a noticeable depletion of circular membrane raft structures in YH16.33 and A20 co-cultures both in the absence (Figure 6A) and presence of antigen (Figure 6C). The average diameter of lipid rafts isolated from the MβCD-treated co-cultures with and without antigen was 83 +/- 39.9 nm (n = 499) and 95.2 +/- 46.2 nm (n = 712), respectively (Figure 6B & 6D). Taken together, our data suggest that prior to co-culture with APCs the average size of the lipid rafts is relatively small (89.7 +/- 38.8 nm) and that rafts coalesce on the plasma membrane of CD4+ T cells as they interact with APCs even in the absence of an antigen.

Figure 1. The detergent-free isolation protocol of lipid rafts. 6 × 10⁷ YH16.33 cells were centrifuged at 250 × g for 10 minutes and the pellet was re-suspended in a base buffer consisting of 20 mM Tris-HCl, 250 mM sucrose (pH 7.8), supplemented with 1 mM CaCl₂ and 1 mM MgCl₂. The cells were then lysed by passing through a 23 gauge needle 20 times. The lysates were centrifuged at 1000 rpm for 10 minutes and the supernatant was saved. The pellet was re-suspended in fresh base buffer and again passed through a 23 gauge needle 20 times. After centrifugation at 1000 rpm for 10 minutes at 4°C, the supernatant was collected and pooled with the previously collected supernatant. An OptiPrep (Sigma-Aldrich, St. Louis, MO) gradient was prepared with a final 25% OptiPrep dilution at the bottom of an ultracentrifuge tube and overlaid with 20%, 15%, 10%, 5% and 0% OptiPrep solutions. The gradient was centrifuged at 52,000 × g for 90 minutes at 4°C, and nine 1.3 ml aliquots from the top of the gradient were collected and stored at -20°C until further analysis.

Figure 2. Analysis of detergent-free isolated lipid rafts by raft ELISA. Anti-Thy-1, which recognizes a GPI-anchored protein known to be a raft constituent, Thy-1, was used as the capture antibody coating the microtitre wells. Wells were incubated with the detergent-free fractions isolated from YH16.33 cells pretreated (Tx) with MβCD (A) or left untreated (No Tx) (A & B). Captured rafts were detected by biotinylated anti-Ly-6 antibody (A) or anti-TCRαβ antibody (B), followed by avidin-HRP. The assay was developed by adding ABTS peroxidase substrate and the absorbance was read by a Spectramax 190 plate reader. Fractions were also analyzed by SDS-PAGE on 4-20% Tris-HCl gels followed by transfer of proteins onto a PVDF membrane. Membranes were probed with anti-LAT or anti-β-COP (C), followed by the appropriate secondary antibodies. GM1 was detected in these fractions by dot blot on a PVDF membrane with biotin-cholera toxin β and avidin-HRP conjugate (D). Blots were observed using a chemiluminescence kit (Pierce, Rockford, IL) as described in the Materials and Methods section. Shown is a representation of at least three independent experiments.
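The size-distribution comparison reported above is straightforward to reproduce once per-raft Feret diameters are in hand. The following minimal numpy sketch computes the mean +/- SD and the percentage of rafts per size bin; the simulated diameters are placeholders standing in for the measured data.

import numpy as np

# Placeholder diameters (nm); in practice, load the per-raft Feret diameters
# measured from the micrographs (n = 3721 for T cells alone, etc.).
diameters = np.random.default_rng(0).normal(loc=89.7, scale=38.8, size=3721)
diameters = diameters[diameters > 0]

print(f"mean +/- SD: {diameters.mean():.1f} +/- {diameters.std(ddof=1):.1f} nm")

# Size bins used in the text: 50-100 nm and 101-200 nm.
for lo, hi in [(50, 100), (101, 200)]:
    frac = np.mean((diameters >= lo) & (diameters <= hi))
    print(f"{lo}-{hi} nm: {100 * frac:.0f}% of rafts")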
Analysis of detergent-extracted lipid rafts
Use of non-ionic detergents for the extraction of lipid rafts from a variety of cells has been controversial [51,52]. It has been suggested that these detergents promote coalescence of lipid rafts, which may undermine assessment of raft heterogeneity. To address this issue, we wanted to compare detergent-isolated rafts with those isolated using the detergent-free methodology proposed by McDonald and Pike [46]. We compared the size of lipid rafts from YH16.33 and A20 co-cultures that were isolated in the presence of the detergent Triton X-100 (1% concentration) and examined their size by EM. These rafts were captured with an anti-Thy-1 mAb and detected with an anti-Ly-6A.2 antibody. Similar to the detergent-free rafts, lipid rafts isolated from YH16.33 alone with detergent-based isolation protocols yielded membrane entities less than 100 nm in diameter (Figure 7B & 7F), as reported before [23]. In contrast to rafts isolated in a detergent-free environment, lipid rafts isolated in the presence of detergent from co-cultures of YH16.33 and A20 cells, in the presence or absence of specific antigen, showed a higher frequency of macrodomains, some of which were up to micrometers in size (Figure 7C-E). However, as in the detergent-free case, the frequency of macrodomain formation did not depend on the presence of antigen (Figure 7F). To examine whether the presence of cholesterol was critical for formation of these large membrane domains, we exposed the YH16.33 and A20 co-cultures to MβCD for 15 minutes prior to detergent isolation of lipid rafts. The isolated lipid rafts were captured and detected with anti-Thy-1 and anti-Ly-6A.2 monoclonal antibodies, respectively. MβCD-treated cultures showed considerably lower numbers of macrodomains than the untreated cultures (Figure 7F). These data suggest that cholesterol is necessary for the formation and/or stability of the macrodomains we observed. Taken together, these results indicate that detergent does not affect the average size of rafts during isolation. However, in the presence of APCs, rafts isolated from T cells by detergent-based methods are considerably larger than the detergent-free rafts, supporting, perhaps, the hypothesis of detergent-dependent coalescence of lipid rafts that has been reported by other investigators [55]. Regardless of the size of these detergent-resistant domains, their macro structure is dependent on the presence of cholesterol and independent of the antigen.

Figure 3. Size of lipid rafts isolated from YH16.33 cells by the detergent-free isolation method. Lipid rafts from the YH16.33 T cell line were captured and detected with anti-Thy-1 and anti-Ly-6 antibodies, respectively, on formvar-coated nickel grids. A representative micrograph of YH16.33 lipid rafts (shown with arrows) from fraction 5 (A) is shown. The average Feret's diameter of lipid rafts collected from fraction 5 is shown (B). Error bars show average size (nm) +/- standard deviation. Each micrograph was at 40,000 × magnification. The micrograph is representative of at least three sets of experiments and the quantitative data are derived from all the experiments (n = 3721 lipid rafts).

Figure 4. Analysis of detergent-free lipid rafts by raft ELISA. Lipid rafts were isolated from T cell - APC co-cultures. YH16.33 cells were co-cultured with A20 cells in the absence (A) or presence (B) of 1 mg/ml chicken ovalbumin (specific antigen) for 18-24 hours, and lipid rafts were isolated using the detergent-free density gradient method. Each density gradient was analyzed by raft ELISA by capturing with anti-Thy-1 and detecting with biotinylated anti-Ly-6A.2 followed by avidin-HRP. A representative analysis of at least 3 independent experiments is shown.

Figure 5. Coalescence of lipid rafts isolated, detergent-free, from T cell - APC co-cultures. YH16.33 cells were co-cultured with A20 cells in the absence (A & B) or presence (C & D) of 1 mg/ml chicken ovalbumin (specific antigen) for 18-24 hours, and lipid rafts were isolated using the detergent-free density gradient method. Detergent-free lipid rafts in fraction 5 from YH16.33 + A20 (A), YH16.33 + A20 + chicken ovalbumin (C), or A20 alone (E) were captured with anti-Thy-1 on formvar-coated nickel grids, detected with biotinylated anti-Ly-6A.2 followed by anti-biotin conjugated to 10 nm colloidal gold, and visualized with a Hitachi H-7600 transmission electron microscope. The average Feret's diameter of lipid rafts collected from fraction 5 of YH16.33 and A20 co-cultures in the absence (B) or presence (D) of antigen is shown. Error bars show average size (nm) +/- standard deviation. Lipid rafts from each group (T cells alone, T cells + APCs, T cells + APCs + antigen) were sized using NIH ImageJ software, and their size distribution in nanometers is shown (F). Each micrograph was at 40,000 × magnification. The micrographs shown are representative of at least three sets of experiments and the quantitative data are derived from all the experiments (n = 2251 lipid rafts for YH16.33 + A20 and n = 2071 lipid rafts for YH16.33 + A20 + antigen groups).

Figure 6. Lipid raft coalescence is cholesterol-dependent. Lipid rafts were isolated from YH16.33 cells co-cultured with A20 cells in the absence (A & B) and presence (C & D) of chicken ovalbumin after treatment of the co-cultured cells with MβCD for 15 minutes. Lipid rafts from fraction 5 of the density gradient were captured with anti-Thy-1 on formvar-coated nickel grids and detected with biotinylated anti-Ly-6A.2 followed by anti-biotin colloidal gold conjugate. A representative micrograph (3 independent experiments) of lipid rafts (shown by arrows) from MβCD-treated YH16.33 cells co-cultured with A20 cells in the absence (A) and presence (C) of antigen is shown. The average Feret's diameter of lipid rafts generated from YH16.33 and A20 co-cultures in the absence (B) and presence of antigen (D) is shown. Each micrograph was at 40,000 × magnification.

Figure 7. Analysis of detergent-resistant (isolated using 1% Triton X-100) lipid rafts. Lipid rafts were isolated from un-stimulated YH16.33 T cells (B) and from YH16.33 cells co-cultured with A20 cells in the absence of antigen (C-E). Rafts were captured with anti-Thy-1 on the nickel grids, followed by detection with biotinylated anti-Ly-6A.2 and anti-biotin colloidal gold. Various sizes of lipid rafts, ranging from less than 100 nm to several μm, were visualized (C-E) and quantified (F). Feret's diameters determined for rafts isolated from YH16.33 alone (open squares), YH16.33 with A20 cells in the absence of antigen (light squares), YH16.33 with A20 in the presence of antigen (grey squares), and YH16.33 with A20 exposed to MβCD (black squares) are shown. An asterisk (*) indicates significant differences and n.s. denotes non-significant differences from the YH16.33-alone group (F). The absence of lipid rafts isolated from A20 cells alone, using anti-Thy-1 and anti-Ly-6A.2 antibodies for capture and detection, respectively, is shown in (A) (negative control). Three independent experiments were carried out and 5 photographs were taken at 5 distinct regions on each grid. Bar = 100 nm.
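The group comparisons summarized in Figure 7F (asterisks versus n.s.) can be sketched with a standard two-sample test. The paper does not state which test was used, so the example below merely illustrates one defensible choice, a Mann-Whitney U test on per-raft diameters, applied to placeholder arrays.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder per-raft Feret diameters (nm); substitute the measured values.
t_alone = rng.normal(89.7, 38.8, 500)
t_plus_apc = rng.normal(116.3, 52.1, 500)

# Raft sizes are right-skewed and bounded below, so a rank-based test is a
# defensible choice; the actual test used in the paper is not specified.
u_stat, p_value = stats.mannwhitneyu(t_alone, t_plus_apc, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.2e}")
print("significant" if p_value < 0.05 else "n.s.")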
Discussion
Heterogeneity of lipid rafts in the plasma membrane and their re-organization during ligand-receptor interactions play an important role in cell signaling [14]. Fluorescence resonance energy transfer (FRET) [15,56,58], super-resolution microscopy [57,59], and other biophysical methods have provided significant insights into establishing the existence and heterogeneity of these nano-sized membrane domains. Analysis of lipid rafts after immuno-EM staining of intact plasma membrane has also been useful in providing insights into the size and heterogeneity of lipid rafts [60]. Use of these methods in examining complex signaling cascades is challenging: the multitude of signaling proteins participating in signal transduction, in native form or after post-translational modification (phosphorylation), requires their simultaneous visual detection. Development of sensors allowing detection of several signaling molecules is currently underway. Here, we have examined alterations in the size and composition of these membrane nano-domains following cellular interaction, on a single-raft and raft-subpopulation basis. Use of a biochemical approach to assess trafficking of native and post-translationally modified signaling receptors moving in and out of lipid rafts isolated in the absence of detergent will be robust and free of the confounding issues associated with the use of detergents. Deciphering changes in size and composition in the same set of immuno-isolated lipid raft populations is critical. The biochemical approaches for examining the role of lipid rafts in spatiotemporal signaling in CD4+ T cells can be remarkably robust, inasmuch as they have the potential for analysis of a complex series of interacting molecules in the signal transduction cascade. However, this reductionist approach has inherent limitations and needs to be complemented by dynamic cell imaging showing interactions of the many signaling proteins on the plasma membrane. The biochemical approaches using detergent-free lipid rafts, as well as the biophysical/dynamic cell imaging approaches currently underway, are essential for developing a thorough understanding of spatial and temporal regulation of cell signaling.
The data presented here suggest that antigen and its recognition by TCRαβ are not the primary mechanism for the creation of macrodomains on the membrane, since we find them to be formed in the absence of specific antigen recognition. It has long been recognized that T cells interact with antigen-presenting cells in two phases. The first step requires nonspecific adhesion involving interactions between the β2 integrin LFA-1 on T cells and its ligand, ICAM-1, expressed on antigen presenting cells [61]. In the second phase, the antigen receptor (TCRαβ) senses the antigen presented by the APC, a step that the initial nonspecific interactions help launch. Detachment of T cells from APCs occurs in the absence of recognition of an antigen, which opens up the opportunity to bind and sense the antigen on another APC. The data presented here suggest that during the first set of interactions between CD4+ T cells and APCs, the lipid rafts on T cells are spatially organized and coalesce. Previous reports have described antigen-independent immunological synapses between naïve CD4+ T cells and dendritic cells [62]. Functional consequences of the antigen-independent interaction range from tyrosine phosphorylation and a small calcium response to survival signals; it appears that these interactions allow survival of naïve T cells in vivo. However, the relationship between antigen-independent synapse formation and coalescence of lipid rafts during T cell - APC interactions needs to be elucidated. Further investigation needs to be carried out to understand the mechanism and functional importance of this early spatial reorganization of the plasma membrane. The extent of raft coalescence and the molecules that accumulate in it may depend on the source of the interacting CD4+ T cell and the degree of ligation of the antigen receptor and co-receptor [63]. In addition, the functional role of lipid rafts may not be the same in distinct subsets of differentiated CD4+ T cells. For example, activated Th1 and Th2 cells behave differently in their re-organization of lipid rafts: while the antigen receptor is easily recruited into the lipid rafts in Th1 cells, similar recruitment is not observed in activated Th2 cells [64]. Furthermore, it will be crucial to ascertain whether this re-organization reflects the underlying properties of the nanoscale assemblies that show additional interconnections when CD4+ T cells interact with antigen-presenting cells, as suggested by a recent report [65]. While antibodies to T cell surface proteins were used in our experiments to capture and detect isolated lipid rafts, it is possible that the captured coalesced rafts include some membrane belonging to the APC. We have not directly tested this idea. Future experiments in which antibodies directed against MHC class II proteins and anti-TCRαβ are used to capture and detect coalesced lipid rafts will be able to address this issue.
Conclusions
We conclude that lipid rafts on CD4+ T cell membranes coalesce to form larger structures after interacting with antigen presenting cells, even in the absence of a foreign antigen. Findings presented here indicate that lipid raft coalescence occurs during cellular interactions prior to sensing a foreign antigen.
"year": 2011,
"sha1": "7c14512b7ff6428ef2137bce61f46b6a4a24ed13",
"oa_license": "CCBY",
"oa_url": "https://biosignaling.biomedcentral.com/track/pdf/10.1186/1478-811X-9-31",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1a2f2395dc49a64226c3d429c92589d4ae408258",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
PRECIPITATION REACTION OF CLAVULANIC ACID: THERMODYNAMIC AND ELECTRONIC STUDY
Ana C. Granato (a,#,*), Edson B. Costa (b), Wagner F. D. Angelotti (c), Geoffroy R. P. Malpass (a,#), Marlei Barboza (a,#), Albérico B. F. da Silva (d) and Milan Trsic (d)

(a) Departamento de Engenharia Química, Universidade Federal do Triângulo Mineiro, 38025-180 Uberaba - MG, Brasil
(b) Escola de Aplicação da Universidade Federal do Pará, 66095-780 Belém - PA, Brasil
(c) Departamento de Matemática Aplicada, Universidade Federal do Triângulo Mineiro, 38025-180 Uberaba - MG, Brasil
(d) Instituto de Química de São Carlos, 13560-970 São Carlos - SP, Brasil
INTRODUCTION
Clavulanic acid (CA) is a β-lactam compound consisting of a β-lactam ring condensed to an oxazolidine ring; it is a secondary metabolite isolated from Streptomyces clavuligerus that inhibits most class A β-lactamases, has low activity against class C cephalosporinases, and is inactive against class B Zn²⁺ metalloenzymes [3,4]. Amongst the possible purification methodologies, the precipitation reaction is one of the best options to obtain CA. One procedure is to react CA with potassium 2-ethylhexanoate in the solvent-rich phase (i.e., after cell removal): the clarified broth is acidified to pH values between 2 and 3, CA is then extracted using organic solvents such as ethyl acetate, and the salt is formed [5,6,9-14]. Hirata et al. [14,15] carried out both of the precipitation reactions referred to above, i.e., the direct reaction between CA and potassium 2-ethylhexanoate and the indirect reaction between CA and t-octylamine. Both reactions were performed using the cultivation broth used to produce CA, with ethyl acetate as the solvent. The authors report that the reaction via the intermediate formed with t-octylamine presents higher selectivity than the direct reaction between CA and potassium 2-ethylhexanoate, because the direct reaction also precipitates large amounts of impurities together with potassium clavulanate. It was also observed that the reaction with t-octylamine, in addition to releasing potassium clavulanate of high purity, led to a higher percentage yield and high-quality crystals. These results indicated that the reaction with intermediate formation is the better candidate for reproduction on an industrial scale.
Based on the results reported by Hirata et al. [14,15], this study aims to calculate the theoretical thermodynamic and electronic properties involved in both reactions, to verify whether these parameters correlate with the reported experimental data. It is thus possible to define, from a thermodynamic point of view, which of the two options is more attractive. In addition, the results of the electronic studies open up research possibilities for other reagents, with the aim of understanding their impact on the efficiency of the process.
EXPERIMENTAL
In order to calculate the thermodynamic parameters, the standard molar enthalpy variation (∆H°, kcal/mol) and the standard molar Gibbs energy variation (∆G°, kcal/mol), the Zero Point Energy (ZPE) approximation methodology described by Ochterski [16] was used. In a previous study by the present authors, the Hartree-Fock-Roothaan (HF) method employing the 6-31G(d,p) basis set was demonstrated to be an adequate method to describe the clavulanic acid geometry [17]. The structures were also fully optimized using Density Functional Theory (DFT) with the B3LYP functional and the 6-31G(d,p) basis set, in order to take into account the electron correlation factor in the properties studied. All calculations were performed using the Gaussian 03 program [18]. In this methodology, frequency calculations are performed on the HF/6-31G(d,p) and DFT B3LYP/6-31G(d,p) optimized structures of the reagents and products, and the resulting ε0 + Hcorr and ε0 + Gcorr values are used to obtain the thermodynamic parameters, according to the following equations:

∆H° = Σ products (ε0 + Hcorr) − Σ reactants (ε0 + Hcorr)   (1)

∆G° = Σ products (ε0 + Gcorr) − Σ reactants (ε0 + Gcorr)   (2)

The electronic properties selected for evaluation were:
• The energies of the frontier orbitals HOMO (εHOMO) and LUMO (εLUMO): these descriptors are related to the electron-acid or electron-base character of a given compound [19].
• Absolute hardness (η): the resistance of the chemical potential to a change in the number of electrons.
• Electronic chemical potential (μ): a measure of the escaping tendency (or fugacity) of electrons from the atomic or molecular system.
• Absolute electronegativity (χ): a chemical property that describes the tendency of an atom or a functional group to attract electrons (or electron density) towards itself.
• Electrophilicity index (ω): a reactivity descriptor that allows a quantitative measure of the electrophilic nature of a molecule.
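As a compact illustration of the descriptors listed above, the global reactivity indices can be computed from the frontier-orbital energies using the standard conceptual-DFT working equations, η = (εLUMO − εHOMO)/2, μ = (εHOMO + εLUMO)/2 = −χ, and ω = μ²/2η. The sketch below assumes these textbook definitions (the paper itself does not restate them), and the sample orbital energies are placeholders rather than values from Table 3.

def reactivity_indices(e_homo_ev: float, e_lumo_ev: float) -> dict:
    """Global reactivity indices from frontier orbital energies (eV),
    using the standard conceptual-DFT working equations."""
    mu = (e_homo_ev + e_lumo_ev) / 2.0      # electronic chemical potential
    eta = (e_lumo_ev - e_homo_ev) / 2.0     # absolute hardness
    chi = -mu                               # absolute electronegativity
    omega = mu ** 2 / (2.0 * eta)           # electrophilicity index
    return {"mu": mu, "eta": eta, "chi": chi, "omega": omega}

# Placeholder orbital energies, not the paper's computed values.
print(reactivity_indices(e_homo_ev=-9.5, e_lumo_ev=1.2))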
To account for the solvent effect of ethyl acetate (the solvent used experimentally), single-point energy calculations were performed using the Polarized Continuum Model (PCM) [22] at both the HF/6-31G(d,p) and DFT B3LYP/6-31G(d,p) levels. For these calculations, the free GAMESS program was used [23], and the same electronic properties described above were evaluated.
Study of the theoretical thermodynamic properties
The reactions shown in Figure 1 were considered for this study. In the direct reaction, CA (1) reacts with potassium 2-ethylhexanoate (2) to produce potassium clavulanate (3) and 2-ethylhexanoic acid (4); this is a typical acid-base reaction, in which CA is the acid and 2-ethylhexanoate is the base. In the indirect reaction, CA (1) reacts with t-octylamine (5) to produce a stable intermediate (5'), which then reacts with potassium 2-ethylhexanoate (2) to produce potassium clavulanate (3), t-octylamine (5), and 2-ethylhexanoic acid (4); this, too, is a typical acid-base reaction, with the acids and bases indicated in Figure 1. The results obtained from the frequency calculations for the reagents and products of both reactions are shown in Table 1.
Following Ochterski's methodology [16] and applying the values obtained from the frequency calculations in equations (1) and (2), the ∆H° and ∆G° values in Table 2 were calculated.
The ∆G° value indicates whether a given reaction is thermodynamically spontaneous: a large negative ∆G° indicates that the reaction is product-favored and that the equilibrium constant of product formation is high. ∆G° and K are therefore related by equation (3):

∆G° = −RT ln K   (3)
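As a quick numerical illustration of equation (3), the sketch below converts a ∆G° value in kcal/mol (the unit used throughout this paper) into an equilibrium constant at 298.15 K; the example value of -5 kcal/mol is illustrative, not one of the paper's results.

import math

R_KCAL = 1.987204e-3  # gas constant, kcal mol^-1 K^-1

def equilibrium_constant(dg_kcal_mol: float, temp_k: float = 298.15) -> float:
    """K from standard Gibbs energy via Delta G = -RT ln K."""
    return math.exp(-dg_kcal_mol / (R_KCAL * temp_k))

# Illustrative value only: a reaction with Delta G = -5 kcal/mol.
print(f"K = {equilibrium_constant(-5.0):.3e}")  # ~4.6e+03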
Analysis of the ∆G° values in Table 2 shows that, for the calculations using Hartree-Fock/6-31G(d,p) and DFT B3LYP/6-31G(d,p) optimized structures, there is no difference in the calculated thermodynamic properties between the direct and indirect reactions. It can also be observed that there is no significant difference between the two methodologies employed. It was expected that the electron correlation factor, which is included in the DFT method, would produce an important difference in the thermodynamic properties, but this was not observed. This may be related to the formalism of Ochterski's methodology [16], which may not be appropriate for the systems studied here.
Quantum properties studied and solvent effect
To verify whether the reactivity indices correlate with the theoretical thermodynamic results, the aforementioned reactivity parameters were calculated for the Hartree-Fock/6-31G(d,p) and DFT B3LYP/6-31G(d,p) optimized structures. The energies of the frontier orbitals, εLUMO and εHOMO, and the reactivity indices χ, η, μ and ω, calculated with and without the solvent effect, are given in Table 3.
Analyzing only the left side of both the direct and the indirect reaction (Figure 1): in the first reaction, CA is the acid and potassium 2-ethylhexanoate is the base; in the indirect reaction, the intermediate formed by CA and t-octylamine is the acid and 2-ethylhexanoate is the base. The difference between the direct and indirect reactions is therefore the acid. When acid-base reactions with the same base are compared, the more product-favored reaction is the one with the strongest acid, and a compound with stronger acidic character tends to have higher χ, η and ω values, and lower εHOMO and µ values.
Considering all these aspects and analyzing the data shown in Table 3, it is observed that the acids formed by CA and the precipitation agents, compounds (5')-(15'), have higher χ, η and ω values and lower εHOMO and µ values than CA, compound (1), in all methodologies - Hartree-Fock 6-31G (d,p) and B3LYP 6-31G (d,p), with and without the solvent effect - used to calculate these reactivity indexes.
This study corroborates the studies of Hirata and coworkers, 14,15 where the authors concluded that the indirect reaction is appropriate for application as the last step in the CA fermentation broth purification process.It is also suggested that this reaction promotes purification without causing CA degradation, increases reaction stability without forming oils, colloids or incrustations, and allows for a broader operational range, thus favoring an industrial scale application.
CONCLUSIONS
Finally, it is possible to say that the quantum reactivity indexes show a direct correlation with the higher spontaneity of the indirect reaction for both the HF and DFT methods. From the values of the quantum reactivity indexes (χ, η, ω, εHOMO and µ), one characteristic is necessary for a higher percentage yield: a more acidic character of the compounds formed by CA and the precipitation agents (when compared to CA). As very consistent results were obtained in this work for the main and also for the other precipitation agents, in the future these theoretical properties could be useful for evaluating new precipitation agents. This would save time and, most importantly, reagents and equipment, improving the industrial process.
Figure 1. Representation of the mechanisms involved in the direct and indirect precipitation reactions of CA.
Table 1. Data obtained from the frequency calculations for both the direct and indirect reactions shown in Figure 1, for the Hartree-Fock 6-31G (d,p) and Density Functional Theory B3LYP 6-31G (d,p) optimized structures. Columns: Methodology (HF/6-31G (d,p)) and Methodology (DFT/6-31G (d,p)). H° = ε0+Hcorr is the sum of electronic and thermal enthalpies; G° = ε0+Gcorr is the sum of electronic and thermal free energies. The numbers in brackets with a prime (′) denote the intermediates formed by CA and the numbered precipitation agents. All values are in kcal mol-1.
Table 2. Thermodynamic parameters (∆H° and ∆G°) calculated for the direct and indirect reactions. All values are in kcal mol-1. | 2018-12-15T11:19:27.577Z | 2016-09-01T00:00:00.000 | {
"year": 2016,
"sha1": "74e60453c0e7520cb464d8de31e1cc17db288f96",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5935/0100-4042.20160099",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "74e60453c0e7520cb464d8de31e1cc17db288f96",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
5425502 | pes2o/s2orc | v3-fos-license | These feet were made for walking
New fossil footprints excavated at the famous Laetoli site in Tanzania suggest that our bipedal ancestors had a wide range of body sizes.
Walking on two hind limbs, or bipedalism, is one of the defining characteristics of the evolutionary lineage that gave rise to modern humans. Though fragments of fossilized bones suggest that this adaptation might date as far back as 7 million years ago (Zollikofer et al., 2005), this interpretation remains controversial. The earliest unequivocal evidence of bipedalism comes not from bones, but from footprints made some 3.66 million years ago and preserved at a site in Laetoli, Tanzania (Leakey and Hay, 1979). However, it is widely agreed that bipedalism most likely evolved in an ancestor whose brain was no bigger than that of a chimpanzee, and who had not yet started to make and use tools.
The footprints preserved at Laetoli are what are known as "trace fossils", because they are traces of behavior rather than the petrified remains of actual body parts. The footprints were formed when three of our distant hominin relatives - most likely members of the species Australopithecus afarensis (White and Suwa, 1987) - walked in the same direction across wet volcanic ash. Millions of years later, in 1976, their preserved footprints were discovered by British paleoanthropologist Mary Leakey and co-workers, and the prints were fully excavated by 1978. Since then, the scientific and public interest in the Laetoli footprints has been extraordinary. They are mentioned in hundreds, if not thousands, of scientific works, and a Google search for "Laetoli footprints" returns 52,600 hits. Now, in eLife, Marco Cherin of the University of Perugia and Sapienza University of Rome and colleagues - who are based at institutions in Tanzania and Italy - report the discovery of a second set of preserved footprints from Laetoli (Masao et al., 2016). These new trace fossils are the same age as the first ones, and were found at a site called "Site S", which is 150 meters south of "Site G", where the original discovery was made.
Cherin and colleagues - who include Fidelis Masao of the University of Dar es Salaam in Tanzania as first author - present captivating graphics and photographs of the new footprints and describe their setting (Figure 1). The tracks at both Site G and Site S are well preserved in the same hardened volcanic ash, known as the "Footprint Tuff", on the southern edge of the Serengeti Plains. It appears that the environment when the footprints were made was not unlike what is seen in this region today - a mix of bushland, woodland and grassland with a nearby forest along the river. Footprints from a rhinoceros, a giraffe, some prehistoric horses and guinea fowl were found at the site. However, the new hominin footprints are most definitely the star attractions. They were left by two individuals - referred to as S1 and S2 - who again were most likely A. afarensis.
Like the trace fossils at Site G, the newly found footprints follow a path that heads north-northwest. However, only the multiple footprints from S1 are especially informative; S2 is known from just a single print that is abnormal due to apparent slipping. When photographs of the S1 prints were carefully compared to casts of footprints from Site G, the two sets appeared to be similar in many respects. For example, the heel impressions are deep and oval, and the big toes are in line with the other toes.
So far, Masao et al. have only described the features of the prints and analyzed how weight was transferred through the foot in a qualitative manner that is consistent with the earlier interpretation of the footprints at Site G (Robbins, 1987). Although the big toe appears to be the longest digit, it was not always where most force was applied when the foot pushed off from the ground (as tends to be the case for modern humans). Instead, the deepest impression in some footprints (indicating the most force) occurred more to the side of the foot. Individual toes cannot be distinguished easily from the prints, but a clear ridge was formed across the footprint when the toes gripped the wet volcanic ash and pushed it backward.
At this stage of their analysis, Masao et al. have chosen not to weigh in on the debate over how similar the footprints are to those of modern humans. Previously, some have taken the footprints at Site G as early evidence that A. afarensis walked in a remarkably humanlike manner (e.g., Raichlen et al., 2010; Crompton et al., 2012). Others have contested this conclusion (e.g., Meldrum et al., 2011). In fact, a recent analysis strongly suggested that the Laetoli footprints were significantly different from those of a modern human walking barefoot, and actually in some ways more similar to chimpanzee footprints.
Masao et al. also report several important measurements including the length and width of the prints, the angle of gait, and the step and stride lengths. Based in part on these measurements, they predicted the weight and height (or body mass and stature) of the individuals who left the footprints at the two sites. The prints at Site S were most likely left by individuals who were taller and heavier than any of the three that left prints at Site G. Of the five individuals, the lightest one left footprints at Site G and likely weighed 28.5 kg, while the heaviest walked at Site S and is estimated to have weighed as much as 48.1 kg.
These estimates for body mass fit comfortably within the wide interval calculated for this species based upon fossils of its limb bones (Grabowski et al., 2015). The predicted maximum height for the S1 individual is a different matter and is surprisingly tall at roughly 165 cm. Masao et al. interpret their new data as indicating that different A. afarensis individuals might have had very different body sizes. Variation of this magnitude could imply big differences between males and females - a phenomenon referred to as "sexual dimorphism". However, this would only be the case if we assume that all the footprints are from adults, and not if younger individuals made some of the smaller ones.
Nevertheless, and as Masao et al. acknowledge, the height estimates depend on a previously reported relationship between foot length and stature (Dingwall et al., 2013). It is important to note that foot length has only been roughly estimated for this ancient species, and that the height estimates would change if a different foot length-to-stature ratio was used. For example, Homo floresiensis - a species of ancient hominid from Indonesia whose members have been commonly referred to as "hobbits" (Jungers et al., 2009) - had a different ratio, and if this was used instead, the estimates for the height of the tallest individual at Laetoli (S1) would shrink down to 132-148 cm. Furthermore, the shortest (called G1) would become even shorter at approximately 100 cm: a height that is more similar to that of "Lucy", the iconic skeleton of a female A. afarensis.
Looking back, 2016 has been a banner year for trace fossils in human evolution (e.g., Bennett et al., 2016; Liutkus-Pierce et al., 2016; Roach et al., 2016), and these new sets of footprints from Laetoli are a fitting capstone to the year. To judge by the profound scientific impact of the first set of Laetoli footprints, we can expect the new ones to figure prominently in future narratives of the origins of humans. They will likely stimulate new research and debate for years to come. | 2018-04-03T05:55:31.619Z | 2016-12-14T00:00:00.000 | {
"year": 2016,
"sha1": "e3c64630d4eea629b805ac28a9c91a91affa68e8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.22886",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e3c64630d4eea629b805ac28a9c91a91affa68e8",
"s2fieldsofstudy": [
"Geography",
"History"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
256143732 | pes2o/s2orc | v3-fos-license | Production of Genetically Modified Porcine Embryos via Lipofection of Zona-Pellucida-Intact Oocytes Using the CRISPR/Cas9 System
Simple Summary Genetically modified pigs are very useful thanks to their applications in basic research, biomedicine, and meat production. There are different methods for producing them, including cloning and the microinjection or electroporation of oocytes and zygotes. Easier techniques are being developed, such as lipofection, which involves the encapsulation of the CRISPR/Cas9 system into vesicles that are introduced into cells. We compared the embryo development and mutation rates associated with different conditions of lipofection treatment with the electroporation technique in zona-pellucida-intact porcine oocytes. We found that the lipofection treatment, once optimized, was as effective as the electroporation technique in terms of the embryo development and mutation rates. In addition, an increment in the concentration of the liposome–CRISPR/Cas9 complexes in the media had a detrimental effect on the embryo development parameters, which could indicate a possible toxic effect. The achievement of generating mutant embryos via lipofection without removing the zona pellucida could open up a new, easy, and cheap way of producing genetically modified pigs. Abstract The generation of genetically modified pigs has an important impact thanks to its applications in basic research, biomedicine, and meat production. Cloning was the first technique used for this production, although easier and cheaper methods were developed, such as the microinjection, electroporation, or lipofection of oocytes and zygotes. In this study, we analyzed the production of genetically modified embryos via lipofection of zona-pellucida-intact oocytes using LipofectamineTM CRISPRMAXTM Cas9 in comparison with the electroporation method. Two factors were evaluated: (i) the increment in the concentration of the lipofectamine–ribonucleoprotein complexes (LRNPC) (5% vs. 10%) and (ii) the concentration of ribonucleoprotein within the complexes (1xRNP vs. 2xRNP). We found that the increment in the concentration of the LRNPC had a detrimental effect on embryo development and a subsequent effect on the number of mutant embryos. The 5% group had a similar mutant blastocyst rate to the electroporation method (5.52% and 6.38%, respectively, p > 0.05). The increment in the concentration of the ribonucleoprotein inside the complexes had no effect on the blastocyst rate and mutation rate, with the mutant blastocyst rate being similar in both the 1xRNP and 2xRNP lipofection groups and the electroporation group (1.75%, 3.60%, and 3.57%, respectively, p > 0.05). Here, we showed that it is possible to produce knock-out embryos via lipofection of zona-pellucida-intact porcine oocytes with efficiencies similar to those of electroporation, although more optimization is needed, mainly in terms of the use of more efficient vesicles for encapsulation with different compositions.
Introduction
According to the Food and Agriculture Organization of the United Nations [1], after poultry, pigs (Sus scrofa) are the second most commonly consumed meat source in the world. Therefore, they are an extremely important species agriculturally. Furthermore, their physiological and anatomical similarities to humans make pigs a great model for biomedical purposes (recently reviewed by Navarro-Serna et al. [2]). For these reasons, genetically modified pigs are produced for a variety of applications, including the investigation of human diseases [3,4], xenotransplantation research [5,6], and the improvement of animal production [7,8].
Regarding agriculture, an important issue is the prevalence of different viral diseases that can result in severe economic losses. The most important one is porcine reproductive and respiratory syndrome (PRRS) [8][9][10]. The symptoms of this syndrome in pregnant sows include anorexia, late-term abortions, weak piglets, and delayed return to estrus. In piglets, PRRS causes diarrhea, respiratory disorders, and increased preweaning mortality [11,12]. These worldwide economic losses can be avoided via the production of genetically modified pigs that are resistant to this virus. CD163 is a membrane protein located in different subtypes of macrophages that is involved in the recognition of various ligands. This protein was identified as the fusion receptor for the PRRS virus [13], and its removal in CD163-KO pigs was found to make them resistant to the PRRS virus [14].
Somatic cell nuclear transfer (SCNT) has been used as the main technique by which genome-engineered pigs are produced, whereby a genetically modified cell is used as the donor nucleus. With the development of new restriction endonucleases, the most important one being the clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPRassociated gene 9 (Cas9) system, the efficiency of the generation of mutant cells for use in SCNT increased [15,16]. Furthermore, genetically modified pigs were also produced via direct modification of embryos with CRISPR/Cas9 using techniques such as microinjection [8] and electroporation [17]. However, SCNT and microinjection are difficult to perform, requiring specific equipment and trained personnel. While the electroporation technique is the easiest one to perform, an electroporator is needed, and the efficiency of generating KO porcine embryos is equivalent to that achieved via microinjection [18].
To avoid these disadvantages, lipofection may be an alternative method for producing genetically modified animals. Since its development in 1987 by Felgner et al. [19], lipofection has been a common transfection procedure used to introduce foreign molecules into cells. It involves the encapsulation of foreign molecules into liposomes formed from cationic lipids. These complexes are introduced into cells through a fusion and endocytosis process [20]. This procedure has been used successfully in many different types of porcine cells, including neural stem cells [21], epithelial cells [22], fibroblasts [23], granulosa cells [24], and even embryonic cells [25].
The CRISPR/Cas9 system can also be introduced into cells using the lipofection method, and it can be used to produce genetically modified animals. Currently, there are only a few reports from the same research group in Japan on the use of lipofection in zona pellucida (ZP)-free porcine oocytes and embryos [26][27][28].
The use of ZP-free oocytes/embryos comes with disadvantages regarding manipulation and viability, so the optimization of the process in ZP-intact oocytes and embryos is needed. Recently, lipofection in ZP-intact embryos has been reported, albeit without success, as no mutant embryos have been obtained [29].
For this reason, the aim of this work was to produce CD163 KO embryos via lipofection of ZP-intact oocytes, evaluate the efficiency of the method, and compare it with a standard electroporation method. Once the lipofection methodology is fully optimized, it will facilitate the generation of genetically edited embryos and animals for different models of interest in the biomedical and agriculture industries.
Ethical Issues
This study was developed in accordance with European Union Directive 2010/63/EU and the Spanish Policy for Animal Protection (RD 53/2013). The Ethics Committee of the University of Murcia and Murcia Regional Government for the use of genetically modified organisms approved this project (reference CBE 195/2019, CCEA 525/2019; reference 01/2016, activities A/ES/16/79, facilities A/ES/16/I-22 and I-23).
Culture Media Reagents
All the reagents were obtained from Sigma-Aldrich Quimica, S.A. (Madrid, Spain) unless otherwise indicated.
Single Guide RNA (sgRNA) Design
A new single guide sequence targeting exon 7 of the CD163 gene was designed using the software available from CNB-CSIC (https://bioinfogp.cnb.csic.es/tools/breakingcas, accessed on 10 January 2022) [30]: 5'-TACTTCAACACGACCAGAGCAGG (Figure 1). Both the sgRNAs and Cas9 protein were obtained from IDT (Integrated DNA Technologies, Coralville, IA, USA), and the RNP complex was prepared according to the manufacturer's instructions.
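As an aside, the 23-nt sequence above is written with its PAM: the trailing AGG is presumably the NGG motif required by SpCas9, so the 20-nt protospacer is TACTTCAACACGACCAGAGC. Purely as an illustration (this is not the Breaking-Cas tool the authors used), the sketch below shows how such a target site can be located on a sense-strand sequence; the flanking bases in the example fragment are invented.

```python
import re

GUIDE = "TACTTCAACACGACCAGAGC"  # 20-nt protospacer (PAM excluded)

def find_cas9_sites(dna: str, guide: str = GUIDE) -> list[int]:
    """Return start positions where the guide is immediately followed by an NGG PAM."""
    pattern = re.compile(guide + "[ACGT]GG")
    return [m.start() for m in pattern.finditer(dna.upper())]

# Invented fragment standing in for a stretch of CD163 exon 7.
fragment = "GGATT" + "TACTTCAACACGACCAGAGC" + "AGG" + "TCCAA"
print(find_cas9_sites(fragment))  # -> [5]
```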
In Vitro Maturation of Oocytes (IVM)
The cumulus-oocyte complexes (COCs) were obtained from gilt ovaries from a slaughterhouse and processed as previously described [31]. Briefly, the ovaries were transported in saline solution at 38 °C and washed once in 0.04% cetrimide solution and then in saline solution, both at 38 °C. Fluid from follicles of 3-6 mm in diameter was aspirated, and good quality COCs were selected, washed in Dulbecco's PBS (DPBS) with 0.2 g/L polyvinyl alcohol (PVA), and then in maturation medium supplemented with 10% porcine follicular fluid (NCSU37). After washing, groups of 50 COCs were cultured in 500 µL of NCSU37 supplemented with 40 ng/mL fibroblast growth factor 2 (FGF2), 20 ng/mL leukemia inhibitory factor (LIF), and 20 ng/mL insulin-like growth factor 1 (IGF1) [32] at 38.5 °C and 5% CO2. During the first 20-22 h, the media were supplied with 1 mM dibutyryl cAMP, 10 IU/mL eCG, and 10 IU/mL hCG, followed by 20-22 h in NCSU37 supplemented with FGF2, LIF, and IGF1 without dibutyryl cAMP, eCG, and hCG. After IVM, the COCs were denuded of cumulus cells via the addition of 50 µL hyaluronidase at 0.5% to each well of NCSU37 and gentle pipetting until most of the cumulus cells were removed [33].
Lipofection Treatment
Lipofection with Lipofectamine TM CRISPRMAX TM Cas9 (Thermo Fisher, Waltham, MA, USA) was performed at the same time as in vitro fertilization (IVF) according to the manufacturer's instructions.
Briefly, for the standard concentration, 12.5 µL of Opti-MEM I Reduced Serum Media (Thermo Fisher, Waltham, MA, USA) was well mixed with Cas9 protein (final concentration of 50 ng/µL), sgRNA (final concentration of 25 ng/µL), and 1.25 µL of Cas9 Plus TM Reagent. The solution was incubated for 5 min at room temperature (RT). Meanwhile, 12.5 µL of Opti-MEM I Reduced Serum Media was mixed with 0.75 µL of CRISPRMAX TM transfection reagent and incubated at RT for 3 min. After incubation, both solutions were well mixed and incubated at RT for 10-20 min. The resulting solution was added to each well of 500 µL medium during IVF.
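For orientation, the 5% v/v figure used later in the experimental design can be recovered from the volumes above; the sketch below does this bookkeeping under the assumption that the small Cas9/sgRNA volumes are negligible, so the numbers are approximate.

```python
def percent_v_v(added_ul: float, well_ul: float = 500.0) -> float:
    """Volume of complex mix added, as % v/v of the fertilization well."""
    return 100.0 * added_ul / well_ul

# Standard preparation: 12.5 + 1.25 uL (RNP tube) plus 12.5 + 0.75 uL (lipid tube).
mix_ul = 12.5 + 1.25 + 12.5 + 0.75  # = 27.0 uL
print(round(percent_v_v(mix_ul), 1))      # ~5.4 -> the "lipofected 5%" group
print(round(percent_v_v(2 * mix_ul), 1))  # ~10.8 -> the "lipofected 10%" group
```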
Electroporation Treatment
The electroporation treatment was performed as previously described [34]. Briefly, after washing in Opti-MEM I Reduced Serum Media, the oocytes were electroporated in a slide between 1 mm gap electrodes (45-0104, BTX, Harvard Apparatus, Holliston, MA, USA) connected to an ECM 830 Electroporation System (BTX, Harvard Apparatus, Holliston, MA, USA) using 4 pulses of 30 V at a 1 ms pulse duration and a 100 ms pulse interval with a concentration of Cas9 protein and sgRNA of 50 ng/µL and 25 ng/µL, respectively (the same concentrations as in the lipofection treatment).
In Vitro Fertilization (IVF) and Embryo Culture
The procedures for IVF were mainly the same as those described in previous work [31]. In vitro matured oocytes were transferred to IVF-TALP (TALP medium [35] supplemented with 1 mM sodium pyruvate, 0.3% BSA, and 50 µg/mL gentamycin). The oocytes were inseminated with frozen-thawed ejaculated spermatozoa from a tested boar after being selected using a swim-up procedure [33]. Briefly, a 0.25 mL straw of semen was thawed in a water bath for 30 s at 38 °C. The semen was diluted in 2 mL NaturARTsPIG sperm swim-up media (Embryocloud, Murcia, Spain) at 38 °C. The quality of the semen after thawing was evaluated and the total sperm motility was found to be >60%, the vitality >80%, and the morphoanomalies <10%.
For the sperm selection, the swim-up was performed as in previous work [33]. The sperm was diluted in IVF-TALP and the oocytes were inseminated at a final concentration of 3000 sperm/mL. The gametes were cocultured at 38.5 °C, 5% CO2, and 7% O2 for 18-20 h.
In Vitro Embryo Culture (EC)
After co-incubation in TALP for 18-20 h, the remaining cumulus cells and zona-attached sperm were removed from the putative zygotes via pipetting. The putative zygotes were cultured in NCSU23a (containing 0.5 mM sodium pyruvate and 5 mM sodium lactate) for 24 h and then in NCSU23b (containing 5.55 mM glucose) until 156 h after fertilization at 38.5 °C, 5% CO2, and 7% O2 [31]. After the NCSU23a culturing, the cleavage was evaluated and 2-4 cell embryos were transferred to NCSU23b in a different well than that containing putative zygotes that did not divide. On day 6.5, the blastocyst formation rate was evaluated and the blastocysts were collected.
Mutation Analysis
The blastocysts were washed in nuclease-free water and stored individually with a minimum volume (2-5 µL) at −20 °C until analysis. Genomic DNA extraction and PCR were carried out using a Phire Animal Tissue Direct PCR Kit (Thermo Fisher, Waltham, MA, USA) according to the kit's protocol. The 12.5 µL PCR reaction was performed with a primer concentration of 0.5 µM (forward: 5'-TTGTCTCCAGGGAAGGACAGG; reverse: 5'-AGAGTGAAAGGTGGGACTCG). The PCR cycling times were 5 min at 98 °C, followed by 40 cycles (denaturation for 5 s at 98 °C, annealing for 5 s at 64.3 °C, extension for 20 s at 72 °C) and final extension for 1 min at 72 °C.
The mutation detection on exon 7 was analyzed via the fluorescent PCR-capillary gel electrophoresis technique [33,36]. The PCR was performed using 6-FAM-labeled forward primers. After the PCR, the samples were processed as described previously [33] and the fluorescent PCR-capillary gel electrophoresis technique was performed using a GeneScan TM 500 LIZ Size Standard (Applied Biosystems, Thermo Fisher, Waltham, MA, USA) and a 3500 Genetic Analyzer (Applied Biosystems, Thermo Fisher, Waltham, MA, USA). The details of the instrumental protocol were similar to those previously described [36]. When the peak obtained via capillary electrophoresis was the same size as the control peak, the samples were considered to be WT, whereas other peaks of different sizes with respect to the control peak were considered to be KO. When more than two peaks were detected in a sample, it was evaluated as mosaic.
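The WT/KO/mosaic call just described is a simple decision rule over fragment-size peaks, and can be sketched as below; the peak sizes and the matching tolerance are our own illustrative assumptions, not values from the paper.

```python
def classify_blastocyst(sample_peaks, control_peak, tol=0.5):
    """Apply the rule from the text to capillary-electrophoresis peak sizes (bp).

    More than two peaks -> mosaic; any peak differing from the control -> KO;
    otherwise (control-sized peak only) -> WT.
    """
    if len(sample_peaks) > 2:
        return "mosaic"
    if any(abs(p - control_peak) > tol for p in sample_peaks):
        return "KO"
    return "WT"

print(classify_blastocyst([312.0], control_peak=312.0))                # WT
print(classify_blastocyst([312.0, 305.0], control_peak=312.0))         # KO (monoallelic)
print(classify_blastocyst([312.0, 305.0, 308.0], control_peak=312.0))  # mosaic
```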
Statistical Analysis
All the data analyses were performed using SYSTAT version 13 (Systat Software, San Jose, CA, USA). The normality of the variables was tested using the Shapiro-Wilk test. As all the variables were not normally distributed, they were analyzed via the non-parametrical Kruskal-Wallis test. When significant differences were detected (p < 0.05), the values were compared via the Conover-Iman test for pairwise comparisons.
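The authors ran these tests in SYSTAT; purely for illustration, the same pipeline can be sketched in Python with scipy and the third-party scikit-posthocs package (the replicate values below are made up).

```python
from scipy import stats
import scikit_posthocs as sp  # pip install scikit-posthocs

# Hypothetical blastocyst rates (%) per replicate for three groups.
groups = {
    "control":        [32.1, 28.4, 30.8, 29.5],
    "electroporated": [30.2, 27.9, 31.0, 28.8],
    "lipofected_10":  [12.4, 10.1, 14.3, 9.8],
}

# Normality check per group (Shapiro-Wilk).
for name, values in groups.items():
    print(name, stats.shapiro(values).pvalue)

# Non-parametric comparison across groups (Kruskal-Wallis).
h, p = stats.kruskal(*groups.values())
print("Kruskal-Wallis p =", p)

# If significant, pairwise Conover(-Iman) post hoc comparisons.
if p < 0.05:
    print(sp.posthoc_conover(list(groups.values())))
```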
Experimental Design
To evaluate the effect of the oocyte treatment on embryo development (cleavage and blastocyst rate), non-treated oocytes (control group) were compared with electroporated and lipofected oocytes before IVF.
To evaluate the efficiency of the lipofection method in relation to the gene edition, different conditions were tested, and the electroporation method was used as a control treatment for comparison in terms of the mutation (monoallelic and biallelic) and mosaicism rates.
First, we evaluated the importance of the final concentration of the lipofectamine + RNP complex (LRNP complex) in the culture media, comparing 5% vs. 10% v/v, with fixed values for the Cas9 protein (50 ng/µL) and sgRNA (25 ng/µL). Four replicates were performed, and between 290 and 360 oocytes were evaluated per group.
Second, we evaluated the concentration of RNP in the lipofection complex, comparing 50 ng/µL of Cas9 protein and 25 ng/µL of sgRNA with 100 ng/µL of Cas9 protein and 50 ng/µL of sgRNA. Three replicates were performed, and between 170 and 225 oocytes were evaluated per group.
The embryo development parameters that were evaluated included the cleavage rate (embryos that achieved the 2-cell stage at day 2 post-insemination [pi] per total oocytes) and blastocyst rate (embryos that achieved the blastocyst stage at day 6 post-insemination per total oocytes).
The mutation parameters that were evaluated included the mutation rate (mutant blastocysts per total blastocysts), mosaicism (mosaic blastocysts per mutant blastocysts), and overall efficiency (mutant blastocysts per total oocytes).
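Since these rates recur throughout the Results, a small helper makes the definitions above concrete (the counts in the example are hypothetical):

```python
def rates(oocytes: int, cleaved: int, blastocysts: int, mutant: int, mosaic: int) -> dict:
    """Compute the developmental and mutation parameters defined above, as percentages."""
    return {
        "cleavage_rate":      100.0 * cleaved / oocytes,
        "blastocyst_rate":    100.0 * blastocysts / oocytes,
        "mutation_rate":      100.0 * mutant / blastocysts if blastocysts else 0.0,
        "mosaicism":          100.0 * mosaic / mutant if mutant else 0.0,
        "overall_efficiency": 100.0 * mutant / oocytes,
    }

# Hypothetical group: 300 oocytes, 180 cleaved, 70 blastocysts, 14 mutant, 1 mosaic.
print(rates(300, 180, 70, 14, 1))
```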
Evaluation of Lipofectamine + RNP Complex Concentration
Four experimental groups were tested to evaluate the influence of the concentration of LRNP complexes (Table 1): control (without treatment), electroporated, lipofected 5% (5% v/v per well), and lipofected 10% (10% v/v per well). The developmental and mutation parameters were evaluated as described.
Concentration of Lipofectamine + RNP
The effect of the concentration of the LRNP complexes in the medium was evaluated. Regarding embryo development (Table 3), we observed an increase in the cleavage rate in the electroporated group compared with the other groups (p < 0.01); however, the rate of blastocysts per oocyte was similar for the control, electroporated, and lipofected 5% groups. Increasing the concentration of the lipofectamine + RNP complex in the medium from 5% to 10% (v/v) had a detrimental effect on embryo development, as the cleavage and blastocyst rates were significantly lower in the lipofected 10% group compared with the other groups.
Regarding the mutation parameters (Table 4), no significant differences were found between the groups in terms of the mutation rate (ranging from 18% to 33%) and the mosaicism rate (ranging from 0% to 6%) (p > 0.05). Mutant blastocysts were obtained in both groups of lipofected oocytes, showing that the method is effective in ZP-intact oocytes.
Regarding the overall efficiency, which was measured as the number of blastocysts with at least one mutant allele per 100 oocytes treated, the electroporated and lipofected 5% groups were similar (6.38% vs. 5.52%, p > 0.05, Table 4). Increasing the concentration of lipofectamine + RNP complex in the medium from 5% to 10% v/v reduced the efficiency. Although the mutation rate was similar, the lower blastocyst rate of the lipofection 10% group reduced the efficiency compared with the other groups (p < 0.01). Therefore, an LRNP complex concentration of 5% (volume) was used in the subsequent experiment.
Concentration of RNP
As in the previous experiment, the use of electroporation increased the rate of cleavage compared to the use of lipofection (p < 0.01, Table 5); however, there were no significant differences in the blastocyst rates between the groups (p = 0.25; Table 5). The increased concentration of RNP in the liposomes had no detrimental effect on embryo development. Regarding the mutation parameters (Table 6), the mutation rate and overall efficiency were similar among the groups, although they were considerably lower than in the previous experiment. Remarkably, there was an absence of mosaic embryos in all the groups, although the mutation rate and the number of mutant blastocysts obtained were both low.

Table 6. Mutation parameters and mosaicism rate in pig embryos after electroporation or lipofection with different concentrations of the CRISPR/Cas9 system. Variables were analyzed via the non-parametrical Kruskal-Wallis test.
Discussion
Since the first genetically modified pigs were produced [37,38], easier, cheaper, and more efficient technologies have been developed to produce such animals, starting with the SCNT method and advancing now with the electroporation method. Even though electroporation is the easiest procedure, it still needs a specific instrument and involves the handling of oocytes/zygotes outside of an incubator.
For this reason, the use of lipofectamine in oocytes and embryos is starting to be applied and optimized [28]. Lipofection is an efficient method that has been widely used in many kinds of somatic cells to introduce foreign molecules, including the CRISPR/Cas9 system [39,40].
To the best of our knowledge, only one research group has reported the use of the lipofection method in porcine ZP-free oocytes and ZP-free embryos with some success [26][27][28][29]. As the ZP is a significant physical barrier and appears to reduce the effectiveness of lipofectamine, these researchers treated ZP-free oocytes and ZP-free embryos and achieved a moderate mutation rate (ranging from 8% to 57%) with a high level of mosaicism (ranging from 87.5% to 100% of mutant embryos) [27]. Later, with further optimization of the system using ZP-free embryos, they generated mutant embryos for different genes and produced genetically modified piglets with a monoallelic mutation for the MSTN gene [26].
ZP-free oocytes are considered to be more difficult to manipulate, and their viability is lower than that of ZP-intact oocytes [26,41]. For this reason, the optimization of a lipofection method for ZP-intact oocytes is a desired objective. In this regard, Takebayashi et al. previously tried to use lipofection in ZP-intact embryos, but with no success [29]. They also explored the use of lipofection in combination with electroporation, and they obtained similar results to the use of electroporation alone [29].
In the present study, we produced genetically modified embryos via lipofection of ZP-intact oocytes during IVF. In comparison with the work of Takebayashi et al., this achievement could be due to differences between the protocols used [29]. First, a different reagent was used. Even when the commercial reagents are the same, changes to the reagent preparation could improve the efficiency. Furthermore, the stage at which the lipofection treatment was performed differed. We lipofected oocytes during IVF, whereas they lipofected putative zygotes at 10 h post-insemination.
In our case, the lipofectamine + RNP complexes were able to traverse the ZP of the mature oocytes and enter the cells. It has been shown that porcine ZP has pores that change in size depending on the stage of the oocyte/embryo [42]. In the case of in vitro matured porcine oocytes, these pores can be more than 800 nm in diameter [42]. The liposomes of CRISPRMAX lipofectamine are around 350 nm in diameter [43], so we propose that they are able to pass through the pores of the ZP.
In the first experiment, we observed reduced effectiveness in the lipofected 10% group, possibly due to the conditions being toxic. As all the components of the LRNP complex in this group were twice that of the same components in the 5% group, the toxicity may be due to the lipofectamine, the CRISPR/Cas9 system, or a combination of both components in the lipofectamine + RNP complexes. No differences were found between the control, electroporated, and lipofected 5% groups, suggesting that the lower concentration had no toxic effect. The electroporated group had a higher cleavage rate than the control group, as determined previously, probably because the oocytes were activated by the electric pulses inducing an influx of calcium from the pulsing medium [18]. In previous studies concerning the electroporation of porcine oocytes, we analyzed the IVF procedure, showing that some of the oocytes presented one pronucleus, although the majority of them, at a rate similar to the control, presented two pronuclei (male and female; data not shown). The difference in the cleavage rate could be due to parthenogenetic activation, although the blastocysts produced are mainly IVF blastocysts.
In the second experiment, only the concentration of the CRISPR/Cas9 system was increased, and no detrimental effect was observed in the 2× group compared with the 1× group. Taking this into account, the suggested toxicity observed in the first experiment may be due to the greater lipofectamine concentration. Hirata et al. did not find any toxic effect regarding the concentration of lipofectamine [26], although the reagent we used was different.
Regarding the mutation rates in the second experiment, we achieved a similar overall efficiency between the groups, although the rates for the lipofected 1xRNP (5%) and electroporated groups appeared to be somewhat lower than those achieved in the first experiment. The conditions used for the lipofected 1xRNP (5%) and electroporated groups were identical in both experiments. This variation could be due to different oocyte quality between the experiments, possibly due to seasonal and/or ambient temperature changes during the year [44,45], or to possible degradation of the sgRNA because of the freeze-thaw cycles it underwent due to its use on different dates. Furthermore, although not statistically significant, the mutation rate tended to increase when the concentration of RNP in the lipofection system was doubled. This has not been tested yet in embryos, but in somatic cells, it was found that a greater concentration of RNP in the complexes increased the mutation rate [46].
We found both concentrations of RNP in the complexes to be effective. The lower concentration of RNP may be preferable to reduce costs and minimize any detrimental effects on blastocyst development. On the other hand, the higher concentration of RNP may enhance the mutation rate.
It should be noted that no biallelic mutant embryos were obtained, as all the mutant embryos were monoallelic (with WT) or mosaic (with WT). This could be due to the contamination of the analyzed blastocysts with sperm DNA during the DNA extraction (as some sperm can still be attached to the ZP). In previous studies by Hirata et al., a similar result was obtained, as almost no biallelic mutations were found in the blastocysts and no mutant homozygote piglets were produced [26,27]. However, in future studies, the removal of the ZP before the mutation analysis should be performed to avoid sperm DNA contamination problems.
We observed that the production of genetically modified porcine embryos via lipofection of ZP-intact oocytes is possible and at efficiencies similar to those achieved with the more commonly used electroporation method. As only a few conditions of lipofection were tested, additional optimization of the process may further improve the efficiency of the method. The various parameters tested for the production of genetically modified embryos, including the time of lipofection, concentration of lipofection reagent, concentration of RNP, and stage of the oocyte/embryo [26][27][28][29], all achieved different mutation rates. Changes in the lipofection conditions may also increase the efficiency of the method.
Another parameter that can be changed is the permeability of the ZP using chemical treatments such as actinase E, as Namula et al. suggested when using electroporation [41]. In the field of gene therapy, several authors have evaluated different lipid carriers for transfection processes. The use of a lipid-peptide nanocomplex that consists of a mixture of lipids and peptides that complex electrostatically with nucleic acids could be a delivery system for the sgRNA and Cas protein [47] that achieves better results than lipofectamine. Furthermore, other vehicles have been studied for the transfection of CRISPR/Cas9 RNP into cells for gene targeting, such as carbon quantum dots [48].
Furthermore, more data regarding the quality of lipofected embryos should be analyzed in future work, such as the gene expression, cell number, proportions of the inner cell mass and trophectoderm of the blastocysts, and viability after embryo transfer.
Conclusions
Here, we have shown that it is possible to generate CD163 KO embryos via lipofection of ZP-intact porcine oocytes. Thus, lipofection is an alternative and convenient method for producing genetically modified embryos at mutation efficiencies similar to those achieved with the electroporation method. Further research and optimization of the lipofection conditions are needed. The application of lipofection to generate gene-edited pig embryos is an appealing option, as the technical resources and equipment required are less demanding than those for the alternative techniques of SCNT, microinjection, or electroporation.
"year": 2023,
"sha1": "ad680859d3e799b77bd5fab93a16b168474dddd5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/13/3/342/pdf?version=1674058458",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "abbf62ff943d7cc585835659db0596d961d71c9e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266132878 | pes2o/s2orc | v3-fos-license | Promoting Equity, Diversity, and Inclusion in Medicine: A Comprehensive Toolkit for Change in Radiology
This toolkit presents a comprehensive framework and recommendations intended to increase equity, diversity, and inclusion (EDI) within the medical field. We advocate for clear, comprehensive definitions and interpretations of fundamental EDI terms, laying the groundwork necessary for initiating and maintaining EDI initiatives. Furthermore, we offer a systematic approach to establishing EDI committees within medical departments, accentuating the pivotal role these committees play as they drive and steer EDI strategies. This toolkit also explores strategies tailored for the recruitment of a diverse workforce. This includes integral aspects such as developing inclusive job advertisements, implementing balanced search methods for candidates, conducting unbiased appraisals of applications, and structuring diverse hiring committees. The emphasis on these strategies not only augments the diversity within medical institutions but also sets the stage for a more holistic approach to healthcare delivery. Therefore, by adopting the recommended strategies and guidelines outlined in this framework, medical institutions and specifically radiology departments can foster an environment that embodies inclusivity and equity, thereby enhancing the quality of patient care and overall health outcomes.
Introduction
Equity, Diversity, and Inclusion, commonly referred to as EDI, is a crucial part of healthcare and medicine, a sector that deals with intricate and sensitive matters related to diverse patient populations and nuanced health issues. 1 With many patients and members of the medical community facing discrimination and inequities daily, it is imperative that EDI values and policies are incorporated into all aspects of medical practice.

Implementing EDI in medicine consists of ensuring equal access to quality healthcare for all, irrespective of race, gender, socioeconomic status, or other diverse backgrounds. 2 It also means ensuring that the field of medicine itself reflects this diversity, both in its workforce and in the research, treatment, and policies it undertakes. 3 However, existing data and research suggest that there is a deficit in the integration of these principles in medicine. 4 Such a deficit can be due to various factors, such as deeply ingrained systemic biases, lack of awareness or training, and resistance to change, making it a complex challenge to address. 5 Incorporating EDI within medicine and radiology involves a transformation of existing systems, practices, and perspectives to acknowledge and uphold the diverse needs of both patients and healthcare providers. 6 This journey toward inclusion may encounter challenges due to unconscious biases and a lack of education about EDI.
To facilitate this change, we recommend and provide a comprehensive toolkit framework to serve as a guide for medical institutions to incorporate EDI into their practices.A toolkit in this context refers to a set of resources, guidelines, strategies, and methods that assist medical institutions in promoting and implementing EDI principles effectively.
The toolkit provides a framework with a structured approach to addressing EDI, offers practical solutions to common challenges, and provides resources for continuous learning and improvement. The components of this toolkit include clear definitions of key terms to ensure a shared understanding, guidance on establishing EDI committees to drive and monitor change, strategies for recruiting and retaining a diverse workforce, and resources for assessing and enhancing current practices. 7 These resources may include self-assessment tools, case studies, templates, training materials, and more.

Implementing EDI involves understanding its contents, customizing it to the unique context of the institution, and implementing it systematically. It also involves ongoing monitoring and evaluation, with adjustments made as necessary. 8,9 By incorporating the recommendations and resources provided in this toolkit, medical institutions have the opportunity to create an environment that values diversity, challenges systemic barriers, and improves the quality of patient care (Figure 1).
Definitions
Language is a powerful tool that not only communicates our thoughts and ideas but also shapes our understanding of the world. In the context of Equity, Diversity, and Inclusion (EDI), having a clear, shared understanding of key terms is crucial. Definitions serve as a foundational building block for any dialogue or action concerning EDI. They create a common language that facilitates productive conversation, promotes shared understanding, and reduces misunderstandings. 10 In addressing EDI in medicine, it becomes even more imperative to ensure that the correct language is used. Medical professionals, patients, and policymakers may have different interpretations of these terms, and these varying interpretations can lead to confusion or even unintentional harm. 11 The toolkit, thus, provides definitions and explanations for a range of terms related to EDI. These terms include, but are not limited to, concepts like ableism, accessibility, cisgender, colonialism, diversity, equity, gender expression, heteronormativity, intersectionality, microaggression, systemic barriers, unconscious bias, and more.
Discrimination can often occur inadvertently through language. 12 Using incorrect or insensitive language, even unintentionally, can perpetuate stereotypes, marginalize individuals or groups, or create an unwelcoming environment. For instance, using a person's previous name (or "deadname") instead of their chosen name in a transgender individual's case can be distressing and disrespectful. 13 Moreover, addressing someone using inappropriate gender pronouns can cause discomfort and perpetuate exclusion. Thus, understanding and using correct language is not just about political correctness but also about respect, empathy, and creating a safe, inclusive environment. Recognizing individual identities and experiences and fostering an environment that respects and values these differences, without unconscious bias, is crucial. 14
Establishing an Equity, Diversity, and Inclusion Committee
The process of establishing an EDI committee begins with determining its purpose and considering the current context in which you are operating. It requires reflecting upon the existing landscape of equity, diversity, and inclusion in the workplace and identifying potential areas for enhancement. 15 It is critical to consider how the committee could support and enhance existing initiatives, as well as assess the overall receptivity toward EDI initiatives within the institution.
Composition of the Committee
The committee should consist of diverse individuals passionate about advancing EDI principles, representing varied races, ethnicities, genders, and other demographic and professional categories. A blend of perspectives and expertise can further enrich the committee's work. 16 The optimal committee size is typically between 5 and 15 members, but this can vary depending on the unit size and the need for facilitated discussion. 17 Recruitment strategies might involve advertisements, peer nominations, self-nominations, proactive identification of interested individuals, or promotions via e-newsletters. During the selection process, factors such as member accountability, budget considerations, reporting requirements, member roles, and the frequency of meetings need to be evaluated. 18,19 The chairperson of the committee should be carefully selected, bearing in mind their capacity to guide and motivate the team toward achieving the set EDI goals. Providing necessary support to the leaders is a pivotal step.

Additionally, a robust communication strategy should be developed that identifies key stakeholders and messages, provides for stakeholder feedback, outlines reporting procedures, and shares regular updates and progress reports. 20 The placement of the committee within the organizational structure should be strategic to facilitate reporting and resource allocation and to ensure commitment and accountability. This might require periodic reassessment based on the committee's evolving purpose and responsibilities.
Decision-making authority and processes are other key considerations. The committee should have adequate authority to make and implement decisions, even though these often require approval from a higher body. It's essential to recognize the power dynamics within the committee to create a safe space for all members to voice their thoughts and opinions.
Accountability measures must be established, such as regular reporting to leadership and the department, sharing results with the local Radiology community, and possibly creating budgetary reports if department funding is involved. These measures help ensure that the committee's work is transparent, and its progress is continuously monitored.
Planning meetings thoughtfully, ensuring accessibility, and formulating clear goals and objectives are all part of the committee's tasks. Progress tracking and periodic evaluations help to monitor the effectiveness of the committee's work and expand diversity initiatives as needed.
Goals of the EDI Committee
While the goals of the EDI committee will be institution-specific, potential areas EDI committees could work to improve include: ensuring institutional actions are in alignment with EDI values, securing adequate funding for EDI initiatives, combating misconceptions about EDI work, scheduling, preventing faculty burnout, and maintaining leadership support. The EDI committee can also advocate for more wellness and resilience initiatives and more education on EDI topics. The committee's scope and mandate should be wide-ranging, with the aim of breaking down organizational barriers, increasing visibility for marginalized members, and integrating EDI considerations into decision-making processes. These can include strategic planning, policy revision, diversity data collection, initiation of educational programs, addressing discrimination and human rights concerns, and boosting EDI competencies among staff. The EDI committee should establish a formal process for managing complaints related to equity, diversity, and inclusion. It is imperative to clearly communicate the process to all members and provide contact information for addressing concerns. Additionally, the committee should address complaints promptly, maintain confidentiality, support complainants, and prevent retaliation.
Recruiting a Diverse Workforce
Advertisements should be disseminated widely across multiple platforms [22][23][24], using gender-neutral language and focusing on essential qualifications with an emphasis on EDI experience. The organization's commitment to equity, diversity, and inclusion should be clearly stated to encourage applications from underrepresented groups.
Maintaining diversity within the search committee is crucial. Additionally, ensuring that members of the search committee are qualified and are committed to equitable and inclusive practices is imperative.
Representation from different seniority levels and demographic backgrounds should be a priority. Institutions should provide adequate compensation and allocate time for committee members to participate in committees, along with mandatory EDI training. Recognizing and compensating for the labour involved in EDI work is also crucial, as it often falls on faculty, staff, and trainees with existing commitments. This can be accomplished in a similar manner as other administrative committees, by providing honoraria, public recognition, or other forms of compensation. 23 The hiring committee should be diverse and include members with EDI expertise and representation from different ranks. The committee should provide mandatory education and training around EDI topics to committee members, enabling them to recognize and manage conflicts of interest.
Strategies for attracting and recruiting diverse talent should be multifaceted. Hiring committees should advertise widely across platforms, maintain a list of potential candidates, utilize social media and professional conferences, and encourage community involvement.
Collecting self-identification data to gain insights into workforce composition and to identify areas for improvement is valuable. Respect privacy and confidentiality, communicate the purposes and benefits to employees, and encourage participation.
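To make the privacy point concrete, a common safeguard when reporting self-identification data is to publish only aggregated counts and to suppress small cells so individuals cannot be re-identified. The sketch below illustrates that idea; the categories and the threshold of 5 are illustrative assumptions, not a prescribed standard.

```python
from collections import Counter

def aggregate_self_id(responses, min_cell=5):
    """Count self-identification responses, suppressing categories below min_cell."""
    counts = Counter(responses)
    return {group: (n if n >= min_cell else f"<{min_cell} (suppressed)")
            for group, n in counts.items()}

# Illustrative responses from an anonymous staff questionnaire.
survey = ["group A"] * 12 + ["group B"] * 7 + ["group C"] * 2
print(aggregate_self_id(survey))
# -> {'group A': 12, 'group B': 7, 'group C': '<5 (suppressed)'}
```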
Hiring committees should structure interviews to ensure fairness and eliminate biases, as well as prioritize accessibility and provide key information in advance to all candidates. Making hiring decisions based on objective assessment, prioritizing strategic hiring to address underrepresentation, and considering individual circumstances are essential for a fair evaluation.
Retaining and promoting diverse staff requires a commitment to equity, diversity, and inclusion. 24 Clinical teams with diverse backgrounds may be more effective in tackling the observed health outcome differences among specific racial and ethnic patient groups. Their interest in exploring the influence of non-medical factors on health, such as social determinants, plays a role in this. As a result, diverse groups in academic medicine can enhance the educational framework, offering in-depth and important information on a range of patient demographics and biomedical issues.
Establishing and reviewing faculty evaluation and promotion guidelines every year with EDI oversight, and implementing mentoring programs for new faculty, can help in addressing systemic barriers and contribute to creating a healthy work environment. Consider equity, diversity, and inclusion principles in faculty awards, nominations, and academic promotion committees.
Institutional support is crucial for promoting diversity and inclusion. Institutions should provide strong support to all chairholders, address dual career issues, provide equitable support for underrepresented groups, and incorporate EDI into organizational goals and strategic planning. Establishing policies to address hate speech, violence, harassment, and discrimination against underrepresented groups is also crucial, and these policies should be reviewed in consultation with EDI committees yearly.
Conducting periodic audits of policies and hiring practices is essential to ensure equitable access to career development opportunities for professionals from underrepresented groups.
EDI Practices in Radiology
There are certain practices that can be implemented in Radiology to increase EDI. 25 Radiologists should integrate an understanding of patient diversity into their practice, both to better optimize individual interpretation and recommendations and to avoid unnecessary bias.
Consideration of gender differences is also pivotal. Notably, the manifestation of diseases can differ between male and female patients, which in turn might impact imaging outcomes. Furthermore, with the growing recognition of transgender patients' healthcare needs, radiologists need to provide targeted leadership for these communities, especially for the provision of exams where gender presentations may not be congruent with expected anatomy, for example, mammography for trans masculine people and prostate imaging for trans feminine people. 26 Radiology departments need to implement protocols and systems that guarantee respectful and safe care. Such protocols should include using preferred pronouns, handling discussions about anatomy sensitively, and understanding the impacts of hormone therapy on imaging findings. 27 Additionally, after consultation with transgender patients, EMR systems that allow for the correct identification of transgender people with an accurate anatomical organ inventory may be a key tool to help ensure proper evaluation and diagnoses in medical imaging. 28

A critical focus should be on strengthening the training of radiologists. As Patel and Parikh 29 point out, cultural competency is a fundamental skill for radiologists. According to Goldberg et al., imaging guidelines are inconsistently applied across different racial groups, as demonstrated by numerous studies. For example, Black patients, in comparison to White patients, are less likely to receive guideline-based follow-up for incidental pulmonary nodules. 30 Medical education should include comprehensive training in understanding and addressing health disparities and stereotypes, appreciating cultural diversity, and effectively communicating with patients with limited English proficiency. Obstacles such as socioeconomic constraints, language barriers, and physical disabilities can limit access to imaging. Therefore, healthcare institutions need to offer services such as interpretation, accessible facilities, and financial aid, to ensure these barriers are minimized or eliminated.
Implementing Tools Into EDI Initiatives
The strategic use of tools is vital in promoting and monitoring progress in equity, diversity, and inclusion (EDI) within medical departments. These tools, which could be digital platforms, surveys, assessment frameworks, or data analytics applications, give an objective overview of the current EDI status within the organization. They help quantify and track the impact of EDI initiatives, providing a clear picture of what is working and what requires adjustment. They can provide practical guidance on addressing issues such as microaggressions, stereotypes, and discriminatory practices.
Tools like Implicit Association Tests (IAT) can be particularly useful in identifying unconscious biases in personnel. 31 Unconscious biases, if not addressed, can perpetuate discrimination and hinder the practice of equitable patient care and staff interactions. By using the IAT, institutions can tailor subsequent training and development activities to tackle identified biases.
Data analytics applications can objectively assess the current EDI state within the radiology department, including instances of discrimination. By tracking representation at various levels, monitoring the demographics of patients served, and measuring the impact of EDI initiatives, these tools provide a clearer picture of existing inequalities and the effectiveness of efforts to combat them.
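As a minimal sketch of the kind of summary such applications produce — assuming a hypothetical staff roster with self-reported demographic fields (all column names and values below are illustrative, not from any real system) — representation at each career level can be tabulated in a few lines of Python:

```python
import pandas as pd

# Hypothetical roster: one row per staff member, with a career level and a
# self-reported demographic group (illustrative values only).
roster = pd.DataFrame({
    "level": ["resident", "resident", "resident", "attending", "attending", "chair"],
    "group": ["A", "B", "A", "A", "A", "A"],
})

# Count staff in each (level, group) cell, then convert to within-level shares
# so representation can be compared across career stages.
counts = roster.groupby(["level", "group"]).size().rename("n").reset_index()
counts["share"] = counts["n"] / counts.groupby("level")["n"].transform("sum")
print(counts)
```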
These tools aid in examining organizational policies, procedures, and training programs, allowing for the identification of potential biases or systemic barriers that may hinder EDI. They can also highlight areas of success, facilitating the sharing of best practices within the organization. 32 Moreover, leadership assessment tools can help gauge leaders' understanding and commitment to EDI, their effectiveness in fostering an inclusive culture, and their capacity to address the challenges faced by underrepresented groups. 32 These insights are useful for guiding leadership development programs and enhancing the overall effectiveness of the organization's leadership in promoting EDI.
By leveraging these tools, institutions can ensure that their EDI efforts are data-driven, targeted, and effective, which is essential for fostering an inclusive culture and achieving the goals of diversity and inclusion.
Utilizing Resources to Combat Inequities in Radiology
The appropriate use of resources is essential for fostering EDI within radiology departments and addressing discriminatory practices. Using resources from the Radiological Society of North America (RSNA) and the American College of Radiology (ACR) can provide guides on best practices, webinars, mentorship programs, and training modules focused on cultural competency and unconscious bias. 33,34 Unconscious bias training is critical in identifying and mitigating biases that could perpetuate discrimination in radiology practices, from patient care to hiring decisions. Raising awareness about these biases and providing strategies to counter them are important steps toward promoting fairness and equal opportunity.
Integrating practices from inclusive recruitment resources, such as the ACR's Diversifying the Radiology Profession guide, can help in attracting and retaining a diverse talent pool. 35 These resources offer guidance on crafting inclusive job advertisements, developing unbiased selection criteria, and creating mentorship programs to support diverse radiologists, thereby actively combating discriminatory hiring practices.
By using these tools and resources, radiology departments can ensure their EDI efforts are data-driven, targeted, and effective.
Effectively Using Resources in EDI Initiatives
The availability and proper utilization of resources are fundamental for fostering EDI in medical departments. These resources, which may come in the form of guides, handbooks, training modules, and digital content, provide practical guidance and support for implementing and maintaining EDI initiatives. 36 We recommend an inclusive language guide to help communicate in ways that are respectful and sensitive to all identities and experiences. It promotes an environment where every individual feels seen, heard, and valued, which can foster a sense of belonging among staff and contribute to higher job satisfaction and productivity.
Bias mitigation resources are essential for addressing unconscious biases that could skew decision-making processes and create inequities. 37 By raising awareness of these biases and offering strategies to counteract them, these resources promote fairness and equal opportunity within the organization.
Inclusive recruitment resources support fair and consistent hiring practices. 39,40 By ensuring that all applicants, regardless of their backgrounds, have equal access and opportunity to be considered for roles, these resources help the organization build a diverse and inclusive workforce that better reflects and serves the community. 41 By leveraging these resources, medical departments can continually enhance their understanding and application of EDI principles, helping to create a more inclusive, equitable, and productive work environment.
Conclusion
The promotion of equity, diversity, and inclusion (EDI) in the healthcare sector is integral in ensuring an inclusive, equitable, and robust healthcare system that caters to the needs of diverse patient populations. This article presents a comprehensive framework for a toolkit that provides essential resources and strategies that empower medical departments to actively instill a culture of EDI.
This toolkit framework offers recommendations for the formation of EDI committees within medical institutions, asserting the importance of these committees as the catalyst for driving EDI initiatives. It also provides robust strategies for attracting and retaining a diverse workforce, thereby fostering a medical environment that is reflective of the diverse communities it serves.
Figure 1 .
Figure 1. A summary of elements to consider when incorporating an EDI framework into practice. | 2023-12-10T16:11:14.857Z | 2023-12-08T00:00:00.000 | {
"year": 2024,
"sha1": "9b7a5a968e2d74f6a3ae6cf3054f27d64da84a66",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/08465371231214232",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "3a1aa93b0114515a381a6813b8ade7596587e34a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265232103 | pes2o/s2orc | v3-fos-license | Integrase Defective Lentiviral Vector Promoter Impacts Transgene Expression in Target Cells and Magnitude of Vector-Induced Immune Responses
Integrase defective lentiviral vectors (IDLVs) are a promising vaccine delivery platform given their ability to induce high magnitude and durable antigen-specific immune responses. IDLVs based on the simian immunodeficiency virus (SIV) are significantly more efficient at transducing human and simian dendritic cells (DCs) compared to HIV-based vectors, resulting in a higher expansion of antigen-specific CD8+ T cells. Additionally, IDLV persistence and continuous antigen expression in muscle cells at the injection site contributes to the durability of the vaccine-induced immune responses. Here, to further optimize transgene expression levels in both DCs and muscle cells, we generated ten novel lentiviral vectors (LVs) expressing green fluorescent protein (GFP) under different hybrid promoters. Our data show that three of the tested hybrid promoters resulted in the highest transgene expression levels in mouse DCs, monkey DCs and monkey muscle cells. We then used the three LVs with the highest in vitro transgene expression levels to immunize BALB/c mice and observed high magnitude T cell responses at 3 months post-prime. Our study demonstrates that the choice of the vector promoter influences antigen expression levels in target cells and the ensuing magnitude of T cell responses in vivo.
Introduction
Lentiviral vectors (LVs) are efficient tools for expressing genes of interest in diverse cell types in vitro and in vivo. Lentiviral vectors offer several advantages as gene therapy tools and vaccine platforms, including an ability to transduce both dividing and non-dividing cells, a relatively large cargo capacity, a variety of alternative envelope proteins for virion pseudotyping, an absence of pre-existing host immunity and an ability to maintain persistent gene expression [1][2][3][4][5].
While lentiviral vectors show a lower risk of genotoxicity compared to other retroviral vectors, integrase defective lentiviral vectors (IDLVs) have been developed to further avoid risks associated with insertional mutagenesis, especially when using this platform as a vaccine. These viral vectors have been mutated in long terminal repeats (LTRs), the packaging signal and the integrase gene to render them self-inactivating, non-replicating and non-integrating. IDLV persists in the target cell in an episomal form and can express the encoded gene for the lifetime of the cell. Our group developed both HIV- and SIV-based IDLVs to deliver a broad range of proteins for the induction of durable antigen-specific immune responses in both mice and non-human primates (NHPs) [6][7][8][9][10][11][12][13]. IDLVs under development for vaccine delivery can offer significant advantages in addressing the limitations shown by other vaccine platforms. In addition to their ability to efficiently transduce both dividing and non-dividing cells, IDLVs stimulate a potent and durable antigen-specific immune response and are not limited by pre-existing anti-vector immunity [6,14,15]. In extensive safety studies by our group and others, no replication competent lentiviruses (RCLs) were detected after IDLV immunization [9,12,13]. We have shown that a single intramuscular (IM) injection with an SIV-based IDLV expressing an HIV-1 Env under the ubiquitous cytomegalovirus (CMV) promoter induced broad and sustained antigen-specific immune responses for up to 1 year post-injection in NHPs [9] and stimulated antibody affinity maturation [12]. Additionally, immunization of NHPs with IDLV induced higher-magnitude and more durable binding and neutralizing Ab responses compared to the other vaccine regimens, including just protein and DNA +/− protein [13]. The long-term immunity induced by IDLV may be ascribed to the persistence of IDLV in vivo. Indeed, the retro-transcribed episomal vector DNA can still be detected in the muscle of immunized NHPs at 6 months post-injection [12] and, in mice, transgene expression was detected up to 3 months post-injection in muscle tissue [12,16], albeit at lower levels compared to 3 days post-injection, suggesting that IDLV can provide persistent transgene expression at the injection site. The induction of durable protective immune responses is a desirable feature for any vaccine, eliminating the need to perform repeated immunizations every few months. Given the critical role of dendritic cells (DCs) in the induction of immune responses and of muscle cells in the maintenance of an antigen reservoir following IDLV injection, we generated and tested ten novel SIV-based lentiviral vector constructs expressing green fluorescent protein (GFP) as a model antigen to readily assess expression and immunogenicity under different hybrid promoters to further optimize transgene expression levels in both DCs and muscle cells. Transgene expression levels from the novel LVs were compared to those of LVs expressing GFP under the commonly used CMV or CMV early enhancer/chicken β actin (CAG) promoters in both mouse- and rhesus-macaque-derived DCs as well as in monkey muscle cells. To evaluate whether the high transgene expression levels observed in in vitro-transduced DCs and muscle cells translated to a high magnitude of immune responses in vivo, we used three of the novel IDLVs expressing GFP to immunize BALB/c mice and measured T cell responses 3 months post-immunization. Our results show that two of our novel SIV-based IDLVs
that achieved higher transgene expression levels in in vitro-transduced DCs and muscle cells also induced a high magnitude of T cell responses compared to the IDLV expressing the transgene under the commonly used CMV promoter.
Plasmid Construction and Lentiviral Vector Production
The hybrid promoters (PrDrive1-10) shown in Table 1 were cloned into an SIV-based lentiviral transfer vector expressing GFP [8]. For vector production, the newly generated plasmids were co-transfected together with either the integrase competent packaging plasmid (pAd-SIV-3+) for the in vitro studies, or the integrase defective packaging plasmid (pAd-SIV-D64V) for the in vivo studies, and the pseudotyping plasmid expressing the vesicular stomatitis virus glycoprotein (VSV-G) of the Indiana serotype (pVSV.G IND) into 293T Lenti-X cells as previously described [12]. Briefly, the human embryonic kidney 293T Lenti-X cells (Clontech Laboratories, Mountain View, CA, USA) were cultured in Dulbecco's Modified Eagle Medium (DMEM) (Thermo Fisher Scientific, Waltham, MA, USA, Cat# 11965092) containing 10% fetal bovine serum (FBS) (GE Healthcare Life Sciences, HyClone Laboratories, South Logan, UT, USA) and 100 units/mL of penicillin-streptomycin (PS) (Thermo Fisher Scientific, Waltham, MA, USA, Cat# 15140-122). A total of 3.5 × 10⁶ 293T Lenti-X cells were seeded on 100 mm diameter Petri dishes (Corning, Corning, NY, USA, Cat# 430167) and transfected with 8 µg per plate of a plasmid mixture containing transfer vector, packaging plasmid and VSV.G plasmid in a 2:4:2 ratio, using the JetPrime transfection kit (Polyplus Transfection, Illkirch, France, Cat# 55-134) following the manufacturer's recommendations. At 48 and 72 h post-transfection, culture supernatants were collected, and cellular debris was removed by low-speed centrifugation and filtration through a 0.45 µm pore size filter unit (Nalgene, Rochester, NY, USA, Cat# 125-0045). Filtered supernatants were concentrated by ultracentrifugation for 2 h at 23,000 RPM on a 20% sucrose cushion. Vector particles were resuspended in 1× phosphate-buffered saline (PBS) and stored at −80 °C until further use. LV and IDLV stocks were validated and titered by flow cytometry (BD Biosciences, Franklin Lakes, NJ, USA) and by a colorimetric reverse transcriptase (RT) assay (Roche, Basel, Switzerland, Cat# 11468120910) as previously described [9,17]. Briefly, 5 × 10⁴ 293T Lenti-X cells were seeded on 12-well plates and transduced the following day with serial dilutions of either LV-GFP or IDLV-GFP stock. At 72 h post-transduction, the percentage of GFP positive cells was assessed by flow cytometry and the vector titers were calculated using the following formula: transducing unit (TU) = dilution factor × number of cells per well × percentage of GFP positive cells (expressed in decimals).
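The titration formula above maps directly onto code. The sketch below assumes, as the protocol implies, that each titration well received a fixed volume of diluted stock, so the result is TU per that transduction volume; the helper for MOI-based dosing is an added convenience, not part of the published method:

```python
def titer_tu(dilution_factor, cells_per_well, frac_gfp_positive):
    """Vector titer from a flow-cytometry titration, per the formula in the
    text: TU = dilution factor x cells per well x fraction of GFP+ cells
    (the percentage expressed as a decimal)."""
    return dilution_factor * cells_per_well * frac_gfp_positive

def stock_volume_for_moi(target_moi, n_cells, titer_tu_per_ml):
    """Volume (mL) of vector stock needed to transduce n_cells at the
    desired multiplicity of infection (MOI = TU per cell)."""
    return target_moi * n_cells / titer_tu_per_ml

# Example: 5e4 cells/well, a 1:1000 dilution and 20% GFP+ cells at 72 h give
# 1e7 TU. Assuming each titration well received 1 mL of diluted stock, TU per
# well equals TU/mL, so dosing 1.4e4 DCs at MOI 4 needs 5.6e-3 mL of stock.
tu = titer_tu(1_000, 5e4, 0.20)
print(tu, stock_volume_for_moi(4, 1.4e4, tu))
```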
LV Transduction of Human-Monocyte-Derived Dendritic Cells
CD14+ monocytes were isolated from human peripheral blood mononuclear cells (PBMC) using magnetic separation of human CD14 microbeads (Miltenyi Biotech, Bergisch Gladbach, North Rhine-Westphalia, Germany, Cat# 130-050-201; MACS cell separation columns and separator, Cat# 130-042-401 and 130-042-302). Healthy donor PBMCs were purchased from the Gulf Coast Regional Blood Center (Houston, TX, USA). Monocytes were differentiated into DCs using cRPMI containing 100 ng/mL human GM-CSF (Peprotech, Cranbury, NJ, USA, Cat# 300-25) and 100 ng/mL human IL-4 (Peprotech, Cranbury, NJ, USA, Cat# 200-04) on a 100 mm diameter plate (Corning, Corning, NY, USA, Cat# 430167). cRPMI supplemented with the previously mentioned cytokines was added to the cells 3 days post-isolation and every 2 days for the remainder of the experiment. Differentiation into DCs was verified by flow cytometry (BD Biosciences, Franklin Lakes, NJ, USA) following incubation of the cells with a PE-labeled mouse anti-human CD1a antibody (BD Biosciences, Franklin Lakes, NJ, USA, Cat# 561754). At 7 days post-isolation, 14,000 DCs were plated in each well of a 24-well plate (Corning, Corning, NY, USA, Cat# 353047) and transduced with each of the LVs listed in Table 1 at an MOI of 4 (based on 293T Lenti-X titers) in duplicate. Half of the medium (cRPMI+cytokines) was changed the morning after transduction. GFP expression was analyzed by flow cytometry (BD Biosciences, Franklin Lakes, NJ, USA) 3 and 7 days post-transduction (Supplementary Figure S4).
LV Transduction of Monkey Skeletal Muscle Cells
Cynomolgus macaque skeletal muscle cells (Cell Biologics, Chicago, IL, USA, Cat# MK-6167) were maintained in a complete smooth muscle-cell medium (Cell Biologics, Cat# M2268). A total of 25,000 cells were seeded in each well of a 12-well plate (Corning, Corning, NY, USA, Cat# 353043) previously coated with 0.1% gelatin (Sigma-Aldrich, St. Louis, MO, USA, Cat# G1393). Cells were then transduced with each of the LVs listed in Table 1 at an MOI of 0.5 (based on 293T Lenti-X titers) in duplicate. The medium was changed the morning following transduction. GFP expression was analyzed by flow cytometry (BD Biosciences, Franklin Lakes, NJ, USA) and fluorescence microscopy (ZEISS Axiovert A1 inverted fluorescence microscope) 3 and 7 days post-transduction (Supplementary Figure S5).
Mouse Immunization Study
Five groups of 5 BALB/c mice (2 male, 3 female) were immunized once intramuscularly with 50 ng of RT equivalent (corresponding to 5 × 10⁶ transducing units (TU)) of IDLVs expressing GFP under promoters PrDrive2, 6 and 8 and CMV (Groups 1-4) or saline (Group 5). Twelve weeks post-immunization, the mice were euthanized and spleens were recovered for analysis of cellular immune responses. Splenocytes were cryopreserved and stored in liquid nitrogen until assayed.
IFN-γ ELISpot Assay
For the assay, 96-well plates were coated overnight with 5 µg/mL of purified anti-mouse interferon-γ (IFN-γ) (BD Biosciences, Franklin Lakes, NJ, USA, Cat# 551083) diluted in D-PBS. Plates were washed once with cRPMI, and then blocked for 2 h at room temperature using 200 µL per well of cRPMI. After blocking, the plates were aspirated and 200K splenocytes per well were added and stimulated with either GFP peptide (H-2d-restricted 9-mer peptide HYLSTQSAL, 4 µg/mL) (ProImmune UK, Oxford, UK, Cat# P198-0A) or the antigen-independent mitogen concanavalin A (10 µg/mL) (Sigma-Aldrich, Darmstadt, Germany, Cat# C0412-5MG) in cRPMI. Plates were incubated overnight. The following day, plates were washed 2 times with deionized water and 3 times with PBS containing 0.05% of Tween-20 (Sigma-Aldrich, Darmstadt, Germany, Cat# P1379) and incubated with 2.5 µg/mL biotinylated anti-mouse IFN-γ (BD Biosciences, Franklin Lakes, NJ, USA, Cat# 551083) for 2 h at room temperature. Subsequently, plates were washed 3 times with PBS containing 0.05% Tween-20 and incubated with streptavidin-HRP 100X for 1 h at room temperature. Following incubation, cells were washed 4 times with PBS containing 0.05% Tween-20 and 2 times with PBS, and the substrate solution (NovaRed, Vector Laboratories, Newark, CA, USA, Cat# 101098-448) was allowed to develop for 5 min. After spot development, plates were washed with deionized water, air-dried and read using the ImmunoSpot Capture software version 7.0.26.0 (Cellular Technology Limited, Shaker Heights, Cleveland, OH, USA). A response was considered positive when there was at least a 2-fold increase in the number of spots over medium-treated wells (background), with a minimum threshold of 50 SFCs per million splenocytes in the stimulated wells.
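The positivity rule in the last sentence is easy to encode; a small helper follows (the zero-background branch is our own defensive choice, not specified in the text):

```python
def elispot_positive(stim_sfc, background_sfc, fold=2.0, min_sfc=50):
    """Positivity call as described above: >= 2-fold increase over the
    medium-only background and >= 50 SFCs per million splenocytes in the
    stimulated wells. Both inputs are SFCs per million cells."""
    meets_fold = stim_sfc >= fold * background_sfc if background_sfc > 0 else True
    return meets_fold and stim_sfc >= min_sfc

print(elispot_positive(stim_sfc=120, background_sfc=30))  # True
print(elispot_positive(stim_sfc=120, background_sfc=80))  # False (< 2-fold)
```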
Vector Promoters Impact Transgene Expression Levels in Different Cell Types
To determine the impact of different promoters on transgene expression levels in various cell types, we cloned the ten hybrid promoters shown in Table 1 into an SIV-based transfer vector plasmid encoding for GFP. These hybrid promoters are made of enhancers, core promoters, and 5′ untranslated regions (UTRs) of various origins (Table 1). The newly generated plasmids were then used to generate integrase competent lentiviral vector (LV) particles for subsequent transduction experiments in various cell types.
We first assessed the transgene expression levels achieved by each of the newly generated LVs on 293T Lenti-X cells. LVs expressing GFP under the commonly used CMV or CAG promoters were used as controls. As shown in Figure 1a,b, LVs expressing GFP under promoters PrDrive1, 5 and 7 together with the CMV promoter had the highest transgene expression levels (measured as mean fluorescence intensity, MFI) at 2 days post-transduction. At 7 days post-transduction, the four previous LVs along with the LV expressing GFP under the PrDrive6 promoter had MFIs that were twice as high as those of the rest of the LVs. The MFIs for all vectors increased from Day 2 to Day 7 post-transduction and then remained at similar levels from Day 7 to 14, suggesting that, in these cells, peak GFP expression occurs at 7 days post-transduction.
We then tested the same LVs on mouse BMDCs isolated from C57BL/6 mice. In contrast to HEK 293T cells, LVs expressing GFP under promoters PrDrive2, 6 and 8 had the highest transgene expression levels at Days 3 (Figure 2a) and 7 (Figure 2b) post-transduction. There was no significant difference in MFI between Days 3 and 7 post-transduction.
We repeated these experiments on monkey- and human-monocyte-derived DCs (MDDCs) to determine whether species-specific differences exist in transgene expression levels driven by these promoters. Similar to what we observed in murine BMDCs, the LVs expressing GFP under the PrDrive2, 6, and 8 and CMV promoters had the highest transgene expression levels in monkey MDDCs at Days 3 and 7 (Figure 3a,b) post-transduction. These results suggest that the LVs expressing GFP under promoters PrDrive2, 6 and 8 and CMV are the most efficient at driving gene expression in both mouse and monkey DCs.
In human DCs, we observed higher transgene expression levels from the LV expressing GFP under the PrDrive8 promoter at Day 3 post-transduction; however, this advantage was lost at Day 7 post-transduction, where an increase in GFP expression was observed for all LVs (Figure 4a,b), with the LVs expressing GFP under the PrDrive5 and 8 and CMV promoters demonstrating the highest transgene expression levels (Figure 4b).
Finally, given the importance of muscle cells in the maintenance of an antigen reservoir following IDLV injection [16], we tested the LVs on monkey skeletal muscle cells. As shown in Figure 5a-c, LVs expressing GFP under promoters PrDrive2, 5, 6 and 8, together with the CMV promoter, had the highest transgene expression levels at Days 3 and 7 post-transduction. Interestingly, LVs expressing GFP under promoters PrDrive2, 5, 6 and 8 and CMV were the most efficient in both monkey muscle cells and DCs, implying that the same LVs could be used to achieve high transgene expression in both DCs and muscle cells in vivo.
Impact of Vector Promoter on the Magnitude of IDLV-Induced T Cell Responses
To determine whether the higher transgene expression levels achieved in vitro by these LVs translated to a high immune response in vivo, we immunized five groups of five BALB/c mice. Each group of mice was immunized intramuscularly with the corresponding IDLV expressing GFP under promoters PrDrive2, 6 and 8 and CMV, or with a saline sham injection (Figure 6). T cell responses were measured by IFN-γ ELISpot using splenocytes collected 3 months after the single immunization. As shown in Figure 6b and Supplementary Figure S6, the IDLV expressing GFP under the PrDrive6 promoter induced higher T cell responses than the other IDLVs, although differences were not statistically significant. These data suggest that in vivo, these novel IDLVs induce antigen-specific T cell responses of similar magnitude to those produced by the IDLV with expression driven by the commonly used CMV promoter.
Discussion
Lentiviral vectors (LVs) offer many advantages as gene delivery systems for in vitro, ex vivo and in vivo approaches. Their relatively flexible genome and ability to transduce dividing and nondividing cells, combined with their potential for cell-specific pseudotyping, provides a useful tool for several applications in both experimental and therapeutic settings [4]. In the clinic, lentiviral vector-based gene-engineering strategies have successfully been used for the chimeric antigen receptor (CAR)-T and CAR-NK cell therapy of cancer due to their high transduction efficiency and low risk of oncogenic insertion [18][19][20]. In addition to their use in the development of these cell-based immunotherapies, integrase defective lentiviral vectors (IDLVs) have shown promise in pre-clinical vaccine studies against pathogens such as HIV, influenza, and SARS-CoV-2 [12,13,[21][22][23][24], and in clinical studies as DC-targeting vaccines against metastatic sarcoma [25,26].
One advantage of IDLVs as vaccine platforms is their ability to efficiently transduce DCs [27,28]. Most DC-based vaccine strategies previously tested against infectious diseases and cancer have used ex vivo approaches, where DCs differentiated from donor-derived
Figure 1 .
Figure 1. Impact of vector promoter on transgene expression in 293T Lenti-X cells over time. 293T Lenti-X cells were transduced with a multiplicity of infection (MOI) of 1 using the LVs expressing GFP under the indicated promoters. (a) Results of time-course flow cytometric analyses performed at the indicated time points to compare the mean fluorescence intensity (MFI) among the different LVs. (b) MFI normalized by the percentage of transduced cells expressing GFP, to account for differences in transduction efficiency. Histograms show the means of three replicates. Error bars indicate the standard error of the mean (S.E.M.).
Figure 2 .
Figure 2. Impact of vector promoter on transgene expression in mouse-bone-marrow-derived dendritic cells (BMDCs). Mouse BMDCs were transduced with a MOI of 2 using the LVs expressing GFP under the indicated promoters. (a) Results of time-course flow cytometric analyses performed at Days 3 and 7 post-transduction, to compare MFIs among the different LVs. (b) MFI normalized by the percentage of transduced cells expressing GFP. Histograms show the means of two replicates. Error bars indicate S.E.M.
Figure 3 .
Figure 3. Impact of vector promoter on transgene expression in monkey-monocyte-derived dendritic cells. Monkey-monocyte-derived DCs were transduced with an MOI of 4 using the LVs expressing GFP under the indicated promoters. (a) Results of time-course flow cytometric analyses performed at the indicated time points to compare MFIs among the different LVs. (b) MFI normalized for the percentage of transduced cells expressing GFP. Histograms represent means of two replicates. Error bars indicate S.E.M.
Figure 4 .
Figure 4. Impact of vector promoter on transgene expression in human-monocyte-derived dendritic cells (MDDCs). Human MDDCs were transduced with an MOI of 4 using the LVs expressing GFP under the indicated promoters. (a) Results of time-course flow cytometric analyses performed at the indicated time points to compare MFIs among the different LVs. (b) MFI normalized for the percentage of transduced cells expressing GFP. Histograms represent the means of two replicates. Error bars indicate S.E.M.
Figure 5 .
Figure 5. Impact of vector promoter on transgene expression in cynomolgus macaque skeletal muscle cells. Monkey skeletal muscle cells were transduced with an MOI of 0.5 using the LVs expressing GFP under the indicated promoters. (a) Results of time-course flow cytometric analyses performed at the indicated time points to compare mean fluorescence intensity (MFI) among the different LVs. (b) MFI normalized by the percentage of transduced cells expressing GFP. Histograms show the means of two replicates. Error bars indicate S.E.M. (c) Fluorescence microscopy images of monkey muscle cells transduced with the indicated LVs at 7 days post-transduction.
Figure 6 .
Figure 6. Magnitude of T cell responses in mice immunized with IDLVs expressing GFP under different promoters. (a) A total of 25 BALB/c mice were immunized intramuscularly with 50 ng RT/mouse corresponding to 5 × 10⁶ transducing units (TU) of the indicated IDLVs or saline. Spleens were harvested 12 weeks post-immunization to measure T cell responses. (b) Magnitudes of GFP-specific T cell responses induced by the indicated IDLVs at 12 weeks post-immunization, as measured by IFN-γ ELISpot. Data are expressed (left panel) as numbers of specific spot-forming cells (SFCs) per million cells and (right panel) as % of GFP-specific SFCs normalized per ConA-induced SFCs, to account for potential differences in T cell responsiveness to stimuli among samples. Background responses in unstimulated wells (medium only) were subtracted. GFP = green fluorescent protein; ConA = concanavalin A. | 2023-11-17T16:30:56.171Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "6e71458efed618fbc6f7fb75728b4219e72f7bbb",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "0818c3391fba505998fbfbf70a8d5989e42a6865",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
274761 | pes2o/s2orc | v3-fos-license | Isolation and Identification of Endosulfan-Degrading Bacteria and Evaluation of Their Bioremediation in Kor River, Iran
Objectives Endosulfan is a lipophilic insecticide, which causes severe health issues due to its environmental stability, toxicity, and bioaccumulation in organisms. It is found in the atmosphere, soil, sediments, surface waters, rain, and food in almost equal proportions. The aim of this study was to isolate and identify endosulfan-degrading bacteria from the Kor River and evaluate the possibility of applying bioremediation in reducing environmental pollution in the desired region. Methods Samples of surface sediments and water were collected from three different stations in two seasons (summer and autumn), as these are areas with high agricultural activity. Isolated bacteria were identified by various biochemical tests and morphological characteristics. The amounts of degradation of endosulfan isomers and the metabolites produced as a result of biodegradation were then analyzed using gas chromatography/mass spectrometry. Results In this study, the following five bacterial genera were able to degrade endosulfan: Klebsiella, Acinetobacter, Alcaligenes, Flavobacterium, and Bacillus. During biodegradation, metabolites of endosulfan diol, endosulfan lactone, and endosulfan ether were also produced, but these had lower toxicity than the original compound (i.e., endosulfan). Conclusion The five genera isolated can be used as a biocatalyst for bioremediation of endosulfan.
Introduction
During the past 50 years, pesticides have been an essential part of the agricultural world. Although the demand for production and distribution of pesticides to increase the quality and efficiency of the agricultural industry is evident, improper and excessive use of pesticides is likely [1]. Despite their benefits, pesticides are compounds that may have toxic side effects, causing potential environmental risk [2]. Endosulfan is an organochlorine pesticide, which belongs to the family of polycyclic chlorinated hydrocarbons. For more than 30 years, it has been used extensively in agriculture, horticulture, and forestry [3]. Endosulfan contains two stereoisomers [alfa- and beta-endosulfan (ratio, 3:7)] and has been registered under the trade names Thiodan, Cyclodan, Thimol, Thiofar, and Malix [4]. Endosulfan contamination and its persistence in soil and water environments cause it to accumulate in cells of phytoplankton, zooplankton, fish, and vegetables [5]. Endosulfan persists in soil and water for 3–6 months or longer [6]. Endosulfan attaches to gamma-aminobutyric acid receptors located on the membrane of neurons and reduces the flow of chloride. Endosulfan poisoning causes seizures. All of these aforementioned problems encouraged the scientific community to develop biological methods to remove endosulfan instead of incineration and landfill methods [7].
Biodegradation is an efficient bioremediation technique: microorganisms that grow in different ecosystems can, through their interaction with xenobiotics, survive even in unfavorable conditions. Various studies have used endosulfan as a source of sulfur for microbial growth and as a carbon resource in bioremediation. Endosulfan is decomposed into endosulfan sulfate by the oxidation pathway and into endosulfan diol by hydrolysis. Endosulfan sulfate is as toxic and stable as the major component (endosulfan). Endosulfan diol can be converted to endosulfan ether, endosulfan hydroxyl ether, endosulfan dialdehyde, and endosulfan lactone. However, these metabolites are less toxic [8]. There are many reports on the degradation of endosulfan by bacteria [9]. Klebsiella pneumoniae has the capability to biologically degrade endosulfan. Pandoraea sp. degrades around 95–100% of alfa- and beta-endosulfan without producing endosulfan sulfate when incubated for 18 days. Klebsiella oxytoca was reported to degrade 145–260 mg of endosulfan in 6 days [6].
Kor River is one of the largest surface water resources in the Fars Province, with thousands of farmers depending on its water for agricultural use. In addition, the river provides a high percentage of drinking water to the regions of Shiraz, Marvdasht, and other villages along its way. The river also provides water to industries and factories located nearby. Urban, industrial, and agricultural activities are the main causes of pollution of this river. This water is mainly polluted by the Fars meat industrial complex, petrochemical industries, sugar mills, refineries, glazed tile factories, industrial towns, and wastewater of Marvdasht [12]. Urban and industrial wastewaters have adverse effects on fish breeding, environmental health, and particularly on drinking water of downstream residents. Thus, protecting the river is particularly important. The aim of this study was to isolate and identify endosulfan-degrading bacteria from the Kor River and also to evaluate the use of bioremediation techniques to improve the environmental status of this river.
Sampling range
The study site is located in Fars Province in the southwestern region of Iran. Because of intense agricultural activities in the areas surrounding the Kor River, three sampling stations were chosen.
Counting the bacterial colonies
Laboratory chemicals manufactured by Merck Company (Darmstadt, Germany) were used in this study. After transporting the soil samples to the laboratory, bacteria were counted using the total viable plate count method. During the procedure, sediment and water samples were diluted with normal saline (from 10⁻¹ to 10⁻⁹). Then 0.1 mL of each dilution was taken and surface cultured on two nutrient agar media: one containing the toxin and the other without toxin (control). The cultures were incubated for 48 hours at 37 °C. Once the colonies appeared, plates with identifiable and countable colonies were selected and the number of colonies was counted [14].
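For plate counts like these, the back-calculation to CFU per mL of the original sample follows the standard dilution formula; a minimal sketch (the 0.1 mL default matches the volume plated in this protocol):

```python
def cfu_per_ml(colonies, dilution, plated_volume_ml=0.1):
    """CFU/mL of the undiluted sample = colonies / (dilution x volume plated).
    'dilution' is the dilution of the plated suspension, e.g. 1e-6 for the
    10^-6 tube."""
    return colonies / (dilution * plated_volume_ml)

# Example: 45 colonies on a plate from the 10^-6 dilution, 0.1 mL plated.
print(cfu_per_ml(45, 1e-6))  # 4.5e8 CFU/mL
```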
Enrichment and isolation of endosulfandegrading bacteria
To enrich endosulfan-degrading bacteria, 5 g of sediment from each station was added to an Erlenmeyer flask containing 50 mL of mineral medium and 50 mg/mL of endosulfan pesticide. The mixture was then incubated in a shaking incubator at 30 °C at 150 rpm. After 10 days, 5 mL of the mixture from each flask was added to a fresh medium containing 50 mL of mineral medium and 50 mg/mL of endosulfan pesticide, and then incubated in a shaking incubator as described earlier.
Identification of endosulfan-degrading bacteria
For identification of endosulfan-degrading bacteria, Gram staining, colony morphology, and motility were analyzed. In addition, various biochemical tests such as the production of acid from carbohydrates, urease, gelatin hydrolysis, indole production, citrate consumption, methyl red, Voges-Proskauer, oxidative-fermentative (OF), starch hydrolysis, reduction of nitrate to nitrite, catalase, and oxidase were used [6].
Biodegradation
The bacteria detected were grown in a mineral medium containing endosulfan (50 mg/mL) for 1 week. Endosulfan was then extracted by solid-phase extraction with methanol. The extracted samples were analyzed by gas chromatography/mass spectrometry (GC/MS), HP5840 series with a 30-m capillary column (DB 1-MS) using a mass-selective detector. The temperature settings were as follows: initial temperature, 120 °C; rate, 20 °C per minute; Step 2 temperature, 200 °C; rate, 50 °C per minute; final temperature, 270 °C; injector temperature, 280 °C; and interface temperature, 300 °C.
The amount of biodegradation of alfa- and beta-endosulfan isomers was measured by GC/MS analysis. The metabolites produced during biodegradation were also identified and measured [2,15].
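A one-line helper makes the degradation percentages reported throughout explicit (units cancel, so initial and residual concentrations just need to match):

```python
def percent_degraded(initial, residual):
    """Percent of an isomer removed, from initial and residual
    concentrations in the same units (e.g. ppb from GC/MS)."""
    return 100.0 * (initial - residual) / initial

print(percent_degraded(100.0, 10.0))  # 90.0
```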
Statistical analysis
The obtained data were analyzed using analysis of variance (ANOVA) in SPSS software version 15 (SPSS Inc., Chicago, IL, USA).
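The same one-way ANOVA can be reproduced with open-source tooling; a sketch with made-up log-count replicates for the three stations (values are illustrative, not the study data):

```python
from scipy import stats

# Hypothetical bacterial counts (log CFU/mL) per station, three replicates each.
station1 = [6.0, 6.1, 6.2]
station2 = [7.0, 7.2, 7.1]
station3 = [8.0, 8.1, 8.0]

f_stat, p_value = stats.f_oneway(station1, station2, station3)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")  # significant if p < 0.05
```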
Endosulfan-degrading bacteria
In this study, the following five bacterial genera were identified: Klebsiella, Acinetobacter, Alcaligenes, Bacillus, and Flavobacterium. Figure 1 shows the frequency of bacteria identified in summer and autumn.
Bacterial counts
There were differences between stations in terms of bacterial counts. The highest count of bacteria in the summer was found in Station 3 (8.082 CFU/mL; Figure 2). The lowest count of bacteria in the autumn was seen in Station 1 (6.033 CFU/mL; Figure 3). All of the stations showed significant differences at the 5% level. Based on the results, there was also a significant difference between counts on media with and without endosulfan at the 5% level.
Biodegradation
Our experimental results show that alfa- and beta-endosulfan were degraded by bacteria at different levels. During the biodegradation process, metabolites of endosulfan diol, endosulfan ether, and endosulfan lactone were produced, all of which have less toxicity than the original compound (i.e., endosulfan). The initial and remaining values (ppb) of the metabolites were measured. Figure 4 shows the GC/MS chromatogram of standard endosulfan.
Discussion
Biodegradation is a method of removing contaminants from water, and this is a natural process. Microorganisms survive by decomposing a xenobiotic or pesticide. Most of these microbes live in natural environments, but it is possible to change and strengthen them to decompose pesticides with greater speed. This ability of microbes can be used as a technology to remove contaminants. In addition, because using chemical methods to remove contaminants such as pesticides is very expensive, the use of microorganisms for this purpose has been suggested as technology has advanced. Different genera of bacteria have been isolated that could degrade the pesticide endosulfan. Because using microorganisms is an efficient and affordable method, many recent studies have focused on biodegradation of pesticides with microorganisms. The
biodegradation process has also been observed in areas where these pesticides are used. Many studies have been conducted around the world to identify pollutants in water/soil as well as to identify microorganisms that have the ability to degrade pesticides. These studies have evaluated the ability of various types of fungi and bacteria in biodegradation [16].
In a study conducted by Goswami and Singh [6], degradation of endosulfan and its metabolites was evaluated by GC/MS analysis. In this study, the main metabolites produced by Bordetella sp. B9 were endosulfan ether and endosulfan lactone, respectively. No endosulfan sulfate residual was detected. Endosulfan ether concentration after 6 days of incubation was 0.53% ± 0.2%, which declined to 0.41 ± 0.13 after the 18th day. Endosulfan lactone concentration after 6 days was 0.24 ± 0.09%, which increased to 0.35 ± 0.07% after 18 days. The study indicated that the Bordetella sp. degraded 80% alfa-endosulfan and 86% beta-endosulfan after 18 days of incubation [6].
Siddique et al [17] found that Pandoraea sp. is able to degrade 95–100% of the alfa- and beta-endosulfan after 18 days of incubation with the initial endosulfan concentration of 100 mg/mL, with no endosulfan sulfate produced during biodegradation.
Li et al [5] found that the Achromobacter xylosoxidans strain CS5 is able to use endosulfan as a source of carbon, sulfur, and energy. This study proves that CS5 can degrade > 24.8 mg/L of alfa-endosulfan and > 10.5 mg/L of beta-endosulfan in an aqueous environment after 8 days. Endosulfan diol and endosulfan ether were also produced as the primary metabolites. Their results suggested that the metabolism of endosulfan by the CS5 strain was accompanied by a significant reduction in the toxicity.
In another study, Kumar et al [18] examined degradation of endosulfan by a mixed bacterial culture containing Stenotrophomonas maltophilia and Rhodococcus erythropolis in soils contaminated with pesticides. After 2 weeks of incubation, the bacterial culture was able to degrade 73% and 81% of the alfa- and beta-endosulfan, respectively. Intermediate metabolites known as endodiol were produced during the biodegradation process. S. maltophilia demonstrated more degradation than R. erythropolis.
In 2006, a mixed culture of bacteria containing Staphylococcus, Bacillus circulans I, and B. circulans II was isolated from soil contaminated with endosulfan. After 3 weeks of incubation, the bacteria were able to decompose 71.58 ± 0.2% of alfa-endosulfan and 75.88 ± 0.2% of beta-endosulfan under aerobic and anaerobic conditions. In addition, no intermediate metabolites were detected. The results showed that the bacterial mixed cultures used can be applied to treat soil and water contaminated with endosulfan [19].
Xie et al [20] reported that bacteria can strengthen the production of endosulfan diol in a short period compared with fungi. It has also been reported that Pseudomonas aeruginosa degrades > 85% of the alfa- and beta-endosulfan after 16 days of incubation [16].
In a study by Kumar et al [21], endosulfan-degrading bacteria were isolated from soil contaminated with pesticides. These bacteria included Ochrobactrum, Burkholderia, Pseudomonas alcaligenes, Pseudomonas sp., and Arthrobacter sp. Whole cells of P. alcaligenes and Pseudomonas sp. absorbed 89% and 94% of alfa-endosulfan and 89% and 86% of beta-endosulfan, respectively. Endosulfan sulfate and a small amount of endosulfan diol were produced during biodegradation by Pseudomonas sp. By contrast, P. alcaligenes generated only endosulfan diol, indicating that there was no oxidation. Thus, in the case of P. alcaligenes, hydrolysis is the only mechanism of endosulfan degradation, whereas in the case of Pseudomonas sp., oxidation occurs in addition to hydrolysis. Based on these results, the bacteria can be used in various technologies for removing endosulfan or endosulfan sulfate from contaminated areas [21].
In 2007, in one study, 29 bacterial strains were isolated from soil contaminated with endosulfan, of which Pseudomonas spinosa, P. aeruginosa, and Burkholderia degraded endosulfan fastest and were able to degrade 90% of alfa- and beta-endosulfan [22].
In our study, which was carried out on water and sediments contaminated with the pesticide endosulfan, Klebsiella was the most powerful isolated strain, which could decrease the initial alfa-endosulfan concentration of 3.5 × 10⁻⁴ ppb to 0.06 × 10⁻⁶ ppb and the initial beta-endosulfan of 1.5 × 10⁻⁴ ppb to 0.09 × 10⁻⁶ ppb within a week; therefore, 90% of the alfa-endosulfan and 90% of the beta-endosulfan were degraded. In this study, endosulfan sulfate was not produced. The result of this study is parallel with the results of Kwon et al [23], who found that Klebsiella sp. biologically degrades 81.72% of endosulfan after 10 days of incubation, and during this process endosulfan sulfate, which is a toxic metabolite of endosulfan, was not produced.
In this study, Alcaligenes, Acinetobacter, and Flavobacterium were found to biodegrade the pesticide endosulfan in different regions of the Kor River, which has not been reported previously.
According to Sutherland et al [24], degradation of endosulfan can be achieved by oxidation and hydrolysis pathways, and the toxic metabolite endosulfan sulfate along with other less toxic metabolites can be produced.
Bacterial culture causes rapid degradation of alfa- and beta-endosulfan isomers. The rate of degradation of isomers is comparable. Degradation of both isomers is associated with the production of metabolites of endosulfan sulfate, endosulfan diol, endosulfan lactone, endosulfan ether, and unknown metabolites [25].
In this study, endosulfan diol, endosulfan ether, and endosulfan lactone were metabolites obtained by
degradation of alfa- and beta-endosulfan isomers, among which endosulfan diol was the most abundant and principal metabolite. These are consistent with the results reported by Cotham and Bidleman [26] that endosulfan diol is produced by microorganisms in water and sediments. Awasthi et al [25] also found that endosulfan diol is the major metabolite of endosulfan degradation. Differences in the rate of degradation between alfa and beta isomers can be related to their different and low solubility in the liquid medium. This is especially true because a concentration of 10 mg/L endosulfan is four times greater than its solubility in water. Degradation of endosulfan isomers is associated with the production of metabolites. Production of endosulfan ether has also been reported by other bioremediation studies. When oxidation occurs in a metabolic pathway, endosulfan sulfate is produced. In addition, by hydrolysis, both endosulfan diol and endosulfan ether are finally produced [8].
Furthermore, differences in the degradation of alfa and beta isomers may be due to stereo-isomerization because enzymes released from the bacterial system may correspond to only one of the stereoisomers [18]. As a result, endosulfan sulfate was not produced during the process of biodegradation in this study, with the predominant biodegradation route being hydrolysis.
Results of this study showed that endosulfan-degrading bacteria are widely distributed across various regions of the Kor River. A review of the previous studies and the results of this study showed that bioremediation can reduce endosulfan contamination in this river.
Conflicts of interest
The authors do not have any conflicts of interest. | 2016-05-12T22:15:10.714Z | 2014-12-24T00:00:00.000 | {
"year": 2014,
"sha1": "3ce76c1db5dc2fe893d8cafba9a61ae733db7da5",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1016/j.phrp.2014.12.003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3ce76c1db5dc2fe893d8cafba9a61ae733db7da5",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
49558001 | pes2o/s2orc | v3-fos-license | Genetic variants in systemic lupus erythematosus susceptibility loci, XKR6 and GLT1D1 are associated with childhood-onset SLE in a Korean cohort
Impact of genetic variants on the age of systemic lupus erythematosus (SLE) onset is not fully understood. We investigated the cumulative effect of SLE-risk variants on the age of SLE onset and scanned genome-wide SNPs to search for new risk loci of childhood-onset SLE (cSLE). We analyzed 781 Korean single-center SLE subjects who had previously been genotyped by both Immunochip and genome-wide SNP arrays. Individual genetic risk scores (GRS) from well-validated SLE susceptibility loci were calculated and tested for their association with cSLE (<16 years at onset). Single-variant association tests were performed using a multivariable logistic regression adjusting for population stratification. GRS from SLE susceptibility loci was significantly higher in cSLE than aSLE (p = 1.23 × 10⁻³). Two SNPs, rs7460469 in XKR6 (p = 1.26 × 10⁻⁸, OR = 5.58) and rs7300146 in GLT1D1 (p = 1.49 × 10⁻⁸, OR = 2.85), showed the most significant associations with cSLE. The model consisting of the GRS of SLE and the two newly identified loci showed an area under the curve (AUC) of 0.71 in a receiver operating characteristic (ROC) curve for prediction of cSLE. In conclusion, cSLE is associated with a high cumulative SLE-risk effect and two novel SNPs, rs7460469 and rs7300146, providing the first predictive model for cSLE in Koreans.
the largest ancestral group in that study. Additionally, both studies used HLA SNPs, which can hardly explain the majority of the SLE associations in the HLA region, and they calculated individuals' genetic burdens from the small number of SLE loci known at the time of publication.
Here, we investigated the genetic contribution to cSLE, overcoming the limitations of previous research. A cumulative SLE-risk effect for each individual, based on the current HLA amino-acid haplotypes and the latest non-HLA SLE-risk variants, was calculated and tested for association with cSLE (diagnosed before the age of 16). We also searched for new risk loci of cSLE using genome-wide SNP data in a Korean population.
Results
Characteristics of cSLE and aSLE. The mean age of SLE onset was 12.5 ± 2.5 years in 96 cSLE patients and 29.0 ± 9.4 years in 685 aSLE patients. cSLE showed worse clinical outcomes with regard to SLE-specific manifestations and disease activity. The number of cumulative American College of Rheumatology (ACR) criteria and the adjusted mean SLE disease activity index (AMS) were higher in cSLE than in aSLE during the follow-up period (p < 0.05, Table 1).
Association of rs7460469 and rs7300146 with cSLE. A total of 648,077 genome-wide SNPs were obtained, with an average call rate of 99.7%, after the quality control procedure. We identified two loci associated with cSLE that surpassed the genome-wide significance level in the multivariable logistic regression model (Table 2 and Fig. 2A), with a genomic inflation factor of 1.003 (Fig. 2B). The most significant signal (p = 1.26 × 10 −8 , OR = 5.58) was at the genotyped SNP rs7460469 in XKR6 (Fig. 2C). Notably, this locus is located between FAM167A (previously referred to as C8orf13) and BLK, which were demonstrated as SLE risk loci in Asians as well as Caucasians [5][6][7][8] . The SNP rs7460469 was polymorphic only in Asian populations according to the 1000 Genomes Project data. The SLE-risk variants reported in FAM167A-BLK (rs922483 and rs1382568) 5 were not correlated with rs7460469 in XKR6 and showed no association with cSLE in the study subjects (p > 0.05).
We note that the local association was supported by only a single SNP, rs7460469, without correlated SNPs showing similar association P values. In order to ensure the association of rs7460469, we checked that the raw fluorescence signals in the rs7460469 assays clearly cluster into three discrete genotypic groups (data not shown) and that the distribution of genotypes in the subjects follows Hardy-Weinberg equilibrium (HWE; P HWE = 1.00). In addition, we re-genotyped the same variant in a subset (n = 110, with 1 AA, 56 AG, and 53 GG) of the study subjects using the Sanger sequencing method. The genotype results between the GWAS array and sequencing were 100% concordant in the 110 samples. Linkage disequilibrium (LD) between rs7460469 and its neighboring SNPs was also calculated, based on the Asian LD information in the 1000 Genomes Project database. There was no SNP in LD (r 2 > 0.2) with rs7460469, which supports the observed regional association plot with only a single associated variant (Fig. 2C).
The other significant signal (p = 1.49 × 10 −8 , OR = 2.84) was detected at rs7300146 in GLT1D1 (Fig. 2D). High genotyping quality for rs7300146 was confirmed by checking the raw fluorescence signals in the genotyping assays (data not shown). We re-genotyped the same variant in the subset (n = 110, with 24 CC, 57 AG, and 29 GG) of the subjects using Sanger sequencing and confirmed a concordance rate of 100% between the array and sequencing data. After a regional imputation and fine mapping of the rs7300146 region, 8 variants exceeding the genome-wide significance threshold were identified (Supplementary Table S1). Among them, rs122989222 (p = 6.16 × 10 −9 , OR = 3.02) showed the most significant association, and rs12309809 (p = 8.96 × 10 −8 , OR = 2.93) was associated with the cis expression quantitative trait locus (eQTL) for SLC15A4, which is a known SLE susceptibility gene in a Chinese population.
Predictive model for cSLE using an ROC curve analysis. We further estimated the prediction efficacy of the model constructed with the weighted SLE GRS and the two novel cSLE-risk loci. These predictors were added separately or together into a multivariate model, and the AUCs of the ROC curves were calculated. The model with the weighted SLE GRS showed a ROC curve with an AUC of 0.6, and the model with the two cSLE-risk SNPs (rs7460469 and rs7300146) showed an AUC of 0.68, while the full model combining these predictors reached the highest predictive value, with an AUC of 0.71 (Fig. 3).
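As a hedged illustration of the pipeline described above, the sketch below builds a weighted GRS (risk-allele counts weighted by the natural logarithm of previously reported odds ratios, as described in the Methods), combines it with the two novel SNPs in a logistic model, and computes the ROC AUC. All genotypes, labels, and effect sizes here are random placeholders, and scikit-learn is an assumed dependency; this is a sketch, not the authors' actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, n_loci = 781, 45                             # cohort size and non-HLA loci count from the paper
alleles = rng.integers(0, 3, size=(n, n_loci))  # risk-allele counts (0/1/2), placeholder data
log_or = np.log(rng.uniform(1.1, 2.0, n_loci))  # ln(OR) weights, placeholder values
grs = alleles @ log_or                          # weighted genetic risk score per subject

snps = rng.integers(0, 3, size=(n, 2))          # rs7460469 / rs7300146 genotypes (placeholder)
y = rng.integers(0, 2, size=n)                  # 1 = cSLE, 0 = aSLE (placeholder labels)

X_full = np.column_stack([grs, snps])           # full model: GRS + the two novel SNPs
model = LogisticRegression().fit(X_full, y)
auc = roc_auc_score(y, model.predict_proba(X_full)[:, 1])
print(f"AUC of the combined model: {auc:.2f}")
```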
Discussion
We have demonstrated that cSLE, known to be a worse phenotype than aSLE, is associated with a higher cumulative SLE-risk effect, which might explain the worse clinical features of cSLE. In addition, for the first time, we identified two novel genes associated with cSLE susceptibility, XKR6 and GLT1D1. These findings allowed us to construct a single model to predict cSLE.
Webb et al. demonstrated that cSLE (<18 years at onset) was associated with an increased number of SLE-risk variants compared with aSLE 4 . They counted 19 SLE-risk alleles in each individual and found a significant difference between cSLE and aSLE in Gullah and African-American patients with SLE, but not in Hispanic and European-American patients with SLE. Asian patients with SLE were excluded due to small sample size. In our study, we showed that SLE patients with higher GRS were significantly enriched in the cSLE group compared with the aSLE group. In contrast to the Webb et al. study, we used a large single-center Korean cohort and calculated GRS from the most up-to-date, well-validated Asian SLE-risk loci and an HLA-DRβ1 amino-acid haplotype model. The weighted GRS used in the study was calculated based on the odds ratios as well as the total number of risk-variant copies.
In addition, our study revealed the genetic contribution to the age of SLE onset at individual variants within the two loci. First, GLT1D1, around rs7300146, has been reported as a candidate oncogene in colorectal cancer and a relapse marker in multiple myeloma 9,10 . The SNP rs12309809, the proxy SNP of rs7300146 in GLT1D1, was associated with the expression level of the neighboring gene SLC15A4. It is known that SLC15A4 is indispensable to innate immunity and antibody isotype switching to IgG2c 11,12 . Lack of SLC15A4 is related to impaired function of SLE-pathogenesis-associated cytokines and proteins, such as toll-like receptor 7 (TLR7)- and TLR9-dependent cytokines including type 1 interferon (IFN), and nucleotide oligomerization domain-1 (NOD1) 13,14 . Interestingly, genetic variants in SLC15A4 were associated with SLE risk in a Chinese population 15 and with lupus nephritis in a Caucasian population 16 . The other new locus associated with cSLE was XKR6, which is located between FAM167A and BLK. This locus has been associated with lupus nephritis as well as SLE susceptibility in multiple ancestries [5][6][7][8]17 . However, the cSLE-associated SNP in XKR6 that we identified in this study is Asian-specific and has no correlation with the SLE-risk, functional SNPs reported in previous studies 5 . Furthermore, the associated SNP in XKR6 had no proxy SNPs in the flanking region and no functional effects reported so far. Thus, more studies on this region are needed to understand its biological importance.
Consistent with reported observations 2,18,19 , lupus nephritis was more common in cSLE (60.4%) than aSLE (46.4%) in the study cohort (p = 0.010). Although the regions around the two novel cSLE-risk variants have been associated with lupus nephritis 16,17 , we note that the associations of the two newly identified variants were not explained by lupus nephritis. There was no association between lupus nephritis and the two cSLE-risk variants (p > 0.05 in 376 lupus nephritis-positive SLE vs. 405 lupus nephritis-negative SLE).
The limitations of the study are the relatively small sample size (n = 781) and the lack of replication in independent cohorts. However, we were able to use samples with high-density genotyping data and accurate clinical information, including onset age, gathered in the prospective Hanyang BAE Lupus cohort.
In conclusion, we reported the presence of genetic effects on the age of SLE onset, especially in the SLE susceptibility loci and the two novel cSLE loci, providing the first model to predict cSLE in Koreans.
Methods
Ethics statement. This study protocol was approved by the Institutional Ethics Review Board of Hanyang University Hospital, and written informed consent was obtained from all participants in accordance with the principles of the Declaration of Helsinki. All methods were carried out in accordance with the relevant guidelines and regulations.
Study population.
A total of 781 Korean patients with SLE were selected from the Hanyang BAE Lupus cohort 20 . cSLE (n = 96) was defined as SLE patients with onset age below 16, and aSLE (n = 685) was defined as those with onset age of 16 or above 2 . As it is well known that hormones such as estrogen influence the development of SLE and its flares, we restricted cSLE to patients below 16 based on their stage of puberty, in which stage 5 of puberty is defined as the stage after 16 years, when height no longer increases.
Genotype data. All the study subjects were previously genotyped by both Immunochip and HumanOmniExpress genome-wide arrays, as previously described 21,22 . In brief, the merged array data passed the general quality control thresholds, such as call rates per individual or SNP (>95%), HWE in autosomal SNPs (p < 1 × 10 −5 ), and minor allele frequency (MAF > 0.01). Principal component (PC) analysis was used to calculate PCs to adjust for population stratification among the subjects in the subsequent statistical models. We noted that there were no genetic outliers of >6 standard deviations for each of the top 10 PCs. Ungenotyped variants in the associated loci were imputed by IMPUTE2 and passed the imputation quality score (info) threshold of 0.6.

Genetic risk score for SLE risk. Individuals' weighted GRS from well-validated SLE susceptibility loci were calculated to measure the cumulative effect of SLE susceptibility loci on the age of SLE onset. The SLE susceptibility loci used in the calculation of GRS were composed of 45 confirmed Asian non-HLA SNPs from the Sun et al. study 21 and HLA-DRB1 haplotypes (constructed from HLA-DRβ1 amino-acid positions 11, 13, and 26) from our previous study 23 . We weighted the number of effect alleles by the previously reported effect size (the natural logarithm of the previously reported odds ratio) for both non-HLA and HLA loci, and then summed all the locus-specific weighted scores, as previously described 24 .

Statistical analysis. The associations of weighted GRS with the age of SLE onset and with the development of cSLE were tested using a linear regression and a logistic regression adjusting for the top 10 PCs as covariates, respectively. The association of each single variant in the whole genome with cSLE was evaluated by a multivariable logistic regression analysis controlling for the top 10 PCs to calculate an odds ratio (OR) and its 95% confidence interval (95% CI) for the minor allele. Then, we estimated the predictive power of the weighted GRS and/or the risk loci of cSLE by measuring the area under the curve (AUC) of the receiver operating characteristics (ROC). All data were analyzed using PLINK and R. | 2018-07-12T07:59:35.230Z | 2018-07-02T00:00:00.000 | {
"year": 2018,
"sha1": "a5c41e0931b01c14de28876632616d99ee94dc5b",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-28128-z.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a5c41e0931b01c14de28876632616d99ee94dc5b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18919685 | pes2o/s2orc | v3-fos-license | Upregulation of inflammatory genes in the nasal mucosa of patients undergoing endonasal dacryocystorhinostomy
Background Epiphora is a common complaint of nasolacrimal duct obstruction (NLDO) in adults. The precise pathogenesis of NLDO is still unknown, but inflammatory processes are believed to be predisposing factors. Endoscopic dacryocystorhinostomy (EN-DCR) is an effective surgical technique for treating symptomatic NLDO. The purpose of the procedure is to relieve the patient’s symptoms by creating an opening, ie, a rhinostoma, between the lacrimal sac and the nasal cavity. Although the success rates after EN-DCR are high, the procedure sometimes fails due to onset of a fibrotic process at the rhinostomy site. The aim of this prospective comparative study was to investigate inflammation-related gene expression in the nasal mucosa at the rhinostomy site. Methods Ten participants were consecutively recruited from eligible adult patients who underwent primary powered EN-DCR (five patients) or septoplasty (five controls). Nasal mucosa specimens were taken from the rhinostomy site at the beginning of surgery for analysis of gene expression. Specimens were taken from the same site on the lateral nasal wall for controls. Quantitative reverse transcription polymerase chain reaction (qRT-PCR) was performed for the inflammatory genes interleukin (IL)-6, IL-1β, and CCL2, and because of a clear trend of increased inflammation in the EN-DCR samples, a wider PCR array was performed to compare inflammation-related gene expression in EN-DCR subjects and corresponding controls. Results Our qRT-PCR results revealed a clear trend of increased transcription of IL-6, IL-1β, and CCL2 (P=0.03). The same trend was also evident in the PCR array, which additionally revealed notable differences between EN-DCR subjects and controls with regard to expression of several other inflammation-related mediators. At 6-month follow-up, the success rate after primary EN-DCR was 60%, ie, in three of five patients. Conclusion The present study demonstrates that there is an intense inflammation gene expression response in the nasal mucosa of patients undergoing EN-DCR.
Introduction
Epiphora, ie, tearing of the eye, is a common complaint, particularly in the elderly, the extent of which can vary from minor inconvenience to significant social embarrassment. The most common cause of epiphora and discharge from the eye in adults is nasolacrimal duct obstruction (NLDO). The pathogenesis of NLDO is unknown, but the process is characterized by gradual inflammation and subsequent fibrosis of the nasolacrimal duct, which may lead to obstruction of the lacrimal pathway. 1,2 Endoscopic dacryocystorhinostomy (EN-DCR) is an effective and safe surgical technique for treating symptomatic lower lacrimal pathway obstruction and dacryocystitis in cases where there is no response to conservative treatment. The purpose of the procedure is to create a bypass, ie, a rhinostoma, between the lacrimal sac and the nasal cavity. The success of primary EN-DCR has been reported to be high, varying between 74% and 94%. 3 However, the procedure sometimes fails, and the most common reason for this is scarring of the rhinostoma. 4,5 The granulation tissue and scar formation are thought to be linked to the biology of wound healing in the nasal mucosa (Figure 1). 6 There have been several histopathological studies reporting chronic inflammation and fibrosis in specimens taken from the nasal mucosa during dacryocystorhinostomy, [7][8][9] and various histopathological features related to chronic inflammation may also play a role in the outcome of EN-DCR. 9,10 However, inflammatory signaling molecules in the nasal mucosa of patients with NLDO have been poorly investigated. Smirnov et al demonstrated that high expression of heat shock protein 47, a regulator of fibrosis, might predict a poor surgical result after EN-DCR. 11 Further, these biological factors are important when investigating potential targets for the development of antifibrotic therapy. This is of importance, given that there are no effective antifibrotic drugs available to target fibrogenic factors or to block their receptors. 12 The aim of this prospective comparative study was to investigate the inflammatory gene expression profile at the rhinostomy site in patients undergoing EN-DCR.
Patients and methods
Patients
Five study subjects were consecutively recruited from adult patients who underwent primary EN-DCR due to epiphora or recurrent infection of the lacrimal sac between May and August 2012. Five control subjects were also recruited from eligible patients who underwent septoplasty during the same time period. Patients undergoing septoplasty were selected as controls because their indication for surgery was a septal deformity not associated with any inflammatory or infectious process. All the operations were performed in the Department of Otorhinolaryngology at Kuopio University Hospital, in Kuopio, Finland. Patients were eligible for enrollment if they were adults (age >18 years) and if their American Society of Anesthesiologists physical status score was I-III. 13 Exclusion criteria were presaccal obstruction, malignancy in the paranasal sinuses, nasal cavity, or lacrimal pathway, mental disability, pregnancy, or breast-feeding. Patients who underwent septoplasty were not eligible for participation as controls when there was a risk of postoperative adhesions due to the presence of a narrow nasal cavity or where there was a history of recurrent or chronic paranasal infections. The patient demographics are shown in Table 1. There were no dropouts during the 6-month follow-up period. This study was approved by the research ethics committee at the District Hospital of Northern Savo, Kuopio, Finland. The patients were given oral and written information about the trial protocol, and all provided their written consent.
Assessments
At the preoperative visit and the 1-week, 2-month, and 6-month postoperative visits, an objective assessment was performed by an otorhinolaryngologist, who performed lacrimal irrigation and nasal endoscopy; the findings in the nasal cavity were evaluated using the Lund-MacKay staging system. 14 A subjective assessment was performed using the Nasolacrimal Duct Symptom Score questionnaire. 15 The surgical outcome was considered successful if saline solution freely reached the nose during lacrimal irrigation and if there was relief of symptoms.
Operative technique
Standardized general anesthesia was used. Nasal mucosa specimens were taken from over the rhinostomy site at the onset of the surgery for analysis of gene expression. The control specimens were taken from the lateral nasal wall at exactly the same site. The standardized, detailed endoscopic powered instrumentation technique used and the postoperative care have been described elsewhere. 16 No stents were used.
Quantitative reverse transcription polymerase chain reaction (qRT-PCR) for inflammatory cytokine genes
Human tissue samples reserved for RNA extraction from five patients undergoing EN-DCR surgery and five control patients were placed immediately into liquid nitrogen and thereafter stored at −70°C. RNA was extracted from tissue samples using an RNeasy ® mini kit (74104; Qiagen, Valencia, CA, USA), according to the manufacturer's instructions. First, the tissue pieces were ground mechanically using a glass pestle homogenizer, including the kit buffer, and chilling the homogenizer on ice. The procedure also included a separate RNase-free DNase I treatment (79254; Qiagen) in the extraction column, as described in the kit protocol. Quantity and quality control of the extracted RNA was performed by spectrophotometric analysis.
Next, 500 ng of extracted RNA was reverse-transcribed to generate the corresponding DNA using a SuperScript ® III first strand synthesis system (Life Technologies, Carlsbad, CA, USA). In brief, the protocol was as follows: 500 ng of purified RNA in 11 µL of RNase-free water was incubated with 50 ng random hexamers and 10 nmol dNTPs (deoxynucleotide triphosphates) for 5 minutes at 65°C; 100 nM DTT (dithiothreitol), 40 U RNAse OUT ® (Life Technologies), and 200 U SuperScript ® III reverse transcriptase were then added to the reaction, along with the appropriate amount of 5× First Strand Reaction Buffer. The complete reaction was subsequently incubated at 50°C for 50 minutes, after which the enzymes were inactivated at 70°C for 15 minutes. The generated complementary DNA (cDNA) samples were used immediately for analysis by qRT-PCR. SYBR ® Green Real-Time PCR Master Mix (Life Technologies) and specific primer pairs for human interleukin (IL)-6, IL-1β, chemokine (C-C motif) ligand 2 (CCL2), and β-actin were used to determine the relative messenger (m) RNA expression in the samples. For IL-6, the primer pairs were forward, 5′-AGT GAG GAA CAA GCC AGA GC-3′ and reverse, 5′-CAG GGG TGG TTA TTG CAT CT-3′; for IL-1β, the primer pairs were forward, 5′-AAA AGC TTG GTG ATG TCT GG-3′ and reverse, 5′-TTT CAA CAC GCA GGA CAG G-3′; for CCL2, the primers were forward, 5′-CTC ATA GCA GCC ACC TTC ATT C-3′ and reverse, 5′-TCA CAG CTT CTT TGG GAC ACT T-3′; and for β-actin, the primers were forward, 5′-GGA TGC AGA AGG AGA TCA CTG-3′ and reverse, 5′-CGA TCC ACA CGG AGT ACT TG-3′. The primers were sourced from Oligomer Oy, Helsinki, Finland. The qRT-PCR reaction was run on an ABI Prism ® 7500 Thermocycler (Life Technologies) using standard conditions. The data were analyzed using the ΔΔCt method 17 and normalized using human β-actin expression as an internal control.
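To make the cited ΔΔCt analysis concrete, the following is a minimal sketch of the 2^(−ΔΔCt) relative-quantification calculation, with β-actin as the reference gene, as in this study. The Ct values are invented for illustration and do not come from the study's data.

```python
# Sketch of the 2^(-ΔΔCt) method: target-gene Ct values are normalized to the
# β-actin reference, then EN-DCR samples are expressed relative to controls.
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    delta_ct_sample = ct_target_sample - ct_ref_sample     # ΔCt for the EN-DCR sample
    delta_ct_control = ct_target_control - ct_ref_control  # ΔCt for the control
    delta_delta_ct = delta_ct_sample - delta_ct_control    # ΔΔCt
    return 2 ** (-delta_delta_ct)                          # fold change vs. control

# Example with made-up Ct values for CCL2 normalized to β-actin
fold = relative_expression(24.1, 17.8, 26.9, 17.6)
print(f"CCL2 fold change vs controls: {fold:.1f}")
```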
PCR array study for inflammatory response gene expression cDNA for the array analysis was prepared using an RT 2 Pre-AMP cDNA synthesis kit (330451; Qiagen) following the manufacturer's instructions. For both septoplasty and EN-DCR, five individually extracted RNAs were mixed in equal amounts (1 µg total, 200 ng of each) to provide templates for separate cDNA syntheses. A 96-well RT 2 Profiler™ PCR Array for Human Inflammatory Response and Autoimmunity (PAHS-077Z; SABiosciences/Qiagen) was used for the inflammation gene expression analysis, including the wells for control reactions. The SYBR ® Green (Life Technologies) fluorescence detection methodology was employed. Thus, two one-plate runs were performed, ie, one for the septoplasty controls and the other for the EN-DCR samples. An Applied Biosystems 7500 Real-Time PCR System (Life Technologies) was used for PCR array amplification. All the quality control requirements stipulated by the manufacturer of the array were fulfilled in both PCR runs (including genomic DNA control, cDNA synthesis control, and positive PCR controls). The data were analyzed using the ΔΔCt method 17 and normalized using human glyceraldehyde phosphate dehydrogenase expression as an internal control.
Results
The overall success rate after primary EN-DCR was 60% (3/5 patients) at the 6-month follow-up. On nasal endoscopy, the two failed patients showed tight fibrous scarring over the rhinostomy site, and one also had severe synechiae. Otherwise, there were no abnormal endoscopic findings according to the Lund-MacKay staging system. 14 No other intraoperative or postoperative complications occurred during the study period.
Transcription of IL-6, IL-1β, and CCL2 (P=0.03) was increased in the EN-DCR samples, but statistical significance was achieved only for CCL2 (Figure 2). Due to a clear trend of increased inflammation in the EN-DCR samples, we performed a wider PCR array for inflammatory markers. Interestingly, there were notable differences between the groups with regard to inflammatory mediators (Table 2). The most significant finding in the EN-DCR samples compared with controls was increased gene expression of E-selectin, among other mediators.
Discussion
Our qRT-PCR results showed a clear trend toward increased transcription of IL-6, IL-1β, and CCL2. This finding was evident also from the PCR array, which additionally revealed notable differences in expression of several other inflammation-related mediators. There was a clearly increased expression of E-selectin mRNA, indicating an endothelial cell response in samples isolated from patients undergoing EN-DCR. By relatively weak carbohydrate interactions, E-selectin stimulates blood leukocytes to slow down and roll along the endothelium before their transmigration through the endothelium into the tissue. 18,19 The endothelium can become activated by bacterial lipopolysaccharide, but in the case of NLDO, the activation is more probably mediated by proinflammatory cytokines, such as IL-1 and TNF-α. IL-1β and TNF-α together with a third acute phase protein, IL-6, are pleiotropic cytokines exerting a variety of effects on cellular function. In addition to contributing to acute and chronic inflammation, they have all been associated with the process of fibrosis. 20,21 The continuing presence of inflammation and subsequent fibrosis, in turn, are considered to be the ultimate reason for NLDO. 1,22 In acute inflammation, neutrophils are the primary leukocytes attracted to the inflammatory site. 22 In response to chemokines such as IL-8, neutrophils express their IL-6 receptors which activate endothelial cells to decrease their IL-8 production and to favor production of CCL2, which attracts monocytes in particular. 20 The decreased expression of IL-8 mRNA and increased expression of CCL2 in our NLDO samples suggest that the inflammation has passed through its initiation phase. The increased gene expression of CCL16, CXCL3, CCL13, and CCL3 in NLDO samples compared with those in controls also supports the transition toward a mononuclear cell type-dominated response. [23][24][25][26] Our present results are in line with previous findings that inflammation is involved in the pathogenesis of NLDO. 1,2 Nuclear factor kappa B (NF-κB) is a major transcription factor regulating the expression of many inflammation-related genes. 27 In addition to the induction of E-selectin, 19 NF-κB plays an important role in the expression of other genes, such as CCL2, CCL3, IL-6, IL-1β, and TNF-α, which were also strongly increased in samples from NLDO patients as compared with those from controls. In order to avoid overwhelming inflammation, NF-κB is kept under strict autoregulation. 19 The dynamic regulation probably results in no visible increase in expression of mRNA for NF-κB in EN-DCR patients.
In the present study, our success rate for primary EN-DCR was 60%, which is lower than our previously reported rate of 93%. 15 This difference may be explained by the small patient population recruited for this study. Although symptomatic relief was achieved in our two failed patients, our strict criteria categorized these patients as failures because irrigation was unsuccessful. The lack of studies examining the inflammation restricts our understanding of the underlying mechanisms causing NLDO. Therefore, we propose that the present study provides new important information. The profile of investigated markers suggests that inflammation has passed from its initiation state, which is in accordance with the previous findings that long-lasting inflammation is present in NLDO. Since the main reason for failure of EN-DCR is scarring over the rhinostomy site, 4,5 an antifibrotic drug able to target fibrogenic factors or block their receptors could be beneficial. Therefore, a potential target for therapy to control the progression of fibrosis is currently being sought. 11 These potential inflammatory factors are promising candidates for future studies because they can be viewed as potential targets in the development of antifibrotic therapy intended for prevention of excessive scar formation.
Conclusion
The present study shows that expression of various inflammatory response-related genes is upregulated in the nasal mucosa of patients undergoing EN-DCR, but larger study populations are required to understand the details of these inflammatory responses.
Author contributions
Author contributions were as follows: EP, study planning, operations and clinical examination, manuscript writing; JMTH, analysis and interpretation of data, manuscript writing; MH, qRT-PCR measurements; AK, interpretation of data, manuscript writing; GS, study planning, manuscript writing; HT, study planning, manuscript writing; JS, study planning, manuscript writing; JN, manuscript writing; and KK, study planning, manuscript writing, and economic support. All authors contributed toward data analysis, drafting and revising the paper and agree to be accountable for all aspects of the work. | 2016-05-04T20:20:58.661Z | 2014-04-25T00:00:00.000 | {
"year": 2014,
"sha1": "b04045e6774cf520a73e26bb209de3fbab9bf0d0",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=19763",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3b46edcf2730bfd81ade5c3b8708cf46fdd66dfc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238025450 | pes2o/s2orc | v3-fos-license | An Integrated Multi Criteria Decision Making Model for Evaluating Park-and-Ride Facility Location Issue: A Case Study for Cuenca City in Ecuador
A park-and-ride (P&R) system is a set of facilities where private vehicle users can transfer to public transport to continue their journey. The main advantage of the system is decreasing congestion in the central business district. This paper aims to analyze the most significant factors related to Park-and-Ride facility location by adopting a combined model of the Analytic Hierarchy Process (AHP) and the Best Worst Method (BWM). The integrated model is applicable to complex problems that can be structured as a hierarchy with at least one 5 × 5 pairwise comparison matrix (PCM) or bigger. Applying AHP to a 5 × 5 or larger PCM may generate inconsistent matrices, which may cause a loss of reliable information. As a solution for this gap, we applied BWM, which generates more consistent comparisons than the AHP approach and requires fewer comparisons. That is the main reason for adopting the AHP-BWM model to evaluate Park-and-Ride facility location factors within a designed two-level hierarchical structure. As a case study, a real-world complex decision-making process was selected to evaluate the Park-and-Ride facility location problem in Cuenca, Ecuador. The results show that the application of multi-criteria methods provides a planning tool for experts when designing a P&R system.
Introduction
The Park-and-Ride (P&R) system is a set of facilities available for private vehicle users to transfer to public transport to complete their journey. Thus, the facilities should be located close to public transport stations. Besides this, in most cities with a Sustainable Urban Mobility Plan (SUMP), the purpose or criteria for these facilities is stipulated. These criteria include reducing pollution in the city center, increasing accessibility to public transport, and reducing private vehicle trips to the Central Business District (CBD) [1][2][3][4].
Considering that establishment of a P&R system involves locating facilities close to public transport stations, it would be necessary to have a facility for each public transport station. Therefore, a set of criteria and sub-criteria should be formulated, based on which experts can support the implementation of the P&R system in the urban area of a city in an efficient way [5][6][7][8][9][10][11].
Thus, a set of 6 main criteria and 19 sub-criteria has been established in research on the P&R system, in which the authors have used two multi-criteria methods, AHP (Analytic Hierarchy Process) and BWM (Best Worst Method) (Figure 1) [12][13][14]. Using the AHP and BWM methods, accessibility to public transport is the crucial criterion in implementing the P&R. Furthermore, the set of main criteria proposed in the study does not differ between the multi-criteria methods used, while differences remain between the sub-criteria. Therefore, more comprehensive models such as the AHP-BWM model should be applied to these criteria and sub-criteria [15]. The authors have already created a set of criteria and sub-criteria; however, it is pertinent to analyze these with different multi-criteria methods in order to know whether the criteria change according to the method applied. Although researchers have begun to use multi-criteria decision-making (MCDM) applications in the area of transportation, a gap is evident: essentially, no articles have investigated the location of facilities of a P&R system using AHP-BWM. Section 2 provides the theoretical background for MCDM methods, including a description of the location of the P&R facilities; Section 3 details the research method; Section 4 explains the case study for Cuenca, Ecuador; Section 5 applies the method developed; and Section 6 presents and discusses the results obtained by applying the methodology. The conclusions section discusses the findings of the study and offers directions for future research.
Literature Review
The literature review in this paper discusses the criteria used by the researchers for the location of P&R facilities, accompanied by an overview of the multi-criteria methods.
Private vehicles have negatively impacted CBDs in urban areas, and their consequences are widely known: congestion and pollution. Mobility experts have made numerous recommendations to city leaders to mitigate these issues [16]. The vast majority of approaches consist of switching from a private mode to a more beneficial transportation mode (e.g., public transport, pedestrian modes) [17][18][19]. However, for citizens who own private vehicles and live outside the metropolitan area with no or limited public transport connections, there should also be an option. This option is commonly known as the P&R system. The P&R system is a connection point between the private vehicle and public transport, and therefore the criteria that experts consider when implementing the system relate to all the transport modes involved [20][21][22]. Establishing a P&R system in the urban environment of a city leads to a set of criteria that are part of the operation of the P&R system, such as capacity, which refers to the number of parking spaces that can be installed in the P&R system. Moreover, since the P&R system involves public transportation parameters such as demand, accessibility, frequency, and travel times, these criteria are also taken into consideration [23][24][25][26][27]. Besides, when involving private vehicles, criteria such as traffic, travel time, and pollution are taken into account [28][29][30][31][32].
MCDM models are widely used for offering solutions to several problems in the field of transport and urban planning [33][34][35][36][37][38][39][40]. The AHP approach is based on the criteria for the interpretation, assessment, correction, and selection of the experts' judgment, which leads to some imperfections in the outcomes, resulting in a certain loss of precision in weighting the criteria and in findings that contradict reality [41,42]. Therefore, to solve AHP problems more efficiently, the method has been combined with other multi-criteria methods and with different mathematical and optimization models to achieve greater accuracy [43][44][45][46]. Specialists have also compared and optimized pairwise matrices using a real scenario, incorporating a sensitivity assessment with a simulation approach and the analytic network process (ANP) [47][48][49]. In an alternative integration model, the AHP approach was used to incorporate a model with the order of priorities to resemble the optimal solution preferences in order to assess the feasibility of this method [50,51]. Combining fuzzy theory and AHP makes it possible to interpret imprecise parameters within mathematical decision-making in the fuzzy domain [45,46]. The ANP can be combined with fuzzy theory to minimize possible errors in the decision-making system [47,48]. The Best Worst Method (BWM) helps to minimize pairwise comparisons (PCs) in questionnaire surveys, as it requires fewer pairwise comparisons than the traditional method. Pairwise comparisons between the best and worst criteria and all other criteria are formulated, and a max-min problem is solved to determine the weights of the criteria [52]. In addition, a technique for order of preference by similarity to ideal solution (TOPSIS) has been developed with interval-valued fuzzy numbers for multi-criteria decision making [53,54]. However, the BWM is new and has not yet been used to study which criteria are important for the location of a P&R system [55,56].
The P&R system and MCDM techniques have been combined in a few studies. For example, social, environmental, and economic criteria were applied to assess the location of P&R in combination with the VISUM software [57,58]. The scientific literature on P&R and MCDM shows a gap in P&R planning using advanced MCDM models. This article aims to formulate a combined AHP-BWM model that contributes to making decisions about the location of P&R facilities in a consistent manner.
Data Collection and Methods
This section deals with methodological elements and references, including a summary of how the survey was performed and a detailed overview of the methodology used.
Survey
The surveys have been prepared on the basis of meetings with transport planning specialists in order to determine the most relevant criteria for the implementation of a set of facilities in a P&R system. Respondents were recruited from the municipality and university's transport planning experts, consisting of six men and four women of different ages. The survey was completed in 25 to 30 min per expert. A total of 25 P&R system elements were assessed.
Design of the Saaty Scale and Description Criteria
The planning and ranking of the criteria to be used or taken into account for the location of the P&R system facilities is one of the most relevant parts of the study; therefore, these criteria are judged on a scale from 1 to 9. The use of this scale has the advantage of simplicity of application and the possibility of also using a scale of proportions. Moreover, it is more accurate than other methods, as it compares each indicator with the others. According to the previous study, 6 main criteria and 19 sub-criteria were identified [59]. Figure 2 shows the coding of each main criterion and also contains the description of the first level. Six criteria are the main criteria of this level (designated as level 1). These criteria have been assigned a code ranging from C1 to C6. Figure 3 refers to the description of the 19 sub-criteria of the second level, and a code is provided that identifies which main criterion each belongs to.
The approach is determined by the hierarchical criterion structure, in which criteria of the same category from the decision criteria tree chosen by the expert assessors are compared in pairs. (See Figure 4). social, environmental, and economic criteria were applied to assess the location of P&R combination with the VISUM software [57,58].
The scientific literature on P&R and MCDM shows a gap in P&R planning usi advanced MCDM models. This article aims to formulate a combined model betwe AHP-BMW that contributes to making decisions about the location of P&R facilities in consistent manner.
Data Collection and Methods
This section deals with methodological elements and references, including summary of how the survey was performed and a detailed overview of the methodolo used.
Survey
The surveys have been prepared on the basis of meetings with transport planni specialists in order to determine the most relevant criteria for the implementation of a of facilities in a P&R system. Respondents were recruited from the municipality a university's transport planning experts, consisting of six men and four women of differe ages. The survey was completed in 25 to 30 min per expert. A total of 25 P&R syste elements were assessed.
Design of the Saaty Scale and Description Criteria
The planning and ranking of the criteria to be used or taken into account for t location of the P&R system facilities is one of the most relevant parts of the study, a therefore, these criteria are ordered on a scale from 1 to 9. The use of this scale has t advantage of simplicity of application and the possibility of also using a scale proportions. Moreover, it is more accurate than other methods as it compares ea indicator with the other. According to the previous study, 6 main criteria and 19 su criteria were identified [59]. Figure 2 shows the coding of each main criterion and a contains the description of the first level. Six criteria are the main criteria of this lev (designed as level 1). These criteria have been assigned a code ranging from C1 to C Figure 3 refers to the description of the 19 sub-criteria of the second level, and a code provided that identifies which main criteria it belongs to. The approach is determined by the hierarchical criterion structure, in which criteria of the same category from the decision criteria tree chosen by the expert assessors are compared in pairs. (See Figure 4).
Description of the Conventional Analytic Hierarchy Process (AHP)
The traditional AHP approach is built on the hierarchical decision-making structure formed from decision-making elements and applied extensively in many fields [29][30][31]. The hierarchical structure consists of multi-levels where the main elements and subelements are placed, and the significance for the final level of the various levels defines the global values. The main steps of the conventional AHP are: setting up the hierarchical structure of the decision problem, formulating PCMs dependent on the hierarchy, The approach is determined by the hierarchical criterion structure, in which criteria of the same category from the decision criteria tree chosen by the expert assessors are compared in pairs. (See Figure 4).
Description of the Conventional Analytic Hierarchy Process (AHP)
The traditional AHP approach is built on the hierarchical decision-making structure formed from decision-making elements and applied extensively in many fields [29][30][31]. The hierarchical structure consists of multi-levels where the main elements and subelements are placed, and the significance for the final level of the various levels defines the global values. The main steps of the conventional AHP are: setting up the hierarchical structure of the decision problem, formulating PCMs dependent on the hierarchy,
Description of the Conventional Analytic Hierarchy Process (AHP)
The traditional AHP approach is built on a hierarchical decision-making structure formed from decision-making elements and is applied extensively in many fields [29][30][31]. The hierarchical structure consists of multiple levels where the main elements and sub-elements are placed, and the significance of the various levels for the final level defines the global values. The main steps of the conventional AHP are:
• setting up the hierarchical structure of the decision problem,
• formulating PCMs dependent on the hierarchy,
• planning the questionnaire sample,
• testing the accuracy,
• aggregation by the geometric mean,
• calculating weight vectors,
• estimating the global ratings,
• sensitivity evaluation.
The PCM is a positive square matrix A_x, where x_ij > 0 is the quantitative measure of the relative importance of w_i over w_j, and w_x is the weight vector derived from A_x (Table 1). In Saaty's popular eigenvector method, the weights of a PCM are determined from the following equation:

A_x w_x = λ_max w_x

where λ_max is the maximum eigenvalue of the matrix A_x.

Table 1. The structure of a (6 × 6) consistent theoretical PC matrix.
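As an illustration of the eigenvector method above, the following is a minimal sketch in Python. The 6 × 6 matrix entries are invented for illustration and are not the study's empirical judgments.

```python
# Derive the AHP weight vector w as the principal eigenvector of a pairwise
# comparison matrix A, i.e. the solution of A w = λ_max w.
import numpy as np

A = np.array([[1,   3,   1/5, 1,   2,   1/2],
              [1/3, 1,   1/7, 1/3, 1,   1/4],
              [5,   7,   1,   3,   4,   2  ],
              [1,   3,   1/3, 1,   2,   1  ],
              [1/2, 1,   1/4, 1/2, 1,   1/2],
              [2,   4,   1/2, 1,   2,   1  ]])   # illustrative reciprocal 6x6 PCM

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # index of the maximum eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                     # normalized priority vector
lam_max = eigvals.real[k]
print(np.round(w, 3), round(lam_max, 3))
```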
For empirical PCMs, reciprocity is fulfilled for every PCM, x_ji = 1/x_ij, where x_ii = 1. However, for empirical matrices, perfect consistency is most likely not achieved. Participants were invited to indicate the relative importance of each studied Park-and-Ride facility location factor on a Saaty scale, as shown in Figure 4 [60] and Table 2.
Table 2. The Saaty scale used for the pairwise comparisons.

Numerical value: Explanation
1: The two elements have the same importance
3: Experience and judgment slightly favor one element over the other
5: One element is strongly more important
7: One element is very strongly dominant
9: One element is favored by at least an order of magnitude
2, 4, 6, 8: Intermediate values used to compromise between two judgments

Empirical matrices are generally not fully consistent under the eigenvector method, despite being filled in by the evaluators.
In order to examine PCM consistency, Saaty introduced a consistency check, which guarantees that the accepted matrices fulfill an appropriate inconsistency threshold [60]. The Consistency Index is computed as

CI = (λ_max − m) / (m − 1)

where CI is the Consistency Index, λ_max is the maximum eigenvalue of the PCM, and m represents the number of rows in the matrix. CR is determined by the following formula:

CR = CI / RI

where RI is the average CI value of randomly generated PCMs of the same size (Table 3). In the AHP method, the acceptable value of the Consistency Ratio (CR) is CR < 0.1. Sensitivity analysis allows one to perceive the effects of changes in the main elements on the sub-element ranking and helps the decision-maker check the stability of the results throughout the process.
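A corresponding sketch of Saaty's consistency check follows. The RI values below are the commonly published random indices and stand in for Table 3, which is not reproduced here; the 3 × 3 matrix is illustrative only.

```python
# Saaty's consistency check for an m x m PCM: CI = (λ_max - m)/(m - 1), CR = CI/RI.
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(A):
    m = A.shape[0]
    lam_max = np.max(np.linalg.eigvals(A).real)  # maximum eigenvalue of the PCM
    ci = (lam_max - m) / (m - 1)                 # Consistency Index
    return ci / RI[m]                            # Consistency Ratio

A = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])  # illustrative 3x3 PCM
print(f"CR = {consistency_ratio(A):.3f}  (acceptable if below 0.1)")
```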
Best Worst Method
The Best Worst Method (BWM) was implemented to produce weights for criteria and sub-criteria while requiring fewer pairwise comparisons and offering a clearer comparison procedure. The best criterion or alternative is the most important one for the decision, whereas the worst is the least important [61,62]. BWM is established as part of a broad range of MCDM approaches. It is seen as a successful solution because of its modest data requirements, as well as being well-structured, transparent, easy to use, and producing accurate outcomes [63]. The key distinction of the BWM approach is that its pairwise comparisons rely only on the most significant and least significant parameters [64]. The appeal of the BWM is motivated by its advantages, including multiple features that facilitate measurement and interpretation: fewer pairwise comparisons, greater reliability, and higher precision of the calculated weight coefficients. The key steps are briefly explained below. We consider n elements (e_1, e_2, ..., e_n) and select the best and the worst element. The best element is compared with all the others on a scale from 1 to 9, giving the Best-to-Others vector V_B = (e_B1, e_B2, ..., e_Bn), where obviously e_BB = 1. Analogously, comparing all elements to the worst element on the same scale gives the Others-to-Worst vector V_W = (e_1W, e_2W, ..., e_nW)^T, where e_WW = 1.
The optimal weights are obtained, and their consistency is checked by calculating the consistency ratio with the following formula:

Consistency Ratio = ξ* / Consistency Index

The consistency index values are shown in Table 4. In order to achieve the optimal weights for all elements, the maximum absolute differences |w_B/w_j − e_Bj| and |w_j/w_W − e_jW| should be minimized for all j. Assuming non-negative weights that sum to one, the following problem is solved:

min max_j { |w_B/w_j − e_Bj|, |w_j/w_W − e_jW| }

subject to: Σ_j w_j = 1, w_j ≥ 0 for all j.

This can be transferred to the following form:

min ξ
subject to: |w_B/w_j − e_Bj| ≤ ξ and |w_j/w_W − e_jW| ≤ ξ for all j, Σ_j w_j = 1, w_j ≥ 0.

By solving this problem, the optimal weights w* and ξ* are obtained.
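The min-max problem above is commonly solved via a linear reformulation. The sketch below solves the linear BWM variant (constraints of the form |w_B − e_Bj w_j| ≤ ξ) with scipy's linprog; the best/worst indices and the judgment vectors are hypothetical inputs, not the experts' actual answers from this study.

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(best, worst, best_to_others, others_to_worst):
    """Linear BWM: returns the weight vector w and the consistency indicator xi*.
    best/worst are 0-based indices; best_to_others[j] = e_Bj, others_to_worst[j] = e_jW."""
    n = len(best_to_others)
    c = np.zeros(n + 1)
    c[-1] = 1.0                                   # objective: minimize xi (last variable)
    A_ub, b_ub = [], []

    def add_abs(lin):                             # encode |lin . w| <= xi as two inequalities
        A_ub.append(np.append(lin, -1.0)); b_ub.append(0.0)
        A_ub.append(np.append(-lin, -1.0)); b_ub.append(0.0)

    for j in range(n):
        lin = np.zeros(n)
        lin[best] += 1.0
        lin[j] -= best_to_others[j]               # w_B - e_Bj * w_j
        add_abs(lin)
        lin = np.zeros(n)
        lin[j] += 1.0
        lin[worst] -= others_to_worst[j]          # w_j - e_jW * w_W
        add_abs(lin)

    A_eq = [np.append(np.ones(n), 0.0)]           # weights sum to one
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=[1.0], bounds=[(0, None)] * (n + 1))
    return res.x[:n], res.x[-1]

# Hypothetical example: 6 main criteria, C3 (index 2) judged best, C2 (index 1) worst
w, xi = bwm_weights(best=2, worst=1,
                    best_to_others=[4, 8, 1, 2, 3, 2],
                    others_to_worst=[2, 1, 8, 4, 3, 4])
print(np.round(w, 3), round(xi, 3))
```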
The Proposed AHP-BWM Model
Suppose that in a decision problem there are n criteria structured in m levels, so that k = 1, ..., m indexes the levels of the decision. Within a given level containing h criteria, let j = 1, ..., h index the criteria of that level, and let J = 1, ..., n denote the set of all decision criteria. In this notation, the pair (k, j) = (1, 1) refers to the first criterion of the first level, and so on.
The first step in the proposed model is setting up the hierarchical structure of decisions and alternatives. In the second step, the BWM approach is applied to the clusters that contain 5 criteria or more, and the AHP approach is applied to the clusters that contain 4 or fewer criteria. The proposed AHP-BWM model helps the evaluators avoid the inconsistency issue during the evaluation process. Moreover, the model saves time and effort for both evaluators and experts, because in the AHP approach the evaluators need to complete n(n − 1)/2 pairwise comparisons (PCs), whereas in the BWM approach only 2n − 3 PCs are needed.
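A small sketch of this dispatch rule and its payoff in survey size follows, using the cluster sizes from the criteria tree (6 main criteria and sub-criteria clusters of 2 to 4 elements); the helper function name is our own.

```python
# Choose AHP for small clusters (n <= 4) and BWM for larger ones, and report
# how many pairwise comparisons each cluster requires under the chosen method.
def comparisons_needed(n_criteria):
    if n_criteria <= 4:
        return "AHP", n_criteria * (n_criteria - 1) // 2   # n(n-1)/2 PCs
    return "BWM", 2 * n_criteria - 3                       # 2n-3 PCs

for cluster, size in {"main criteria": 6, "C4 sub-criteria": 4,
                      "C5 sub-criteria": 4, "C2 sub-criteria": 3}.items():
    method, pcs = comparisons_needed(size)
    print(f"{cluster} ({size} items): {method}, {pcs} comparisons")
```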
The main steps of the proposed AHP-BWM model are described in Figure 5.
Case Study
This research study applies the method described above in a real case study; for this purpose, the city of Cuenca in Ecuador was considered. This city, located in the south of Ecuador, has two significant mobility projects: the sustainable mobility plan and the new light rail transport system, called LRT. Cuenca comprises 15 urban areas, each of which is divided into neighborhoods segmented by central roads. UNESCO has also categorized the historic center as a World Heritage Site. The heavy traffic congestion in the city center area is currently a cause for concern. Orellana et al. [65] have carried out numerous studies in the field of mobility, including important work on the impact on public space, particularly on road networks and bicycle users' circulation. Hermida et al. [66] have also published research on the effect of the urban layout on mobility.
In contrast, minimal attention has been paid to users who regularly commute by private car from outside the metropolitan area and who might choose to travel to the city center by a more convenient mode of transport (the LRT). To address this issue, in a study by Ortega et al. [67,68], a series of seven facilities belonging to the P&R system was established (marked from A to G); see Figure 6.
Figure 6. Map of the city of Cuenca in Ecuador and its zonal division, the P&R system, and the Light Rail Transport.
Results
The questions elaborated in the research involve a first comparison of the main criteria belonging to level one; for example, the relative value for the P&R system facilities is compared between "distance-C1" and "traffic conditions on the route (origin-destination)-C2"; see Table 5. In order to obtain the aggregated weights of the 10 expert evaluators, PCs have to be generated for all branches of the decision system. The level 2 sub-criteria are then compared within their respective level 1 criteria; see Table 6. Finally, an overall comparison is made among all level 1 and level 2 criteria, as presented in Table 7.
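For the aggregation step ("aggregation by the geometric mean" in the AHP workflow above), the following is a minimal sketch of combining several evaluators' PCMs element-wise. The two 3 × 3 matrices are toy inputs standing in for the ten experts' actual judgments.

```python
# Element-wise geometric mean of the experts' pairwise comparison matrices;
# the geometric mean preserves the reciprocal property of the aggregated PCM.
import numpy as np

def aggregate_geometric(matrices):
    stacked = np.stack(matrices)              # shape: (n_experts, m, m)
    return np.exp(np.log(stacked).mean(axis=0))

m1 = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])  # toy expert PCM 1
m2 = np.array([[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]])  # toy expert PCM 2
print(np.round(aggregate_geometric([m1, m2]), 3))
```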
The AHP-BWM model was applied according to the matrix sizes to evaluate the P&R-related factors more effectively. In addition, the consistency of the PCs in both AHP and BWM was tested and found acceptable in all cases.
A future comparative study across several cities could show results that vary according to the type of city; for this reason, planners should carry out comparative studies.
Discussion
The research carried out with transport planners in municipalities and local universities regarding the location of facilities in the P&R system allowed a fundamental assessment to be made of each factor that planners consider when placing a facility. As a result of this approach, the criteria that are considered most relevant can be identified.
To facilitate the interpretation of the data, the discussion section is divided into three parts. The first part focuses on level one of Figure 1 and describes the level of relevance and the usefulness of taking all these criteria into account. The next part consists of a comparison between the sub-criteria belonging to each main criterion. Finally, the second-level results, in which the 19 sub-criteria are ranked, are analyzed.
At the first level, each criterion in a sequence is defined, beginning with the most important one and going towards to the least important criterion. The C3 criterion of "accessibility to public transport" proved to be the most important. This is a rather rational outcome, since the second part of the journey is made by public transport in the P&R system. Accessibility of public transport also lies in the accessibility of the P&R system. After that is criterion C6, which refers to the "environment"; i.e., to minimize the adverse effects of cars in the city center. In order of importance, the following criterion is C4, which applies to all transport aspects in which the P&R system is implemented as part of the urban transport network. This derives from the fact that the P&R is connected to public transport. The following one is the "economic" criterion, C5, whereby, similar to all transport projects, the economic aspect of the project's development focuses on the location and viability of the project. C1 is the penultimate criterion and refers to the shortest or longest distance from the P&R system, but transport planners do not consider it very relevant, because a private car is used in the first part of the journey. The lowest ranking criterion is C2, which applies to the route's traffic conditions, because the second part of the journey is via a direct public transport line.
Each principal criterion comprises several sub-criteria. Thus, criterion C1 is composed of two sub-criteria, of which C1.1 is more significant than C1.2. The explanation can be found in the fact that the first portion of the trip is made by a private vehicle; therefore, the distance that the user can travel in the first portion of the trip is more relevant than the distance covered in the second part of the trip by public transport.
Sub-criterion C2.3 is the most prominent one within C2, and reflects the total travel time using the P&R system (private vehicle + parking time + bus time). In contrast, when comparing sub-criteria C2.1 and C2.2, the more prominent sub-criterion is travel time by public transport, meaning that the hourly operations of public transport are essential to ensure the accessibility of the system. The experts considered one of the most important aspects of the P&R system to be public transport.
Regarding the sub-criteria belonging to the main criterion C3, C3.1 is the most important one, because it ensures connectivity with the other part of the journey, and C3.3 follows it; as described in the literature, this is one of the elements of the facilities that are part of the P&R system. Sub-criterion C3.2 is related to travel time.
In category C4, C4.2 was ranked first, which reflects that the demand for public transport is increased by the potential users of the P&R system. C4.1, the decrease of private car journeys, is one of the main objectives of the P&R system and is consequently one of the main components. C4.3 is connectivity and the number of existing public transport lines. C4.4 is the demand for parking in the P&R system. C5 involves the economic part of the project; sub-criterion C5.2, the cost of land use, depends on the profitability of the project, followed by C5.1 and C5.3, which refer to the financial part of the operation of the P&R system. Sub-criterion C5.4 refers to the cost of implementation and maintenance.
Furthermore, the order of relevance of the sub-criteria of criterion C6 is pollution reduction, C6.1, followed by noise reduction, C6.2. These two sub-criteria relate to the undesirable effects of the private vehicles in the CBD that can be reduced with the implementation of a P&R system. Finally, C6.3 refers to the expropriation of land used principally for green spaces.
Level two of the hierarchy requires extensive explanation and has been divided into three categories for better understanding: most important (rank <6), mid-level important (ranks 6-12), and low-level important (rank >12). The most important criterion is C3.1, since specialists consider implementing a P&R system that ensures accessibility and connection to public transport to be essential, as the P&R system is a place of interchange between the private vehicle and public transport. In second place is criterion C6.1, which reflects that the P&R system is a tool that reduces pollution in the CBD. C1.1 and C3.3 are sub-criteria that relate, respectively, to the distance from the origin to the facility in the P&R system and to the facility's distance to the public transport station. C4.2 refers to the demand for public transport by users of the P&R system. C6.2 involves the undesirable effects of the private car, such as noise.
At a medium level of importance is C4.1, which refers to reducing private vehicle trips in the CBD. C3.2 is the travel time from the facility to the public transport station. C5.2 focuses on the cost of land use, in terms of expropriation or use of land to implement the P&R system. C2.3 is the total travel time using the P&R system. C4.3 relates to the number of transport connections linked to the P&R system. C6.3 concerns the land that must be expropriated for the P&R system.
The sub-criteria considered least important for implementing the P&R system are C5.1 and C5.3, which relate to the economic aspects of project construction. C2.2 focuses on general transport aspects, such as demand and travel time by public transport, which is guaranteed by the connection to, or implementation of, a direct public transport line. C5.4 refers to the maintenance cost, and C1.2 to the distance in the first portion of the trip. C4.4 is the demand on the P&R system, which is guaranteed by the demand of the overall system. C2.1 has the lowest priority of all the criteria, since it belongs to the first part of the route using a P&R system, which is made by private vehicle.
The research carried out provides a planning tool for medium-sized cities that already operate a P&R system or wish to implement one. In Latin America, P&R is relatively new, and it would be interesting to carry out a study comparing Latin American and European cities. The most important thing is to guarantee the connection to the public transport system and thereby encourage private vehicle users to make this modal change; accessibility to public transport is therefore one of the main criteria.
Conclusions
Park-and-Ride facility location problems have been considered an important and complex task for addressing road issues because of the large amount of location data and its variation. The study conducted in this paper sets out the criteria and sub-criteria identified by transport planning specialists in developing a P&R system. A multi-criteria decision-making (MCDM) model is used to rank the criteria by importance. This novel approach to the location of P&R facilities is possible because it combines environmental, traffic, and distance attributes. The AHP-BWM model enables us to identify and interpret the experts' essential requirements for locating the ideal place for a P&R facility within a city's urban environment. In comparison to conventional MCDM models, the approach offers broader applicability, and since the pairwise comparisons (PCs) are consistent, the model's findings are more reliable than those of traditional models.
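For readers who want to see the weight-derivation step, the following is a minimal sketch of the linear Best-Worst Method (BWM), one building block of the AHP-BWM model: the weights solve a small linear program that minimizes the worst violation of the best-to-others and others-to-worst judgments. The best/worst designations and comparison vectors below are illustrative assumptions, not the study's elicited values.

# Hedged sketch of the linear BWM weight derivation via a linear program.
import numpy as np
from scipy.optimize import linprog

criteria = ["C1", "C2", "C3", "C4", "C5", "C6"]
best, worst = 2, 1                      # assume C3 judged best, C2 judged worst
a_B = np.array([4, 8, 1, 2, 3, 2])      # best-to-others (Saaty scale 1..9)
a_W = np.array([3, 1, 8, 5, 4, 5])      # others-to-worst

n = len(criteria)
c = np.zeros(n + 1); c[-1] = 1.0        # variables: w_1..w_n, xi; minimize xi

A_ub, b_ub = [], []
for j in range(n):
    for sign in (1.0, -1.0):            # |w_best - a_Bj * w_j| <= xi
        row = np.zeros(n + 1)
        row[best] += sign; row[j] -= sign * a_B[j]; row[-1] = -1.0
        A_ub.append(row); b_ub.append(0.0)
    for sign in (1.0, -1.0):            # |w_j - a_jW * w_worst| <= xi
        row = np.zeros(n + 1)
        row[j] += sign; row[worst] -= sign * a_W[j]; row[-1] = -1.0
        A_ub.append(row); b_ub.append(0.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=[[1.0] * n + [0.0]], b_eq=[1.0],   # weights sum to 1
              bounds=[(0, None)] * (n + 1), method="highs")
for name, w in sorted(zip(criteria, res.x[:n]), key=lambda t: -t[1]):
    print(f"{name}: {w:.3f}")
print(f"consistency indicator xi* = {res.x[-1]:.3f} (closer to 0 is better)")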
The AHP-BWM approach's findings are more realistic than those of the conventional multi-criteria approach, since the criteria considered by transport planning experts are represented mathematically in a proper way. The accessibility of public transport is thus the most crucial criterion: a P&R facility should be linked to a city's public transport connections, which guarantees a strategic position at public transport stations. The P&R system is new in some Latin American cities. Scientific testing of the parameters used to locate P&R facilities shows that some design parameters play only a secondary role. A natural next step is to apply this methodology in cities that already operate a P&R framework and determine whether the resulting requirements differ from the expert opinions gathered here.
Considering further research, many other applications of the AHP-BWM model are needed to analyze different real-world characteristics. The objective benefits are clear: the approach provides a faster and cheaper survey process, and the survey pattern can undoubtedly be extended more easily with this technique than with a conventional AHP based on a complex PC questionnaire. This paper provided only one example, however, and many other applications will be needed to fully validate the technique. The combined AHP-BWM model will help researchers improve their future studies by increasing consistency with fewer PCs and saving time in analyzing the collected data. Nevertheless, the analysis process demands considerable time and effort from the experts, making it challenging to gather the answers of the various transport planners.
Multiple MCDM techniques (e.g., ELECTRE, TOPSIS, machine learning) should be explored in a fuzzy environment in future studies, as should the combination of this model with a broader model group, such as a geographical approach. Comparing cities that already operate P&R systems may also bring new ideas to light. Finally, a study in which both experts and potential users are consulted will allow us to evaluate the requirements to be improved in the implementation and design of the P&R system. | 2021-09-28T18:20:18.401Z | 2021-07-04T00:00:00.000 | {
"year": 2021,
"sha1": "c0566f526f8edf3bcb363d5a47f2c2bf5a992b20",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/13/7461/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "929ec1dceac1c6ebfde2d895d62263578d72d48d",
"s2fieldsofstudy": [
"Engineering",
"Business",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
245636442 | pes2o/s2orc | v3-fos-license | Predictors for symptomatic intracranial hemorrhage after intravenous thrombolysis with acute ischemic stroke within 6 h in northern China: a multicenter, retrospective study
Background and purpose This study assessed the predictive factors for symptomatic intracranial hemorrhage (sICH) in patients with acute ischemic stroke (AIS) after receiving intravenous thrombolysis (IVT) within 6 h in northern China. Methods We retrospectively analyzed ischemic stroke patients who were treated with IVT between November 2016 and December 2018 in 19 hospitals in Shandong Province, China. Potential predictors of sICH were investigated using univariate and multivariate analyses. Results Of the 1293 enrolled patients (845 men, aged 62 ± 11 years), 33 (2.6%) developed sICH. The patients with sICH had a higher prevalence of coronary heart disease (36.4% vs. 13.7%, P = 0.001), more severe stroke (mean National Institutes of Health Stroke Scale [NIHSS] score on admission of 14 vs. 7, P < 0.001), longer door-to-needle time (DNT) (66 min vs. 50 min, P < 0.001), higher blood glucose on admission, higher white blood cell counts (9000/mm3 vs. 7950/mm3, P = 0.004), and higher neutrophil ratios (73.4% vs. 67.2%, P = 0.006), among other factors. According to the results of the multivariate analysis, the frequency of sICH was independently associated with the NIHSS score (OR = 3.38; 95%CI [1.50–7.63]; P = 0.003), DNT (OR = 4.52; 95%CI [1.69–12.12]; P = 0.003), and white blood cell count (OR = 3.59; 95%CI [1.50–8.61]; P = 0.004). When these three predictive factors were aggregated, the multi-adjusted odds ratios (95% confidence intervals) of sICH for persons concurrently having one, two, or three of these factors were 2.28 (0.25–20.74), 15.37 (1.96–120.90), and 29.05 (3.13–270.11), respectively (P for linear trend < 0.001), compared with participants without any of the factors. Conclusion NIHSS scores higher than 10 on admission, a DNT > 50 min, and a white blood cell count ≥9000/mm3 were independent risk factors for sICH in Chinese patients within 6 h after IVT for AIS.
Introduction
Intravenous thrombolysis (IVT) treatment is an effective therapy for acute ischemic stroke (AIS), which includes providing recombinant tissue plasminogen activator (rt-PA) to patients within 4.5 h [1] and urokinase (UK) within 6 h in China [2]. However, symptomatic intracranial hemorrhage (sICH) is one of the most feared complications after thrombolytic therapy [3,4], and is associated with neurological deterioration and poor outcome. Thus, identifying the predictors of sICH in patients receiving IVT is crucial.
sICH has been defined slightly differently among previous studies; its incidence varies between 2.0 and 7.2% worldwide [5–8], and the incidence of sICH in Asia is higher [9] than, or equivalent [8] to, that in Western countries according to the mSITS-MOST (modified version of the Safe Implementation of Thrombolysis in Stroke-Monitoring Study) criteria. Different factors, such as a higher National Institutes of Health Stroke Scale (NIHSS) score, advanced age, elevated serum glucose levels, and cardioembolism, have been suggested as predictors of sICH after IVT within 4.5 h of AIS in the TIMS-China study [5].
Given that UK is much cheaper and may have an apparently longer therapeutic time window than rt-PA, Chinese guidelines recommend UK for use within 6 h of the onset of AIS as an alternative to rt-PA [10]. However, to our knowledge, the incidence and predictors of sICH in Chinese patients with AIS treated with rt-PA or UK within 6 h remain unclear, and it is worthwhile to explore the risk factors for sICH based on multicenter retrospective data from northern China. In summary, identifying the risk factors for sICH has important practical significance for communicating with patients and their relatives and for taking efficacious measures to prevent sICH.
Study design
We retrospectively analyzed ischemic stroke patients who received IVT, including rt-PA and urokinase, within 6 h of stroke onset at 19 hospitals in Shandong Province, China, between November 2016 and December 2018. Inclusion and exclusion criteria for IVT were mainly adopted from the National Institute of Neurological Disorders and Stroke protocol and the protocol of the Chinese Guidelines for the Management of Stroke. Patients were excluded if they received endovascular treatment after IVT, had known tumors or inflammation, received treatment > 6 h after symptom onset, had no follow-up brain imaging after treatment administration, or had unavailable blood samples. The selection of different thrombolytic doses and treatment strategies was completely determined by the institution or the treating physician and was not standardized. We assessed stroke severity by the baseline NIHSS score, and the presumed cause of ischemic stroke with the international Trial of ORG 10172 in Acute Stroke Treatment (TOAST) classification [11]. All TOAST classification assignments were further verified by an experienced senior neurologist (Jifeng Li).
Data on patient demographics, clinical characteristics and the use of thrombolysis medications were collected from patients' charts by neurologists from the participating hospitals. Potential risk factors included age, sex, hypertension, diabetes mellitus, dyslipidemia, coronary heart disease, atrial fibrillation, door-to-needle time (DNT) (minutes), Onset-to-needle time (ONT) (minutes), NIHSS scores on admission, systolic blood pressure before thrombolysis, diastolic blood pressure before thrombolysis, smoking status, alcohol consumption, blood glucose on admission, antiplatelet treatment pre-IVT, anticoagulant treatment pre-IVT, serum white blood cell (WBC) count, neutrophil-WBC ratio (neutrophil ratio), platelet count, low-density lipoprotein cholesterol (LDL-C) level on admission, rt-PA dose (mg/kg body weight), urokinase dose (million units), and stroke subtype.
The study was approved by the Ethical Standards Committee on Human Experimentation at Shandong Provincial Hospital Affiliated to Shandong First Medical University. A signed consent form was obtained from all participants. The study was conducted in accordance with the principles for medical research involving human subjects expressed in the Declaration of Helsinki.
Outcome measures
The primary outcome was the incidence of symptomatic intracranial hemorrhage (sICH) between 0 and 36 h after IVT. sICH was defined as parenchymal hemorrhage accompanied by neurological deterioration of ≥4 points on the National Institutes of Health Stroke Scale, or death, per the modified Safe Implementation of Thrombolysis in Stroke-Monitoring Study (mSITS-MOST) criteria [12].
Statistical analysis
For categorical variables, the chi-square (χ2) test or Fisher's exact test was used, and results are presented as percentages. For continuous variables, the t-test or the Mann-Whitney U-test was used, and results are presented as means ± SD or medians (interquartile ranges). Variables with P < 0.05 in the univariate analysis were entered into the multivariate logistic regression analysis. Receiver operating characteristic curve analysis was performed for continuous variables with statistical significance in the logistic regression analysis to assess their predictability of sICH. A multiple logistic regression model was used to estimate the odds ratios (ORs) and 95% confidence intervals (CIs) of sICH associated with individual risk factors and their load, which was assessed by counting the number of risk factors that were significantly related to an increased odds ratio of sICH (P < 0.05). We report the results from two models: Model 1 was unadjusted, and Model 2 was adjusted for coronary heart disease, neutrophil ratio, and stroke subtype. A two-sided P value of < 0.05 was considered statistically significant. All analyses were performed with SPSS 22.0 software.
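As a concrete illustration of the multivariate step (a minimal sketch only, with simulated data and hypothetical column names, not the study's dataset or its SPSS workflow), odds ratios and 95% CIs can be obtained from a logistic regression as follows:

# Hedged sketch: fit a logistic regression on dichotomized predictors and
# report odds ratios with 95% confidence intervals.
import numpy as np
import pandas as pd
import statsmodels.api as sm

np.random.seed(0)
# One row per patient; sich is 0/1, predictors dichotomized as in the text.
df = pd.DataFrame({
    "sich":       np.random.binomial(1, 0.03, 500),
    "nihss_gt10": np.random.binomial(1, 0.3, 500),
    "dnt_gt50":   np.random.binomial(1, 0.5, 500),
    "wbc_ge9000": np.random.binomial(1, 0.3, 500),
})

X = sm.add_constant(df[["nihss_gt10", "dnt_gt50", "wbc_ge9000"]])
fit = sm.Logit(df["sich"], X).fit(disp=0)

ors = np.exp(fit.params)       # exponentiated coefficients = odds ratios
ci = np.exp(fit.conf_int())    # 95% confidence intervals on the OR scale
print(pd.DataFrame({"OR": ors, "CI_low": ci[0], "CI_high": ci[1]}))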
Results
From November 2016 to December 2018, 1541 patients with AIS from 19 hospitals participated in this retrospective study. Among them, 22 patients who received IVT > 6 h after stroke onset were excluded, 56 patients who did not undergo follow-up brain CT after recanalization were excluded, and another 2 patients were excluded because of a lack of rt-PA dose record data. Furthermore, 168 patients who received endovascular treatment were also excluded. Thus, a total of 1293 patients with an onset-to-needle time < 6 h met our study criteria. Among them, 33 (2.6%) experienced mSITS-MOST-defined sICH.
The demographic and baseline characteristics of the patients with sICH are summarized in Table 1. Compared with patients without sICH, sICH patients had more severe stroke (defined by higher NIHSS scores) on admission, longer DNT, a higher prevalence of coronary heart disease, higher blood glucose on admission, higher neutrophil ratios and white blood cell counts, higher rt-PA doses (0.9 mg/kg vs. 0.6 mg/kg or 50 mg/person), and higher rates of cardioembolism.
(Table 1 note: Fisher's exact test was used for atrial fibrillation, coronary heart disease, diabetes mellitus, dyslipidemia, use of thrombolysis medications, dose of rt-PA, dose of urokinase, and stroke subtype.)
When these three predictive factors were aggregated, the multi-adjusted OR (95% confidence interval) of having sICH increased significantly with an increasing number of concurrent predictive factors (P for linear trend < 0.001) (Table 4).
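A sketch of this risk-factor "load" analysis is shown below: count how many of the three dichotomized predictors each patient has, then estimate category-wise ORs and a linear-trend P value. Data and column names are hypothetical, as in the previous sketch; the ORs here are unadjusted, whereas the paper's Model 2 additionally adjusts for covariates.

# Hedged sketch of the risk-factor load analysis (counts of concurrent factors).
import numpy as np
import pandas as pd
import statsmodels.api as sm

np.random.seed(1)
df = pd.DataFrame({
    "sich":       np.random.binomial(1, 0.03, 500),
    "nihss_gt10": np.random.binomial(1, 0.3, 500),
    "dnt_gt50":   np.random.binomial(1, 0.5, 500),
    "wbc_ge9000": np.random.binomial(1, 0.3, 500),
})
df["load"] = df[["nihss_gt10", "dnt_gt50", "wbc_ge9000"]].sum(axis=1)

# ORs per load category (0 concurrent factors = reference)
dummies = pd.get_dummies(df["load"], prefix="load", drop_first=True).astype(float)
cat_fit = sm.Logit(df["sich"], sm.add_constant(dummies)).fit(disp=0)
print(np.exp(cat_fit.params))   # ORs for 1, 2, 3 concurrent factors

# P for linear trend: enter the load as a continuous covariate
trend_fit = sm.Logit(df["sich"], sm.add_constant(df[["load"]].astype(float))).fit(disp=0)
print("P for linear trend:", trend_fit.pvalues["load"])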
Discussion
In this multicenter retrospective study, we observed that sICH occurred in 2.6% of patients by the mSITS-MOST definition, which is comparable to the incidence reported in the TIMS-China trial (2.0%) [5]; note, however, that only patients who received intravenous rt-PA within 4.5 h of AIS onset were recruited for the TIMS-China trial. Furthermore, our report showed that a higher NIHSS score, a higher white blood cell count on admission, and delayed recanalization treatment were independently associated with sICH after IVT. Our study appears to be the first multicenter retrospective study conducted in northern China to investigate the predictors of sICH in AIS patients treated with rt-PA or UK within 6 h, which may better reflect real-world practice.
Compared with patients without sICH, sICH patients in our study had higher blood glucose on admission (mean 9.1 versus 8.3 mmol/L, P = 0.045) (Table 1). Nevertheless, we found no statistical association between admission blood glucose and an increased risk of sICH after IVT according to the mSITS-MOST criteria (OR = 1.938; P = 0.124) (Table 2), consistent with findings reported in prior studies [13,14]. However, in other previous studies [5,15], an elevated baseline glucose level was shown to be a risk factor for sICH in acute ischemic stroke patients treated with IV rt-PA, where sICH was diagnosed based on the European Cooperative Acute Stroke Study II (ECASS-II) definition. The mechanism behind this discrepancy requires further research.
High white blood cell (WBC) counts are known to be involved in the inflammatory process of AIS [16]. A study by Tiainen et al. reported that higher WBC counts at admission were significantly associated with sICH in AIS patients treated with IVT [17], and similar findings were obtained in our research. In contrast, another study indicated that elevated WBC counts failed to independently predict sICH [18]. In our research, the best predictor of sICH was the WBC count in blood tests, and a value ≥9000/mm3 indicated a 3.6-fold increased risk of sICH. The differing results of previous studies may have been caused by different times of blood sampling, given the dynamic changes in WBC counts after stroke [19]. The mechanism behind the association between elevated WBC counts and sICH is not completely understood; it may be partly because AIS causes WBCs to migrate into the brain, where they can cause brain edema and injury by initiating inflammatory cascade reactions and releasing inflammatory cytokines [20].
Among the clinical risk factors, DNT and NIHSS scores were independently associated with sICH in our study, and these results are consistent with those of previous studies [5,13]. The elapsed time from hospital admission to the thrombolytic bolus is defined as the door-to-needle time (DNT). The benefit of IVT for patients with AIS is time-dependent: the clinical benefit declines rapidly ("time is brain"), and every minute counts [21]. In our report, DNT turned out to be the most reliable predictor of sICH among the parameters assessed; patients with a DNT > 50 min had a 4.5-times higher risk of developing sICH than those with a DNT ≤ 50 min, and the difference remained statistically significant (P = 0.003) in the multivariate analysis. The National Institutes of Health Stroke Scale (NIHSS) score is a tool used to objectively quantify stroke impairment and is the most commonly used stroke outcome scale [22]. Higher NIHSS scores are generally associated with more severe ischemic stroke, reflecting larger areas of injured blood vessels that are prone to bleeding after IVT. In our research, a higher initial NIHSS score on admission increased the risk of sICH, similar to the findings of previous reports [23,24]. We found that patients with NIHSS scores > 10 had a 3.4-times higher risk of sICH (using NIHSS scores ≤10 as the reference). If a patient with acute stroke who is receiving IVT has an NIHSS score > 10, the physician must be cautious and should check the patient's white blood cell count. It is worth noting that the NIHSS-assessed stroke severity was milder in the current study than in previous studies, which may explain why the sICH incidence (2.6%) was lower than that in previous reports [6,7].
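Cutoffs such as DNT > 50 min are the kind of thresholds typically read off an ROC curve; a common choice is the threshold maximizing Youden's J statistic. The sketch below illustrates this with simulated data (the distributions and effect size are assumptions, not the study's data):

# Hedged sketch of ROC-based cutoff selection via Youden's J statistic.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
sich = rng.binomial(1, 0.03, 1000)           # simulated outcome
dnt = rng.normal(50, 10, 1000) + 15 * sich   # sICH cases assumed to have longer DNT

fpr, tpr, thresholds = roc_curve(sich, dnt)
j = tpr - fpr                                # Youden's J = sensitivity + specificity - 1
best = thresholds[np.argmax(j)]
print(f"optimal DNT cutoff by Youden's J: {best:.0f} min")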
We found a linear relationship: an increasing number of concurrent risk factors was associated with an increased likelihood of sICH. These findings suggest that closer monitoring is needed to reduce the risk of sICH after IVT in AIS patients in northern China.
Our study was conducted at multiple hospitals, including secondary and tertiary hospitals, and this is its greatest advantage. Because the participating hospitals were located in both rural and urban areas of northern China, care should be taken when comparing these results with those of previous studies, as regional and racial characteristics may act as sources of bias.
This study has some limitations. First, as a retrospective study, the data may carry systematic biases, and the involved centers may have followed different protocols for IVT. Second, the pretreatment Alberta Stroke Program Early CT Score (ASPECTS), hyperdense middle cerebral artery signs, and other important parameters that may influence the risk of sICH were not assessed in the present study. Third, the sample size was relatively small, and the obtained findings should be verified in future studies with larger samples. | 2022-01-03T14:47:14.066Z | 2022-01-03T00:00:00.000 | {
"year": 2022,
"sha1": "59eda4a2609bfb8283e990684bdf8abd101696fd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "59eda4a2609bfb8283e990684bdf8abd101696fd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
229630340 | pes2o/s2orc | v3-fos-license | Formal Verification and Performance Analysis of a New Data Exchange Protocol for Connected Vehicles
In this article, we focus on the usage of MQTT (Message Queuing Telemetry Transport) within Connected Vehicles (CVs). In the original version of the MQTT protocol, the broker is responsible "only" for sending received data to subscribers, thereby abstracting the underlying mechanism of data exchange. However, in the CV context, subscribers (i.e., the processing infrastructure) may be overloaded with irrelevant data, in particular when the requirement is real- or near-real-time processing. To overcome this issue, we propose MQTT-CV, a new variant of the MQTT protocol in which the broker is able to perform local processing in order to reduce the workload at the infrastructure, i.e., filtering data before sending it. In this article, we first formally validate the correctness of the MQTT-CV protocol (i.e., that the three components of the proposed protocol interact correctly), using the Promela language and its system verification tool, the model checker SPIN. Second, using real-world data provided by our car manufacturer partner, we conducted a real implementation and experiments. The obtained results show the effectiveness of our approach in terms of data workload reduction at the processing infrastructure. Although the improvement depends on the target application, the workload was in general about 10 times lower than with the native MQTT protocol.
applications and services. Because of the diversity of devices as well as the heterogeneity of their related software, the data communication protocol in these applications plays an important role, since it abstracts the data exchange between all the components.
Currently, the most widely adopted communication protocols in IoT systems are MQTT [2], XMPP [3], and others. In this paper, we focus on MQTT, an application-layer protocol based on a publish/subscribe messaging model for distributing data between networked applications through a message broker. Given the high level of abstraction it offers, its processing lightness, and its ease of implementation, MQTT has been used in many real applications and services, including Connected Vehicles (CVs). Indeed, MQTT is one of the protocols used by PSA Group 1 to gather and leverage data from connected vehicles [4]. The authors in [4] assert that PSA Group vehicles can send roughly 170 different types of data, ranging from the vehicle identification number, GPS coordinates, and engine revolutions per minute to the current angle of the steering wheel. This amount of data has great value for automotive manufacturers as well as for third parties, since it allows the development of several applications and services in different domains (improving drivers' safety, enhancing the mobility experience, personalizing insurance costs, etc.).
In PSA's experience [4], MQTT is used as a communication protocol that connects, through a broker, vehicles as publishers and the PSA automotive infrastructure as a subscriber. Data is then collected from vehicles and processed in both off-line and on-line (real- or near-real-time) fashions by the PSA Big Data infrastructure. As previously mentioned, the usage of MQTT has been motivated by the numerous features it presents in terms of fast communication, processing lightness, and ease of implementation, i.e., integration within a large platform. This last feature is particularly appreciable in industrial environments, where software compatibility and rapid deployment are vital requirements.
Nevertheless, in the original version of MQTT, the role of the broker is limited to data forwarding (i.e., transmission) between publishers and subscribers. Hence, using it as-is in the context of CVs leads to the following problematic issues:
- The number of CVs (publishers) is very large (expected to be in the order of millions), and hence the infrastructure must support a very heavy workload, since the brokers send data without any processing or filtering; the processing task is located entirely at the infrastructure layer. Nevertheless, several applications are interested only in a particular part of the sent data and do not require it in its entirety. In some other cases, only data that is greater or less than a certain threshold is of interest. For instance, an after-sales application could be interested in tracking vehicles whose engine temperature exceeds a certain value. Because all data is processed at the infrastructure layer, the latter has to support a huge workload.
- The interaction system, which is constituted of CVs, a broker, and the infrastructure, must be reliable, because it involves very sensitive applications (e.g., emergency applications, drivers' safety applications, etc.). Hence, it is mandatory, in this context, to formally guarantee the correctness of any protocol responsible for connecting CVs to the infrastructure. 2
In order to reduce the infrastructure's workload, two ways are possible: (a) the first one is filtering unnecessary data at the source, i.e., at the vehicles, so that only "valuable" data is sent to the infrastructure. This supposes that extra software, controlled from the infrastructure, has to run on the vehicle in order to perform such processing. From real experience (i.e., Group PSA), this approach has not been considered, for security reasons: vehicles are sensitive hosts, and hence the number as well as the complexity of the software running on them have to be strictly controlled and reduced to only what is necessary. (b) The second way is to perform filtering at the intermediate layer (i.e., the broker). However, this was not possible in the original version of the MQTT protocol.
In this paper, we focus on a new variant of the MQTT protocol, which we have named MQTT-CV (MQTT for Connected Vehicles). In our proposal, part of the data processing is handled at the broker layer, leading (as proven by the real experiments we conducted) to a significant reduction in the workload directed at the infrastructure. In fact, the infrastructure can define conditions (mainly filtering) on the data it receives from the broker. In other words, the broker will send to the infrastructure only data that satisfies the defined conditions. By doing so, the workload of the infrastructure can be substantially reduced at the expense of a negligible processing cost at the broker, as our experiments confirm. This infrastructure workload reduction can have a significant impact on the overall performance of the system because of the huge number of CVs.
Furthermore, because of the sensitivity of several automotive applications, we provide in this paper formal proofs of the correctness of our proposed protocol (i.e., that the three components behave as they are supposed to). To this end, we used formal component validation based on the Promela language [6] and the model checker SPIN [7], [8].
To the best of our knowledge, this is the first research work addressing both formal verification and performance improvement of a data exchange protocol specifically tailored to CVs infrastructures.
The rest of the paper is organized as follows: works related to our proposition are presented in Section II. In Section III, we succinctly present the MQTT protocol and provide a brief introduction to the model checker SPIN and the Promela language. Section IV describes the proposed MQTT-CV protocol. Our formal analysis approach for MQTT-CV is presented in Section V. Section VI presents and comments on the obtained experimental results. Finally, Section VII concludes the paper and gives some directions for future work.
II. RELATED WORK
Despite the increasing number of data exchange protocols proposed for IoT applications, only a few of them have been adopted in the context of CVs [4]. Furthermore, to the extent of our knowledge, only a few research works have addressed the formal verification of such protocols. The work presented in [9], for instance, proposes to formally model publish/subscribe protocols in order to specify their essential properties, such as minimality and completeness. This work, however, has not considered the verification aspect. In [10], the authors have proposed a formal model, based on Petri nets, to specify publish/subscribe protocols in the domain of Grid computation. In [11], Zigbee (which is widely used in IoT) has been formally modeled and verified using the Event-B formal method.
Concerning the security properties, the authors in [12] have presented a general discussion on the security issues/requirements of the publish/subscribe protocols in the field of Internet-based peer-to-peer systems. In the same context, in [5], the author has proposed to analyze the MQTT protocol using a formal approach based on timed-message passing process algebra. This approach, which focuses on verifying the security properties related to the protocol vulnerability against attackers, demonstrates that there are some scenarios in which MQTT fails to fulfill the QoS requirements. Some other performance evaluation methods, that assess the MQTT protocol with regards to its different QoS levels, were also proposed in [13], [14].
Several works that are close to our proposition have also been proposed. More precisely, these works have adopted a model-checking technique to verify the reliability properties of publish/subscribe systems. For instance, the solutions proposed in [15], [16] define a general framework that aims to verify publish/subscribe systems by model checking. The main difference between these approaches and ours lies in the fact that while these solutions are general, our proposition focuses on MQTT-CV analysis and safety-property verification. In [17], the authors utilized probabilistic model checking to model and validate publish/subscribe systems; the validation in this work was carried out using the PRISM model checker. Another work that uses probabilistic model checking was proposed in [18]; it allows analyzing the quality of prediction in service-oriented architectures. To summarize, compared with state-of-the-art solutions, the originality and contributions of the work presented in this paper are (1) the proposition of MQTT-CV (a publish/subscribe protocol dedicated to connected vehicles), (2) its specification with the Promela language, and (3) its analysis/verification with the SPIN model checker. Additionally, real implementation and experimentation have been conducted to demonstrate the effectiveness of our approach in terms of a noticeable performance improvement (i.e., workload reduction) in the context of automotive big-data infrastructures.
III. BACKGROUND: MQTT, SPIN AND PROMELA
For ease of presentation, we start by introducing the original MQTT protocol as well as the model checker SPIN and its related language, Promela.
A. MQTT Protocol
MQTT [2] is a publish/subscribe protocol designed to be open, simple, lightweight, and easy to implement. Moreover, MQTT is a machine-to-machine protocol designed to allow devices with small storage and processing power to communicate with each other over low-bandwidth and potentially unreliable networks (with a high level of abstraction with regard to the underlying network functions). Before receiving data from other devices, a subscriber subscribes to a given topic at the broker by means of a subscription command. Then, each time data is published on that topic, it is immediately forwarded to all subscribers. Similarly, a publisher can publish data by means of a publish command. Fig. 1 summarizes the MQTT communication paradigm.
In practice, MQTT gives the flexibility to connect multiple publishers to multiple subscribers via a main central entity called Broker. The number of connected devices to a given MQTT broker depends on the intrinsic capacity, in terms of computing power and network bandwidth, of the underlying platform which runs it. In the context of connected vehicles, millions of cars are expected to be connected. MQTT also handles the quality of service for the delivered messages. In fact, it offers three levels of QoS. The first level (QoS 0, called at most once) represents the case where the sender issues a message only once and does not wait for any acknowledgment. The second level (QoS 1, called at least once) guarantees the delivery of messages at least once by seeking acknowledgment for every sent message (a message can be sent/received multiple times). The third and last level (QoS 2, called exactly once) guarantees that each message is delivered only once to the recipient(s) in question.
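To make the paradigm and the three QoS levels concrete, here is a minimal publish/subscribe round trip written with the Python paho-mqtt client (an illustration of standard MQTT usage, not the paper's Mosquitto C implementation; the paho-mqtt 1.x API, broker address, and topic name are assumptions):

# Hedged sketch of MQTT publish/subscribe with the three QoS levels.
import time
import paho.mqtt.client as mqtt

BROKER, TOPIC = "localhost", "vehicle/temperature"   # placeholders

def on_message(client, userdata, msg):
    print(f"received on {msg.topic}: {msg.payload.decode()} (qos={msg.qos})")

client = mqtt.Client()             # paho-mqtt 1.x constructor
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC, qos=1)     # QoS 1: delivered at least once
client.loop_start()                # network loop in a background thread

client.publish(TOPIC, "45", qos=0)   # QoS 0: at most once (fire and forget)
client.publish(TOPIC, "45", qos=1)   # QoS 1: at least once (PUBACK)
client.publish(TOPIC, "45", qos=2)   # QoS 2: exactly once (4-step handshake)

time.sleep(2)                      # give the handshakes time to complete
client.loop_stop()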
B. Spin Model Checker and Promela Language
SPIN is one of the world's most popular, and arguably one of the world's most powerful, tools for detecting software defects in concurrent system designs [8]. More specifically, SPIN is an open-source tool that was developed at Bell Labs by Gerard Holzmann and has since been applied, so to speak, to everything: from the verification of complex call-processing software (used in telephone exchanges) to the validation of intricate control software for interplanetary spacecraft [8].
In SPIN, a formal specification is built using Promela, an imperative language close to the C programming language (variable declarations, data types, etc.). A Promela program is composed of a set of processes that are defined with the proctype statement.
To perform system analysis and verification, SPIN transforms each process into an automaton. Promela supports nondeterminism and parallelism through:
1) the selection statement, which describes a nondeterministic choice among guarded conditions prefixed by '::':
if :: sequence [:: sequence]* fi
2) the predefined operator run, used to create a new process that will be executed asynchronously with the currently active ones:
run process_name([argument list]);
In Promela, repetitive instructions are expressed using a do-statement, do :: sequence [:: sequence]* od, which is an if-statement caught in a cycle. Promela also allows describing communication between processes via explicit message-passing channels. Both synchronous and asynchronous communications are supported. In the former (i.e., synchronous communication), the channel works in rendezvous mode with zero capacity, whereas in the latter (i.e., asynchronous communication), the channel works as a FIFO buffer with non-zero capacity. The send/receive operations are, respectively, denoted by:
- name ! arguments, which sends a message to the channel specified by name;
- name ? arguments, which receives a message from the channel specified by name.
Finally, atomic (indivisible) sequences can be expressed using the atomic {...} or d_step {...} statements. More details about the Promela grammar can be found in [6].
IV. OUR PROPOSAL: MQTT-CV
In the context of connected vehicles (CVs), the publish/subscribe paradigm is composed of vehicles (publishers), automotive infrastructures (subscribers), and a broker. In general, depending on the target application, infrastructures do not need to process all the received data but only a subset of it. For instance, in a traffic-congestion detection application, the infrastructures will be interested only in the positions of vehicles whose speed is below 30 km/h. However, the current version of the MQTT broker does not support any filtering and consequently forwards all the received data to the infrastructures.
To overcome this issue, which has a deep impact on the overall performance of automotive infrastructures, we propose MQTT-CV, an MQTT variant for connected vehicles. The key idea behind our proposal is to allow the broker to perform (i.e., process) some constraint/filtering tasks defined by the subscribers (i.e., infrastructures) on the received data before forwarding it. By doing so, we expect to reduce the infrastructures' workload and hence improve the overall system performance. Our proposition raises, however, the following issues:
- Protocol correctness: given the sensitivity of several CV applications related to drivers' safety, the new proposed protocol must remain correct, in the sense that the tasks newly introduced in the broker must not alter the general behavior of the three components, i.e., publisher-broker-subscriber. Here, it is a matter of software verification.
- Computing overhead at the broker: because of the huge workload faced in CV applications, any processing introduced in the platform hosting the broker must remain below a certain threshold. In other words, the new broker tasks must not slow down the overall system by causing delays in the messages sent to the infrastructure.
Concerning the first issue, we have paid particular attention to it, since we prove in this paper (through the use of well-known software verification techniques) that our proposal remains correct (see the next section). The second issue has been addressed through real implementation and experimental validation using realistic data sets.
MQTT-CV defines the interaction between vehicles, the broker, and the automotive infrastructures through the three following steps:
1) First, vehicles send data to the broker about a specific topic. For example, a vehicle can publish, on the topic "temperature," its collected ambient temperature.
2) Second, infrastructures register their interest in certain topics available at the broker. Infrastructures can also impose conditions on the data values that must be sent to them according to their subscriptions. For example, an infrastructure can require only temperatures that exceed 40°C.
3) Finally, before distributing the data (received from vehicles) to infrastructures, the broker filters it by applying the conditions defined by the infrastructures.
We point out that in this paper, we focus on single-message processing within the broker (more precisely, message-filtering operations). The processing of more complex operations, such as message aggregation, complex request processing and composition, etc., is left for future work because of its complexity. We also mention that whilst our proposal specifically targets CV applications (a big-data context), it can easily be deployed/adopted in other contexts where the broker has to filter unnecessary data to relieve the burden on subscribers.
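As a hedged illustration of this filtering idea (the paper implements it inside the Mosquitto C broker itself; here we emulate it as a separate bridge process, with the topic names, the 40°C threshold, and the paho-mqtt 1.x API as assumptions):

# Minimal sketch of MQTT-CV-style broker-side filtering, emulated as a
# bridge between a raw topic (vehicles) and a filtered topic (infrastructure).
import paho.mqtt.client as mqtt

RAW_TOPIC, FILTERED_TOPIC = "cv/raw/temperature", "cv/filtered/temperature"
CONDITION = lambda t: t > 40.0        # condition registered by the subscriber

def on_message(client, userdata, msg):
    value = float(msg.payload)
    if CONDITION(value):              # forward only data that passes the filter
        client.publish(FILTERED_TOPIC, msg.payload)

bridge = mqtt.Client()
bridge.on_message = on_message
bridge.connect("localhost", 1883)     # placeholder broker address
bridge.subscribe(RAW_TOPIC)
bridge.loop_forever()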
V. MQTT-CV CORRECTNESS
In this section, we provide correctness proofs of our proposal, MQTT-CV. To this end, we used the well-known SPIN model checker and its associated Promela language. The proofs proceed in four steps: (a) a representative case study is chosen; (b) the case study is modeled using the UML language; (c) it is implemented in the Promela language; and (d) formal verification is performed using SPIN. These steps are described in the following subsections.
A. Case Study
We use a case study in which there are two different vehicles 3 (vehicle 1 and vehicle 2), two automotive infrastructures (infrastructure 1 and infrastructure 2), and one broker (see Fig. 2). We suppose that the two vehicles send data about two topics related to ambient temperature and vehicle speed, that infrastructure 1 has subscribed to both topics (i.e., temperature and vehicle speed), and that infrastructure 2 has registered only for the speed topic. We also suppose that infrastructure 2 has not defined any conditions/restrictions on the data received from the broker (condition set to true), while infrastructure 1 has imposed the following conditions on the data: (a) the temperature must exceed 40°C, and (b) the vehicle speed must be less than 30 km/h. Concretely, the automotive infrastructures in this example might exploit the collected ambient temperatures to analyze temperature evolution in certain geographic zones of interest. The collected vehicle speeds might be exploited, for example, to decide to broadcast an alert to drivers in the case where the speed of a set of other drivers (in their direction) has suddenly and considerably decreased. The goal here is to inform the concerned drivers about a possible accident or road obstacle.
B. UML Modeling
Before formally analyzing MQTT-CV, we propose a UML sequence diagram (Fig. 3) that models a scenario of interaction between the main components implementing this protocol; that is, broker, vehicles, and automotive infrastructures. This model will be utilized later in Section V-C to implement MQTT-CV using Promela language.
As Fig. 3 demonstrates, each of the three components is modeled with a lifeline. The infrastructure starts the interaction by sending a reqsub message that allows it to subscribe to a specific topic. The broker responds by sending an acknowledgment, ack. After that, two interactions occur in parallel (see the UML parallel combined fragment). In the first interaction, the vehicle sends data related to both topics, temperature and vehicle speed, whilst in the second one, the broker forwards the received data while (1) respecting the infrastructure's conditions and (2) taking into account its chosen topics. For this last interaction, the broker considers three possibilities (defined by the topics to which the infrastructure has registered): the infrastructure can choose the temperature topic, the speed one, or both.
C. Promela Implementation
To be able to use the model checker SPIN, MQTT-CV must be implemented in the Promela language. To do so, we consider the UML protocol specification described in Fig. 3 and the case study presented in Section V-A and Fig. 2. More specifically, we associate a Promela process (proctype statement) with each interacting component (one broker, two vehicles, and two automotive infrastructures). In SPIN, these five processes are executed in parallel and launched by the init process. The interactions between these concurrent processes can simply be implemented via communicating channels that send/receive integers. Indeed, this abstraction of the data exchange is sufficient to simulate and verify the proposed protocol. The channels we have defined to implement MQTT-CV are:
- chan_reqsub1 and chan_reqsub2: used, respectively, by the first and second subscribers to request a subscription (from the broker) to one or several topics; the subscriber sends an integer to indicate the desired topics.
- chan_acksub1 and chan_acksub2: used by the broker to send an acknowledgment to, respectively, the first and second subscribers after their subscription requests.
- chan_cvbroker_tmp and chan_cvbroker_spd: used by the connected vehicles to send data about the topics defined at the broker. As their names indicate, these two channels are dedicated to the temperature and speed topics, respectively.
- chan_brokersub1_tmp and chan_brokersub1_spd: used by the broker to send data (i.e., temperature and speed) to the first subscriber (automotive infrastructure 1).
- chan_brokersub2_tmp and chan_brokersub2_spd: used by the broker to send data (i.e., temperature and speed) to the second subscriber (infrastructure 2).
In the next subsections, we provide and discuss the Promela code that we propose to implement the subscribers, publishers, and broker.
1) Subscribers Code: The Promela process modeling infrastructure 1 is implemented using the proctype statement. The process parameter topicsub determines the topics to which this first infrastructure requests to subscribe. More exactly, the value 1 (topicsub == 1) means that the subscription is requested for the temperature topic, the value 2 for the speed one, and 3 for both topics. The two internal variables tempsub1 and spdsub1 are used to receive, respectively, the temperature and vehicle speed from the broker. After receiving the expected message from the broker (through the chan_acksub1 channel) and finding that respsub1 is equal to 1, the Subscriber1 process waits for the data corresponding to the topic(s) to which it has subscribed. As specified in the UML sequence diagram depicted in Fig. 3, this last step is executed repetitively using the do-statement (second inner loop of Fig. 3). Fig. 4 shows the automaton generated by SPIN after executing proctype Subscriber1(3). As indicated above, in this case the subscription is requested for both the temperature and speed topics (topicsub == 3). The generated automaton, shown in Fig. 4, describes the scheduling of infrastructure 1's actions as it interacts with the broker through the corresponding communication channels. These different actions are given by the transition labels. More specifically, according to this automaton, infrastructure 1 starts by sending a message that requests a subscription from the broker. After receiving a response from the latter, infrastructure 1 reaches a state in which it waits for data reception. Note that there are no deadlock states in this automaton.
The Promela code of infrastructure 2 is relatively close to the first one. 2) Publishers Code: The Promela code of the first considered connected vehicle corresponds to the first inner loop of Fig. 3. This process defines two internal parameters, temp1 and speed1, which represent temperature and speed; they are initialized to 30 and 130, respectively, and are sent to the broker through their corresponding channels. The scheduling of vehicle 1's actions is described in Fig. 5.
We mention that in the Promela code of the second connected vehicle (which has not been shown in this paper), the temperature has been set to 45 and speed to 20.
3) Broker Code:
The Promela code that implements an example of the proposed MQTT-CV broker is presented below.
As this code clearly states, after receiving the subscription requests from both infrastructures, the broker repeatedly receives data from the vehicles and forwards it to those subscribers. Note that the broker can apply conditions and filter the data sent to the subscribed infrastructures. Indeed, as previously mentioned, in the considered case study the conditions concern only the data that will be sent to Subscriber 1 (which has registered for both the temperature and speed topics). More exactly, the broker provides infrastructure 1 with only temperatures exceeding 40°C and speed values lower than 30 km/h. For instance, in the latter scenario, that is, when infrastructure 1 receives many such speed values at the same time from vehicles located in the same geographic zone, it can deduce that there is an obstacle (or perhaps an accident) preventing vehicles from flowing normally.
The Broker process's actions are scheduled by the automaton depicted in Fig. 6. Note that there are no deadlocks in this automaton. Note also that this automaton is considerably larger than those of the vehicles and subscribers, which shows how difficult it is to manually analyze the broker's behavior (or, in general, the behavior of any other complex system). That is why it is very interesting to exploit SPIN to automatically simulate and verify this protocol; this is the object of the next section.
D. SPIN Formal Verification of MQTT-CV
After implementing the MQTT-CV components in Promela, we focus, in this subsection, on the SPIN verification of our proposal.
1) MQTT-CV Simulation With SPIN: Fig. 7 represents an extract of a random MQTT-CV simulation performed using SPIN. As shown in this figure, the different processes interact by sending messages through communicating channels that carry integers. For example, the broker starts by receiving the message 3 on channel 1 from Subscriber 1 (1!3). This indicates that Subscriber 1 is requesting a subscription to both topics: temperature and vehicle speed. The broker responds by sending message 1 through channel 6 to Subscriber 1. The interactions continue with data being sent from Vehicle 1 and Vehicle 2 to the broker, and then from the broker to Subscriber 1 and Subscriber 2. Note that the broker applies a filter before sending data to Subscriber 1. As Fig. 7 shows, Subscriber 1 receives only the value 20 for speed (the less-than-30-km/h condition) and 45 for temperature (the greater-than-40°C condition).
We mention that SPIN also offers the possibility to execute guided simulations (i.e., the steps to be executed can be chosen in advance).
The simulation results we obtained show that the proposed MQTT-CV protocol behaves correctly in some specific interaction scenarios. However, to prove that the protocol is reliable in general, one must decide on the validity of all possible behaviors and scenarios. In other words, to prove its correctness, MQTT-CV must be formally verified. This can be done by specifying and verifying the properties that this protocol must satisfy.
2) Safety Properties Verification: In this section, we verify the safety properties which confirm that MQTT-CV always stays in the allowed states in which nothing abnormal would happen. More exactly, we focus on the safety properties related to deadlock states. The latter are states from which a system cannot progress (i.e., from which no transitions are enabled). Actually, in general, the reachability of deadlock states is a consequence of a wrong system specification. For example, in our publish-subscribe system, a deadlock state might be a scenario in which the broker sends a message to a subscriber, while the latter is not ready to receive it.
It is worth mentioning that the absence of deadlocks in each process of the system (the automata presented in Figs. 4, 5, and 6) does not guarantee the absence of deadlocks in the whole system. In other terms, the safety verification must consist of checking the non-reachability of deadlock states in the automaton corresponding to the whole system (i.e., the transition system corresponding to the interaction between all processes). This automaton, provided by SPIN, is obtained by calculating the asynchronous product of the automata corresponding to each considered process (broker, subscribers, and publishers).
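To illustrate what this product construction looks like, here is a toy sketch only: the automata below are placeholders, not the paper's models, and real SPIN additionally synchronizes matching channel operations and distinguishes declared valid end states from deadlocks.

# Toy interleaving (asynchronous) product of two automata, with a search
# for reachable states that have no outgoing transitions.
from collections import deque

# Each toy automaton: {state: [(label, next_state), ...]}
sub = {"s0": [("reqsub!", "s1")], "s1": [("ack?", "s2")], "s2": []}
brk = {"b0": [("reqsub?", "b1")], "b1": [("ack!", "b2")], "b2": []}

def async_product(a, b, init):
    trans, todo = {}, deque([init])
    while todo:
        s1, s2 = todo.popleft()
        if (s1, s2) in trans:
            continue
        # Interleaving: either component moves while the other stays put.
        succ = [(l, (t, s2)) for l, t in a[s1]] + [(l, (s1, t)) for l, t in b[s2]]
        trans[(s1, s2)] = succ
        todo.extend(t for _, t in succ)
    return trans

prod = async_product(sub, brk, ("s0", "b0"))
# A deadlock candidate is a reachable state with no successors; SPIN excludes
# valid end states (like the intended final state here) from its report.
print("states without successors:", [s for s, v in prod.items() if not v])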
After using SPIN to verify the property of deadlocks absence in MQTT-CV, we obtained the result shown hereafter.
This result demonstrates that the interacting MQTT-CV processes do not reach any deadlock states (errors: 0), and also shows information on the generated automaton (size, etc.).
E. Formal Verification of Temporal Properties
The Promela specification of the MQTT-CV protocol described in the previous section allows only the simulation of MQTT-CV behaviors with SPIN and the verification of part of its safety properties, for example those related to deadlock states. In this section, we focus on the specification and verification of MQTT-CV's liveness properties. Informally, a liveness property asserts that program execution eventually reaches some desirable states, which also means that the system will eventually do something good, for example by producing the desired outputs [19].
In our case, to prove that our protocol behaves correctly by allowing the broker to send only the data required by the infrastructures, we need to verify liveness properties such as:
- p1: always, when the broker receives a temperature t where t ≥ 40°C, this data will be sent to the infrastructures.
- p2: always, when the broker receives a temperature t where t < 40°C, this data will be sent to the infrastructures (this property should not be satisfied).
- p3: always, when the broker receives a speed s where s ≤ 30 km/h, this data will be sent to the infrastructures.
- p4: always, when the broker receives a speed s where s > 30 km/h, this data will be sent to the infrastructures (this property should not be satisfied).
- p5: always, when the broker is active and the received temperature is greater than or equal to 40°C (in our case, equal to 45°C), infrastructure 1 will receive this data.
- p6: always, when the broker is active and the received temperature is less than 40°C (in our case, equal to 30°C), infrastructure 1 will receive this data (this property should not be satisfied).
- p7: always, when the broker is active and the received speed is less than or equal to 30 km/h (in our case, equal to 20 km/h), infrastructure 1 will receive this data.
- p8: always, when the broker is active and the received speed is greater than 30 km/h (in our case, equal to 130 km/h), infrastructure 1 will receive this data (this property should not be satisfied).
Notice that properties p1 to p4 concern the broker's behavior, while p5 to p8 relate to the interactions between the broker and infrastructure 1 4 (or Subscriber 1). In order to prove the correctness of MQTT-CV, we propose to verify whether the broker behaves correctly by applying the filter and sending the right data, and whether the subscribers (infrastructures) receive the right data from the broker. We also note that in Section V-D we verified a global safety property related to deadlock states; we thus proved that all processes (broker, subscribers, vehicles) behave without deadlock.
The verification of these properties ensures that our protocol is correct with regard to the filtering of the data sent to the infrastructures. Nonetheless, to proceed with their verification in SPIN, it is necessary to specify them in Linear Temporal Logic (LTL) [20], a mathematical logic with modalities referring to time, which allows temporal properties of system behaviors to be expressed. Moreover, in Promela, we need to determine which process (vehicles, broker, infrastructures) is sending/receiving what to/from whom at any time of the execution, as well as which message is exchanged between the processes. In other words, it is necessary to be aware of all event occurrences during the process interactions.
However, with the implementation of the MQTT-CV protocol as proposed in the preceding section, the system state does not change when a message exchange takes place between processes through the channels. To overcome this issue, we propose to associate flags (boolean variables) with each sending and receiving event. This allows us to keep track of the actions performed by the processes and their environment's reactions. By doing so, SPIN will generate transition systems corresponding to the Promela processes in which transitions are enabled by the sending and receiving of messages, and each state is characterized by flags that indicate: the entity (process) that performed the last action, the last performed action, the message used in the last action, and the entity to/from which the message was sent/received. So, in Promela, a flag is declared for each process, each message, and each send/receive event. These flags are updated together with each send/receive event, using an atomic statement to ensure that the values are assigned in one execution step. As an example, the Promela code of the Broker process is enriched with these flags.
The send and receive flags indicate that the process is, respectively, sending or receiving messages. For example, the flags msg_speed and msg_tmp refer, respectively, to the last speed and temperature message sent, and the flags lf_broker, lf_v1, and lf_inf1 indicate the last process (in this case, the broker, vehicle 1, and infrastructure 1) to send or receive data. These flags are updated at each sending and receiving event.
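As an illustration of how such properties can be formalized over these flags (our own hedged sketch, not the paper's exact listing; the proposition combinations are illustrative), p1 and p3 might be written in LTL as:

\[
p_1:\ \Box\big((\mathit{lf\_broker} \land \mathit{receive} \land \mathit{msg\_tmp} \geq 40) \rightarrow \Diamond(\mathit{lf\_broker} \land \mathit{send} \land \mathit{msg\_tmp} \geq 40)\big)
\]
\[
p_3:\ \Box\big((\mathit{lf\_broker} \land \mathit{receive} \land \mathit{msg\_speed} \leq 30) \rightarrow \Diamond(\mathit{lf\_broker} \land \mathit{send} \land \mathit{msg\_speed} \leq 30)\big)
\]

Here \(\Box\) corresponds to SPIN's [] ("always") and \(\Diamond\) to <> ("eventually").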
To verify the LTL properties p1–p8 expressed informally above, we specified them in LTL and Promela using the flags just described. After their verification with SPIN, we obtained the following results:
- p1 and p3 are verified, which confirms that the broker sends the received data that meet the conditions of the specified filter;
- however, p2 and p4 are not verified, which ensures that the broker does not send the data that are not required by the infrastructures;
- p5 and p7 are verified, which confirms that infrastructure 1 receives the data sent by the broker that respect the specified filter;
- however, p6 and p8 are not verified, which ensures that infrastructure 1 does not receive the data that do not respect the specified filter.
VI. EXPERIMENTAL VALIDATION OF MQTT-CV
To conduct our experiments and assess both MQTT and the proposed MQTT-CV solution, we opted for Eclipse Mosquitto [21]. In a nutshell, this tool, written in C, is a message broker that implements MQTT protocol versions 3.1 and 3.1.1. Given its lightweight way of carrying out messaging using the publish/subscribe model, Mosquitto is also suitable for IoT messaging and low-power single-board devices (such as low-power sensors, mobile devices, cell phones, embedded microcontrollers, etc.). Furthermore, Mosquitto, which is free, open-source, and available for both Linux and Windows platforms, provides a C library for implementing and launching MQTT publishers and subscribers through the respective mosquitto_pub and mosquitto_sub command lines. This gives developers the ability and freedom to completely modify/adapt the system's behavior according to their needs and preferences. Finally, like any other MQTT broker, Mosquitto allows creating and connecting several publish/subscribe clients.
To ease reading and understanding, the remainder of this section is organized into three subsections. First, Section VI-A details the configuration of the experimental environment (vehicle, broker, and infrastructure settings, etc.). Second, Section VI-B presents the evaluation criteria according to which MQTT-CV and MQTT are compared. Finally, Section VI-C presents the obtained results along with their interpretation and analysis.
A. Validation Settings and Parameters
For the sake of comparison and evaluation, besides the original MQTT functionalities offered by Mosquitto, we reused the provided C source code to implement our proposed MQTT-CV broker and its corresponding subscribers/publishers. More specifically, in order to (1) check the proper operation of our solution and (2) compare its performance with that of the basic MQTT broker, we considered a connected-vehicles scenario with one broker (MQTT or MQTT-CV), one automotive infrastructure (i.e., one subscriber), and n vehicles (publishing clients).
The validation environment was set up so that the broker, the automotive infrastructure, and the vehicles run separately; in other words, we used three physical machines. The first ran the broker (alternatively MQTT or MQTT-CV), the second acted as the automotive infrastructure, and the third ran the implemented processes (mosquitto_pub) that simulate vehicles. The details of the three computers are given in Table I. The main reason for executing the broker (MQTT-CV or MQTT) and its clients (i.e., the infrastructure and vehicle processes) on different physical machines was to prevent them from interfering with one another and affecting the obtained results (presented in Section VI-C).
The following three points briefly describe the specifics of the three main components of our validation experiments, namely the vehicles, the automotive infrastructure, and the broker.
• First, note that the temperatures collected and sent by our virtual vehicles (processes) to the broker were not generated through a random process but are, in fact, real data collected by real vehicles. This temperature data set, provided by the PSA Group, contains the externally sensed temperature and internal oil temperature of vehicles.
• Second, no specific tasks were performed by the implemented automotive infrastructure client, except printing the received temperatures to the screen.
• Third, and finally, the implemented MQTT-CV broker offers several predefined filtering functions. It also gives subscribers the ability to formulate their own requests/conditions on the data they wish to receive (less-than, greater-than, etc.). For instance, a subscriber can express its interest in receiving only data that exceeds a certain threshold (e.g., temperatures higher than 36°C). In our experiments, two scenarios were considered. In the first one, the infrastructure requires only temperatures that exceed 8°C (which represents 50% of the total data received by the broker), while in the second scenario, the infrastructure demands only temperatures that exceed 14°C (which represents 2% of the total data received by the broker).

The validation scenario proceeds as follows. First, the broker (MQTT-CV or MQTT) is launched (through the mosquitto-cv or mosquitto command line, respectively). Second, the automotive infrastructure (the only subscriber in the system) is created and connected to the broker through the mosquitto_sub command; an illustrative invocation is sketched after this paragraph. Finally, the processes representing vehicles are created and connected to the broker, and their publishing mechanism starts reading and reporting real data. In order to evaluate the scalability of both the MQTT-CV and MQTT brokers, the number of participating vehicles was progressively increased by gradually adding Mosquitto processes that simulate them. Throughout this working phase, the considered evaluation metrics (detailed below in Section VI-B) were continuously recorded at the level of both the broker and the infrastructure, the goal being to evaluate the effect of the proposed solution on these two entities. Note that there is no difference in the vehicles' performance between the two architectures (MQTT and MQTT-CV): in both cases, vehicles read and report the same data, for fair comparison.
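The exact subscription command is not reproduced in this excerpt. Assuming the vehicles publish under a topic named temperature, the invocation would resemble the following, where the topic name and IP_address are illustrative placeholders rather than the authors' actual parameters:

    mosquitto_sub -h IP_address -t temperature -q 0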
The amount of data sent by each vehicle is set to 727 178 items. Of course, vehicles report this data to the broker at a certain rate of x messages/second. However, to be more accurate, rather than tracking the number of connected vehicles or their data rate (i.e., the number of published messages per second), we measured the broker load in terms of received "publish" messages over one minute. In Mosquitto, this (moving-average) number of published messages can be obtained by simply creating a client and subscribing it to the following system topic:

    mosquitto_sub -v --id bLoad -h IP_address -t \
        '$SYS/broker/load/publish/received/1min' -q 0

As a last point in this first subsection, we mention that, as was the case for the automotive infrastructure during its subscription (-t topic -q 0), each vehicle in the system also specifies the lowest available Quality of Service (i.e., 0) when publishing its data (using the mosquitto_publish() function). In other words, the only QoS considered in these experiments is 0. Recall that MQTT offers three QoS levels (0, 1, and 2); a higher QoS is more reliable but entails higher latency and bandwidth. These levels determine how the publisher–broker and broker–subscriber communications take place: QoS 0 means that packets are delivered at most once (no confirmation), QoS 1 that they are delivered at least once (confirmation required), and QoS 2 that they are delivered exactly once (a four-step handshake).
B. Validation Metrics
The two main criteria used for the evaluation of MQTT-CV and MQTT are CPU and RAM usage.
• CPU usage: this criterion, expressed as a percentage, represents the processor load on both the broker and infrastructure machines. The goal is to compare our MQTT-CV approach with MQTT and measure their effect on broker and infrastructure performance (advantages and disadvantages). To collect this information (i.e., CPU load), we used several Linux tools/commands such as the well-known top, ps, and PowerShell Core (Get-Process, etc.).
• RAM usage: this metric measures the quantity of memory consumed by both the broker and the infrastructure during their working stages. It was also collected with the tools mentioned above and is expressed as a percentage.

The obtained results, depicted in the next (and last) subsection, show the effect of both MQTT-CV and MQTT on resource consumption. In fact, they can serve as a gauge to estimate the impact that the additional functionalities (filtering operations, etc.) of the MQTT-CV solution have on the complexity/performance of both the broker and the infrastructure. For instance:
• Will the MQTT-CV broker consume less or more resources (CPU and RAM) due to its specific data-processing tasks? Put differently, which consumes more CPU and RAM: (1) transmissions (i.e., creating and sending packets) or (2) local data processing?
• As for the automotive infrastructure, intuitively, fewer received messages should mean less processing. As with the previous point, this remains to be confirmed by the obtained results.

These last two points are crucial because, beyond the connected-vehicles context, the MQTT-CV logic can also be applied in many other publish/subscribe settings. For example, in IoT applications, the broker may be installed on a low-power device (e.g., a low-power sensor or embedded microcontroller), and the same applies to subscribed clients. Therefore, on the one hand, it is very beneficial not to burden these low-power clients with useless data; rather, the broker must feed them only the data that interests them. On the other hand, the data-processing (filtering, etc.) tasks performed by the broker must not shift the burden from the clients to the broker itself; in other words, these tasks must not drastically increase resource consumption at the broker level.
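As an illustration of how such CPU/RAM figures can be sampled with the standard Linux tools mentioned above, the following one-liner reports the instantaneous CPU and memory percentages of the broker process; the process name mosquitto is an assumption for illustration, as the authors do not state their exact invocation:

    ps -p $(pidof mosquitto) -o %cpu,%mem --no-headers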
C. Validation Results
In the following, we first depict and discuss the obtained broker results, and then move to the automotive infrastructure's performance. Note that the results (whether relating to the broker or to the infrastructure) are plotted against the broker load, expressed in millions of publish messages per minute. 1) Broker Results: Fig. 8 shows the amount of CPU consumed by the MQTT-CV and MQTT brokers to perform their respective tasks. As previously mentioned, we considered two MQTT-CV scenarios. In the first one, the MQTT-CV broker sends only temperatures that exceed 8°C, which represents 50% of the total data received from vehicles; this broker is denoted "MQTT-CV Broker with 50% filter" in Fig. 8. In the second scenario, the MQTT-CV broker forwards only temperatures that exceed 14°C, i.e., only 2% of the total data received by the broker (denoted "MQTT-CV Broker-98% filter"). This high filtering value of 98% was chosen in order to monitor the MQTT-CV broker's behavior more closely under extreme circumstances and to show the effect of the filtering on it.
To summarize, the main difference between MQTT-CV and MQTT is that the MQTT broker does not inspect the received data or apply any treatment to it; this basic broker acts simply as a relay or bridge toward the subscribers interested in that data. Our MQTT-CV broker, by contrast, processes the data before forwarding it to subscribers. As explained above, in this simulation scenario, MQTT-CV forwards only data larger than a certain threshold imposed by the automotive infrastructure. In general, MQTT sends more packets, whereas MQTT-CV performs local computations and sends fewer messages. Fig. 8 reveals that the applied filter does not increase the MQTT-CV broker's resource consumption but, on the contrary, reduces it. First, according to Fig. 8, the more data a broker sends, the more CPU it consumes, and vice versa. Second, note that the filter threshold specified by the infrastructure considerably affects the MQTT-CV broker's performance because it is tightly related to the number of packets sent to the infrastructure: depending on the set threshold, fewer or more messages are sent, and more sent packets means more CPU consumption.
Regarding RAM usage, the obtained results (not depicted here given their similarity) show that RAM consumption is constant and very low for both MQTT and MQTT-CV. Specifically, the recorded RAM usage is 0.05% for both MQTT and MQTT-CV, regardless of the considered load and applied filters. We conclude that neither the local processing nor the large number of sent packets affects RAM consumption.
2) Infrastructure Results: The objective of these performance results is to confirm the benefits that the MQTT-CV broker brings to the infrastructure. As previously stated, fewer received messages should relieve the infrastructure and allow it to consume less CPU. Fig. 9 confirms this intuitive expectation and shows that the added MQTT-CV functionalities relieve the infrastructure and allow it to consume less CPU compared with the MQTT infrastructure.
As was the case for the brokers, the obtained results (not depicted here given their similarity) also show that space complexity (RAM consumption) is constant throughout the operation of both the MQTT and MQTT-CV infrastructures. The recorded RAM usage was 0.25% regardless of the quantity of data received.
Based on the obtained results, we conclude that MQTT-CV is more efficient than MQTT in terms of CPU usage. First, the MQTT-CV broker consumes less CPU because it sends fewer messages. Second, the MQTT-CV infrastructure also consumes less CPU because it receives fewer messages. These results can also be interpreted as follows: local data processing is more efficient (and less energy-consuming) than transmission. Moreover, the added filtering functions (e.g., greater-than, less-than, etc.) do not increase RAM or CPU consumption; instead, they allow the broker to reduce its need for resources (by reducing the number of sent packets). Finally, we estimate that with QoS 1 and QoS 2, MQTT performance would be even worse because, in that case, more messages would be sent.
VII. CONCLUSION AND FUTURE WORKS
This paper presented a proposal that aims to overcome the drawbacks of the MQTT protocol in the context of connected vehicles. These drawbacks mainly concern (1) the huge volume of data sent by connected vehicles to automotive infrastructures (through a broker), and (2) the protocol's reliability with regard to safety properties. To remedy these limitations, we first proposed a variant of MQTT, named MQTT-CV, which reduces the volume of data sent by the broker to automotive infrastructures (subscribers) according to their topic subscriptions; in other words, these infrastructures store/process only data that is relevant to them. Second, to ensure the reliability of MQTT-CV, which is a critical system involving driver safety, we formally analyzed it.
In more detail, to specify the interactions between its different components (vehicles, broker, and automotive infrastructures), we modeled MQTT-CV using the UML sequence diagram. We then (1) implemented MQTT-CV in the Promela language, and (2) used SPIN to perform simulations that allow analyzing some interaction scenarios. Finally, using SPIN, we verified that MQTT-CV satisfies the safety property related to deadlock states, as well as liveness properties expressing temporal constraints on MQTT-CV behaviors. Specifically, we proved that the broker, vehicles, and automotive infrastructures behave correctly and that MQTT-CV never enters a deadlock situation. Moreover, we also proved that the broker filters the data before sending it.
As future work, we intend to improve MQTT-CV so that it can handle more complex conditions that automotive infrastructures may express on the proposed topics. In addition, we plan to take the imposed conditions into account at the verification level. | 2020-12-03T09:01:53.095Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "7b9d91ced138845fa87d75220a914aac5b926542",
"oa_license": "CCBY",
"oa_url": "https://hal.archives-ouvertes.fr/hal-03186608/file/Chouali2020.pdf",
"oa_status": "GREEN",
"pdf_src": "IEEE",
"pdf_hash": "61b48959bf9b51d1dde08cea0613b22ec9c5e9c6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
264515833 | pes2o/s2orc | v3-fos-license | Retromer stabilization using a pharmacological chaperone protects in an α-synuclein based mouse model of Parkinson’s
Abstract Background In the present study we assessed the protective effects of a pharmacological approach to stabilize the retromer complex in a PD mouse model. Missense mutations in the VPS35 gene are a rare cause of familial PD. The VPS35 protein is a subunit of the retromer cargo-recognition complex and has a variety of functions within neurons, many of which are potentially relevant to the pathophysiology of PD. Prior studies have revealed a role for the retromer complex in controlling the accumulation and clearance of α-synuclein aggregates. We previously identified an aminoguanidine hydrazone, 1,3 phenyl bis guanylhydrazone (compound 2a), as a pharmacological stabilizer of the retromer complex that increases retromer subunit protein levels and function. Methods Here, we validate the efficacy of 2a in protecting against αSynuclein pathology and dopaminergic neuronal degeneration in a PD mouse model generated by unilateral injection of AAV-A53T-αSynuclein into the substantia nigra. Results Daily intraperitoneal administration of 2a at 10 mg/Kg for 100 days led to robust protection against behavioral deficits, dopaminergic neuronal loss, and loss of striatal dopaminergic fibers and striatal monoamines. Treatment with 2a activated αSynuclein degradation pathways in the SN and led to significant reductions in aggregated and pathological αSynuclein. Conclusion These data suggest retromer stabilization as a promising therapeutic strategy for Parkinson's disease, leading to neuroprotection of dopaminergic neurons and a rescue of the accumulation of pathological and aggregated αSynuclein. We identified compound 2a as a potential clinical drug candidate for future testing in Parkinson's disease patients.
Background
Parkinson's disease (PD) is the second most common neurodegenerative disorder, affecting 1% of people over 65 and 4.3% of those older than 85 [1]. A key pathological feature of PD is loss of dopaminergic (DA) neurons in the substantia nigra pars compacta (SNpc), resulting in degeneration of the nigrostriatal tract and dopamine (DA) deficiency that contributes particularly to the motor features of PD [2]. The neuropathological hallmark of PD is the occurrence of intracellular inclusions, Lewy bodies and Lewy neurites, with a topologically predictable progression [3], which include aggregated αSynuclein (αSyn) [4]. A convincing set of data from genetic, animal-model, and biochemical studies supports the view that accumulation and aggregation of αSyn in DA neurons represents a key pathogenic event [5].
The VPS35-D620N mutation is a cause of late-onset autosomal dominant PD [6][7]. The VPS35 gene encodes a component of the heteropentameric retromer complex, which mediates retrograde transport of cargo proteins from endosomes to the trans-Golgi network and plasma membrane [8] and from mitochondria to peroxisomes [9]. VPS35 is a subunit of the cargo recognition complex, formed by VPS35, VPS26 and VPS29, which has a variety of functions within neurons, many of which are potentially relevant to the pathophysiology of PD [10]. In particular, VPS35 controls the regulation and maintenance of neuronal signaling events: i) downregulation of some receptors [11], ii) synaptic plasticity [12][13], iii) trafficking of proteins in dendritic spines [14], and iv) DA transporter (DAT) recycling in DA neurons [15]. VPS35 is also a putative modulator of mitochondrial dynamics, fusion, and fission [16][17][18], and controls the accumulation and clearance of pathological forms of αSyn [19][20][21].
Several lines of evidence indicate that VPS35 influences αSyn accumulation, including: i) αSyn accumulation in induced pluripotent stem cell (iPSC)-derived tyrosine hydroxylase-positive (TH+) DA neurons generated from PD patients with the VPS35-D620N mutation [22]; ii) co-localization of VPS35 and αSyn in intracellular inclusions in PD patient brains [19]; iii) αSyn accumulation upon reduction of VPS35 protein levels in αSyn-overexpressing PD mice [19]; and iv) accumulation of αSyn and degeneration of DA neurons in the SN of VPS35-deficient or VPS35-D620N mutant mice [20]. On the other hand, genetic manipulations that increase expression of wild-type VPS35 attenuate the accumulation of αSyn aggregates and reduce neuronal loss and astrogliosis in a transgenic PD mouse model overexpressing αSyn [19].
The molecular mechanisms leading to neuroprotection by VPS35 against αSyn accumulation are uncertain, but recent studies highlight a pivotal role for VPS35 in control of the main αSyn clearance pathways [10].
Recently, retromer dysfunction has been linked to different neurodegenerative disorders, with VPS35 protein levels found to be reduced in vulnerable regions in Alzheimer's disease (AD) [23], Pick disease, progressive supranuclear palsy [24], and amyotrophic lateral sclerosis (ALS) [25]. Pharmacological stabilization of the retromer complex increases retromer subunit levels in the CNS and protects against neurodegeneration in ALS [25] and AD [26] mouse models. Starting from a previously published compound, R55 [27], we identified a pharmacological chaperone, an aminoguanidine hydrazone, 1,3 bis phenyl guanylhydrazone (2a), as a retromer stabilizer that binds the retromer complex at an active binding site between the VPS29 and VPS35 subunits and leads to a nearly 2-fold increase in protein levels of these two subunits in vitro and in vivo in the CNS [25,27]. In SOD1 G93A mice, daily IP administration of 2a leads to: i) increased protein levels of retromer subunits in spinal cord ventral horn MNs; ii) attenuation of locomotor impairment; iii) increased MN survival and protection of sciatic nerves; iv) significant reduction in SOD1 aggregates and in the total ubiquitinated protein profile; and v) increased protein levels of two well-known VPS35 cargoes, Sortilin and CI-MPR [28-29], associated with an increase in mature cathepsin D (CTSD) in MNs [25].
In the current study we demonstrate the neuroprotective efficacy of 2a in blocking DA neuronal degeneration, nigrostriatal tract degeneration, and αSyn pathology in a PD mouse model overexpressing A53T-αSyn following unilateral injection of AAV-A53T-αSyn into the SN [30]. Daily IP administration of 2a at 10 mg/Kg protects against behavioral deficits, loss of DA neurons in the SNpc, and loss of striatal dopaminergic fibers and striatal monoamines. Treatment with 2a also activates the main αSyn degradation pathways and reduces total and pathological αSyn aggregates.
Animals
All mice were housed under a 12-hour light/12-hour dark cycle with ad libitum supply of standard chow. The ambient temperature was 21 ± 2°C, and the humidity 55 ± 10%. Mice were housed at a stocking density of 3-5 mice per cage in individually ventilated caging. Mouse studies at BIDMC were approved by the local Institutional Animal Care and Use Committee (IACUC).
Aims and Study Design
Aims: First, compound 2a was assessed for its ability to increase retromer-subunit protein levels in PD-related areas (striatum and SN), as shown in Fig. 1A. Subsequently, we tested the impact of pharmacological stabilization of the retromer complex by 2a in an αSyn-based mouse model of PD [30].
To generate the PD mouse model, wild-type mice at 12 weeks of age were injected stereotaxically into the SN with an AAV1/2-A53T-αSyn vector or an AAV1/2 empty control; then, beginning 20 days after the AAV injection, an investigator blinded to the prior procedure started daily IP injections of 2a or saline as a control for 100 days. Behavioral analyses to evaluate motor function were performed with the rotarod and cylinder tests at 113 to 120 days post stereotaxic AAV injection. At 120 days post injection (after 100 days of daily IP injections with 2a or saline control), neuropathological analyses were performed in the striatum and SN. In fixed brains we assessed the effect of 2a on: i) protection against αSyn-induced loss of DA (TH+) neurons in the SN, ii) protection of striatal TH+ fibers, iii) retromer subunit immunoreactivity (VPS35 and VPS26) in TH+ neurons, iv) retromer function, by analyzing immunoreactivity of CI-MPR, a retromer cargo protein, in TH+ SN neurons, and v) pathological αSyn (p-Ser-129 αSyn) accumulation. In fresh (unfixed) striatum and SN brain tissues we analyzed the effect of 2a on: i) total and oligomeric αSyn in the SN, ii) striatal monoamines, and iii) αSyn degradation pathways, including CMA, macroautophagy, and lysosomal pathways (Fig. 1F).
Behavioral assessment
Cylinder test: Spontaneous forelimb usage was assessed by the cylinder test at 120 days after unilateral AAV injection in 4 groups: AAV1/2-empty and AAV1/2-A53T-αSyn vectors, each treated with saline or compound 2a. Mice were placed in a transparent plexiglass cylinder of 12 cm diameter and 30 cm height and were video recorded for 10 minutes or 30 rearings, whichever came first. Rearings were analyzed for the number of touches of the inner surface of the cylinder with either the right (ipsilateral), the left (contralateral), or both forelimbs simultaneously. The final data are presented as the percentage of contralateral forelimb use, calculated with the equation: (contralateral forelimb + both forelimbs)/(contralateral forelimb + ipsilateral forelimb + both forelimbs × 2) × 100. The calculated percentage reflects the degree of asymmetric use of the affected forelimb as follows: 50% = symmetric use of both forelimbs; <50% = preference for the intact (ipsilateral) forelimb; >50% = preference for the affected (contralateral) forelimb. Furthermore, we provide two representative videos recorded during the execution of the cylinder test to evaluate the efficacy of compound 2a in rescuing the asymmetry in AAV1/2-A53T-αSyn mice treated daily with 2a (10 mg/Kg) compared with control mice given saline. The use of these videos for publication has been approved by the local Institutional Animal Care and Use Committee (IACUC) at BIDMC.
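Restated in symbols, with C, I, and B denoting the counts of contralateral-only, ipsilateral-only, and simultaneous both-forelimb touches, the score above is a direct transcription of the equation given in the text:

    \text{contralateral use (\%)} = \frac{C + B}{C + I + 2B} \times 100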
Rotarod test
Motor activity of mice receiving 2a or vehicle was assessed 113 days after the AAV injection. Briefly, mice were trained twice for 1 min on a static rotor and for 1 min at constant speed (4 rpm), and then two trials were performed over two consecutive days (one per day). Each trial consisted of 3 test sessions with 15-min intervals between sessions. For each session, mice were placed on an accelerating rotor (4-40 rpm) and the latency to fall was recorded, with a maximum limit set at 300 seconds.
2a synthesis
The new pharmacological chaperone 1,3 bis phenyl guanylhydrazone (2a) was previously designed, synthesized, and patented [25], starting from R55 [27]. The thiophane scaffold was replaced with a phenyl ring, and the isothiourea was substituted with a guanylhydrazone to increase permeability through lipophilic biomembranes [25]. 2a was synthesized by MedKoo Biosciences; the purity of the compound is 99%, and its structure and molecular weight were confirmed by 1H-NMR and MS analyses.
The lyophilized powder was stored at -80°C. The 2a powder was solubilized in saline solution at 10 mg/Kg prior to daily IP administration.
For immunohistochemistry, the sections were incubated with the primary antibody anti-TH (Millipore, AB152, rabbit polyclonal) and processed the next day with secondary antibody incubation, with or without an ABC kit (Vector Laboratories), and 3'-diaminobenzidine as a chromogen (DAB, Vector Laboratories). The sections were mounted on Superfrost Plus Microscope Slides (Fisher Scientific) and coverslipped with Vectashield antifading mounting medium (Vector Laboratories) for fluorescence imaging, or dehydrated and coverslipped with Permaslip medium (Alban Scientific) for neuronal counting. For immunofluorescence, sections were visualized by confocal microscopy on a Leica DM6 CS equipped with ×10, ×20, and ×60 objectives and super-sensitive HyD detectors. Fluorescence signals were recorded as square 8-bit images (1024 × 1024 pixels), and images were post-processed with Leica software to generate pseudo-colored maximal projections of Z-stacks (acquired with a 0.4-0.8 µm step) and cross-sectional profiles of the Z-stack (acquired with a 0.3 µm step).
Neuronal Counting and densitometry
For neuronal counting, the midbrain of each mouse was sectioned on a Leica cryostat into six series of 40 µm coronal sections, and one series was stained with anti-TH antibodies for DA (TH+) neuronal counting. The serial brain sections were then scanned with a NanoZoomer XR digital slide scanner (Hamamatsu) for neuronal counting with the QuPath v0.2.0 software (https://qupath.github.io), as previously described [30].
For striatal densitometry, TH-stained sections were imaged using a light microscope fitted with a camera (SPOT). Images were captured using a 10× objective and fixed exposure settings. Densitometry was performed to quantify the intensity of TH immunostaining in the striatum ipsilateral to the stereotaxic injection and in the contralateral striatum. The average intensity of TH staining in a fixed region of the striatum was quantified with ImageJ software (NIH). The relative optical density of TH+ fibers was normalized by subtracting the background intensity of the cortex, and the percentage of ipsilateral over contralateral relative density was calculated and used for statistical analysis. The analysis determines the percentage of positively stained area [%area = (stained area/region area) × 100]. To calculate the % of stained area, the same threshold value was applied to all sections with the same antibody staining; all pixels above the threshold value were considered positive. All striatal densitometry and neuronal counting data were obtained by an investigator blinded to treatment groups.
Immunoblot
Brain tissues (SN and striatum) were dissected from saline-perfused mice and rapidly homogenized in 250 µL of RIPA lysis buffer (Millipore 20.188; 50 mM Tris-HCl pH 7.4, 150 mM NaCl, 0.25% deoxycholic acid, 1% NP40) with 0.5% Triton X-100 and protease inhibitor cocktail (Sigma, 100X). Brain extracts were homogenized using a tight-fitting glass Potter tissue grinder (1 ml; Wheaton) and then sonicated at a frequency of 20 kHz (10 times, 1 s each). Brain samples were ultracentrifuged at 120K × g for 30 minutes at 4°C and the supernatants collected as the detergent-soluble fraction; pellets were resuspended in 300 µL of RIPA buffer with 2% SDS and protease inhibitors, homogenized and sonicated as above, centrifuged at 17K × g for 30 minutes at 4°C, and the supernatants collected as the SDS-soluble fractions. Protein concentrations in the enriched detergent-soluble and SDS-soluble fractions were measured using the BCA protein assay kit. Next, for each sample, 20 µg of protein extract from the SN was loaded on 4-12% Midi-PROTEAN TGX Stain-Free precast gels for PAGE (Bio-Rad). Gel electrophoresis was performed at 200 V for 45 minutes, and proteins were then transferred to nitrocellulose membranes (0.4 µm pore). Blots were blocked in 5% milk in PBS with 0.1% Tween-20 (PBS-T) for 1 hour before exposure to primary antibodies, which were incubated overnight at 4°C on a shaker. For the analysis of αSyn aggregates, the detergent-soluble and SDS-soluble fractions were blotted with an anti-αSyn antibody (BD Biosciences, clone 42, mouse monoclonal). For the analysis of αSyn degradation pathways, the detergent-soluble fractions were blotted with anti-p62 (BD Biosciences, 610497, mouse purified), anti-LC3B (Cell Signaling, D11, rabbit polyclonal), anti-Hsc70 (Proteintech, 10654-1-AP, rabbit polyclonal), anti-LAMP2A (Santa Cruz, H4B4 sc-18822, mouse monoclonal), and anti-CTSD (Abcam, ab75852, rabbit monoclonal); CTSD protein levels were revealed by exposing the membrane for 20 minutes to 0.4% PFA and blocking with 5% BSA in PBS-T. Retromer complex subunits were detected using anti-VPS35 (Abcam, ab157720, N-terminal, rabbit polyclonal) and anti-VPS26 (Abcam, ab23892, rabbit polyclonal), and retromer function was studied using an anti-Sortilin antibody (Abcam, ab166640, rabbit polyclonal); all antibodies were normalized to anti-β-Actin (Millipore, A5441, mouse monoclonal). Blots were incubated with appropriate HRP-labeled secondary antibodies for 1 hour at room temperature; protein bands were visualized using Clarity Max ECL (Bio-Rad) according to the manufacturer's directions, and images were acquired on a ChemiDoc (Bio-Rad). Quantifications were done using Image Lab 6.0.1 software (Bio-Rad).
Measurement of DA and DA metabolites
Mice were sacrificed at 17 weeks post AAV-A53T or AAV-empty stereotaxic injection in the SN, after 100 days of administration of compound 2a (10 mg/Kg) or saline as control. The brains were rapidly removed, placed into a chilled brain matrix, and sliced into 1 mm thick coronal sections on ice. The sections were then placed in ice-cold saline. The striatum was dissected from these 1 mm sections, snap-frozen, and used for HPLC analysis of DA and its metabolites at the Neurochemistry Core, School of Medicine, Vanderbilt University. Levels of DA and its metabolites were normalized to ng/mg of protein input and statistically analyzed with GraphPad Prism software.
Statistical analysis
We assessed the normality of the data by applying either the Kolmogorov-Smirnov test (with Dallal-Wilkinson-Lilliefors correction for the P value) or the D'Agostino & Pearson omnibus normality test. For comparisons of mean differences between 2 groups we used unpaired t-tests. When 3 or more groups were compared, we used analysis of variance (ANOVA) followed by Tukey's or Bonferroni's multiple comparison test. Statistical analyses were performed using Prism 8 (GraphPad Software). P-values less than 0.05 were considered statistically significant.
Results

2a increases VPS35 retromer-subunit protein levels in PD-related brain areas
We previously demonstrated that the retromer stabilizer 2a crosses the blood-brain barrier (BBB) and increases VPS35, VPS26, and VPS29 retromer-subunit protein levels in the spinal cord of wild-type mice and of an ALS mouse model (SOD1 G93A) [25]. To assess the effect of 2a on retromer-subunit protein levels in PD-related brain areas (striatum and SN), 2a was administered by daily IP injection at three different doses (1 mg/Kg, 5 mg/Kg, and 10 mg/Kg) for 15 days in wild-type mice (Fig. 1A). After 15 days, the brains were dissected and VPS35 protein levels were analyzed by immunoblot in the striatum (Fig. 1B & D) and SN (Fig. 1C & E). Administration of 2a at 5 mg/Kg and 10 mg/Kg significantly increased VPS35 protein levels approximately 1.5-fold (****p < 0.0001) and 2.5-fold (****p < 0.0001), respectively, in the striatum (Fig. 1B & D), and approximately 2-fold (**p = 0.0023) and 2.3-fold (***p = 0.0009), respectively, in the SN (Fig. 1D-E). We previously demonstrated that an increase in VPS35 protein levels of about 2-fold in the spinal cord with daily IP injections of 2a is associated with neuroprotection in an ALS mouse model [25], and we therefore selected 10 mg/Kg for the current studies.
The pharmacological chaperone 2a rescues motor defects in αSyn mice
We generated the PD mouse model by injecting AAV-A53T-αSyn or an AAV-empty vector control unilaterally into the SN, and 20 days later we began daily IP administration of 2a (or saline solution as control) for 100 days (Fig. 2F). After 93 days of IP injections (113 days post viral vector injection), we assessed motor function on the rotarod test (Fig. 1G). We found a significant decrease (****p < 0.0001, Fig. 1G) in the latency to fall for AAV-A53T-αSyn mice treated daily with saline, whereas this deficit was prevented by daily treatment with 2a (**p = 0.002), indicating that 2a treatment significantly rescued the αSyn-induced motor deficit. One week after the rotarod test (after 100 days of daily 2a IP injections), αSyn-induced asymmetry of forelimb use was assessed by the cylinder test [30]. The AAV-A53T-αSyn injected mice treated with saline showed a significant reduction (****p < 0.0001, Fig. 1H) in use of the contralateral forelimb compared with control mice injected with the AAV-empty vector and subsequently treated with saline. The deficit in contralateral forelimb use in the AAV-A53T-αSyn injected mice was significantly rescued by administration of 2a (****p < 0.0001, Fig. 1H). Two additional video files, recorded during the execution of the cylinder test and comparing the asymmetry of AAV-A53T-αSyn injected mice treated with saline with that of AAV-A53T-αSyn injected mice treated with 2a, show these effects in more detail (see Additional file 1 and Additional file 2). In the videos we counted the number of touches of the inner surface of the cylinder with either the right (ipsilateral), the left (contralateral), or both forelimbs simultaneously. AAV-A53T-αSyn injected mice treated with saline were lethargic, with a tremor, and predominantly used the ipsilateral forelimb (see Additional file 1). Conversely, AAV-A53T-αSyn injected mice treated with 2a at 10 mg/Kg for 100 days were more active when placed in the cylinder, and their spontaneous forelimb use was more intense and simultaneous, indicating a rescue of the asymmetry associated with this PD mouse model (see Additional file 2).

2a protects DA neurons in the SNpc of αSyn mice

We next assessed the effect of 2a on DA neuronal loss in the SNpc (Fig. 2A-E) by quantifying tyrosine hydroxylase immunoreactive (TH+) neurons in the SNpc (Fig. 2). In the control group (AAV-empty vector mice treated with saline), there was no significant loss of TH+ neurons on the injected side compared with the contralateral side (Fig. 2A & E). Administration of 2a at 10 mg/Kg for 100 days in the control group did not change the number of TH+ neurons (Fig. 2B & E). At 120 days after unilateral AAV-A53T-αSyn injection, saline-treated mice showed a significant ~35% reduction of TH+ neurons ipsilateral to the AAV-injected side compared with the contralateral side (**p = 0.019, Fig. 2C & E), as well as a significant decrease compared with the TH+ neurons of both the ipsilateral (**p = 0.0018, Fig. 2A & E) and contralateral sides (**p = 0.0078, Fig. 2A & E) of the AAV-empty vector saline-treated controls. Administration of 2a for 100 days in AAV-A53T-αSyn injected mice significantly attenuated the loss of DA (TH+) neurons (*p < 0.0029, Fig. 2D & E).
2a protects against striatal DA fiber loss in αSyn mice
To test the impact of 2a on αSyn-induced loss of DA neurites projecting to the striatum, we measured the density of TH+ terminals in striatal brain sections (Fig. 3A-E). Notably, the relative immunostaining intensity of TH+ fibers was significantly decreased by nearly 50% in AAV-A53T-αSyn mice treated with saline (****p < 0.0001, Fig. 3C & E) compared with control AAV-empty vector injected mice treated with saline (Fig. 3A & E) and with AAV-empty vector mice treated with 2a (Fig. 3B & E). Daily IP administration of 2a (10 mg/Kg) for 100 days in AAV-A53T-αSyn mice significantly protected against loss of striatal TH+ fibers (**p < 0.001, Fig. 3D & E). These results show that 2a protects against αSyn-induced loss of DA neurons in the SNpc and against loss of TH+ striatal terminals in a mouse model of PD.
2a protects against loss of striatal DA and DA metabolites in αSyn mice

We analyzed levels of DA and its metabolites, including 3,4-dihydroxyphenylacetic acid (DOPAC), 3-methoxytyramine (3-MT), and homovanillic acid (HVA), in the ipsilateral striatal hemisphere of AAV-A53T-αSyn mice compared with control AAV-empty vector mice treated with saline or 2a (Fig. 3F-I). Levels of DA were significantly reduced by about 60% in AAV-A53T-αSyn mice treated with saline compared with AAV-empty mice treated with saline (**p < 0.0078, Fig. 3F) and by about 50% compared with AAV-empty mice treated with 2a (**p < 0.004, Fig. 3F). Treatment with 2a protected AAV-A53T-αSyn mice against this αSyn-induced loss of striatal DA (**p < 0.0068, Fig. 3F). Furthermore, 2a administration prevented the decline of the DA metabolites DOPAC (*p = 0.034, Fig. 3G) and HVA (*p = 0.0147, Fig. 3I), with only a non-significant trend for 3-MT (Fig. 3H). We found a significant increase in striatal serotonin (5-HT) upon administration of 2a (compared with saline) in AAV-empty mice, but no effect of 2a versus saline on 5-HT levels in the AAV-A53T-αSyn injected mice (Fig. 3J). No effects of AAV-A53T-αSyn or of 2a treatment were seen for striatal norepinephrine (NE, Fig. 3K). These results show that the AAV-A53T-αSyn PD mouse model recapitulates key features of PD, including motor behavioral deficits, degeneration of DA neurons in the SNpc, loss of striatal TH+ fibers, and reductions in striatal DA and DA metabolites. Furthermore, these results demonstrate significant protection by 2a treatment against each of these αSyn-induced deficits.
2a increases retromer protein levels and function in the SNpc of αSyn mice
We assessed the impact of 2a on VPS35 and VPS26 immunoreactivity (IR) in DA neurons of the SNpc in AAV-A53T-αSyn mice by immunofluorescence (Fig. 4). We found a significant increase in VPS35-IR (Fig. 4A-D & E, *p = 0.0494), in the number of VPS35 puncta (Fig. 4A-D & F, ***p = 0.0004), and in VPS35 puncta size (Fig. 4A-D & G, ****p < 0.0001) in ipsilateral DA neurons of A53T mice treated with 2a (10 mg/Kg for 100 days) compared with ipsilateral A53T mice treated with saline. 2a also led to a significant increase in the number of VPS26 puncta (Fig. 4H-K & M, *p = 0.00174) and in VPS26 puncta size (Fig. 4H-K & N, ****p < 0.0001) in ipsilateral DA neurons of A53T mice treated with 2a compared with ipsilateral A53T mice treated with saline. We also assessed whether the significant increases in the number and size of VPS35/VPS26 puncta were associated with a boost in retromer function by analyzing two well-known VPS35 cargoes, Sortilin and CI-MPR (Fig. 5) [28][29]. These two cargo proteins were previously found to be increased in MNs of SOD1 G93A mice by treatment with 2a [25]. We analyzed CI-MPR-IR in the SNpc of AAV-A53T-αSyn mice treated with 2a or saline solution as control, in the contralateral and ipsilateral hemispheres (Fig. 5A-L), and specifically in DA neurons ipsilateral to the AAV injection (Fig. 5M-R). We found a slight but statistically significant increase in CI-MPR immunoreactive intensity in TH+ neurons in the ipsilateral SNpc of AAV-A53T-αSyn injected mice (p = 0.045), with a larger-magnitude increase induced by treatment with 2a (p < 0.0001, Fig. 5M-S). Sortilin protein levels in the SN were analyzed by immunoblot, and we found a significant ipsilateral reduction in Sortilin immunoreactive intensity in AAV-A53T-αSyn injected mice treated with saline compared with the contralateral (non-injected) side (**p < 0.0078, Fig. 5T-U). Administration of 2a in AAV-A53T-αSyn injected mice completely prevented this αSyn-induced reduction in Sortilin (****p < 0.0001, Fig. 5T-U). Together, these data suggest that 2a treatment increases retromer subunit levels as well as retromer function in the SN.
2a attenuates the accumulation of pathological αSyn in the SNpc of αSyn mice

Prior studies have shown that genetic manipulation to increase VPS35 protein levels leads to a reduction in the accumulation of αSyn aggregates in the hippocampus of a PD mouse model overexpressing human wild-type αSyn [19]. Here we analyzed the effect of pharmacological stabilization of the retromer complex by treatment with 2a, administered by daily IP injection for 100 days, in AAV-A53T-αSyn mice (Fig. 1F). We analyzed αSyn immunoreactivity in DA neurons of the SNpc, labeling TH+ neurons (red) and phospho-serine-129 αSyn (pSer129, green) by double immunofluorescence (Fig. 6A-N). We found a robust increase in the fluorescence intensity of pSer129 αSyn-IR (≈20-fold increase) in the ipsilateral SNpc of AAV-A53T-αSyn injected mice treated with saline (****p < 0.0001, Fig. 6A, C-D & O), indicating the accumulation of pathological αSyn in individual DA neurons. Treatment with 2a significantly attenuated pathological pSer129 αSyn accumulation in the SNpc of these mice (****p < 0.0001, Fig. 6B, F-H & O). We did not detect pSer129 αSyn immunoreactivity in the contralateral SNpc of AAV-A53T-αSyn injected mice treated with saline (Fig. 6A & I-K) or with 2a (Fig. 6B & L-N).
We also analyzed the effect of 2a versus saline on the clearance of αSyn aggregates in the ipsilateral SN of AAV-A53T-αSyn injected mice by immunoblot (Fig. 7A-C). Brain extracts from the ipsilateral SN of AAV-A53T-αSyn mice were dissected and fractionated by ultracentrifugation (100K × g for 45 minutes) into detergent-soluble and SDS-soluble fractions, and αSyn aggregates were analyzed and normalized to β-actin protein levels (Fig. 7A-C). The SN of AAV-A53T-αSyn mice contained substantial amounts of soluble αSyn oligomers (Fig. 7A) and insoluble αSyn aggregates extracted using SDS detergent (Fig. 7B). Administration of 2a resulted in a strong clearance of αSyn aggregates in both the detergent-soluble (***p < 0.0001, Fig. 5A & D) and SDS-soluble fractions (***p < 0.0001, Fig. 5B & D), confirming the results of the double immunolabeling (Fig. 6). Together, these results demonstrate a strong effect of 2a treatment in attenuating the accumulation of αSyn in the SN.
2a boosts αSyn degradation pathways in the SN of αSyn mice
We hypothesized that the reduction of αSyn induced by 2a treatment may reflect an impact of 2a on αSyn degradation pathways. It has already been reported that VPS35 interacts with the WASH-FAM21 subunit to control autophagosome formation in the late stage of the macroautophagic process [31], and that VPS35 is a key component of CMA activity through its control of the retrograde trafficking of the LAMP2A receptor [20]. Therefore, we analyzed the effect of the pharmacological chaperone 2a on αSyn degradation pathways, macroautophagy, chaperone-mediated autophagy (CMA), and the endo-lysosomal pathway in the SN of AAV-A53T-αSyn injected mice treated with saline or 2a by immunoblot analysis (Fig. 7D-M). p62 and LC3B are two macroautophagic markers whose protein levels increase when autophagy is impaired, and LC3-positive and p62-positive structures are prominent in autophagy-deficient cells [32]. LC3B and p62 were both significantly decreased in ipsilateral SN brain extracts of AAV-A53T-αSyn injected mice treated with 2a compared with control mice treated with saline (*p = 0.026 for p62, Fig. 7D & E, and p = 0.041 for LC3B, Fig. 7D & F). Treatment with 2a in AAV-A53T-αSyn mice resulted in significant increases in CMA markers (Hsc70 and LAMP2A) compared with saline controls (*p = 0.017 for Hsc70, Fig. 7G & I; **p = 0.0013 for LAMP2A, Fig. 7H & J). We also assessed the effect of 2a on lysosomal activity by analyzing protein levels of Cathepsin D (CTSD), the main lysosomal hydrolase involved in αSyn degradation. We found that 2a treatment, compared with saline-treated controls, increased both pro-CTSD (p = 0.028, Fig. 7K-L) and mature CTSD (p = 0.0447, Fig. 7K & M) in the ipsilateral SN of AAV-A53T-αSyn mice.
These data indicate that 2a treatment leads to activation of macroautophagy, an increase in CMA, and increases in lysosomal functions, pathways that are known to regulate αSyn degradation [10]. These data suggest that 2a may lead to neuroprotection by promoting clearance of αSyn through activation of these degradation pathways, although further studies will be needed to confirm this hypothesis.
Discussion
In the present study we assessed the neuroprotective effects of a pharmacological approach to stabilize the retromer complex [25] in a PD mouse model overexpressing mutant (A53T) αSyn in the SN [30]. We found that 2a treatment increases VPS35 immunoreactivity (Fig. 4), the number and size of VPS35/VPS26 puncta (Fig. 4), and retromer functions (Fig. 5) in DA neurons in the SNpc of AAV-A53T-αSyn mice. This pharmacological approach rescues behavioral deficits (Fig. 1G-H), protects against loss of DA neurons in the SNpc (Fig. 2), and attenuates loss of striatal DA fibers (Fig. 3A-E) and striatal monoamines (Fig. 3F-H) in AAV-A53T-αSyn mice. Notably, the increase in functions of the retromer complex was associated with a robust reduction of pathological pSer129-αSyn in DA neurons of the SNpc (Fig. 6), as well as a rescue of soluble and insoluble total αSyn accumulation (Fig. 7) in the SN.
The molecular mechanisms leading to neuroprotection by VPS35 against αSyn toxicity are still unclear, but recent studies highlight a pivotal role for VPS35 in control of the main αSyn degradation pathways [10]. First, VPS35 is a key modulator of autophagosome formation in the late stage of the macroautophagic process [31], and we now report an increase in macroautophagy upon treatment with 2a in AAV-A53T-αSyn mice, characterized by a significant reduction in the autophagic markers p62 and LC3B in the ipsilateral SN (Fig. 7D-F). Furthermore, VPS35 mediates the retrieval transport of the LAMP2A receptor, a key protein for CMA, to lysosomes [20]; therefore, the 2a-induced significant increase in CMA markers (LAMP2A and Hsc70) in the ipsilateral SN of AAV-A53T-αSyn mice (Fig. 7G-J) indicates that 2a also activates CMA. Finally, VPS35 regulates lysosomal homeostasis [33] and the lysosomal degradation system by managing the sorting of lysosomal hydrolase receptors [29], such as Sortilin and CI-MPR, which we found to be significantly increased in the ipsilateral SN by treatment with 2a in AAV-A53T-αSyn mice (Fig. 5). These lysosomal hydrolase receptors act directly in the sorting of misfolded proteins [29] and can regulate the clearance of misfolded proteins indirectly by controlling the trafficking and maturation of the lysosomal hydrolases that degrade them [29]. Sortilin and CI-MPR control the trafficking and maturation, from the trans-Golgi network to lysosomes, of CTSD, the major enzyme responsible for the degradation of αSyn [34]. We found that administration of 2a significantly increased the pro and mature forms of CTSD in the ipsilateral SN of AAV-A53T-αSyn mice (Fig. 7K-M). Together, these data show that the 2a-induced increase in retromer protein levels and functions results in a boost of lysosomal activity and promotes the clearance of pathological and aggregated forms of αSyn.
Recently, retromer dysfunction has been linked to different neurodegenerative disorders, including ALS [25] and AD [23]. A significant reduction in levels of retromer subunits (VPS35 and VPS26) has been reported in iPSC-derived motor neurons (MNs) from ALS patients, in ventral horn MNs from ALS postmortem explants, and in MNs from an ALS mouse model (SOD1 G93A) [25]. Furthermore, neuron-specific deletion of VPS35 leads to the selective degeneration of ventral horn MNs in the spinal cord, accompanied by protein inclusions and marked gliosis [35]. We previously showed that pharmacological stabilization of the retromer complex by treatment with 2a in an ALS mouse model (SOD1 G93A mice) increases retromer subunit protein levels (VPS35, VPS29, and VPS26) and markers of retromer function (Sortilin and CI-MPR) and protects against degeneration of spinal cord ventral horn MNs [25]. We now demonstrate that retromer stabilization by 2a in the AAV-A53T-αSyn PD mouse model promotes the clearance of pathological misfolded aggregates of αSyn (Fig. 6 and Fig. 7) and protects DA neurons from degeneration. In support of a crucial role of the VPS35 retromer subunit in controlling protein aggregation and the accumulation of misfolded proteins, previous studies have shown that depletion of VPS35 leads to misfolded protein accumulation in yeast [19].
Chen and collaborators have shown that genetic manipulation of VPS35 did not rescue αSyn-induced neurotoxicity and that VPS35 and αSyn fail to interact and modulate neurodegeneration [36]. In contrast, a prior study showed that using lentiviruses to increase VPS35 induces neuroprotection and a reduction in αSyn accumulation in the hippocampus of αSyn-overexpressing mice [19]. Furthermore, two independent studies on induced pluripotent stem cell-derived neurons from patients harboring the p.D620N mutation have shown αSyn pathology and clearance impairment [19,37]. VPS35-deficient mice showed accumulation of αSyn aggregates [14], and recently a VPS35 knock-in mouse was shown to recapitulate a spectrum of cardinal features of PD, including dramatic accumulation of αSyn in the SNpc from 15 months of age without Tau pathology [38]. A case report of atypical parkinsonism associated with rare sequence variants in VPS35 (c.102 + 33G > A) and FBXO7 showed Lewy bodies in the SN and αSyn immunoreactivity in the midbrain, pons, medulla, basal ganglia, amygdala, hippocampus, and neocortex [39]. The current study shows that a pharmacological stabilizer of VPS35 leads to potent neuroprotection with robust attenuation of pathological αSyn accumulation in the SN in a mouse model of PD, thus supporting the potential testing of pharmacological strategies targeting VPS35 for neuroprotection in PD. However, although we have demonstrated that compound 2a increases VPS35 levels and retromer functions, further studies are needed to assess the possibility that 2a also has an impact on other pathways that contribute to its neuroprotective effects.
No strategies have yet been clinically proven to slow or block the progression of PD. Mutations in the VPS35 gene are a rare autosomal dominant cause of familial PD [6][7], and accordingly, VPS35 controls mechanisms of relevance to PD pathogenesis: i) pathways for clearance of abnormal proteins, including αSyn aggregates [19][20][21], ii) mitochondrial function, through regulation of mitochondrial dynamics [16-18], and iii) regulation and maintenance of neuronal signaling events [11][12][13]. The data presented here in a pathophysiologically relevant αSyn-based PD mouse model highlight the neuroprotective potential of increasing retromer levels to block the accumulation of pathological αSyn and to protect against neurodegeneration in PD.
However, an in-depth analysis is needed to precisely elucidate the molecular mechanism(s) by which retromer stabilization leads to the protection of DA neurons and the clearance of toxic αSyn aggregates. Transcriptomic and proteomic studies may help evaluate the potential effects of 2a on other pathways that may act in parallel or synergistically with retromer stabilization to contribute to the neuroprotective effect.
Conclusion
Treatment with 2a, a VPS35-targeting small molecule (pharmacological chaperone), leads to robust protection against αSyn toxicity in an AAV-A53T-αSyn model of PD. These data support the potential for future clinical testing of 2a or other VPS35-targeted therapies for disease-modifying effects in PD.

Figure 3: Treatment with 2a protects striatal DA fibers and monoamines in AAV-A53T-αSyn injected mice. A-D) TH immunohistochemistry in representative striatal sections of male mice treated with 2a (10 mg/Kg for 100 days) at 120 days post-injection of AAV-A53T-SNCA; scale bar, 500 µm. E) Relative optical density of fibers in the ipsilateral striatum compared with the contralateral side of male mice in each group (n = 6-8 mice per group). Significance determined by one-way ANOVA; error bars represent mean ± SD. F-K) Evaluation of ipsilateral striatal monoamines in mice treated with 2a (10 mg/Kg for 100 days) or saline solution at 120 days post-injection of AAV-A53T or AAV-empty as control; F) mean DA levels, G) mean 3,4-dihydroxyphenylacetic acid (DOPAC) levels, H) mean 3-methoxytyramine (3-MT) levels, I) mean homovanillic acid (HVA) levels, J) mean 5-hydroxytryptamine (5-HT) levels, and K) mean norepinephrine (NE) levels; values are expressed as fold change (7-9 mice per group). Significance determined by one-way ANOVA; error bars represent mean ± SD.
| 2023-10-28T05:06:07.818Z | 2023-10-19T00:00:00.000 | {
"year": 2023,
"sha1": "fac309737df12b208f87ff74735078c2c1c3ac0c",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-3417076/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "fac309737df12b208f87ff74735078c2c1c3ac0c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233869143 | pes2o/s2orc | v3-fos-license | Expression profiles of circular RNAs in colon biopsies from Crohn's disease patients by microarray analysis
Abstract Background Circular RNAs (circRNAs) are involved in various diseases and serve as biomarkers. The present study aimed to investigate unique expression profiles of circRNAs in colon tissues of Crohn's disease (CD) and to search for novel diagnostic biomarkers. Methods Differentially expressed (DE) circRNAs in biopsies from four CD patients, four ulcerative colitis (UC) patients, and four healthy controls (HC) were screened by microarray. Hsa_circ_0062142 and hsa_circ_0001666 were verified in another expanded validation cohort. Bioinformatics analysis was applied to predict the function of the two DE circRNAs. Receiver operating characteristic (ROC) curves were constructed to evaluate the diagnostic value for CD. Results The top 10 upregulated circRNAs in CD compared with HC were hsa_circ_0000691, hsa_circ_0001666, hsa_circ_0004183, hsa_circ_0009024, hsa_circRNA_405324, hsa_circ_0002003, hsa_circ_0085323, hsa_circ_0040994, hsa_circ_0062142, and hsa_circ_0048148; the top 10 downregulated circRNAs were hsa_circ_0049356, hsa_circRNA_405443, hsa_circRNA_403556, hsa_circ_0092328, hsa_circ_0003979, hsa_circ_0074491, hsa_circ_0023461, hsa_circRNA_406237, hsa_circ_0034044, and hsa_circRNA_400564 (fold change in descending order). The expression levels of hsa_circ_0001666 and hsa_circ_0062142 in CD were significantly higher than those in UC and HC (p < 0.01). ROC curves suggested a favorable diagnostic value for hsa_circ_0062142 and hsa_circ_0001666 (AUC = 0.803 and 0.858, respectively, p < 0.01). In silico analysis indicated that these circRNAs may be involved in the progression of CD. Conclusion Hsa_circ_0062142 and hsa_circ_0001666 may play critical roles in the pathogenesis of CD and serve as potential biomarkers.
| INTRODUCTION
Crohn's disease (CD) and ulcerative colitis (UC) are the two major subtypes of inflammatory bowel disease (IBD). CD is characterized by transmural inflammation that can affect any segment of the gastrointestinal tract.
More than 50% of patients with CD also develop stricturing or fistulizing complications within 10 years, accompanied by significant morbidity and disability. The incidence and prevalence of CD have been increasing in Asia 1,2 ; however, the etiology of CD is not yet understood.
It is widely accepted that multiple factors involving genes, environment, microbes, and the mucosal immune system interact in a complex mechanism. Although CD and UC possess different clinical, radiographical, endoscopic, genetic, histological, and immunological characteristics, their clinical symptoms and histological features occasionally overlap. Currently, common biomarkers in serum and feces, such as the anti-Saccharomyces cerevisiae antibody (ASCA), antineutrophil cytoplasmic antibody (ANCA), and fecal calprotectin, have limited value in differentiating CD from UC. 3 In particular, when the inflammation is localized to the colon, differential diagnosis is challenging. It is estimated that about 10% of these colonic IBD patients are diagnosed with IBD unclassified (IBDU). 4 Thus, novel efficient diagnostic biomarkers for CD are required for accurate diagnosis, appropriate treatment, and new insights into disease mechanisms.
Circular RNAs (circRNAs), a novel class of noncoding RNAs, feature a covalently closed loop that lacks 5′-3′ polarity and a polyadenylated tail. CircRNAs used to be regarded as nonfunctional byproducts of mRNA splicing. However, with the development of RNA sequencing technology, circRNAs have been shown to occur ubiquitously and to exhibit spatially and temporally specific expression. [5][6][7] Recent studies have shown that circRNAs regulate gene expression at the transcriptional or post-transcriptional level by functioning as microRNA sponges, interacting with small nuclear RNA (snRNA) or RNA polymerase II in the nucleus, and binding to transcription factors. 8,9 Emerging evidence has demonstrated that circRNAs are involved in various diseases including cancer, 10,11 diabetes, 12 Alzheimer's disease, 13 and atherosclerosis. 14 Moreover, circRNAs have been reported to serve as diagnostic or prognostic biomarkers based on their specific expression profiles and high biological stability. [15][16][17][18][19][20] Nevertheless, data on the expression profiles and potential roles of circRNAs in CD are limited.
In the present study, we investigated the expression profiles of circRNAs in colon tissues of CD by microarray. Two differentially expressed circRNAs were verified by quantitative real-time polymerase chain reaction (qRT-PCR), and their diagnostic value as potential biomarkers for CD was evaluated. Furthermore, the potential function of the two selected circRNAs in CD was predicted by bioinformatics analysis.
| RNA extraction and quality control
Total RNAs were isolated from pinch tissues using TRIzol reagent (Invitrogen Life Technologies) according to the standard protocol.
The yield and purity were determined with a NanoDrop ND-1000 (Agilent), and the integrity of RNAs was checked by 1% formaldehyde denaturing agarose gel electrophoresis.
| Quantitative real-time PCR
Complementary DNA was reverse transcribed from total RNAs with random primers and SuperScript III Reverse Transcriptase (Invitrogen). qRT-PCR was conducted using master mix (Arraystar) on a ViiA 7 Real-Time PCR System (Applied Biosystems). The reaction conditions were as follows: 95°C for 10 min, then 40 cycles of 95°C for 10 s and 60°C for 60 s. All qRT-PCR reactions were run in triplicate. β-actin was used as the internal control. Divergent primers, rather than convergent primers, were designed to specifically amplify circRNAs. The sequences of the primers are listed in Table 2. The relative expression levels of circRNAs were normalized to β-actin and calculated using the 2^−ΔΔCt method. A single peak in the melt curve indicated that the amplification was specific.
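To make the relative quantification concrete, the sketch below works through the 2^−ΔΔCt calculation in Python with hypothetical Ct values; the actual Ct data and triplicate handling of the study are not reproduced here.

```python
# Minimal worked example of the 2^-ddCt relative quantification described
# above. All Ct values below are hypothetical, not data from this study.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of a target circRNA normalized to beta-actin,
    relative to a control sample (2^-ddCt method)."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # dCt in CD sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt in control sample
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical triplicate means: target amplifies ~2 cycles earlier in CD
fold = relative_expression(ct_target_sample=24.1, ct_ref_sample=17.3,
                           ct_target_control=26.2, ct_ref_control=17.4)
print(f"fold change = {fold:.2f}")  # ddCt = -2.0 -> ~4-fold higher in CD
```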
| Bioinformatics analysis
The circRNA/microRNA/mRNA network was constructed to predict the potential functions of the differentially expressed circRNAs.
Table 2. The sequences of primers used in RT-PCR for validation.
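As a rough illustration of how the circRNA/microRNA/mRNA (ceRNA) network described above can be assembled, the sketch below builds a small directed graph with networkx; the circRNA-miRNA pairings and the mRNA targets (GENE_A through GENE_E) are placeholders for illustration, not the study's actual predictions.

```python
# Illustrative ceRNA network construction; edge lists are placeholders
# standing in for the output of dedicated miRNA-target prediction tools.
import networkx as nx

circ_to_mirna = {
    "hsa_circ_0062142": ["hsa-miR-199a-5p", "hsa-miR-30c"],
    "hsa_circ_0001666": ["hsa-miR-34b-5p", "hsa-miR-326"],
}
mirna_to_mrna = {
    "hsa-miR-199a-5p": ["GENE_A"],            # hypothetical targets
    "hsa-miR-30c": ["GENE_B"],
    "hsa-miR-34b-5p": ["GENE_C", "GENE_D"],
    "hsa-miR-326": ["GENE_E"],
}

g = nx.DiGraph()
for circ, mirnas in circ_to_mirna.items():
    for mir in mirnas:
        g.add_edge(circ, mir, kind="sponges")      # circRNA sponges miRNA
for mir, targets in mirna_to_mrna.items():
    for mrna in targets:
        g.add_edge(mir, mrna, kind="represses")    # miRNA represses mRNA

# mRNAs reachable from a circRNA are candidates for de-repression
print(sorted(nx.descendants(g, "hsa_circ_0001666")))
```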
| Statistical analysis
All statistical analyses were performed using SPSS 11.0 (SPSS Inc.).
The expression level of each circRNA was represented as fold change using the 2^−ΔΔCt method. Results were expressed as means ± standard deviations or medians (quartiles), as appropriate.
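For readers reproducing the ROC analysis used to evaluate diagnostic value, the sketch below shows the general workflow with scikit-learn on simulated expression values; the AUCs reported in this study (0.803 and 0.858) came from the real validation cohort, not from this toy data.

```python
# Sketch of a ROC analysis for a single circRNA marker on simulated data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# 1 = CD, 0 = non-CD (UC or healthy control); higher expression in CD
labels = np.array([1] * 30 + [0] * 60)
expression = np.concatenate([rng.normal(2.5, 1.0, 30),   # CD samples
                             rng.normal(1.0, 1.0, 60)])  # controls

auc = roc_auc_score(labels, expression)
fpr, tpr, thresholds = roc_curve(labels, expression)
# Youden's J picks the cutoff maximizing sensitivity + specificity - 1
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.3f}, optimal cutoff = {thresholds[best]:.2f}")
```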
| General expression profiles of CircRNAs in CD
Pinch biopsies from four CD patients, four UC patients, and four healthy controls were used to explore the expression profiles of circRNAs by microarray. In total, 9200 of 13,617 circRNAs were detected by the microarray platform. As illustrated in Figure 1, 182 circRNAs exhibited significantly differential expression between the CD and HC groups.
Candidate circRNAs were selected on the basis of upregulation in CD, exonic type, and non-derivation from chromosome Y. The qRT-PCR data were consistent with those from the microarray. The expression levels of six circRNAs (hsa_circ_0067185, hsa_circ_0001666, hsa_circ_0027774, hsa_circ_0004183, hsa_circ_0028912, and hsa_circ_0062142) were significantly higher in CD than in UC and HC. Among these, the level of hsa_circ_0001666 increased significantly and successively from HC to UC to CD (Figure 2). In view of the significant differences, hsa_circ_0004183, hsa_circ_0001666, and hsa_circ_0062142 were selected as candidates for further study.
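The candidate-selection filter just described can be expressed as a simple table query; the pandas sketch below is illustrative only, and the column names, fold-change cutoff, p-value threshold, and chromosome assignments are assumptions rather than values from the study.

```python
# Sketch of the candidate-selection criteria: upregulated in CD, exonic,
# and not derived from chromosome Y. Thresholds and annotations are assumed.
import pandas as pd

df = pd.DataFrame({
    "circRNA": ["hsa_circ_0001666", "hsa_circ_0062142", "hsa_circ_0049356"],
    "fold_change_CD_vs_HC": [4.2, 3.1, 0.3],
    "p_value": [0.004, 0.008, 0.001],
    "type": ["exonic", "exonic", "intronic"],
    "chrom": ["chr6", "chr22", "chrY"],   # hypothetical assignments
})

candidates = df[
    (df["fold_change_CD_vs_HC"] > 2.0)   # upregulated in CD
    & (df["p_value"] < 0.05)             # statistically significant
    & (df["type"] == "exonic")           # exonic circRNAs only
    & (df["chrom"] != "chrY")            # exclude chromosome Y
]
print(candidates["circRNA"].tolist())
```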
In the present study, UC patients were enrolled as patient controls. Based on the circRNA/microRNA/mRNA network and bioinformatics analysis of differentially expressed circRNAs, we observed that the predicted microRNAs of hsa_circ_0062142 and hsa_circ_0001666 are involved in several cellular processes, such as epithelial to mesenchymal transition (EMT), Th17 differentiation, and carcinogenesis.
Fibrosis is a common feature of CD. EMT might be a major contributor to the pathogenesis of fibrosis in CD, owing to activated fibroblasts being recruited to the inflamed intestinal tract. 31 Reportedly, hsa_circRNA_102610 is upregulated in PBMCs of CD patients and promotes intestinal epithelial cell proliferation and TGF-β1-induced EMT by sponging hsa-miR-130a-3p. Thus, hsa_circRNA_102610 was inferred to participate in the mechanism of CD. 32 Among the predicted miRNAs of the two DE circRNAs, hsa-miR-199a-5p, hsa-miR-34b-5p, and hsa-miR-23b were previously reported to be associated with EMT. 33 Hsa-miR-34b-5p was found to regulate the mRNAs of EMT-transcription factors and play a role in the metastasis and progression of colorectal cancer. 34 Hsa-miR-23b was identified to suppress EMT and the metastasis of hepatocellular carcinoma. 35 Among the potential target miRNAs, hsa-miR-30c and hsa-miR-20b are deemed autoimmune-deregulated miRNAs due to their positive and negative regulation of Th17 differentiation, respectively. 38 Hsa-miR-326 was identified to promote Th17 differentiation in multiple sclerosis. 39 Furthermore, as a chronic intestinal inflammation, CD carries an increased risk for colorectal cancer. Hsa-miR-574-5p, hsa-miR-133b, hsa-miR-133a-3p, hsa-miR-30b-5p, and hsa-miR-326 in the ceRNA network were shown to play suppressive roles in colorectal cancer by inhibiting cell proliferation, invasion, and migration. [40][41][42][43][44][45] Nevertheless, the present study has some limitations. First, the number of subjects is small, and the results need to be substantiated in a larger cohort. Second, the circRNA/miRNA/mRNA network was only predicted by bioinformatics analysis. Experimental research is required in the future.
In summary, the current study identified comprehensive circRNA expression profiles in colon tissues of CD compared with tissues of UC and healthy controls. We showed that two circRNAs (hsa_circ_0062142 and hsa_circ_0001666) were significantly upregulated in colon tissues of CD compared with those of UC patients and healthy controls.
Bioinformatics analysis indicated that hsa_circ_0062142 and hsa_circ_0001666 might be involved in the progression of CD. Together, these findings suggest that circRNAs play a crucial role in the pathogenesis of CD and might provide potential diagnostic biomarkers and therapeutic targets for CD.
ACKNOWLEDGEMENTS
This work was supported by grants from the National Natural Science Foundation of China.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
DATA AVAILABILITY STATEMENT
The original data of this research are available from the corresponding author on request. | 2021-05-07T06:22:55.912Z | 2021-05-06T00:00:00.000 | {
"year": 2021,
"sha1": "715e4059f63f3cc51c1161480705bdee368d9114",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1002/jcla.23788",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ee6e6432bb8a633d04c94635fbd8bafc478ee28a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14816169 | pes2o/s2orc | v3-fos-license | Susceptibility of Marmosets (Callithrix jacchus) to Monkeypox Virus: A Low Dose Prospective Model for Monkeypox and Smallpox Disease
Although current nonhuman primate models of monkeypox and smallpox diseases provide some insight into disease pathogenesis, they require a high titer inoculum, use an unnatural route of infection, and/or do not accurately represent the entire disease course. This is a concern when developing smallpox and/or monkeypox countermeasures or trying to understand host pathogen relationships. In our studies, we altered half of the test system by using a New World nonhuman primate host, the common marmoset. Based on dose finding studies, we found that marmosets are susceptible to monkeypox virus infection, produce a high viremia, and have pathological features consistent with smallpox and monkeypox in humans. The low dose (48 plaque forming units) required to elicit a uniformly lethal disease and the extended incubation (preclinical signs) are unique features among nonhuman primate models utilizing monkeypox virus. The uniform lethality, hemorrhagic rash, high viremia, decrease in platelets, pathology, and abbreviated acute phase are reflective of early-type hemorrhagic smallpox.
Introduction
Variola virus (VarV) and monkeypox virus (MPXV), the etiological agents of smallpox and monkeypox diseases, respectively, are members of the genus Orthopoxvirus in the family Poxviridae, which consists of large (approximately 200 kb), double-stranded DNA viruses. Through coordinated vaccination efforts, naturally occurring VarV has been eradicated. Cessation of routine vaccination has left the global population with no, or waning, immunity. Reintroduction of wild-type or a modified version of variola virus into a now susceptible population would be socially and economically devastating. Moreover, there is an increased incidence of other orthopoxvirus infections such as monkeypox, cowpox, and vaccinia viruses [1]. Of these, monkeypox virus is especially worrisome because it is more frequently reported and more severe in humans [2,3]. The outbreak in the United States in 2003 heightened our awareness and concerns, as the disease was capable of infiltrating the western hemisphere [4].
Smallpox and monkeypox have a very similar clinical presentation. Both have an incubation time of approximately 10 days, followed by fever and concomitant appearance of rash. Most cases of smallpox were categorized as ordinary-type smallpox, in which a centrifugal exanthem progressed through multiple stages (e.g., macules, papules, vesicles, pustules, scabs). Mortality in ordinary smallpox was about 30%. This is in sharp contrast to the hemorrhagic form (early and late-type) of the disease, which was almost uniformly lethal and lacked progressive skin lesions. Monkeypox is not known to have a true hemorrhagic presentation in humans, although it was noted that a patient from the 2003 United States outbreak had hemorrhage within skin lesions [5]. Monkeypox is less severe than smallpox (> 10% mortality), and clinically resembles discrete ordinary or modified forms of smallpox [6]. Lymphadenopathy is thought to be characteristic of monkeypox, a feature not reported by clinicians who have previously treated smallpox afflicted individuals [6].
To date, there are no Food and Drug Administration (FDA) licensed antiviral therapies to combat a smallpox, or any other poxvirus, outbreak. Humans are the only known host of variola virus. Since exposure of humans to variola virus would be unethical and field studies with surrogates are not feasible, the development and licensure of potential countermeasures will rely on efficacy in animal models through FDA's "Animal Rule" [7,8]. Within these regulations, survival (decrease in mortality) is the generally accepted outcome for assessing the benefit of a biologic or therapeutic in a model system that parallels human disease [7][8][9]. Also, the etiologic agent at a dose and route similar to human exposure should be part of the test system. The limited access to, and restricted host range of, variola virus has precluded the development of such models. In fact, the only variola virus based model to be used for assessing efficacy of a test article is the semi-lethal intravenous macaque model [10,11]. These issues have prompted the use of an appropriate surrogate virus, that is, one that causes smallpox-like disease in humans-namely, monkeypox virus.
Intravenous (IV) infection of macaques with MPXV is the predominant nonhuman primate (NHP) model system for the development of smallpox countermeasures (reviewed by [12]). Shortly after a high dose inoculation, macaques become febrile, develop a characteristic progressive rash and viremia, and subsequently succumb to infection [10,13,14]. Although the model provides a good representation of severe lesional disease, it unfortunately, like variola virus in macaques, entails a large bolus of virus administered through an unnatural route. Other nonhuman primate (macaque) models utilizing MPXV, such as aerosol, intrabronchial, and intratracheal inoculation models, capture the essence of a natural infection, but still require a high dose of virus and sacrifice lesional burden for route of administration [15][16][17][18]. Moreover, all macaque models utilizing MPXV tend to have an abbreviated incubation period. Although macaques are susceptible to infection with monkeypox virus (by definition), attempts to produce a faithful, fulminant disease under conditions that recapitulate human infection (e.g., low dose, natural route) have thus far not been successful in macaques.
It was the goal of our lab to either improve upon the MPXV-macaque intravenous model or to change the host to another, more susceptible NHP species. We were unsuccessful in lowering the dose by administering sucrose gradient-purified MPXV or a fresh preparation of MPXV that incorporated the extracellular form of the virus (Mucker, unpublished). Therefore, we turned to the common marmoset (Callithrix jacchus) as a potential susceptible host. Marmosets are becoming a more common host for modeling viral diseases. Such diseases include: Lassa fever, Eastern Equine Encephalitis, Ebola, Marburg, Dengue, Rift Valley Fever, Hepatitis C, and influenza [19][20][21][22][23][24][25]. The genetic similarity to humans, small size, availability, and relative safety are all attributes contributing to the increasing use of marmosets [26]. Outbreaks of poxvirus in marmoset colonies have been documented [27,28]. An orthopox model utilizing calpox virus was developed concomitantly with our studies and showed that marmosets were indeed susceptible to low levels of a poxvirus by a natural route [29,30]. Calpox virus is a relatively new orthopoxvirus and was originally identified in marmosets in 2006 [28]. Although resembling cowpox virus, calpox virus has yet to be adequately characterized and is not known to cause disease in humans. Monkeypox virus was reported in captive marmosets housed in the Rotterdam Zoo in 1964 [31,32]. Although a few marmosets were exhibiting mild signs of illness, only a single animal (one that succumbed to disease) was confirmed to have monkeypox [31,32].
Using a dose-down strategy, we show that marmosets are highly susceptible to low doses of monkeypox virus via the intravenous route and have an incubation period (pre-clinical signs) more characteristic of monkeypox and smallpox disease.
Virus Preparation and Cells
The monkeypox virus strain Zaire (V79-I-005) stock was previously described [10,14]. The virus was plaque titrated on Vero E6 cells (ATCC CRL-1586). The inoculum was diluted to the initial target dose, and 1:10 dilutions of this stock were made for subsequent inocula. All inocula were subsequently back-titrated.
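As an aside for readers unfamiliar with back-titration, the arithmetic reduces to dividing the mean plaque count by the product of the plated volume and the dilution factor; the sketch below uses invented counts, not the study's titration data.

```python
# Worked example of plaque-titration arithmetic with made-up counts.
# Titer (PFU/mL) = mean plaques / (volume plated x dilution factor).

def titer_pfu_per_ml(plaque_counts, dilution, volume_ml=0.1):
    """Infectious titer from replicate plaque counts at one dilution."""
    mean_plaques = sum(plaque_counts) / len(plaque_counts)
    return mean_plaques / (volume_ml * dilution)

# e.g., duplicate wells of a 10^-6 dilution, 100 uL plated per well
print(f"{titer_pfu_per_ml([42, 38], dilution=1e-6):.2e} PFU/mL")
# mean of 40 plaques -> 4.00e+08 PFU/mL
```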
Hematology and Quantitative PCR
Hematological data were generated on a Beckman Coulter ACT 10 analyzer using whole EDTA blood. Since "in-house" hematological reference values were unavailable, reference values from Johnson-Delaney, 1994 for white blood cells and platelets [33] and from Adams et al., 2008 for lymphocyte numbers [20] were utilized. Extractions were performed using the Qiagen QIAamp DNA Blood Mini Kit according to the manufacturer's instructions using 50 μL of EDTA blood, except that the heat inactivation step (10 min at 56°C) in lysis buffer was extended to 1 hour to ensure inactivation of the virus. Quantitative PCR was performed as previously described [10,34].
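Absolute quantification from Ct values typically relies on a log-linear standard curve; the sketch below illustrates the conversion with a hypothetical slope, intercept, and per-reaction blood-equivalent volume, which are not the calibration values of the cited assay.

```python
# Sketch of converting a qPCR Ct value to genome copies per mL of blood,
# assuming a log-linear standard curve: Ct = slope * log10(copies) + intercept.
# All parameter values here are hypothetical.

def genomes_per_ml(ct, slope=-3.33, intercept=40.0, ml_per_reaction=0.005):
    log10_copies = (ct - intercept) / slope   # copies per PCR reaction
    return (10 ** log10_copies) / ml_per_reaction

# Hypothetical Ct of 23 -> ~2.5e7 genomes/mL under these assumed parameters
print(f"{genomes_per_ml(23.0):.2e} genomes/mL")
```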
Plaque Reduction Neutralization Assays (PRNT) and Blood/Serum Titration
Plaque reduction neutralization assays (varying serum/plasma, constant virus) were performed on all animals pre-infection and on the lowest dosing group post infection. Briefly, serum or plasma samples were diluted 1:10 and heat-inactivated for 30 minutes in a 56°C waterbath. Serial 1:4 dilutions were performed in EMEM, and monkeypox virus was added for a target of 100 PFU/well. The samples were incubated at 4°C overnight.
Both the neutralization assay samples and the blood/blood component samples were titrated in a similar manner. One hundred microliters of sample was adsorbed to decanted Vero E6 cells for one hour. A liquid overlay of EMEM containing 2% heat-inactivated serum was added to each well and incubated at 37°C for 5 days. Crystal violet was used to stain and visualize plaques.
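For the constant-virus, varying-serum format described above, a 50% neutralization endpoint (PRNT50) can be interpolated from the plaque reduction observed across the dilution series; the sketch below uses invented plaque counts and assumes simple log-linear interpolation.

```python
# Sketch of PRNT50 estimation from a serum dilution series; counts invented.
import numpy as np

def prnt50(dilutions, plaques, virus_only_plaques):
    """Interpolate the reciprocal serum dilution giving 50% plaque reduction."""
    reduction = 1.0 - np.asarray(plaques, dtype=float) / virus_only_plaques
    # log-linear interpolation of reduction vs log10(dilution factor);
    # arrays are reversed so reduction is increasing, as np.interp requires
    return 10 ** np.interp(0.5, reduction[::-1], np.log10(dilutions)[::-1])

# serum diluted 1:10 then serial 1:4 steps, ~100 PFU/well virus control
dilutions = np.array([10, 40, 160, 640])
plaques = np.array([12, 45, 80, 95])        # plaques remaining per well
print(f"PRNT50 titer ~ 1:{prnt50(dilutions, plaques, 100):.0f}")
```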
Nonhuman Primates and Inoculation
Eighteen adult male marmosets (Callithrix jacchus) weighing greater than 300 grams were screened for neutralization activity against monkeypox virus prior to infection. The inoculum was prepared as described in the "Virus Preparation" section, and 300 μL was loaded into 1 mL syringes. Groups of three marmosets were infected intravenously via the tail vein, or the saphenous vein if infection via the tail vein was unachievable. Blood from the femoral vein was acquired within two minutes of infection. Phlebotomy and physical examinations were performed every three days post exposure, except where noted. Rectal temperatures were taken during the first iteration, the latter half of the second iteration, and subsequent iterations. Microchips (BMDS) were used to identify animals and ascertain temperature during the first half of the second iteration, but this was discontinued due to equipment failure. Reference temperature ranges from Johnson-Delaney, 1994 [33] were utilized and are provided.
Necropsy
A necropsy was performed on all animals, either as soon as death occurred from infection or after humane euthanasia of terminally ill or moribund animals. All tissues were immersion-fixed in 10% neutral buffered formalin for a minimum of 21 days, according to Institute protocol.
Histology and Immunohistochemistry
Formalin-fixed tissues for histologic examination were trimmed, processed, and embedded in paraffin according to established protocols [35]. Histology sections were cut at 5 μm, mounted on glass slides, and stained with hematoxylin and eosin (H&E). Immunohistochemical staining was performed on replicate tissue sections using an Envision + kit (DAKO, Carpinteria, CA). Normal splenic tissue (USAMRIID) served as the negative tissue control; the positive control was spleen from a known monkeypox virus-infected nonhuman primate (USAMRIID); and normal rabbit serum (USAMRIID) was used as the negative reagent control. Briefly, sections were deparaffinized in xyless, rehydrated in graded ethanol, and endogenous peroxidase activity was quenched in a 0.3% hydrogen peroxide/methanol solution for 30 min at room temperature. Slides were washed in phosphate buffered saline (PBS), then sections were incubated in the primary antibody, a non-commercial rabbit polyclonal antibody (USAMRIID) against vaccinia virus, diluted 1:3500 for 60 minutes at room temperature. Sections were washed in PBS and incubated for 30 min with Envision + rabbit secondary reagent (horseradish peroxidase-labeled polymer) at room temperature. Peroxidase activity was developed with 3,3'-diaminobenzidine (DAB), counterstained with hematoxylin, dehydrated, cleared with xyless, then coverslipped.
Electron Microscopy
Selected sections of formalin-fixed liver were trimmed for electron microscopy, post-fixed in a mixed aldehyde fixative followed by osmium tetroxide, contrasted in ethanolic uranyl acetate, dehydrated in an ascending series of ethanol, infiltrated in a mixture of propylene oxide and resin, and embedded into EMBed 812 resin. Thin sections (~80 nm) were mounted on copper EM support grids and counterstained with uranyl and lead salts. Samples were examined on a JEOL 1011 transmission electron microscope at 80 kV. All supplies for electron microscopy were from Electron Microscopy Sciences (Hatfield, PA) unless otherwise noted.
Ethics Statement
Research was conducted in compliance with the Animal Welfare Act, PHS Policy, and other Federal statutes and regulations relating to animals and experiments involving animals. The facility where this research was conducted is accredited by the Association for Assessment and Accreditation of Laboratory Animal Care, International and adheres to principles stated in the Guide for the Care and Use of Laboratory Animals, National Research Council, 2011. All animal experiments were approved by USAMRIID's Institutional Animal Care and Use Committee.
Animals were housed in individual metal cages meeting current standards for the duration of the housing period in biocontainment level 3. The room environment was centrally controlled by an HVAC system that maintained room humidity and temperature. Animals were provided pelleted commercially available feed, and potable water was provided ad libitum from an automatic watering system. In addition, animals received supplemental foods, treats, and fruits daily. Animals were provided manipulanda (toys, metal mirrors), foraging devices, treats, and fruits as enrichment. Treats and extra fruits were increased while in biocontainment. Euthanasia was performed when the animal(s) met the criteria for euthanasia using a score sheet for intervention or when found moribund. The scoring system was based on recumbency, prostration, dyspnea, and responsiveness. Animals were observed and scored at least twice daily by trained personnel. In addition, husbandry and general assessments were conducted at least once daily by Veterinary Division caretakers. Animals requiring euthanasia were anesthetized and subsequently euthanized with a pentobarbital-based solution following the AVMA Guidelines on Euthanasia. Of the experimentally exposed animals, eleven met the criteria and six succumbed to poxvirus-related disease. The six animals that succumbed exhibited signs of disease but did not meet criteria for euthanasia. Even with multiple checks per day, it was not possible to implement early endpoint euthanasia for all animals on this study. In addition, one animal succumbed to an experimentally unrelated condition. Anesthesia was also provided prior to performing phlebotomy. Analgesics were withheld to avoid any known and/or potential alterations of the disease process. The scientific justification for withholding analgesia was approved by the IACUC, as required by applicable laws and regulations.
Disease Development
Groups of three marmosets (a total of eighteen) were intravenously exposed to six decreasing doses of MPXV. We chose a starting target dose of 5 × 10^7 PFU, as this dose is commonly used in the intravenous macaque model [12]. Animals were challenged with either 2.4 × 10^7, 9.5 × 10^5, 7.8 × 10^4, 5.0 × 10^3, 510, or 48 PFU, as established by titration of the inocula. All animals developed a similar disease course and died or were euthanized by day 15 post-infection (Fig 1). One animal died from causes unrelated to experimental infection, and data pertaining to this animal were excluded.
Fig 1. Survival curve for marmosets intravenously exposed to decreasing doses of monkeypox virus. Groups of animals (n = 3) were exposed to 10-fold reductions in viral titer in six different experiments. In order to proceed to the next lower dose, 100% of the animals had to succumb. Due to the severity of the disease, one dose (5 × 10^6 PFU) was not evaluated.
The major differences between the doses were the temporal onset of disease and the phenotypic presentation of rash (Figs 2 and 3). These differences can be generally categorized into three groups: (1) animals succumbing relatively quickly, characterized by generalized hemorrhagic manifestations; (2) animals that survived longer and had more focal/discrete epidermal lesions; and (3) animals that were intermediate (between 1 and 2) for both survival and rash presentation. More specifically, animals in the highest dose group had definable clinical signs on day two: decreased activity, anterior abdominal matting of the haircoat, and an unkempt appearance. By day three, a cutaneous rash (generalized anterior erythema and few petechiae), significant lymphadenopathy, and pronounced lethargy were observed in two of three animals. These clinical signs became more severe, with animals becoming somnolent before succumbing to the disease by day 9. In fact, the disease was so severe that we opted to skip a challenge dose. In contrast, lymphadenopathy and rash in the lowest dose group were not observed until at least day 9, and the earliest time point for euthanasia in this group was on day 14. The lesions in this group were much more discrete and were composed of flat, well-defined lesions that appeared dark red but never progressed through the typical stages of classic orthopoxvirus disease (Fig 2). The disease that manifested in animals infected with 9.5 × 10^5 PFU or 510 PFU resembled that of their higher- or lower-dose counterparts, respectively. Animals in the remaining two dosing groups (7.8 × 10^4 and 5.0 × 10^3 PFU) had slightly mixed presentations, with generalized hemorrhagic manifestations and few focal lesions. The appearance of the rash and lymphadenopathy relative to the day the animals succumbed or were euthanized tended to be more animal specific than dose specific. For instance, there was variation from 0-10 days between onset of rash/lymphadenopathy and death (Fig 3).
Viral Load and Hematology
To test whether marmosets exhibit a systemic and circulating infection, a feature of smallpox disease and a reported correlate of infection in other nonhuman primate models [10,14], we tested blood for the presence of viral genomic material using an established QPCR assay [34]. Animals were bled on days −3, 0 (post infection), 3, 6, 9, 12, and 15 post infection, and QPCR and hematology were performed on these samples. Quantifiable detection of circulating viral genomes was dose dependent and ranged in individual animals from days 3 to 9 and between groups (mean) from days 3 to 8 (Figs 3, 4A and 4B). Animals in all groups produced a striking increase in circulating viral genomes (Fig 4B). This change ranged from about 3 to 5 logs, with the greatest increase generated by the lowest two challenged groups, where no genomes were quantifiable until at least day 6 in one of three animals (Fig 4). The post infection bleeds (day 0), which were acquired less than 2 minutes after inoculation, suggest some variation in dosing during inoculation (Fig 4B). Affirmation of circulating infectious virus was obtained by plaque titration of both serum and whole EDTA blood from the lowest dosing group (Fig 4C).
Mobilization of white blood cells is a hallmark of most viral infections. Like genomic viremia, rash, and lymphadenopathy, changes in the white blood cell counts were temporal, depending on dose (Fig 5A). The greatest increases occurred on samples obtained on days 6 and 9 for most animals. In the lowest dose group, all three animals had their largest increase on day 12 (Fig 5A). Changes in lymphocyte number followed the same trend, increasing over time (Fig 5B).
In most animals tested (16 of 17), there was a marked decrease in the number of platelets in the last 3 to 6 days of the disease course (Fig 5C). These changes appear to be dose dependent: animals in the highest three dose groups (8 of 9 animals) dropped below the reference range [33] between days 3 and 6 post exposure, with 7 of 8 of these animals dropping on day 6, whereas 4 of 5 animals in the 5 × 10^3 PFU and 510 PFU groups and 2 of 3 animals in the 48 PFU group crossed this threshold on days 9 and 12 post exposure, respectively. Animal #17 (48 PFU) had low values on day 0 (85,000 platelets/μL) and day 3 (69,000 platelets/μL) post exposure. Values for this animal rebounded (692,000 platelets/μL) to slightly above baseline levels on day 6 and, subsequently, dropped below the reference value on day 14. It is difficult to say whether this fluctuation was a legitimate finding in this animal or if the two early samples were falsely lowered (e.g., due to pseudo-thrombocytopenia and platelet clumping).
Fig 5. Temporal evaluation of white blood cells (WBC) and platelets in marmosets exposed to decreasing doses of monkeypox virus. Values were obtained from EDTA whole blood using a Beckman Coulter ACT 10 Hematology Analyzer. Absolute values by group are presented for WBC (A), lymphocytes (B), and platelets (C). Note the increase and temporal shift in total WBC and lymphocytes for the lower dose groups (A, B). Decreasing platelet numbers were also apparent. Reference ranges are from [33] (WBC and platelets) and [20] (lymphocyte numbers).
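The day-of-threshold-crossing summaries above can be derived mechanically; the sketch below illustrates the logic with a hypothetical lower reference bound (the published bound from [33] is not reproduced here) and invented counts for everything except animal #17's first three time points.

```python
# Sketch of flagging the first sampling day below a platelet reference bound.
PLATELET_LOW = 130  # hypothetical lower reference bound, x10^3 platelets/uL

def first_day_below(days, counts, threshold=PLATELET_LOW):
    """First sampling day on which the count falls below the threshold."""
    for day, count in zip(days, counts):
        if count < threshold:
            return day
    return None

days = [0, 3, 6, 9, 12, 14]
# Animal #17's day 0/3/6 values are from the text; later points are invented.
series = {"hypothetical high-dose animal": [450, 300, 90, 40, 20, 10],
          "animal #17 (48 PFU)": [85, 69, 692, 400, 200, 110]}
for animal, counts in series.items():
    print(animal, "-> first day below:", first_day_below(days, counts))
```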
Weight and Temperature
Smallpox is known to cause a fever immediately preceding the onset of lesions. To assess whether marmosets responded in a similar manner, rectal temperatures were captured when hands-on procedures were conducted (i.e., under anesthesia). After the first iteration (highest dose group), we attempted to collect more temperature data (with and without anesthesia) by implanting microchips (animals #4, #5, and #6). Unfortunately, equipment failure precluded this activity. Temperature data had a similar trend in all dosing groups. Although no discernible fever was detected (with the possible exception of animal #2 and animal #4 on days 2 and 4, respectively), which is probably due to the basal variability in marmosets and the infrequency of data capture, animals became hypothermic towards the final stages of disease (Fig 6A). On the other hand, weight tended to decrease slightly and rebound, approaching or surpassing the pre-infection weight (Fig 6B). This is most likely due to an inability of the diseased animal to adequately maintain fluid homeostasis.
Fig 6. Evaluation of weights and temperature in marmosets exposed to decreasing doses of monkeypox virus. Absolute values of temperatures (A) and weight (B) are provided. Temperatures were rectally acquired except for the 9.5 × 10^5 PFU dosing group, in which subcutaneous implants (BMDS) were used until failure of the equipment (day 6 post exposure). Data for groups/individuals are color coded to match previous graphs, and normal ranges from [33] (A) are provided as a shaded box.
Plaque Reduction Neutralization
Samples from the low dose group were assessed for their ability to neutralize monkeypox virus on post infection days 9 and 12 and on terminal days (days 14 or 15). In terms of plaque counts, plasma or serum from these samples did not neutralize monkeypox virus to an appreciable degree (data not shown). It was noted that plasma/serum collected on days 9 and 12 seemed to reduce the presentation of comets (secondary plaque formation) relative to the concentration of heat-inactivated serum present (Fig 7C). Comets are formed by the release of the extracellular form of poxviruses (extracellular enveloped virions, or EEV) and can be inhibited by antibody specific for the extracellular form, with or without complement [36][37][38][39][40]. Samples from the same animals on days 14 or 15 were similar to controls and showed no apparent reduction in comet formation (data not shown).
Plaque titration of the day 14 and day 15 serum alone produced a more focused plaque morphology relative to the virus control (Fig 7A and 7B), suggesting an anti-EEV response. Unlike samples utilized for the neutralization assay, titration samples were not heated to inactivate complement (as this inactivates the virus). In this case, any role of complement in reducing comet formation cannot be ruled out, as the heat-inactivated samples did not reduce comets for day 14 and 15 samples. It is possible that antibody neutralization (from the day 14 and 15 samples) is complement dependent or that, because of the high titer of virus present in these samples, the antibody was not in excess and could not inhibit comets when additional virus was added for the neutralization assay. Further evaluation is needed to confirm the presence of EEV-specific antibody in these samples.
Gross Pathology
Gross pathologic findings for all animals were similar, regardless of dose group, with the exception of cutaneous lesions. All 18 animals had an erythematous rash (Fig 2) in the anterior chest, abdominal and inguinal regions, but there were varying degrees of petechiae and ecchymoses on the face, chin, chest, abdomen, axillary and inguinal areas, forearms, and scrotum, depending on dose group. There were discrete areas of hemorrhage (petechiae and ecchymoses) rather than a generalized erythematous rash in animals that survived longer (i.e. lower dose groups).
Histology, Immunohistochemistry, and Ultrastructure
Histopathologic lesions were not dose-dependent. All animals had lesions attributable to MPXV exposure consistently observed in the lymph nodes, spleen, liver, adrenal glands, and bone marrow (S1 Table). Lesions in the lungs and skin were also seen across all dose ranges, but varied more in severity. There was lymphoid depletion (18/18) and necrosis (14/18) within one or more lymph nodes (Fig 9A). In the spleen there was significant white pulp depletion with areas of necrosis (17/18; Fig 9C).
There was hepatocellular degeneration and loss (17/18), with most affected cells containing prominent eosinophilic intracytoplasmic inclusions (Fig 10A). There were significant bone marrow alterations, with depletion of white blood cell precursors, often with areas of necrosis (17/18). The only exception was the animal found dead on day 4 of causes unrelated to experimental infection, which had no significant splenic, hepatic, or bone marrow pathology, likely because it succumbed quickly relative to exposure. There were varying degrees of pathology in the skin and mucous membranes in all 18 animals, ranging from mild epithelial hyperplasia with vacuolar degeneration and multinucleated syncytial cells to vesicular and hemorrhagic dermatitis with necrosis (Fig 11A). In all 18 animals there were lesions in the adrenal glands, ranging from mild vacuolar degeneration to necrosis (Fig 12A) and hemorrhage. In some areas of degeneration there were prominent eosinophilic intracytoplasmic inclusions within adrenal cortical cells. Other lesions present, but less consistent across dose ranges, included hemorrhage and edema within the lungs, heart, gastrointestinal tract, genitourinary system, and meninges.
All histopathologic lesions attributed to MPXV were associated with varying amounts of immunoreactivity, with antigen identified predominantly in the cytoplasm of epithelial cells and resident/infiltrating macrophages (Figs 9B, 9D, 11B, and 12B). Hepatocytes were strongly immunoreactive, with antigen concentrated in the intracytoplasmic inclusions (Fig 10B). There was immunoreactivity in tissues with no morphologic alterations as well. This was concentrated in the basal aspects of epithelial surfaces and within endothelial cells, fibroblasts, and histiocytes within subepithelial and submucosal tissues and connective tissue surrounding lymph nodes and other organs.
Ultrastructural examination of hepatocytes revealed that the inclusions previously noted were comprised of viroplasm and virions (Fig 10C). Virions varied in their state of maturation, from membrane shells containing viroplasm to mature virions in which lateral bodies could be observed. The inclusions averaged approximately 5 x 3 microns in size, though this was highly variable. Lamellar structures were occasionally seen within the inclusions, as were cellular organelles including endoplasmic reticulum and free ribosomes (Fig 10D). At least one inclusion incorporated material consistent with that described by Zaucha [41], which is thought to be used to generate viral "shells".
Discussion
Here we provide evidence of the susceptibility of marmosets to monkeypox virus and propose its utility as a low dose model of smallpox/monkeypox disease. Eighteen animals were intravenously inoculated with descending doses of monkeypox virus Zaire, and all but one succumbed to a hemorrhagic-like poxviral disease by day 15. One animal died on day 4 due to causes unrelated to a poxvirus insult. Onset of clinical disease varied in a dose-dependent manner, with a delay in onset of clinical signs in the lowest dosed (48 PFU) animals until day 9 post infection. Theoretically, lower doses of monkeypox virus could have been tested. The limited ability to accurately quantify infectious virus in the inocula, coupled with the probability of animals not receiving any virus at such a low dose and the reduced likelihood of reproducing a similar outcome, are just a few reasons lower doses were not evaluated. This is the first report of a nonhuman primate model utilizing monkeypox virus with such a prolonged incubation, an incubation period similar to that of individuals afflicted with smallpox. Therefore, this model could help alleviate two of the major critiques concerning other test systems: (1) the high dose required to achieve a satisfactory outcome and (2) the rapid onset of disease relative to exposure. Since the intravenous exposure eclipses the primary viremia and prodromal phase of the disease, the onset of clinical signs is generally much more rapid in other nonhuman primate models of orthopoxviral disease.
The severity of disease, based on lesion presentation, was dose dependent as well. Generalized erythema present in animals at higher doses became more focal/discrete hemorrhages at lower doses, and one animal (#18) developed short-lived macules/papules. Shortly after the onset of rash (0-6 days), the animals succumbed to disease. This lesion phenotype following the prolonged incubation period, in conjunction with the observed thrombocytopenia and gross pathology, is reminiscent of what has been reported in early-type hemorrhagic cases of human smallpox [42]. The high mortality of monkeypox in marmosets also supports this supposition, as only hemorrhagic cases of human smallpox were comparably lethal.
The appearance of circulating genomic material (viremia) prior to onset of clinical manifestations also reflects what has been demonstrated in smallpox reports for early-type hemorrhagic smallpox (for review, see [43]). Relative to macaque nonhuman primate models utilizing either monkeypox or variola viruses and the same PCR assay, marmosets have a greater genomic viremia, with approximately 1-2 logs more genomes than detected in intravenously infected macaques [10,14,44,45] and up to two logs more than in those exposed by the respiratory route [17]. Kramski et al. (2010) intravenously inoculated marmosets with calpox virus, an isolate from a poxvirus outbreak in a colony of marmosets in Germany, at dosing similar to this report (1.25 × 10^7 and 1 × 10^4 PFU) that proved fatal (Kramski et al., 2010). In contrast to our study, which clinically produced mainly hemorrhagic-type disease at these doses, Kramski et al. (2010) reported defined papular lesions (2-3 mm) and Matz-Rensing et al. (2012) reported papulovesicular lesions, depending on survival time [29,30]. In our study, animals developed petechiae and ecchymoses, and no vesicles, at the comparable lower dose (approximately 10^4 PFU), suggesting that marmosets develop a more severe disease and are likely more susceptible to monkeypox virus than to calpox virus. It is possible that more lesions characteristic of ordinary-type smallpox would be observed if an even lower dose of MPXV were used (e.g., <10 PFU).
Other clinical manifestations, or lack thereof, sharply contrast with those reported by Kramski [29] and Matz-Rensing [30]. In our study, animals exhibited signs of illness, such as a greasy and matted haircoat, lymphadenopathy, and decreased activity, in some instances six days (and as little as two) before succumbing to disease. Kramski et al. (2010) did not observe clinical signs until one or two days prior to death. Another novel observation from our study is that weights in some animals increased prior to death. The increase is more noticeable in the highest challenge group, where subcutaneous edema was present in 2/3 animals. This phenomenon would suggest abrogated absorption and fluid imbalance as the cause of weight gain prior to death.
Monkeypox virus induced a genomic load similar in magnitude to that induced by intravenous calpox virus (Kramski et al., 2010). Kramski et al. (2010) reported a range of 10^6 to 10^9 genome equivalents per milliliter, whereas in our study 4 of 6 marmosets had greater than 10^9 genomes per milliliter, with the lowest and highest levels being 5.8 × 10^7 and 8.9 × 10^9 genomes per milliliter, respectively. It is possible that these differences are due to assay-to-assay variation and/or that the virus had more time (6-10 versus 4-7 days) in which to replicate. It is important to note that blood could not be obtained for all animals at the time of death and that the maximum genome load reported for these animals may be an underestimate. Finally, high blood genome levels coincide with an increase in white blood cells. This suggests that a majority of the virus is most likely cell-associated.
The susceptibility of marmosets to both calpox virus and monkeypox virus implies a favorable virus-host interaction. It is tempting to assume that marmosets are immunologically deficient relative to all poxviruses, but there is no empirical evidence to support this notion. On the contrary, evidence actually supports that the outcome is more likely poxvirus specific. The outbreak reported by Gough et al., 1982 [27] is a good case in point. In this report, a Tanapox-like virus infected multiple New World nonhuman primates, including marmosets. All the animals cleared the poxvirus insult, but 6 animals succumbed to a secondary bacterial infection. In contrast, when marmosets are experimentally exposed to MPXV or calpox virus, death is almost certain. The difference in virulence between poxviruses in a given host is not surprising, as poxviruses have adapted to certain hosts, such as the restriction of variola and camelpox viruses to humans and camelids, respectively, and this is clearly illustrated by the susceptibility of macaques to MPXV relative to VARV. Although the question of the marmoset as a favorable host for variola virus modeling can only be resolved empirically, the aforementioned relationship becomes an advantage when contemplating the use of marmosets for variola virus infection; that is, a less favorable virus-host relationship between variola virus and marmosets (less virulent relative to monkeypox virus in marmosets) could allow more immune control and produce more clinical attributes of ordinary smallpox.
Classic poxviral lesions (enanthema and exanthema) were not features of MPXV disease in marmosets in the current study, in contrast to calpox in marmosets, where papulovesicular lesions were described [30]. Cutaneous lesions in marmosets in our study were characterized by petechiae, ecchymoses, and hemorrhage, similar to lesions described in the hemorrhagic form of smallpox in humans [42]. Lymphoid depletion and necrosis and hematopoietic necrosis were also observed in our study, and are features reported in hemorrhagic smallpox in macaques and humans [46]. In contrast, lymphoid hyperplasia was reported in the marmoset calpox model, suggesting an immune response in the calpox model but an ineffective immune response in the marmoset MPXV model. Supporting this notion, blood samples from animals that survived the longest (lowest dose group) failed to effectively neutralize monkeypox virus. It was noted that the samples qualitatively exhibited an effect on virus spread (comets). Further studies are required to elucidate the true neutralizing effect, as the assays were performed in the absence of complement, utilized only the intracellular mature virion (IMV) form, and were without consideration for inherent viral antigen (virus antigen present from the disease). Hepatocellular degeneration with single-cell necrosis and intracytoplasmic inclusion bodies were features common to both the marmoset model of calpox and that of MPXV.
Much like the VARV macaque model, there was widespread immunohistochemical staining of macrophages in various tissues in the MPXV marmoset model, suggesting that this cell type is fundamental in the pathogenesis of the disease. Other antigen-positive tissues common to both the hemorrhagic variola macaque model and the MPXV marmoset model include epithelial, testicular, adrenocortical, and hepatic tissues, as well as endothelial cells. One key difference, however, is that secondary bacterial infection is thought to potentiate hemorrhagic smallpox in macaques [46], but secondary bacterial infection was not observed in our study with marmosets. Our findings reflect those of Bras (1952) in humans, who found that in a majority of 177 fatal smallpox cases, pathology revealed an absence of bacteria [42]. It should be noted that these patients received antibiotics and yet there were still cases of hemorrhagic disease.
Since there are no active cases of smallpox, and because of ethical considerations, new medical countermeasures must be tested and approved utilizing animal models via what is commonly referred to as the "Animal Rule" [7,8]. The Food and Drug Administration has released draft guidelines to help meet requirements for therapeutic licensure [9]. Among the recommendations within this document are the utilization of the etiologic agent at a dose reflective of human disease, producing an appropriate disease course, in a susceptible host. Nonhuman primate models utilizing variola virus or monkeypox virus have yet to meet these criteria. In this report, we provide evidence for the susceptibility of marmosets to monkeypox virus, with a prolonged incubation period more indicative of smallpox, utilizing one of the agents (monkeypox virus) against which therapeutics are being developed, increasing the confidence that a therapeutic will be efficacious in a real-world scenario. In essence, the monkeypox marmoset model complements the intravenous macaque model because it adds "low dose" and "extended incubation" to the test system. Therefore, the MPXV/marmoset model has the potential to bring the scientific community one step closer to fulfilling the "Animal Rule".
As the MPXV/marmoset model is in the infancy of its development, further studies are required to optimize and produce a pragmatic model. These experiments include defining an optimized route of inoculation (respiratory route), characterizing the pathogenesis and host responses, and assessing the predictive nature of the model. Additionally, model development with marmosets should include trials with variola virus. The importance of a low-dose variola model cannot be overstated. This feat has yet to be accomplished and could accelerate the approval of prospective therapeutics.
Supporting Information S1 Table. Summary of pathological features of marmosets exposed to monkeypox virus. (XLSX) | 2016-05-12T22:15:10.714Z | 2015-07-06T00:00:00.000 | {
"year": 2015,
"sha1": "eaa06f6d40468a5627cb277c4bff46a8a7a726b5",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0131742&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eaa06f6d40468a5627cb277c4bff46a8a7a726b5",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
80094195 | pes2o/s2orc | v3-fos-license | Comparative RNA-Sequencing Analysis Benefits a Pediatric Patient With Relapsed Cancer
Clinical detection of sequence and structural variants in known cancer genes points to viable treatment options for a minority of children with cancer.1 To increase the number of children who benefit from genomic profiling, gene expression information must be considered alongside mutations.2,3 Although high expression has been used to nominate drug targets for pediatric cancers,4,5 its utility has not been evaluated in a systematic way.6 We describe a child with a rare sarcoma that was profiled with whole-genome and RNA sequencing (RNA-Seq) techniques. Although the tumor did not harbor DNA mutations targetable by available therapies, incorporation of gene expression information derived from RNA-Seq analysis led to a therapy that produced a significant clinical response. We use this case to describe a framework for inclusion of gene expression into the clinical genomic evaluation of pediatric tumors.
CASE SUMMARY
Patient 1 was diagnosed at 8 years of age with a left tentorial-based CNS sarcoma after a 2-week history of nausea, lethargy, and diplopia. Clinical workup confirmed that the tumor was primary to the brain (Figs 1A and 1B). Histology revealed a mitotically active, epithelioid-to-spindled cell tumor in patternless sheets, interrupted by thick fibrous bands and foci of necrosis (Figs 1C and 1D). Immunohistochemistry revealed diffuse positivity for vimentin, desmin, neuron-specific enolase, epithelial membrane antigen, and CD99 (Figs 1E to 1H). Focal immunohistochemical positivity was observed for pan-cytokeratin (AE1/AE3) and synaptophysin. The tumor was negative for glial fibrillary acidic protein (GFAP), Wilms tumor 1 (WT1), myo-D1, myogenin, smooth muscle actin, nonphosphorylated and phosphorylated neurofilament protein, CD34, CD31, HMB-45, S-100, leukocyte common antigen, and BAF47/INI-1 (retained nuclear positivity). The Ki67 proliferative index was 9%. A diagnosis of desmoplastic small round cell tumor (DSRCT) was favored initially. 7 Because EWSR1 breakapart fluorescence in situ hybridization confirmed an EWSR1 rearrangement but concomitant WT1 breakapart fluorescence in situ hybridization was negative, the molecular criterion for DSRCT was not met, and a final diagnosis of poorly differentiated sarcoma, not otherwise specified, was rendered. The patient received six cycles of induction chemotherapy (ifosfamide, carboplatin, and etoposide), followed by autologous stem-cell transplantation with a high-dose preparative regimen of carboplatin, thiotepa, and etoposide, as well as 54 Gy of focal radiation to the location of the original tumor. After a 2-year remission, the tumor recurred with numerous pulmonary lesions in all lobes. The histologic characteristics of the metastasis were identical to the primary tumor. The patient enrolled in the Personalized OncoGenomics (POG) 3 study, which offers whole-genome sequencing (WGS) and transcriptome sequencing and analysis to identify drivers and potential therapeutic options of relapsed solid tumors for children and adults in British Columbia.
Biopsy material from a lung metastasis was characterized with WGS and RNA-Seq, and peripheral blood was characterized with WGS. 3 The analysis of the sequencing data revealed an EWSR1-ATF1 gene fusion (Appendix Fig A1, online only); although this finding is most suggestive of a clear cell sarcoma, no immunohistochemical support for this diagnosis could be established. 8,9 The POG team identified three somatic variants of unclear therapeutic significance: PDGFRA p.V299F, PRKCB p.D341N, and SVIL p.L1374R. No germline single-nucleotide variants with established cancer relevance were detected. 10 Although no therapy was available to target the EWSR1-ATF1 fusion protein directly, the POG RNA-Seq-derived gene expression analysis identified high expression of the downstream genes IL6 and JAK1. The finding of JAK1 overexpression was corroborated by comparative RNA-Seq analysis at the University of California Santa Cruz.
COMPARATIVE RNA-SEQ ANALYSIS
In accordance with the US Food and Drug Administration guidelines, 6 we focused on relative rather than absolute gene expression levels and sought to develop a framework for the analysis of RNA-Seq data from patients. We compared the RNA-Seq-derived tumor gene expression profile of patient 1 with similarly derived profiles of 10,668 samples that represented 38 pediatric and adult tumor types studied by The Cancer Genome Atlas (TCGA) and Therapeutically Applicable Research to Generate Effective Treatments (TARGET). 11,12 RNA-Seq reads from different laboratories were reanalyzed with a single computational pipeline to reduce batch effects. 13 We searched for tumors in this homogeneously processed compendium whose expression profiles were similar to those of patient 1 by using TumorMap. 14 The tumor gene expression profile for patient 1 resembled lung cancers (Fig 2A), the site of the metastasis. The metastatic sample contained 76% tumor cells, as estimated by a POG pathologist, which suggested that most of the gene expression information came from the tumor cells. Lung adenocarcinoma (LUAD) samples formed four groups in the TumorMap (Fig 2B), and the sarcoma of patient 1 clustered with the 354 LUAD tumors of the terminal respiratory unit and proximal-inflammatory molecular subtype (Fig 2C), associated with the activation of receptor tyrosine kinases (RTKs). 15 To define the transcriptional programs that drove placement of the patient's tumor with the lung cancers, we conducted Gene Set Enrichment Analysis 16 with genes differentially expressed between the LUAD cluster that contained the tumor of patient 1 (n = 354) and the remaining samples in the compendium (n = 10,314); we also repeated this analysis and compared the cluster for patient 1 with the remaining LUAD samples (n = 529). Both analyses revealed overexpression of members of the IL6/JAK/STAT3 signaling pathway (Appendix Fig A2), which suggests that the activation of shared signaling programs likely contributed to the tumor transcriptional phenotype of patient 1, in addition to the site of the metastatic sample. We next searched for genes that were significantly overexpressed in the patient's tumor compared with the whole compendium, and compared with only the sarcomas, by using outlier statistics 3,17 (Data Supplement). To explicitly subtract the effect of lung cell expression, we also searched for outlier genes compared with the 529 LUAD tumors; 787 genes, including the druggable targets JAK1, ALK, NTRK1, and CCND1, emerged as overexpression outliers in all analyses (Data Supplement).
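A minimal sketch of the over-expression outlier test follows, assuming log-scale expression and the interquartile-range rule noted in the Appendix (a gene is flagged when the patient's value exceeds Q3 + 2.0 × IQR of the reference cohort); the data here are simulated, not from the study's compendium.

```python
# Sketch of a pan-cancer over-expression outlier test on simulated data.
import numpy as np

def outlier_genes(patient_expr, cohort_expr, gene_names, iqr_mult=2.0):
    """patient_expr: (n_genes,); cohort_expr: (n_samples, n_genes)."""
    q1 = np.percentile(cohort_expr, 25, axis=0)
    q3 = np.percentile(cohort_expr, 75, axis=0)
    threshold = q3 + iqr_mult * (q3 - q1)   # upper outlier bound per gene
    return [g for g, x, t in zip(gene_names, patient_expr, threshold) if x > t]

rng = np.random.default_rng(1)
cohort = rng.normal(5.0, 1.0, size=(500, 3))   # simulated reference cohort
patient = np.array([5.2, 9.5, 10.1])           # two simulated outliers
print(outlier_genes(patient, cohort, ["GENE_X", "JAK1", "ALK"]))
# flags the two genes well above the cohort's upper bound
```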
MOLECULAR RATIONALE FOR CLINICAL DECISION MAKING
We speculated that the activation of RTKs contributed to JAK overexpression in patient 1's tumor. 18,19 Increased expression of ATF1 and its transcriptional targets, TOP2A, CALCA, and IL6, was observed, presumably as a result of constitutive transcriptional activation by EWSR1-ATF1 (Fig 3A; Appendix Fig A3). ATF1 can activate the transcription of JAK1, 8 providing another potential mechanism for the observed high expression of the IL6/JAK/STAT3 pathway. Consolidating the fusion-based and the RTK-based mechanisms of IL6/JAK/STAT3 activation, we reconstructed a candidate pathway driving patient 1's cancer (Fig 3; Appendix Fig A4). The POG molecular tumor board suggested targeting either JAK (with ruxolitinib) or ALK (with crizotinib). A decision was made to use ruxolitinib given (1) the overexpression of ATF1 target genes, (2) the overexpression of JAK1, and (3) the available pediatric dosing information. 20 In addition, ruxolitinib was favored over crizotinib because it targets downstream of both EWSR1-ATF1 and the overexpressed RTKs, whereas crizotinib only targets the RTKs. We recognize that a combination therapy targeting both ALK and JAK may have been appropriate on the basis of the molecular findings. However, we were unable to use drug combinations that have not been through phase 1 testing, highlighting the need for more pediatric combination therapy trials.
CLINICAL RESPONSE
At the initiation of therapy with single-agent ruxolitinib, patient 1 had severe nausea and lethargy, was mostly bed-ridden, and had a Lansky play-performance score 21 of 60 (Fig 4). Within a week of ruxolitinib initiation, his mother reported a dramatic improvement in his energy level and complete resolution of his nausea. The patient tolerated this therapy without significant toxicity, had stabilization of the previously rapidly growing lung nodules by RECIST, 22 and his Lansky score improved to 90 to 100 for 5 months. The patient then exhibited a sudden enlargement in one lung lesion, detected during routine imaging, although most of the other lesions remained stable. Ruxolitinib was discontinued, and focal palliative radiation was administered to the one lesion in the left lower lobe for pain control. Within 2 months of ruxolitinib discontinuation, the symptoms of nausea, extreme fatigue, and weight loss returned, and the lung lesions progressed. The family requested that ruxolitinib be restarted for quality of life, and the patient again showed dramatic improvement in clinical status and an unexpected prolonged period of stable disease until dose reduction because of myelosuppression was required. After the dose reduction, rapid progression of the pulmonary lesions resulted in death 23 months after the original relapse.
DISCUSSION
To our knowledge, this is the first report of a pediatric patient with cancer who benefited from cross-tumor gene expression comparisons. Cross-tumor analyses have been used in the TCGA 11 and POG studies 3,23,24; however, a computational framework is necessary for their clinical implementation. This case is also, to our knowledge, the first report of use of a JAK inhibitor to treat a sarcoma. Previous functional studies implicated STAT3 as an oncogene in sarcomas 25; the current case report builds on this work and prompts investigation into the potential clinical utility of targeting this pathway. Of note was the patient's marked and rapid clinical response to treatment, which suggests that the response may have been related to the modulation of cytokine expression by the medication. Although the clinical benefit of ruxolitinib was apparent, it was challenging to quantify. Ultimately, a randomized clinical trial is necessary to assess the benefit of molecular approaches compared with the standard of care.
The case also highlights tumor heterogeneity: although the majority of the metastases remained stable, one lesion rapidly became resistant. Despite documented progressive disease, the patient benefited from resumption of the medication: his clinical status and many metastatic lesions were responsive to retreatment. A serial molecular analysis of the heterogeneous lesions could have informed the mechanisms of resistance; however, it was not pursued because of the family's wishes. To characterize the intratumor heterogeneity of therapeutic response, follow-up biopsies can be considered, with those decisions weighed against the risks to the patient.
Reference compendium
We obtained The Cancer Genome Atlas (TCGA) 11 and Therapeutically Actionable Research to Generate Effective Treatments (TARGET) 12 RNA-Seq data, reprocessed with a single computational pipeline to reduce batch effects. 13 We extracted tumor samples from these data sets and combined them into a single cohort that contained multiple adult and pediatric tumor types (N = 10,668). The expression of 18,357 protein-coding genes was measured (Data Supplement).
Gene expression outlier analysis
Gene-level reads per kilobase of transcript per million mapped reads (RPKM) expression measurements for patient 1's tumor were generated according to the previously published method. 3 We used these data to compute FPKMs by dividing each value by two. We proceeded to quantile normalize the expression values within each tumor sample to match the quantiles of a theoretical exponential distribution (rate parameter = 1), used as the background distribution (Bolstad BM, et al: Bioinformatics 19:185-193, 2003). We then performed gene expression outlier analysis 3,17 to identify transcripts significantly enriched in the patient's tumor compared with the 10,668 cancer samples in the reference compendium (pan-cancer outlier analysis). We also performed the same analysis using only the 529 lung adenocarcinoma (LUAD) tumors as a reference (LUAD-only outlier analysis) and using 262 sarcoma tumors as a reference (sarcoma-only outlier analysis). Gene expression outliers were identified as described, 17 with the exception of the use of a more stringent interquartile range multiplier of 2.0. We analyzed the outlier genes for enrichment of specific pathways and signaling networks that could be targeted by available therapies using MSigDB.

TumorMap method

The similarity space is represented as a graph and is used as an input into OpenOrd. OpenOrd treats the similarities as spring constants and searches for a configuration among the samples that minimizes the spring tension of the system as much as possible. We use hexagonal packing for space conservation in the projected two-dimensional plane. For each sample in the full correlation matrix, we extracted the samples with the top six correlation values to compose a sparse matrix of the top six nearest neighbors. We used this sparse matrix to construct a sparse similarity graph for the samples in the cohort and applied the OpenOrd method to derive the initial (x, y) positions in the map.
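A minimal sketch of the top-six nearest-neighbor graph construction described in the preceding paragraph (the OpenOrd layout itself is an external tool and is not reproduced here; the toy cohort and parameter values are illustrative, not the TumorMap implementation):

```python
import numpy as np

def top_k_neighbor_edges(corr, k=6):
    """Keep, for each sample, only its k most-correlated other samples.

    corr: (n_samples, n_samples) matrix of pairwise sample correlations.
    Returns (i, j, weight) edges of the sparse similarity graph that is
    handed to the layout step.
    """
    n = corr.shape[0]
    edges = []
    for i in range(n):
        sims = corr[i].astype(float).copy()
        sims[i] = -np.inf                     # exclude self-similarity
        top = np.argpartition(sims, -k)[-k:]  # indices of the k largest values
        edges.extend((i, int(j), float(corr[i, j])) for j in top)
    return edges

# Toy cohort: 5 samples x 100 genes, correlated pairwise
rng = np.random.default_rng(0)
expr = rng.normal(size=(5, 100))
edges = top_k_neighbor_edges(np.corrcoef(expr), k=3)
print(edges[:3])
```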
Furthermore, to avoid overlapping and crowding samples in the dense graph components, OpenOrd (x, y) coordinates are snapped to their nearest hexagon to arrange all of the samples on a tiling of regular hexagons. With OpenOrd (x, y) coordinates, each sample is placed in a grid cell. If the predetermined cell is occupied, the sample is snapped to an empty grid cell within a minimal distance from the original cell. Multiple samples that compete for a location will thus spiral outward into the hexagons neighboring the central location. Therefore, dense clumps are separated so that they can be viewed on approximately the same scale as the distances that separate them. Hexagons were selected as the shape for the grid cell to illustrate that there are no inherently preferred axis-aligned directions in the OpenOrd output.
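A sketch of the hexagon-snapping idea, assuming a pointy-top axial hex grid and a simple outward ring search for occupied cells; the grid size and traversal order are assumptions, not the TumorMap implementation:

```python
import math

HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]  # axial neighbors

def xy_to_axial(x, y, size=1.0):
    """Fractional axial coordinates of a point on a pointy-top hex grid."""
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 * y / 3) / size
    return q, r

def axial_round(q, r):
    """Nearest hex to fractional axial coordinates (via cube rounding)."""
    x, z = q, r
    y = -x - z
    rx, ry, rz = round(x), round(y), round(z)
    dx, dy, dz = abs(rx - x), abs(ry - y), abs(rz - z)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return int(rx), int(rz)

def ring_cells(center, radius):
    """All hex cells on the ring at a given radius around center."""
    q = center[0] + HEX_DIRS[4][0] * radius
    r = center[1] + HEX_DIRS[4][1] * radius
    for d in range(6):
        for _ in range(radius):
            yield (q, r)
            q, r = q + HEX_DIRS[d][0], r + HEX_DIRS[d][1]

def snap_to_hexes(points, size=1.0):
    """Assign each point its nearest free hex cell, spiraling outward
    through rings of neighbors when the home cell is already taken."""
    occupied, placed = set(), []
    for x, y in points:
        home = axial_round(*xy_to_axial(x, y, size))
        cell = home if home not in occupied else None
        radius = 0
        while cell is None:
            radius += 1
            cell = next((c for c in ring_cells(home, radius)
                         if c not in occupied), None)
        occupied.add(cell)
        placed.append(cell)
    return placed

# Three points competing for the same cell spread over adjacent hexagons
print(snap_to_hexes([(0.0, 0.0), (0.1, 0.1), (0.2, 0.0)]))
```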
Google Maps Application Programming Interface (API; https://developers.google.com/maps/documentation/javascript/reference) is used to load and visualize the resulting layout in a browsing environment. The API provides the ability to interactively navigate, zoom, and explore various annotations of locations on the map, analogous to Google Maps and Google Earth applications.
We applied the TumorMap method to the reference cohort of 10,668 tumors together with the tumor of patient 1 by using transcriptional profiles of 18,357 genes (Data Supplement). Of note, the reference compendium contained 262 heterogeneous sarcomas from the TCGA 11 cohort.
Statistical Robustness of TumorMap Placement
The TumorMap method belongs to the family of nearest neighbor classification methods. It projects the similarity space of the high-dimensional genomic profiles into Euclidean space by using only top neighbors of every sample in the cohort. We refer to these top neighbors as the local neighborhood of a given sample. Therefore, it is important to evaluate how robust these local neighborhoods are under small perturbations. Specifically, we wanted to assess whether the local neighborhood of patient 1 remains stable when only subsets of genes are used to compute pairwise similarities between samples.
We subsampled, without replacement, gene expression features at 80% of the original gene features. We repeated this procedure 1,000 times and computed the patient's local neighborhood across all N = 1,000 subsampled spaces. We then compared each of these local neighborhoods computed under perturbation to the true local neighborhood computed with the complete data set. We computed a local neighborhood specificity (LN-specificity) score as

LN-specificity = (1/N) · Σ_{i=1}^{N} |S_true ∩ S_i| / |S_true|,

where S is a set of nearest neighbors for either the true (S_true) or the i-th subsampled (S_i) computation and |S| is the nearest neighbor set cardinality.
This score represents the average overlap, as a fraction, of the true local neighborhood and the perturbed local neighborhoods across all subsampling iterations. The higher this score is, the more overlap we see across all the iterations and the more similar the local neighborhoods are under perturbations. The LN-specificity score of patient 1 was 0.885, which indicated that 88.5% of the top neighbors were consistently the same across 1,000 perturbations to the gene expression profiles from which the TumorMap visualization was computed.
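A sketch of this subsampling procedure, assuming Pearson correlation as the sample similarity and a fixed set of k top neighbors as the local neighborhood; k, the 80% subsample fraction, and the toy data are illustrative:

```python
import numpy as np

def knn_set(expr, idx, k=6):
    """Indices of the k samples most correlated with sample idx."""
    corr = np.corrcoef(expr)[idx].astype(float)
    corr[idx] = -np.inf                         # exclude the sample itself
    return set(map(int, np.argpartition(corr, -k)[-k:]))

def ln_specificity(expr, idx, k=6, frac=0.8, n_iter=1000, seed=0):
    """Mean overlap of the true k-NN set with k-NN sets recomputed on
    random subsets of genes, sampled without replacement."""
    rng = np.random.default_rng(seed)
    n_genes = expr.shape[1]
    true_nn = knn_set(expr, idx, k)
    overlaps = []
    for _ in range(n_iter):
        genes = rng.choice(n_genes, size=int(frac * n_genes), replace=False)
        overlaps.append(len(true_nn & knn_set(expr[:, genes], idx, k)) / k)
    return float(np.mean(overlaps))

# Toy example: 20 samples x 200 genes; the "patient" is sample 0
expr = np.random.default_rng(1).normal(size=(20, 200))
print(ln_specificity(expr, idx=0, k=6, n_iter=100))
```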
Identification of Genes Associated With the Patient 1 Cluster in TumorMap
We identified genes differentially expressed between the LUAD cluster containing patient 1 and the 10,668 tumor samples in the compendium by using the Linear Models for Microarray Analysis (limma) method (Ritchie ME, et al: Nucleic Acids Res 43:e47, 2015). The gene set enrichment analysis (GSEA) 16 of the resulting list of genes revealed that the IL6/JAK/STAT3 signaling pathway annotation was significantly enriched among these genes (false discovery rate q value, 1.787e−4 with 39 genes in the leading edge).
We also identified differentially expressed genes between the LUAD cluster containing patient 1 and the remaining LUAD tumors in the compendium (Data Supplement). The GSEA 16 of this gene list revealed that members of the IL6/JAK/STAT3 signaling pathway were still driving the creation of the LUAD cluster containing patient 1's tumor (false discovery rate q value, 0 with 36 genes in the leading edge).
Gene Expression Percentile Analysis
For the druggable targets in the reconstructed candidate driver pathway, we computed the ranked percentiles of the gene expression levels in patient 1's tumor compared with the TCGA LUAD (n = 529) and TCGA SARC (n = 264) cohorts. For each sample in the cohort, as well as for patient 1, we ranked genes relative to the expression of all genes within that sample (ie, the highest expressed gene was ranked as 1). The percentiles are displayed in Figure 3.
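A sketch of the within-sample ranking, following the rank-1 = highest-expressed convention in the text; the direction of the resulting percentile scale is an assumption:

```python
import numpy as np

def within_sample_percentiles(sample_expr):
    """Rank genes within one sample (highest expression = rank 1) and
    express each rank as a percentile of all genes in that sample."""
    order = np.argsort(-np.asarray(sample_expr, dtype=float))
    ranks = np.empty(len(order), dtype=float)
    ranks[order] = np.arange(1, len(order) + 1)
    return 100.0 * ranks / len(order)

expr = np.array([5.0, 0.1, 2.3, 9.9])
print(within_sample_percentiles(expr))  # the top gene (9.9) gets the smallest percentile
```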
Differential Expression Analysis Relative to Normal Tissues
For all of the genes in the reconstructed candidate driver pathway that are targetable by cancer drugs (Fig 3), we computed the fold change between the expression in patient 1's tumor and in normal cells. We obtained normal tissue expression data for 16 different tissues from the Illumina Human Body Map 2.0 database (GEO Accession No. GSE30611). For each gene, we mean-aggregated normal expression into a single value and computed the log fold change of the expression of that gene in patient 1 compared with the aggregated normal value.
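A sketch of the fold-change computation against mean-aggregated normal expression; the pseudocount guarding against zero denominators is an assumption, not stated in the text:

```python
import numpy as np

def log2_fold_change_vs_normal(tumor_value, normal_values, pseudocount=1.0):
    """log2 fold change of one gene: patient tumor expression vs. the
    mean of that gene's expression across normal tissues."""
    normal_mean = float(np.mean(normal_values))
    return float(np.log2((tumor_value + pseudocount) / (normal_mean + pseudocount)))

# One gene: tumor expression vs. measurements in 16 normal tissues
normals = np.random.default_rng(2).uniform(1.0, 10.0, size=16)
print(log2_fold_change_vs_normal(120.0, normals))
```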
Histologic Techniques and Immunohistochemistry
The tumor tissue was routinely fixed in 10% buffered formalin and then processed in an automated fashion. Processed tissue was then embedded in paraffin wax, and the resultant blocks were sectioned at 4 µm for both hematoxylin and eosin staining and immunohistochemistry. All microscopic slides were prepared via standard automated techniques. Immunohistochemical slides were stained on the Ventana BenchMark XT Autostainer (Ventana Medical Systems, Tucson, AZ) with the Ventana iVIEW universal DAB detection kit. Primary antibodies used for immunohistochemistry are noted in Table A1 and the Data Supplement. All primary antibodies were either diluted or received as a ready-to-use prediluted solution from the relevant vendor.
Fluorescence In Situ Hybridization
Fluorescence in situ hybridization analysis was performed on formalin-fixed paraffin-embedded tumor tissue. Overall, 200 nuclei were quantified with the EWSR1 (22q12) dual-color breakapart probe (Vysis; Downers Grove, IL). Likewise, 200 nuclei were quantified with the WT1 (11p13) dual-color breakapart probe (prepared at the BC Cancer Agency). Standard, internally derived thresholds were followed to determine the definitive presence of a translocation that involved the EWSR1 and WT1 loci.

Fig A4. Full candidate pathway that represents molecular drivers of tumorigenesis in the sarcoma of patient 1. We reconstructed this pathway on the basis of the outlier analysis, differential gene expression analysis, copy number information, and literature mining (Appendix Methods). A simplified version of this pathway is presented in Figure 3A. Both EWSR1-ATF1 and the receptor tyrosine kinases PDGFRB, NTRK1, ALK, and FGFR1 can contribute to the activation of IL6/JAK/STAT3 signaling. All gene expression outliers depicted in this figure were significant in all three comparisons: patient 1 versus all cancers, patient 1 versus lung adenocarcinomas, and patient 1 versus sarcomas. JAK1, the molecular target of ruxolitinib, is indicated with a yellow lightning bolt. TCGA, The Cancer Genome Atlas. | 2019-03-17T13:12:31.814Z | 2018-04-19T00:00:00.000 | {
"year": 2018,
"sha1": "aad04cc8d58e771b2862cd7c13e0c9a2fda0eb0f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1200/po.17.00198",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "28e598ef47d7f51b0b896f1caa8ce7983be93081",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3619977 | pes2o/s2orc | v3-fos-license | Effects of incretin treatment on cardiovascular outcomes in diabetic STEMI-patients with culprit obstructive and multivessel non obstructive-coronary-stenosis
Background No proper data on prognosis and management of type-2 diabetic ST elevation myocardial infarction (STEMI) patients with culprit obstructive lesion and multivessel non obstructive coronary stenosis (Mv-NOCS) exist. We evaluated the 12-months prognosis of Mv-NOCS-diabetics with first STEMI vs. non-diabetics, and then of Mv-NOCS-diabetics previously treated with incretin based therapy vs. a matched cohort of STEMI-Mv-NOCS patients never treated with such therapy. Methods 1088 patients with first STEMI and Mv-NOCS were scheduled for the study. Patients included in the study were categorized as type 2 diabetics (n 292) and non-diabetics (n 796). Finally, we categorized diabetics into current-incretin-users (n 76) and never-incretin-users (n 180). The primary end point was all-cause deaths, cardiac deaths, and major adverse cardiac events (MACE) at 12 months of follow up. Results The study results evidenced a higher percentage of all-cause deaths (2.2% vs. 1.1%, p value 0.05), cardiac deaths (1.6% vs. 0.5%, p value 0.045), and MACE (12.9% vs. 5.9%, p value 0.001) in diabetic vs. non-diabetic patients at 12 months follow up. Among diabetic patients, the current vs. never-incretin-users did not present a significant difference in all-cause deaths and cardiac deaths through 12 months. The MACE rate at 1 year was 7.4% in diabetic incretin-users STEMI-Mv-NOCS patients vs. 12.9% in diabetic never-incretin-users STEMI-Mv-NOCS patients (p value 0.04). In a risk-adjusted hazard analysis, MACE through 12 months were lower in diabetic STEMI-Mv-NOCS incretin-users vs. never-incretin-users patients (HR 0.513, CI [0.292–0.899], p 0.021). Consequently, lower levels of glucagon-like peptide 1 (GLP-1) were predictive of MACE at follow up (HR 1.528, CI [1.059–2.204], p 0.024). Conclusion In type 2 diabetic patients with STEMI-Mv-NOCS, we observed a higher incidence of 1-year mortality and adverse cardiovascular outcomes, as compared to non-diabetic STEMI-Mv-NOCS patients. In diabetic patients, never-incretin-users have a worse prognosis as compared to current-incretin-users. Trial registration Clinical trial number: NCT03312179, name of registry: clinicaltrialgov, URL: clinicalltrialgov.com, date of registration: September 2017, date of enrollment of first participant: September 2009
Background
In the general population, non-obstructive (< 50% stenosis diameter and fractional flow reserve > 0.80) non-infarct-related coronary disease is common among patients presenting with ST-segment elevation myocardial infarction (STEMI) and is not associated with a significant increase in mortality [1]. In diabetic patients, there is a higher prevalence of multivessel disease and of non obstructive coronary artery lesions [2,3]. To date, STEMI diabetic patients with culprit obstructive lesion and multivessel non obstructive coronary stenosis (Mv-NOCS) represent a conundrum, because no proper data regarding their prognosis and management exist. So far, incretin-based therapies have shown a broad range of unique cardiovascular actions translating into cardiovascular protection [4]. Therefore, given the paucity of data in this setting, we evaluated the 12-months prognosis of Mv-NOCS-diabetics with STEMI as compared with a matched cohort of non-diabetic patients. In this research, we studied clinical outcomes after a first STEMI event in STEMI-Mv-NOCS diabetics vs. non-diabetics, and then in diabetic incretin-users vs. diabetic never-incretin-users. First, we compared the numbers of all-cause deaths, cardiac deaths, and major adverse cardiac events (MACE) through 12 months in diabetic STEMI-Mv-NOCS patients vs. non-diabetic STEMI-Mv-NOCS patients. Second, we divided diabetic STEMI-Mv-NOCS patients into incretin-users vs. never-incretin-users, and we assessed all-cause deaths, cardiac deaths, and MACE through 12 months of follow up. Our study hypothesis was that diabetic STEMI-Mv-NOCS patients may have a worse prognosis after a first STEMI event as compared to non-diabetics and, second, that STEMI-Mv-NOCS diabetic current-incretin-users may present a significantly lower rate of MACE through 12 months as compared to a matched cohort of STEMI-Mv-NOCS-diabetics never treated with such therapy. Therefore, incretin therapy may represent a valid and innovative treatment to reduce worse prognosis in a population of STEMI-Mv-NOCS diabetics. Indeed, incretin therapy may improve clinical outcomes, ameliorating the prognosis of STEMI-Mv-NOCS diabetic patients.
Methods
Consecutive 796 non-diabetic and 292 diabetic patients with first STEMI and no-altered fractional flow reserve (FFR > 0.80) of Mv-NOCS (20-49% luminal stenosis), referred for coronary angiography within 12 h of presentation of the clinical event, were entered in a database prospectively. STEMI was diagnosed according to international guidelines by evidence of myocardial injury (defined as an elevation of cardiac troponin values with at least one value above the 99th percentile upper reference limit), associated with symptoms consistent with myocardial ischemia, such as persistent chest discomfort or other symptoms suggestive of ischemia (shortness of breath, nausea/vomiting, fatigue, palpitations, or syncope), and ST-segment elevation in at least two contiguous leads ≥ 2.5 mm in men < 40 years, ≥ 2 mm in men ≥ 40 years, or ≥ 1.5 mm in women in leads V2-V3 and/or ≥ 1 mm in the other leads [5]. In these patients, we performed an early and immediate coronary angiography followed by percutaneous coronary intervention to obtain a rapid restoration of epicardial blood flow in the infarct-related artery [5]. Patients with no coronary disease detected by coronary angiography, presence of obstructive and Mv-obstructive stenosis, left ventricular ejection fraction less than 25%, previous myocardial infarction, previous PCI or/and coronary by-pass grafting, Tako-tsubo cardiomyopathy, myocarditis, acute or chronic infection or inflammatory diseases, hematologic disorder, malignancies, end-stage liver or renal disease, or use of steroid therapy or chemotherapy were excluded. Subjects were categorized as non-diabetic and diabetic patients [6]. Furthermore, the diabetic patients answered a specific questionnaire about the medicines used for diabetes treatment before the beginning of the study, the dates of the beginning and the end of treatment, the route of administration, and the duration of use. Information from the medicine inventory during the study and this specific questionnaire was used to classify the subjects. The patients with diabetes who never used incretins, such as glucagon-like peptide 1 (GLP-1) agonists and dipeptidyl peptidase-4 (DPP-4) inhibitors, were classified as "never-incretin-users." The patients with diabetes who had already used, for at least 6 months, GLP-1 agonists or DPP-4 inhibitors were classified as "current-incretin-users." Upon emergency ward admission, all patients were assigned to undergo prompt coronary angiography. This was a multi-center prospective "real world" study conducted at University of Campania "Luigi Vanvitelli", Cardarelli hospital, and Monaldi hospital (Naples, Italy), between July 2009 and July 2016. The study was conducted in accordance with the Declaration of Helsinki. The Ethics Committees of all participating institutions approved the protocol (Ethics Committee University of Campania "Luigi Vanvitelli" number: 1177).
All patients were informed about the study nature, and gave their written informed, and signed consent to participate in the study. The study was retrospectively registered.
Study protocol

Laboratory analysis
After an overnight fast, plasma glucose and HbA1c levels were measured by enzymatic assays in the hospital chemistry laboratory. GLP-1 levels (Active GLP-1 Specific ELISA Kit; Epitope Diagnostics) were measured after an overnight fast (at 8:00 A.M.) and after breakfast in diabetic patients. A standardized hospital breakfast for ACS patients contained 419 kcal (57% carbohydrate, 17% protein, and 26% fat). After breakfast, blood samples for the measurement of GLP-1 were obtained every 30 min over a 2-h period. The mean of the four GLP-1 evaluations was defined as the postprandial GLP-1 value. The standardized meal tolerance test and baseline evaluations were performed 5 days after STEMI.
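The postprandial GLP-1 value is simply the mean of the four post-breakfast draws taken every 30 min over 2 h; for concreteness (the pg/ml values below are purely illustrative):

```python
import numpy as np

# Four post-breakfast GLP-1 samples at 30, 60, 90, and 120 min
samples_pg_ml = np.array([18.2, 24.5, 22.1, 19.8])
print(f"postprandial GLP-1: {samples_pg_ml.mean():.1f} pg/ml")
```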
Inflammatory markers
Routine analyses and inflammatory status, assessed as the ratio between macrophage 1 (CD68) and macrophage 2 (soluble CD163) markers (M1/M2 ratio) and as high-sensitivity C-reactive protein (hs-CRP), were obtained on admission before coronary angiography and before full medical therapy was started.
Quantitative coronary angiography
Upon emergency ward admission, all patients were assigned to undergo prompt coronary angiography. The analyses of all angiographic data were performed by three interventional cardiologists (M.F., M.C. and C.P.), and angiography was followed by percutaneous coronary intervention (PCI) with angioplasty and direct stenting of the culprit vessel lesion [6]. Coronary stenting of the culprit coronary vessel lesion was the technique of choice for all admitted patients [6]. Therefore, admitted STEMI diabetic and non-diabetic patients preferably received primary PCI (92%, n 1001). On the other hand, a low percentage of STEMI patients (8%, n 87) were diagnosed in non-PCI-capable hospitals and did not receive primary PCI. In these patients, physicians performed thrombolytic reperfusion therapy. Moreover, 69 patients (79%) received rescue PCI, and 56 patients (65%) were treated by stent implantation. After that, these cardiologists, blinded to patient categorization, reviewed the angiograms and selected cases with Mv-NOCS, defined as coronary vessels with no-altered fractional flow reserve (FFR > 0.80) associated with 20-49% luminal stenosis [5,7].
Coronary care unit/intensive cardiac care unit
All treated patients were then monitored and managed in the Intensive Care Unit following reperfusion, with continuous monitoring and specialized care [6] for the treatment of STEMI and related acute complications (arrhythmias, heart failure, etc.).
Echocardiographic assessment
At admission, patients underwent two-dimensional echocardiography as previously described [8]. This exam was used to assess heart chamber morphology, volumetry, wall contraction, cardiac valve morphology and function, and ejection fraction [8]. To assess heart chamber wall contractility, we used the scheme previously described [8]. This exam was used at admission to confirm the STEMI diagnosis, and during follow up to stage STEMI disease progression (6 and 12 months after STEMI).
Follow-up
After discharge from the hospital, all patients were managed and followed quarterly for 12 months after the event, as outpatients, to perform clinical evaluation, routine analyses, and cardiovascular evaluation (ECG, exercise ECG, echocardiography, exercise myocardial scintigraphy), as well as with the goal to maintain an HbA1c level of < 7%, a fasting blood glucose level of 90-140 mg/dl, and a postprandial blood glucose level of < 180 mg/dl. The mean follow-up was 16 ± 3 months. Follow-up visits were performed in our outpatient clinic.
Cardiovascular endpoints
The study end points were all-cause deaths, cardiac deaths, and major adverse cardiac events (MACE) at 12 months of follow up.
Statistical analysis
SPSS version 23.0 (IBM statistics) was used for all statistical analyses. Categorical variables were presented as frequencies (percentages) and continuous variables as mean ± SD. For the general population of diabetics and non-diabetics, we calculated a sample size using a power of 80% and confidence of 95%. For the comparison between diabetic never-incretin-users and diabetic current-incretin-users, a propensity score matching (PSM) was developed from the predicted probabilities of mortality and MACE by a multivariable logistic regression model. Diabetic never-incretin-users were matched to diabetic current-incretin-users on the basis of PSM. In all matched patients, the balancing property was satisfied. Overall survival and event-free survival were presented using Kaplan-Meier survival curves and compared using the log-rank test. Univariable Cox models were then used to compare event risks. Within all the diabetic and non-diabetic groups, all-cause deaths, cardiac deaths, and MACE were assessed by using multivariable Cox models with adjustment for statistically different variables at baseline and follow-up: hypertension, dyslipidemia, current smoking, ace-inhibitors, calcium inhibitors, thiazide diuretics, aspirin, statin, BMI, heart rate, HDL-cholesterol, LDL-cholesterol, triglycerides levels, hs-CRP, M1/M2, and GLP-1 levels. The resulting hazard ratios (HRs) and 95% confidence intervals (CIs) were reported. To investigate the effects of GLP-1 levels on cardiovascular endpoints, we evaluated STEMI outcomes at 1-year follow-up stratified by GLP-1 quartiles. A 2-tailed p value < 0.05 was considered statistically significant.
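The original analyses were run in SPSS; as an illustration only, a Python sketch of 1:1 nearest-neighbor propensity matching followed by a Cox model, with hypothetical column names:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def propensity_match(df, treat_col, covariates, caliper=0.05):
    """1:1 nearest-neighbor propensity matching without replacement,
    within a caliper on the propensity score (column names hypothetical)."""
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])
    controls = df[df[treat_col] == 0].copy()
    keep = []
    for i, row in df[df[treat_col] == 1].iterrows():
        dist = (controls["pscore"] - row["pscore"]).abs()
        if len(dist) and dist.min() <= caliper:
            j = dist.idxmin()
            keep += [i, j]
            controls = controls.drop(index=j)   # match without replacement
    return df.loc[keep]

# Cox model for MACE on the matched cohort (variable names illustrative):
# matched = propensity_match(df, "incretin_user", ["age", "bmi", "hba1c", "ldl"])
# cph = CoxPHFitter().fit(
#     matched[["time_to_mace", "mace", "incretin_user", "hscrp", "glp1"]],
#     duration_col="time_to_mace", event_col="mace")
# cph.print_summary()  # hazard ratios with 95% CIs
```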
Discussion
The main results were as follows: first, in a contemporary sample of type 2 diabetic patients with STEMI-Mv-NOCS, we observed a higher cumulative incidence of 1-year mortality and adverse cardiovascular outcomes as compared to non-diabetic STEMI-Mv-NOCS patients; second, in PSM diabetic patients, diabetic never-incretin-users had a higher number of MACE as compared to diabetic current-incretin-users. The prognosis of patients with NOCS has been evaluated by a recent study [9], which evidenced that, among individuals without known CAD and obstructive CAD, non obstructive plaque presence enhances risk prediction of incident mortality. Moreover [9,10], among patients with type 2 diabetes, non obstructive and obstructive stable CAD were associated with higher rates of all-cause mortality and major adverse cardiovascular events at 5 years, and this risk was significantly higher than in non-diabetic subjects. However, these studies did not provide any evidence about the influence of STEMI-Mv-NOCS management on outcomes following the cardiac event in diabetic patients. In our study after STEMI, we observed an increased incidence of cardiovascular disease in STEMI-Mv-NOCS patients, both after adjustment for baseline and follow-up cardiovascular risk factors. In this context, the poor outcomes of diabetic STEMI-Mv-NOCS as compared to non-diabetic Mv-NOCS-STEMI patients, observed in our study, might be explained by an abrupt increment of atherosclerosis in diabetics as compared to a slower progression of coronary atherosclerosis extension in non-diabetics [11]. In this scenario, the diabetic status may affect several pathogenetic mechanisms that favor plaque instability and subsequent plaque rupture in the absence of obstructive coronary stenosis, including inflammation, endothelial dysfunction with the inability to augment coronary flow in response to stress, and coronary vasospasm. Accordingly, our data evidenced more inflammatory cells and higher CRP levels in diabetic than in non-diabetic patients (Table 1). The present findings also show a protective effect of incretin therapies on cardiovascular outcomes in Mv-NOCS diabetic patients after STEMI. Without affecting cardiac mortality and all-cause deaths, incretin therapy may affect MACE at 12 months follow up. Indeed, diabetic patients treated with incretin therapies had the lowest incidence of cardiovascular events at the same level of blood glucose vs. never-incretin-users. In human randomized, double-blind clinical studies, DPP-4 inhibitors did not appear to reduce the risk of major adverse cardiovascular events among patients with type 2 diabetes without and with established cardiovascular disease [12][13][14]. However, definitive proof of an effect of DPP-4 inhibitors in patients with acute coronary syndrome, as well as in patients with DPP-4 inhibitor therapy before the cardiovascular event, is currently lacking. In our study after STEMI, the 1-year follow-up results show a greater reduction in the MACE endpoint in patients previously treated with incretins as compared to patients without incretin therapy, despite a similar severity of atherosclerotic disease (coronary stenosis < 50%; FFR > 0.80) at baseline. Moreover, both at baseline and at follow-up, the current-incretin-users presented lower levels of inflammatory cells, as reported by the M1/M2 ratio and inflammatory markers such as CRP, and higher GLP-1 values (Table 1).

Table 2 Study endpoints in diabetics vs. overall study population, and in incretin-users vs. never-incretin-users. MACE is for major adverse cardiac events; the symbol "/" indicates not statistically significant (p value > 0.05).

Fig. 1 a Kaplan-Meier curve for all-cause deaths. In the left part, the all-cause deaths cumulative survival curve at 360 days follow up comparing diabetic (green color) vs. non-diabetic patients (blue color). In the right part, the all-cause deaths cumulative survival curve at 360 days follow up comparing diabetic incretin-users (green color) vs. diabetic never-incretin-users patients (blue color). There is a statistically significant higher number of events comparing diabetic vs. non-diabetic patients (p value < 0.05). b Kaplan-Meier curve for cardiac deaths. In the left part, the cardiac deaths cumulative survival curve at 360 days follow up comparing diabetic (green color) vs. non-diabetic patients (blue color). In the right part, the cardiac deaths cumulative survival curve at 360 days follow up comparing diabetic incretin-users (green color) vs. diabetic never-incretin-users patients (blue color). There is a statistically significant higher number of events comparing diabetic vs. non-diabetic patients (p value < 0.05).
Accordingly, human studies showed that sitagliptin, vildagliptin, and exenatide [15][16][17], even at a single dose, exert a potent anti-inflammatory effect, and that many of these effects were persistent over a period of 12 weeks, thus suggesting that the anti-inflammatory effects of GLP-1-based therapies could help to reduce atherosclerosis progression. This concept has been recently investigated by authors [18] reporting that, in acute coronary syndromes, cardiovascular outcomes were strictly correlated with postprandial GLP-1 levels, independently of endogenous (DPP-4 inhibitors) vs. exogenous (GLP-1 agonist) treatments. Therefore, patients assigned to incretin therapy may have lesser plaque progression to an unstable phenotype than patients assigned to other anti-diabetic therapies [18]. In our study, we evidenced that patients with higher GLP-1 levels had a lower number of events. Moreover, we may report a protective cardiovascular effect of GLP-1 agonist therapy on the atherosclerotic plaques of patients with diabetes, as previously described [19]. However, these results may be due to the small sample size of the study population and the short follow-up duration, and future clinical trials have to assess this research topic.
Conclusion
The novelty of this research is to show "real world" data about clinical outcomes in diabetic STEMI patients with culprit obstructive lesion and Mv-NOCS treated with incretins vs. standard hypoglycemic drugs. Moreover, diabetic current-incretin-users vs. never-incretin-users presented a significantly lower rate of MACE through 12 months, as represented by the evident, significant, abrupt decrease of the Kaplan-Meier survival curves free from MACE (Fig. 1). This study result supports incretin therapy as the best treatment for diabetic STEMI-Mv-NOCS patients. Therefore, the incretin effect on the control of glucose homeostasis may be associated with other pleiotropic effects, thus playing a decisive role in the control of atherosclerotic plaque progression and functionality in diabetic STEMI-Mv-NOCS patients. In conclusion, diabetic STEMI-Mv-NOCS patients show unacceptable rates of adverse cardiovascular events, which may be controlled and/or reduced by incretin therapy. Indeed, tailored strategies, including incretin-based therapies, should be considered in the treatment of these patients.
(See figure on previous page.) Fig. 2 (A) Univariate and multivariate analysis of factors to predict all-cause deaths at follow up; (B) univariate and multivariate analysis of factors to predict cardiac deaths at follow up; (C) univariate and multivariate analysis of factors to predict major adverse cardiac events (MACE) at follow up. In each panel, we considered as statistically significant a p value < 0.005, with hazard ratios (HR) and 95% confidence intervals (CI); at multivariable analysis, parameters associated with a statistically significant p value (p value < 0.005) have been marked with the symbol *. Bas.Lesion length indicates basal lesion length; HsCRP is for high sensitivity C-reactive protein; Low GLP-1 indicates the lower tertile of GLP-1 (glucagon-like peptide 1) values, as < 20 pg/ml; LVEF is for left ventricle ejection fraction; M1/M2 ratio is the ratio between macrophage 1 and macrophage 2 cells; VD-3 indicates multivessel coronary disease with 3 coronary vessels | 2018-01-03T15:02:18.149Z | 2018-01-03T00:00:00.000 | {
"year": 2018,
"sha1": "8b99da45fa189019a7d61ca74e191512688dd830",
"oa_license": "CCBY",
"oa_url": "https://dmsjournal.biomedcentral.com/track/pdf/10.1186/s13098-017-0304-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8b99da45fa189019a7d61ca74e191512688dd830",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219737776 | pes2o/s2orc | v3-fos-license | Deep Sequencing of Varicella-Zoster Virus in Aqueous Humor From a Patient With Acute Retinal Necrosis Presenting With Acute Glaucoma
Abstract We report a case of acute retinal necrosis presenting with acute glaucoma preceding inflammatory signs by several days. High-throughput sequencing of aqueous humor revealed low-level diversity in the viral genome, comparable to the diversity seen in cutaneous vesicles, in contrast to the high diversity seen in encephalitis.
Acute retinal necrosis (ARN) is a rare sight-threatening disease caused by reactivation of herpes simplex virus (HSV)1, HSV2, or varicella-zoster virus (VZV) [1]. Clinical signs include progressive peripheral retinal necrosis, occlusive vasculopathy, and prominent panuveitis. Incidence is approximately 1 case per 2 million people a year. Visual prognosis is poor despite antiviral treatment. The cause of the severity of ARN is unknown. Patients are often immunocompetent. Visual loss, floaters, and occasionally pain are common complaints in ARN patients. Different stages of inflammation are observed at presentation.
Deep sequencing has become an important application for the detection and genetic characterization of pathogens in disease. Regarding analysis of virus in intraocular fluids, reports have been sparse [2,3]. Sequences of VZV registered in GenBank derive from other body compartments.
CASE PRESENTATION
A 65-year-old healthy woman presented with 5 days of discomfort in the left eye. Her visual acuity was 20/20 and she reported no symptoms of floaters or visual loss. Intraocular pressure (IOP) was 50 mmHg. The patient was examined by a senior ophthalmologist with slit lamp biomicroscopy at the tertiary referral hospital in the region of Halland, Sweden. The cornea was clear and flare or corneal precipitates were absent. No inflammatory reaction was found in the posterior segment. Because of the clear conditions, gonioscopy could be performed and no apparent pathology was detected in the iridocorneal angle. Pharmacological IOP-reducing treatment was initiated immediately.
Follow-up was scheduled 4 days later at a smaller hospital. At this time, inflammatory signs had evolved and visual acuity had dropped to 20/40. Corneal edema, flare with cells in the anterior chamber, and vitreous condensations were present. A retinal hemorrhage at the margin of the optic disc was noted. The patient was prescribed topical corticosteroids on an hourly basis. There was significant deterioration during the following 7 days, and the patient was referred back to the regional referral hospital. The visual acuity was now hand movements and peripheral necrosis was evident. She was treated for suspected ARN because all criteria of ARN were met. Acyclovir was administered intravenously (15 mg/kg 3 times a day) and an aqueous tap was performed on the day of arrival. Aqueous humor was positive for VZV deoxyribonucleic acid (DNA) by TaqMan quantitative real-time polymerase chain reaction (qPCR) with a viral load of 1.1 × 10^9 DNA copies per milliliter [4].
Despite treatment, the patient subsequently deteriorated and complications occurred with peripheral circumferential detachment of the retina and hypotony. Visual acuity remained at hand movements at follow-up.
From the aqueous humor sample, DNA was extracted with the MagNA Pure LC Total Nucleic Acid Isolation Kit (Roche Diagnostics, Mannheim, Germany). No amplification by PCR or other target enrichment was done before sequencing, which was performed using the Ion Torrent/Ion S5 system (Life Technologies, Carlsbad, CA). We produced libraries of approximately 10,000,000 reads in total that were mapped to a VZV reference genome (Dumas strain; GenBank accession no. NC_001348.1) using the CLC Genomic Workbench 11 aligner (QIAGEN, Hilden, Germany); parameters are stated in the Supplementary Material. The number of reads aligned to VZV was 68,576. The complete genome was covered, with an average and a maximum coverage of 98.91 and 235 reads depth, respectively. Coverage graphs are found in the Supplementary Material. The consensus sequence was retrieved using the CLC Genomics Workbench 11 extract consensus sequence tool (QIAGEN), thereafter removing sequencing error-induced indels. All regions were included and aligned against previously reported strains from GenBank. The consensus strain is visualized in a phylogenetic network together with VZV strains derived from GenBank (Figure 1). The phylogenetic network was built using SplitsTree4, and all repeat regions or regions including gaps were removed before this analysis [5]. The intraocular viral genome clustered with clade 3 (Zos/Cli/SWE/aq/1908/2015 in Figure 1).
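The coverage summaries above can be reproduced from a mapped read alignment; a sketch assuming a sorted, indexed BAM of reads mapped to the Dumas reference (the file name is hypothetical, and the published pipeline used CLC Genomic Workbench, not this code):

```python
import numpy as np
import pysam

# Hypothetical file: aqueous humor reads mapped to the VZV Dumas
# reference (NC_001348.1)
bam = pysam.AlignmentFile("patient1_vzv.bam", "rb")
contig = bam.references[0]

# Per-base depth split by nucleotide (A, C, G, T), then summed
acgt = np.array(bam.count_coverage(contig))
depth = acgt.sum(axis=0)

print(f"genome covered: {100 * (depth > 0).mean():.2f}%")
print(f"average depth:  {depth.mean():.2f}")
print(f"maximum depth:  {depth.max()}")

# Naive majority-base consensus; a real pipeline also handles indels,
# quality filtering, and masking of low-coverage regions
consensus = "".join("ACGT"[b] if d > 0 else "N"
                    for b, d in zip(acgt.argmax(axis=0), depth))
```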
For comparison, an aqueous humor sample from an additional VZV ARN patient was also analyzed with high-throughput sequencing. The viral load in this sample was high, albeit lower than in the first patient (8.4 × 10^6 VZV DNA copies per milliliter by PCR). This patient initially exhibited classic signs of ARN, in contrast to the present case. However, both were of the same age and sex, without known immunodeficiency, and with similar clinical outcome. In the second sample, 4815 reads aligned with VZV. Ninety-eight percent of the total viral genome was obtained as previously described, although with a lower average coverage of 6.28 and a maximum of 244 reads. The strain clustered with clade 1 (depicted as Zos/Cli/SWE/aq/2109/2015 in Figure 1).
Because of the poor prognosis despite adequate antiviral treatment in both patients, the thymidine kinase and DNA polymerase genes of the strains were analyzed but showed no evidence of resistance mutations.
We also investigated single-nucleotide polymorphisms (SNPs) compared with the sample VZV consensus sequence and their frequencies, using the CLC Genomic Workbench 11 fixed ploidy variant detection tool (QIAGEN), which would suggest subpopulations in the same host. Frequencies of minority SNPs ranged from 1% to 67%. In Zos/Cli/SWE/aq/1908/2015 and Zos/Cli/SWE/aq/2109/2015, there were less than 10 SNPs in the whole VZV genome, indicating a low population diversity.

Figure 1. Phylogenetic network based on complete varicella-zoster virus genomes excluding gap and repeat regions. The strains sequenced here and the minority consensus strains for the respective strains are marked in red. The genomes of all other strains were downloaded from GenBank for comparison. The minority consensus strains cluster adjacent to the respective majority strains, indicating that the detected heterogeneity is a result of spontaneous mutation introduced after primary infection rather than of multiple infections with different strains.
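A sketch of a minority-variant scan in the spirit of the fixed-ploidy analysis above (not the CLC tool itself); the depth and frequency thresholds, and the file name, are assumptions:

```python
import numpy as np
import pysam

MIN_FRACTION = 0.01   # report minority alleles at >= 1% (assumed threshold)
MIN_DEPTH = 10        # skip low-coverage positions (assumed threshold)

bam = pysam.AlignmentFile("patient1_vzv.bam", "rb")     # hypothetical file
acgt = np.array(bam.count_coverage(bam.references[0]))  # shape (4, genome_len)
depth = acgt.sum(axis=0)

for pos in np.nonzero(depth >= MIN_DEPTH)[0]:
    counts = acgt[:, pos]
    major = int(counts.argmax())
    for base in range(4):
        freq = counts[base] / depth[pos]
        if base != major and freq >= MIN_FRACTION:
            print(f"pos {pos + 1}: minority {'ACGT'[base]} at {freq:.1%} "
                  f"(major {'ACGT'[major]} at {counts[major] / depth[pos]:.1%})")
```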
Heterogeneous within-host populations may be explained either by spontaneous mutations occurring after primary infection or by multiple infections with different strains. To investigate which of these scenarios underlies the heterogeneity detected here, we created consensus sequences including all minority SNPs for the respective strains. These were named Zos/Cli/SWE/aq/1908/2015(CM) and Zos/Cli/SWE/aq/2109/2015(CM) and included in the phylogenetic analyses (Figure 1). The minority consensus strains cluster adjacent to the majority strains, indicating that the SNPs are exclusive to these strains. This was further confirmed by manual comparison of each SNP to other VZV sequences available in GenBank. We thus conclude that these SNPs are results of spontaneous mutations after primary infection rather than of multiple infections or recombination of different strains.
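Constructing a minority consensus then amounts to substituting each minority allele into the majority consensus; a minimal sketch with hypothetical inputs:

```python
def minority_consensus(consensus, minority_snps):
    """Substitute each minority allele (1-based position, base) into the
    majority consensus to obtain the '(CM)' minority-consensus sequence."""
    seq = list(consensus)
    for pos, alt in minority_snps:
        seq[pos - 1] = alt
    return "".join(seq)

print(minority_consensus("ACGTACGT", [(2, "T"), (7, "A")]))  # -> ATGTACAT
```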
Patient Consent and Ethical Approval
Patient consent was obtained from both patients participating in this work. Ethical approval was granted by the Swedish Ethical Review Authority, and the tenets of the Declaration of Helsinki were followed.
DISCUSSION
We present a case with high IOP as an initial sign of ARN. Despite meticulous clinical examination, no signs of inflammation were present at the first visit. Intraocular hypertension is per se not an uncommon feature in ARN. However, the mechanism behind the raised pressure is believed to be secondary to inflammation, such as occlusion of the trabecular meshwork by inflammatory cells, synechiae formation blocking the aqueous circulation, or, after longstanding inflammation, synechiae in the iridocorneal angle. Virus trabeculitis has also been suggested, although it is considered a speculative term.
For ocular disease, latency of VZV after chickenpox is known to be localized in the trigeminal ganglion, which makes sense for reactivations such as herpes zoster ophthalmicus with cutaneous affection. However, in ARN, and in particular in this case, regions with mainly autonomic innervation are primarily affected, suggesting possible reactivation from autonomic ganglia (such as the ciliary ganglion). Richter et al [6] analyzed autonomic ganglia from human cadavers with PCR, and these were shown to harbor VZV DNA. An alternative autonomic path of reactivation for ARN was proposed.
Varicella-zoster virus is considered to have a well-conserved genome of approximately 125,000 base pairs, with few spontaneous mutations and geographically separated clades.
Variations in the nucleotide sequence are described as SNPs and can be used to distinguish clades phylogenetically. However, if there is a varying frequency of SNPs in the same host, this indicates subpopulations of virus. Depledge et al [7] investigated whether there is a difference in variability depending on body compartment and found a significantly higher variation in nonvesicle fluid (such as cerebrospinal fluid, bronchoalveolar lavage, serum) compared with vesicle fluid. Variation was also significantly higher in cerebrospinal fluid in encephalitis compared with meningitis. Samples from encephalitis patients had up to 169 SNPs in the VZV genome compared with vesicle fluid with an average of less than 10 SNPs. Zos/Cli/SWE/aq/1908/2015 and Zos/Cli/SWE/aq/2109/2015 in our material had less than 10 SNPs, resembling the variability in vesicle fluid. To our knowledge, the variability of VZV in ocular fluids has not previously been investigated, and we have no data on variability in other ocular clinical manifestations of VZV.
A possible explanation of higher variability is reactivation of virus from multiple ganglia or neurons in encephalitis. Low variability in ARN is therefore in keeping with presumed reactivation from a single ganglion, possibly combined with a bottleneck effect in the path from the reactivating ganglion to the eye.
Our observations confirm that deep sequencing can successfully be performed on aqueous humor and the data used for antiviral resistance detection and viral genotyping. The method might also be valuable for unbiased pathogen detection in ocular infections in which no causative agent has been found when using PCR or other targeting methods.
CONCLUSIONS
Acute glaucoma can be the first sign of ARN even in the absence of overt inflammation. Emerging inflammatory signs in the aftermath of acute glaucoma should alert physicians to the possibility of ARN, requiring high doses of antiviral treatment to stop progression. Despite the atypical presentation of a rare form of a serious VZV manifestation with a poor prognosis, the causative virus had a comparatively low genetic variability and clustered phylogenetically with other wild-type strains that are common in Europe. Variability was comparable to SNPs in VZV strains derived from vesicle fluid. Further investigations are needed to determine the genetic variability in viral ocular infection and its clinical relevance.
Supplementary Data
Supplementary materials are available at Open Forum Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author. | 2020-05-28T09:13:14.406Z | 2020-05-26T00:00:00.000 | {
"year": 2020,
"sha1": "3c9ef3e32702211a24c8356b09e972f53ac9800f",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/ofid/article-pdf/7/6/ofaa198/33411263/ofaa198.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5aa8ecc5c62b23e3144b19b25b1ca97bd13e79bc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250516348 | pes2o/s2orc | v3-fos-license | Parent and Adolescent Reports of Parental Monitoring and Sources of Parental Knowledge are Linked to Cannabis Use and Symptom Development in Adolescents
Objective. Greater discrepancies between parent and adolescent reports of parenting behaviors are associated with poorer adolescent functioning. The present research aims to build on the existing literature by examining unique parent and adolescent perceptions of parental monitoring and distinct sources of parental knowledge (i.e., parental solicitation, parental control, child disclosure) and their association with adolescent cannabis and alcohol use and disorder symptoms using cross-sectional data. Method. Parent-adolescent dyads (N = 132) were recruited from the community and the family court system. Adolescents were ages 12 to 18 (40.2% female; 68.2% White, 18.2% Hispanic). Parents and adolescents completed a questionnaire assessing the four domains of parenting behaviors. Adolescents’ substance-use behaviors and related disorder symptoms were assessed via adolescent self-report and semi-structured interviews. Results. Parental ratings of distinct parenting behaviors were higher (more favorable) than their child’s reports, as shown in prior studies. Parent-reported parenting behaviors were uniquely related to cannabis use, over and above adolescent reports and the adolescent’s age. With regard to report discrepancies, interactive effects of parent and adolescent perceptions of parental control were not statistically significant in our analysis after correcting for multiple tests. Conclusions. While most research relating parental monitoring to adolescent cannabis use relies solely on adolescent perceptions, our study suggests a unique role of parent perceptions for cannabis use and disorder symptoms, respectively. Findings support the importance of considering unique parent and adolescent perceptions of what parents know, as well as how they know it, to understand early cannabis use and problem development.
Adolescence is the developmental period most strongly associated with the initiation and escalation of cannabis use and the development of cannabis use disorder (CUD) (Volkow et al., 2021). Parental monitoring, or knowledge of the child's activities, whereabouts, and relationships, is associated with delayed initiation and lower levels of cannabis use (Lac & Crano, 2009; Neiderhiser et al., 2013) and substance-related problems (Branstetter & Furman, 2013). Indeed, interventions designed to delay or prevent adolescent substance-use problems often target monitoring as a key aspect of the parent-child relationship (Dishion et al., 2003; Kobak et al., 2017; Kuntsche & Kuntsche, 2016). Further, the degree of parent-adolescent disagreement in perceptions of monitoring is an indicator of poor relationship quality that is linked to adolescent alcohol use (Abar et al., 2015), cigarette smoking (Sartor et al., 2020), composite measures of substance-use initiation (Lippold et al., 2011), and composite measures of delinquent behaviors (Hou et al., 2018; Reynolds et al., 2011). The present analysis aims to build on prior work by testing unique relations of adolescent and parent perceptions of specific parental-monitoring constructs with the adolescent's likelihood of having used cannabis and meeting one or more symptom for CUD.
Parental monitoring reflects both parental knowledge of their adolescent's actions and relationships and how parents learn about their adolescent's behavior (Dishion & McMahon, 1998). Sources of parental knowledge include parental control (i.e., setting rules to control adolescent behavior), parental solicitation (i.e., engaging with the adolescent or other parents to gain information), and child disclosure (i.e., the adolescent's sharing or concealing of information). Parental control and solicitation are active, parent-initiated efforts to know the activities of their child, whereas child disclosure relies on the child to self-initiate sharing information. An expanding literature suggests that parental knowledge largely depends on adolescents' spontaneous and willing disclosure of activities, friendships, and whereabouts, rather than on parents' "monitoring" of them (Hou et al., 2018; Kapetanovic et al., 2019; Racz & McMahon, 2011). Thus, the distinction of parent-driven, active efforts to secure knowledge (i.e., solicitation and control) from child-driven processes (i.e., voluntary disclosure) is important for understanding which aspects of "monitoring" are protective against adolescent cannabis use and problem development.
Meta-analytic reviews demonstrate the protective role of adolescent perceptions of specific aspects of parental monitoring in relation to adolescent cannabis use (Lac & Crano, 2009), and the role of monitoring, broadly defined as parental knowledge, in relation to adolescent alcohol-use frequency/quantity and alcohol-related problems (Yap et al., 2017). Adolescent perceptions of parental monitoring, also broadly defined as knowledge, are prospectively linked to cannabis-use initiation (Bohnert et al., 2012; Epstein et al., 2017), and a range of alcohol-use behaviors and related problems (Yap et al., 2017). Yet, effect sizes for both cannabis and alcohol-related risks are modest, and substantial heterogeneity across studies limits accurate understanding of the effects of specific parental monitoring constructs (Lac & Crano, 2009; Yap et al., 2017). Factors contributing to heterogeneity of prior studies include use of conceptually broad and nonspecific measures of parental monitoring, as well as substance-use outcomes that vary in severity. Whereas studies evaluating monitoring effects on alcohol outcomes include frequency of intoxication, drunkenness, binge drinking, heavy drinking, level of use, escalation of use, alcohol-related problems, and severe and problematic use (Yap et al., 2017), studies of cannabis outcomes are primarily limited to less severe outcomes, such as lifetime use or recent use frequency (Bohnert et al., 2012; Epstein et al., 2017; Lac & Crano, 2009). At least one exception, however, identified a protective effect of adolescent-reported parental monitoring for a combined outcome assessing past-month frequency of negative consequences related to alcohol and other drug use (Branstetter & Furman, 2013).
Another gap in existing work assessing perceived parental monitoring, particularly in studies examining cannabis use, is that parent reports of monitoring are not assessed (Bohnert et al., 2012; Epstein et al., 2017; Keogh-Clark et al., 2021; Lac & Crano, 2009). Including both adolescents and parents as informants is important because: (1) obtaining parent and adolescent reports of parenting behaviors is standard practice in clinical settings, and (2) parent and adolescent reports of parenting behaviors often demonstrate small correlations that are thought to reflect clinically relevant information (De Los Reyes et al., 2019, 2022). Greater disagreement between parent and adolescent reports of parenting behaviors is linked to a wide range of problematic behavioral, academic, and mental health outcomes (de Los Reyes, 2011; Hou et al., 2018), including symptoms of depression, anxiety, and conduct disorder (Maurizi et al., 2012). Such discrepancies may reflect differences in perceptions, contexts (e.g., an adolescent does not observe all the times, locations, and strategies a parent uses to monitor their behavior), or underlying relationship and communication deficits (Lee et al., 2019). In fact, recent work suggests that one mechanism through which prevention interventions may reduce adolescent substance use is by decreasing discrepancies between parent and adolescent reports of parenting behaviors (Lee et al., 2019).
Understanding how parental monitoring relates to cannabis use and CUD development is further complicated by the use of aggregated outcomes that combine cannabis with other substances (Branstetter & Furman, 2013; Neiderhiser et al., 2013) and other "norm-breaking" behaviors (e.g., theft, vandalism, bullying, physical fights) (Voisin et al., 2012). Separating cannabis from other outcomes is important, in part, because attitudes towards cannabis and laws regulating cannabis use differ from those for alcohol or other drugs. For example, if youth view cannabis use as less harmful and more socially acceptable than other substances, it may foster child disclosure. Further, the frequency of use and the ease with which use is concealed is different for cannabis, alcohol, and other substances. For example, among adolescents who engage in substance use regularly, cannabis and nicotine use may occur daily or multiple times per day, whereas adolescent alcohol use tends to be sporadic, opportunistic, and contextually limited (e.g., on weekends, unsupervised, with peers) (Jackson, 2019; Johnston et al., 2020). Differences between use patterns and perceived harms of use may suggest that some parenting behaviors will be more effective than others for certain substances. For example, parental control may be effective for restricting alcohol use by reducing access to certain peers or unsupervised time, whereas child disclosure may be particularly relevant for cannabis use.
To our knowledge, few studies have explicitly examined the unique contribution of parent and adolescent perceptions of parental monitoring to risk for adolescent cannabis use (Cottrell et al., 2003; Rusby et al., 2018). Two studies showed nonsignificant (Cottrell et al., 2003) or modest (Rusby et al., 2018) correlations between adolescent and parent reports of monitoring, defined broadly as parental knowledge, suggesting parent-adolescent disagreement in perceptions of parental knowledge. Cottrell and colleagues (2003) showed that both parent and adolescent (ages 12 to 16) reports of lower monitoring related to adolescent alcohol use in the past six months. Only adolescent reports related to cannabis use in the past six months, however, and only adolescent-reported lower monitoring uniquely related to alcohol and cannabis use over parent-reported monitoring (Cottrell et al., 2003). In a prospective study, Rusby and colleagues (2018) also showed that both parent and adolescent (ages 13 to 14) reports of lower monitoring predicted onset of alcohol use, binge drinking, and cannabis use one year later. Only adolescent-reported lower monitoring, however, uniquely predicted cannabis use onset over parent-reported monitoring, adolescent and parent reports of the parent-child relationship, and parent substance use (Rusby et al., 2018). Prior studies provide essential foundational work relating adolescent and parent perceptions of parental monitoring, broadly defined as knowledge, with adolescent cannabis use (Cottrell et al., 2003; Rusby et al., 2018), upon which the present work aims to build. Extant studies have focused on younger adolescents and onset of cannabis use; whether these associations generalize to a broader age range of youth and to other cannabis-related outcomes remains unknown. This is an important question given that risk for cannabis use and CUD onset markedly increases as adolescents age (Han et al., 2019; Volkow et al., 2021). In addition, most studies assess only self-reported, adolescent perceptions of monitoring without exploring parent perceptions (Cutrín et al., 2021; Lac & Crano, 2009; Marceau et al., 2020; Neiderhiser et al., 2013; Rusby et al., 2018), and most examine parental monitoring operationalized strictly as knowledge (what parents know), absent the sources of this knowledge (how they know it) (Cottrell et al., 2003; Cutrín et al., 2021; Neiderhiser et al., 2013; Rusby et al., 2018). This is a significant limitation given that parents' active efforts to secure knowledge (e.g., solicitation and control) are modifiable parenting behaviors that increase knowledge directly and through promoting child disclosure to prevent adolescent substance-use problems (Hernandez et al., 2015; Jiménez-Iglesias et al., 2012; Soenens et al., 2006). Moreover, prior work has demonstrated distinctive relations of specific monitoring and source-of-knowledge domains in pre- to early adolescence when evaluating parent-adolescent report discrepancies and alcohol use (Abar et al., 2015), as well as aggregated delinquency outcomes that include trying cannabis and driving while high (Bouffard & Armstrong, 2021). It is possible that parent and adolescent contributions to monitoring and sources of knowledge differ as adolescents increase use frequency and develop problems, and parent and adolescent perceptions of these practices may have unique importance for understanding use and disorder development.
The current study leveraged cross-sectional data from a larger investigation (Miranda et al., 2010, 2013) to fill gaps in understanding how specific parent- and adolescent-reported parental monitoring domains relate to whether an adolescent had ever used cannabis and whether they met criteria for one or more CUD symptoms in the past year. Our goal was to build on prior cannabis research by studying a broader age range of youth, ages 12 to 18 years, examining indices of lifetime use and problem development, and testing unique relations of parent and adolescent reports across four key domains: parental monitoring (knowledge), parental solicitation, parental control, and child disclosure. Prior work has focused on these domains as related to likelihood of any cannabis use (Cottrell et al., 2003; Rusby et al., 2018), a range of alcohol-related outcomes (Abar et al., 2015), and delinquency. This investigation is the first to explore unique parent and adolescent associations with an early indicator of risk of developing one or more CUD symptoms. We hypothesized that parent reports of parenting would, on average, be higher (more favorable) than adolescent reports of the same parenting practices, as widely demonstrated by prior literature (Maurizi et al., 2012; Reidler & Swenson, 2012; Reynolds et al., 2011). We also expected the "pure" parental knowledge domain and child disclosure to relate more strongly to cannabis use and symptoms, as suggested by seminal papers and meta-analyses of adolescent cannabis use (Lac & Crano, 2009) and alcohol-related behaviors and problems (Yap et al., 2017).
Our analysis also included past-year consumption of two or more alcoholic drinks in one sitting and likelihood of meeting criteria for one or more AUD symptoms in the past year. It was difficult to speculate whether parent or adolescent reports of monitoring domains would matter more for adolescent-reported cannabis and alcohol outcomes. Whereas studies of outcomes specific to cannabis use tend to favor adolescent reports (Cottrell et al., 2003; Rusby et al., 2018), a recent prospective study showed added value for parent reports of parental knowledge, but not parental control, when predicting a composite measure of property offending, which included driving while drunk or high (Bouffard & Armstrong, 2021). This study sampled older adolescents (ages 14 to 18) than previous work and utilized a more severe and broad delinquency outcome (Bouffard & Armstrong, 2021). A prospective study of younger adolescents found unique effects of parent-reported control when examining a more severe alcohol-related outcome, i.e., ever having been drunk, but generally found that adolescent, but not parent, reports of monitoring predicted likelihood of ever having a drink of alcohol (Abar et al., 2015). Taken together, these studies may suggest unique effects of parent-reported monitoring for older adolescents and more severe substance-related outcomes. Given the limited body of work examining unique associations of adolescent and parent perceptions of distinct monitoring domains, however, no a priori hypotheses were forwarded with respect to differences in parental monitoring-substance use associations by substance type. Our analysis includes alcohol outcomes to draw out any distinctions between alcohol and cannabis in the same adolescent sample and to extend the age range of prior work from pre- to early adolescence to later adolescence (ages 12 to 18).
Participants
Participants were 132 adolescent-parent dyads from a larger study (n = 253) that sought to examine how differences in decision making and reactions to emotional situations are associated with adolescent problem behaviors, including alcohol and other substance use (Miranda et al., 2010, 2013). Adolescents were recruited from the community and the family court system. Eligible youth (ages 12-19 years) had no history of traumatic brain injury, hearing difficulties, suicidal ideation, or psychotic symptoms. A negative urine toxicology screen for alcohol, amphetamines, barbiturates, benzodiazepines, cocaine, and opiates was also required on the day of assessment. Parent data were most commonly provided by the youth's mother. Interested youth called the lab to learn more about the primary study and to complete a brief telephone screening to determine initial eligibility. Individuals who passed the initial screening and did not endorse exclusionary criteria received an invitation to complete an in-person screening and, if applicable, provide written informed consent or assent. Parents/legal guardians were required to provide permission for youth younger than age 18 years; assent was obtained from minors. Youth who were eligible participated in the half-day assessment session, which included completion of self-report, paper-and-pencil measures and a semi-structured clinical interview. One parent/legal guardian for each participant was invited to participate and complete semi-structured interviews and self-report assessments about their adolescent's psychiatric functioning and developmental history, but caregiver involvement was not required. With this approach, youth whose parents/legal guardians were unavailable or unwilling could still participate in the study. The university Institutional Review Board approved all study procedures.
Alcohol and cannabis use disorder symptoms.
Psychiatric diagnoses, including substance use disorders, were obtained using the Kiddie Schedule for Affective Disorders for School-Age Children (K-SADS; Kaufman et al., 1997), a clinician-administered semi-structured interview based on Diagnostic and Statistical Manual of Mental Disorders criteria (4th ed.; DSM-IV-TR; American Psychiatric Association, 2000). Adolescents were interviewed separately from parents, and diagnoses were based on adolescent reports. Interviewers underwent systematic training and achieved a high level of inter-rater reliability (kappa > 0.90). Symptoms were coded according to severity (0 = not present, 1 = subthreshold, 2 = clinical threshold). For each criterion, adolescents who met threshold were coded as meeting that AUD or CUD symptom criterion, and coding was verified through case consensus involving two licensed clinical psychologists. Due to low base rates of meeting clinical DSM-IV-TR diagnoses of alcohol abuse or dependence, participants were classified according to whether they met at least one symptom of AUD. We used the same approach for CUD.

Alcohol use. Alcohol use was measured with a single item from an introductory section to the K-SADS section on AUD. The item asked, "Have you drank 2 drinks in 1 sitting within the last year?" Responses were coded as yes or no.
Cannabis use. Cannabis use was measured with a drug-use checklist from the K-SADS. Participants were asked, "Have you used any of the drugs on this list before, even if you have only tried them once. Which ones have you used?" Cannabis use was coded as either yes or no, thus identifying a broad range of youth who may be at risk for problematic cannabis use.
Parental monitoring. Parents and adolescents separately completed the Parental Monitoring Questionnaire (PMQ), a 9-item questionnaire assessing parental knowledge of child activities. Parent and adolescent versions of the PMQ shared identical content with minor changes in wording to reflect the parent/adolescent perspective. For example, adolescents responded to "Do your parents…: know what you do during your free time? …know who you have as friends during your free time?", whereas parents responded to "Do you…: know what your child does during his or her free time? …know who your child has as friends during his or her free time?" Responses were indicated with 5-point Likert scales (1 = No, never; 2 = Some of the time; 3 = About half the time; 4 = More than half, but not always; 5 = Yes, always).
Response averages were calculated separately for parent and adolescent reports.
Sources of parental knowledge. Parents and adolescents also separately completed the Sources of Parental Knowledge Scales, which assessed parental solicitation (5 items), parental control (3 items), and child disclosure (4 items). Similar to the parental monitoring scale, parent and adolescent versions were identical in content other than wording referring to whose perspective was being assessed. The parental solicitation, parental control, and child disclosure scales were developed in prior work. These variables add information about parents' own efforts to find out what their children are doing as well as a child's willingness to divulge this information spontaneously. Example items from the adolescent versions of these scales are: "How often do you need to ask your parents before you can decide with your friends what you will do on a Saturday evening?" (Parental Control), "During the past month, how often have your parents started a conversation with you about your free time?" (Parental Solicitation), and "If you are out at night, when you get home, how often do you tell your parents what you have done that evening?" (Child Disclosure). One child disclosure item, "How often do you hide from your parents about what you do during nights and weekends?" was reverse coded. Parent and adolescent response averages were calculated separately for each scale.
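To make the scoring concrete, the sketch below averages Likert items after reverse coding, in the spirit of the scales described above. It is a minimal illustration in Python; the column names are hypothetical, not the study's actual variable names.

    import pandas as pd

    def score_scale(df, items, reverse=(), low=1, high=5):
        """Average 1-5 Likert items after reverse-coding the listed ones."""
        data = df[items].copy()
        for item in reverse:
            data[item] = low + high - data[item]  # 1 <-> 5, 2 <-> 4, ...
        return data.mean(axis=1)

    # Hypothetical child-disclosure items; "disc_4" stands in for the
    # concealment item, which is reverse coded before averaging.
    adol = pd.DataFrame({"disc_1": [5, 2], "disc_2": [4, 1],
                         "disc_3": [5, 2], "disc_4": [2, 5]})
    adol["child_disclosure"] = score_scale(
        adol, ["disc_1", "disc_2", "disc_3", "disc_4"], reverse=["disc_4"])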
Analytic Plan
First, dependent-samples t tests evaluated differences in average raw scores of parent and adolescent reports of parental monitoring, parental solicitation, parental control, and child disclosure. Point-biserial correlations related raw scores of these variables and age with binary substance-use outcomes (i.e., drank two drinks in the past year, ever used cannabis, 1+ symptoms of AUD, 1+ symptoms of CUD). Relations of nominal covariates with binary substance-use variables used the phi coefficient (gender, ethnicity) and Cramer's V (race). Only covariates with significant relations to outcomes were retained in subsequent models.
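For readers who wish to reproduce this style of screening, a minimal Python sketch of the bivariate tests follows; the arrays are random stand-ins, not the study data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 132
    age = rng.integers(12, 19, n)            # continuous covariate
    gender = rng.integers(0, 2, n)           # binary covariate
    race = rng.integers(0, 4, n)             # nominal covariate (>2 levels)
    used_cannabis = rng.integers(0, 2, n)    # binary outcome
    parent_mon = rng.normal(size=n)          # parent-reported monitoring
    adol_mon = rng.normal(size=n)            # adolescent-reported monitoring

    t, p_t = stats.ttest_rel(parent_mon, adol_mon)         # dependent-samples t test
    r_pb, p_pb = stats.pointbiserialr(used_cannabis, age)  # continuous vs. binary
    phi = stats.pearsonr(gender, used_cannabis)[0]         # phi = Pearson r on 0/1 codes

    # Cramer's V for the nominal race variable
    table = np.array([[np.sum((race == i) & (used_cannabis == j))
                       for j in (0, 1)] for i in range(4)])
    chi2 = stats.chi2_contingency(table, correction=False)[0]
    cramers_v = np.sqrt(chi2 / (table.sum() * (min(table.shape) - 1)))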
Next, sets of logistic regression models tested whether parent and adolescent perceptions of parental monitoring and sources of knowledge (i.e., parental solicitation, parental control, and child disclosure) uniquely related to substance-use outcomes. Parent and adolescent scale scores were standardized (z-scores) prior to model entry. Domains of parental behaviors were analyzed in separate models (Abar et al., 2015). All models include covariates in a first step. In Model 1, a second step included parent and adolescent standardized scores. Inclusion of both parent and adolescent reports in the same model allows the following interpretation: (1) parent score main effects indicate the influence of parents' reports of parenting behaviors, accounting for or apart from (subtracting) the influence of adolescents' reports, and (2) adolescent score main effects indicate the influence of adolescents' reports of parenting behaviors, accounting for or apart from (subtracting) the influence of parents' reports. Of note, prior research (e.g., Abar et al., 2015) also tested models including discrepancy (i.e., difference) scores and either parent or adolescent standardized scores. Such models have statistical and conceptual limitations (Cronbach & Furby, 1970; Edwards, 1994, 2001) and are mathematically equivalent to including standardized parent and adolescent scores simultaneously in the same model; thus, difference scores were not tested (Laird, 2020).
Following current recommendations (Laird & de Los Reyes, 2013; Laird, 2020), we also modeled discrepant parent-adolescent perceptions by examining interactive effects of parent and adolescent reports. Model 2 included the interactive effects of parent and adolescent standardized scores. This moderation approach provides a statistical test of whether adding informant discrepancies to the model provides unique information above and beyond Model 1 (Laird & Weems, 2011). For all models, we applied a Bonferroni correction to account for testing effects for two substance-use outcomes, with the adjusted p-value threshold for significance = .025.
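As a sketch of the modeling steps (covariate-only, unique main effects, then the parent x adolescent interaction), assuming statsmodels and simulated stand-in data rather than the study dataset:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 132
    age = rng.integers(12, 19, n).astype(float)
    parent = rng.normal(size=n)              # parent-reported domain score
    adol = rng.normal(size=n)                # adolescent-reported domain score
    ever_used = rng.integers(0, 2, n)        # binary outcome

    z = lambda v: (v - v.mean()) / v.std()   # standardize before model entry

    # Step 1: covariate-only model (age)
    m0 = sm.Logit(ever_used, sm.add_constant(age)).fit(disp=False)

    # Model 1: unique main effects of parent and adolescent reports
    X1 = sm.add_constant(np.column_stack([age, z(parent), z(adol)]))
    m1 = sm.Logit(ever_used, X1).fit(disp=False)

    # Model 2: add the parent x adolescent interaction (discrepancy test)
    X2 = sm.add_constant(np.column_stack([age, z(parent), z(adol),
                                          z(parent) * z(adol)]))
    m2 = sm.Logit(ever_used, X2).fit(disp=False)

    odds_ratios = np.exp(m1.params)          # OR < 1 indicates lower odds
    bonferroni_alpha = 0.05 / 2              # two outcomes -> p < .025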
Descriptive Information and Bivariate Associations
Of the 132 adolescent participants, 22 (16.7%) consumed two drinks in one sitting within the last year. Thirty-eight (28.8%) reported ever trying cannabis, of whom 22 (57.9%) used more than once a month. Using DSM-IV-TR criteria, 12 (9.1%) met criteria for one or more CUD symptoms and 8 (6.1%) met criteria for one or more AUD symptoms. The average number of criteria endorsed among participants who met criteria for at least one symptom was as follows: CUD M = 3.25 (SD = 1.71), AUD M = 1.63 (SD = 1.06).
Consistent with hypotheses and prior research, parents reported significantly higher average (more favorable) parenting behaviors than adolescents. Bivariate correlations of parent and adolescent reports of parenting behaviors are shown in Table 1.
Parent reports were interrelated, rs from .26 to .71, ps < .004. Adolescent reports were also interrelated, rs from .45 to .71, ps < .001. Parent and adolescent reports of the same monitoring domain were modestly related for parental monitoring, r = .30, p = .001, parental solicitation, r = .24, p = .005, and child disclosure, r = .43, p < .001, suggesting some lack of agreement among reporters. Parent and adolescent reports of parental control were not significantly related, r = .16, p = .061. Of note, between-reporter correlations of the same domain generally demonstrated lower agreement than within-reporter correlations of unique domains. Adolescent reports of solicitation were also not related to parent reports of monitoring, r = .11, p = .192, or control, r = −.14, p = .101. Likewise, adolescent-reported child disclosure was not related to parent-reported control, r = −.01, p = .905.
Bivariate relations of cannabis and alcohol variables with covariates (i.e., gender, age, race of adolescent, ethnicity of adolescent) and parent and adolescent raw scores are shown in Table 2. Of the putative covariates, only the adolescent's age significantly related to cannabis and alcohol use and problem development, rs from .23 to .46, ps < .010.
The adolescent's gender, racial identity, and ethnic identity were not significantly related to these outcomes (see Table 2). With regard to parenting behaviors, adolescent-reported parental monitoring, parental control, and child disclosure were negatively related to adolescent cannabis and alcohol use and one or more CUD/AUD symptom(s). Adolescent-reported solicitation was only negatively related to AUD symptom development, r = −.18, p = .046, but not cannabis-use outcomes or past-year alcohol use, rs from −.08 to −.13, ps ≥ .158. With a slightly different pattern, parent-reported monitoring and child disclosure were generally negatively related to these substance-use outcomes (see Table 2). For control, however, parent reports demonstrated negative relations to cannabis outcomes, but not alcohol outcomes (see Table 2). Parent-reported solicitation was only negatively related to lifetime cannabis use, r = −.37, p < .001, but not cannabis problem development or alcohol outcomes, rs from .02 to −.14, ps ≥ .112.
Unique Parent and Adolescent Report Relations to Cannabis Use and Symptoms
Of putative covariates, only age was significantly related to substance-use outcomes in bivariate analyses, and thus, it was the only covariate retained in logistic regression models (see Tables 3 and 4, Step 1). Alone, age explained 11% to 22% of the variance in cannabis use and problem development (pseudo-R² values from .11 to .22). Specifically, each one-year increase in age was associated with more than doubled odds of having ever used cannabis, OR = 2.72, p < .001, or meeting one or more CUD symptoms in the past year, OR = 2.24, p = .014.
Step 2, Models A and B are age-adjusted models testing individual effects of adolescent- and parent-reported parenting variables separately for each of the four parenting domains.
Step 3 tested unique effects of parent- and adolescent-reported parenting variables. Models including both adolescent and parent reports of monitoring explained 13% to 43% of the variance in cannabis outcomes, reflecting an increase in pseudo-R² values of .02 to .21 (i.e., 2% to 21%) over age-only models. Parent-reported higher levels of monitoring, solicitation, and child disclosure all related to reduced odds of the adolescent having ever tried cannabis, ORs = 0.33, 0.48, and 0.41, respectively, ps < .025, over and above adolescent-reported parenting behaviors and the adolescent's age. These odds ratios suggest that each one-unit increase in parent-reported positive parenting practices was associated with a 52% to 67% reduction in the odds of adolescents' engagement in cannabis use.
[Notes to Tables 3 and 4, the latter reporting past-year alcohol use models: All independent variables were standardized prior to model entry. A Bonferroni correction for Type I error for tests of two outcomes requires p < .025. †p < .05, *p < .025, **p < .01, ***p < .001.]
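The percent-reduction figures quoted above follow directly from the odds ratios; a one-line check, with values taken from the text:

    import numpy as np

    ors = np.array([0.33, 0.48, 0.41])   # monitoring, solicitation, child disclosure
    print((1 - ors) * 100)               # -> [67. 52. 59.] percent lower odds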
Parent-reported parental control was not significantly related to reduced odds of using cannabis after correcting for multiple outcome tests, p = .047. Adolescent perceptions of the same parenting behaviors were not related to engagement in cannabis use, over and above parent reports and the adolescent's age. Neither parent nor adolescent reports of parenting domains uniquely related to odds of meeting one or more CUD symptoms (see Table 3, Step 3). Adolescent-reported parental monitoring, which reflects parental knowledge alone and not sources of knowledge, was related to CUD symptoms in an age-adjusted model, OR = 0.47, p = .017, but not significantly related to CUD symptoms after accounting for parent reports and correcting for multiple tests, p = .026.
In a final step, interactive effects of parent and adolescent reports were added to evaluate whether explicitly modeling the combination of patterns of informant discrepancies (e.g., high parent report, low adolescent report) provides additional information over models testing unique associations. No interactive effects of parent and adolescent perceptions of monitoring domains were significant after a stringent Bonferroni correction for multiple tests, ps ≥ .027.
Unique Parent and Adolescent Report Relations to Alcohol Use Outcomes

Of all outcomes, age was most influential for reports of past-year alcohol use, with the likelihood of an adolescent reporting having two drinks in one sitting more than quadrupling for each one-year increase in age, OR = 4.60, p < .001.
Models including adolescent and parent reports of monitoring explained 24% to 49% of the variance in alcohol outcomes, reflecting an increase in pseudo-R² values of .01 to .29 (i.e., 1% to 29%) over age alone (see Table 4).
Adolescent-reported parental monitoring and parental solicitation outperformed parent perceptions of these same behaviors in relation to past-year alcohol use, OR = .56, p = .045, and alcohol-related problem development, OR = .23, p = .018, respectively. Although not statistically significant, parent-reported solicitation was actually related to greater odds of meeting one or more AUD symptoms when also considering adolescent perceptions and the adolescent's age, OR = 2.49, p = .092. Parent-reported control was marginally related to lower odds of past-year alcohol use, OR = .50, p = .034, and significantly related to lower odds of problem development, OR = 0.27, p = .017. Interactive effects of parent and adolescent reports were not significant.
Discussion

Parental knowledge of their child's whereabouts, activities, or relationships, i.e., parental monitoring, is linked to lower risk for adolescent cannabis use (e.g., Bohnert et al., 2012; Epstein et al., 2017; Lac & Crano, 2009), as well as alcohol use and early indices of alcohol problems (Yap et al., 2017). Although disagreement between parent and adolescent reports of parenting behaviors is the rule rather than the exception (de Los Reyes, 2011; Korelitz & Garber, 2016), evaluations of the influence of monitoring on substance-use outcomes primarily focus on adolescent perceptions of parental knowledge. Exceptions for cannabis are limited to studies of unique effects of parent- and adolescent-reported parental knowledge on cannabis-use onset or frequency (Branstetter & Furman, 2013; Cottrell et al., 2003; Rusby et al., 2018), without attention to sources of that knowledge (i.e., active parent efforts of solicitation and control, or child disclosure) or cannabis-specific outcomes. Prior work in other related risk domains also focuses on the pre- to early adolescent years (Abar et al., 2015; Lippold et al., 2011). The present cross-sectional analysis builds on prior literature to understand how specific parent- and adolescent-reported parental monitoring domains uniquely relate to cannabis and alcohol use and the likelihood of meeting one or more disorder symptoms in adolescents (ages 12 to 18). Overall, a meaningful percentage of variance in cannabis and alcohol use outcomes was explained by accounting for parent and adolescent reports of parental monitoring and sources-of-knowledge parenting behaviors.
A main contribution of this work was examining not only "pure" parental monitoring knowledge but also sources of that knowledge for understanding adolescent cannabis use and an early index of problem development. Sources of knowledge include active parenting strategies (i.e., solicitation and control) and aspects of "parental" monitoring (i.e., child disclosure) that rely on the child's behavior rather than the parents'. Prior literature suggests that parental knowledge may actually stem from the child's disclosure (or concealing) of their behavior and that greater fluctuations in adolescent-reported child disclosure over time are predictive of cannabis use initiation (Marceau et al., 2020). In our analysis, lower parent reports of parental knowledge and child disclosure were most consistently related to cannabis and alcohol use and problem development. This fits with prior meta-analyses indicating that parental monitoring is most predictive of cannabis (Lac & Crano, 2009) and alcohol (Yap et al., 2017) outcomes when characterized as parental knowledge.
Findings from the present study suggest that parent reports of their child's disclosure may be particularly relevant for adolescent cannabis use. If youth can conceal their cannabis use from parents through less conspicuous modalities of administration, such as edibles or vaping, protective "monitoring" effects may be contingent on parents acquiring information from their adolescents (i.e., through disclosure). Relatedly, in a longitudinal examination of these constructs, decreases in youth-reported child disclosure over time were predictive of cannabis initiation (Marceau et al., 2020). These findings highlight the importance of obtaining both adolescent and parent reports of parental knowledge and sources of knowledge. Replication of the potentially important role of parent-reported child disclosure in adolescent cannabis use is critical, as prior work has relied on adolescent self-reports and predominantly assessed parental monitoring but not sources of knowledge (Bohnert et al., 2012; Epstein et al., 2017; Lac & Crano, 2009). Additionally, correlations with substance-use outcomes suggested that parental solicitation is a less useful strategy for curtailing adolescent substance use (and avoiding related problems) than the adolescent's free, willing disclosure (or concealing) of their activities, as noted in a seminal paper. Indeed, Marceau and colleagues (2020) paradoxically found that higher consistency in child reports of parental solicitation efforts over time was related to cannabis initiation; the authors posited that associations between lability in parental solicitation and lower risk of cannabis initiation over time could be conceptualized clinically as a parent's ability to skillfully modulate levels of parental solicitation as needed, rather than inconsistent parenting (Marceau et al., 2020). Disagreement in parent and adolescent perceptions of parenting behaviors is long recognized as a meaningful construct for understanding adolescent functioning (Achenbach et al., 1987; De Los Reyes & Kazdin, 2005; Guion et al., 2009). As expected, parental ratings of the parents' behaviors were higher (more favorable) than adolescent reports. Parent and adolescent perceptions of the same parental monitoring knowledge and source-of-knowledge domains were also only modestly correlated, suggesting disagreement between reporters. Notably, parent and adolescent reports of parental control were not correlated, which is the same pattern observed in Abar and colleagues' study of pre- to early adolescent sipping, drinking, and drunkenness (Abar et al., 2015). Although prior literature suggests that greater disagreement in parent and adolescent perceptions relates to greater risk of adverse outcomes (de Los Reyes, 2011; Lippold et al., 2011), interactive effects of parent and adolescent perceptions were not statistically significant in our analysis after correcting for multiple tests.
With regard to unique effects of informant reports, many were found for cannabis, whereas few were observed for alcohol, and those observed favored parent or adolescent reports depending on the parental monitoring domain. For cannabis, parent reports tended to provide unique information for understanding lifetime cannabis use, but not problem development, over adolescent reports and the adolescent's age. These findings contrast with extant studies of pre- to early adolescent cannabis use (Cottrell et al., 2003; Rusby et al., 2018) and alcohol use (Abar et al., 2015). Our lack of unique influences of parent and adolescent reports for alcohol may be due to the relatively low base rates of drinking and AUD symptoms in our sample, so these findings should be interpreted with caution. One possible explanation for the greater influence of parent-reported behaviors on lifetime cannabis use, rather than CUD symptoms, is that the predictive utility of parent and adolescent reports may differ depending on the specific cannabis use behavior assessed. Parents' knowledge of whether their adolescent has ever used cannabis may be more accurate than their knowledge of the degree of frequent or problematic use (Piehler et al., 2020). Regardless of the specific explanation for the findings in the current study, the results highlight the need for further research to understand the predictive ability of discrepant parent and adolescent reports more fully. Future research should consider more extensive testing of parent-adolescent discrepancies in parenting in relation to a range of cannabis use behaviors, such as age of initiation, frequency of use, quantity of use, and cannabis-related problems.
Limitations
Additional limitations of this study should be acknowledged. Perhaps most problematic is that these data were collected at only one time point. While our disaggregation of distinct parenting domains and focus on cannabis-use outcomes move the field forward, our cross-sectional design limits the ability to make predictive claims. Additionally, while aspects of the sample, such as the broader age range, make it more generalizable, the limited number of adolescent-reported disorder symptoms may have restricted our ability to find effects, particularly for alcohol-related problem development. Future work should study a larger sample at multiple time points to support more substantial predictive conclusions. Larger samples would also facilitate alternative approaches to modeling parent-adolescent discrepancies. Recent work suggests using both variable-centered (e.g., interactions or polynomial regressions) and person-centered analytic techniques (e.g., latent profile analysis) to examine parent-adolescent discrepancies (De Los Reyes et al., 2019). Future work with sufficiently large samples could employ both of these analytic strategies to obtain a more nuanced understanding of parent-adolescent discrepancies and their associations with adolescent cannabis use. Finally, as with many studies of parenting behaviors, far more mothers than fathers served as parent reporters. Future studies may consider balancing parent recruitment on the basis of the parent's gender. Information from fathers could provide an additional perspective on this topic.
Conclusions
Disaggregating the broad parental monitoring construct is one method of resolving inconsistencies in prior literature describing relations with adolescent substance-use outcomes (Lac & Crano, 2009; Yap et al., 2017). Future studies of parental monitoring should consider both what parents know, as well as how they know it, from the perspective of parents and youth, to better understand adolescent substance use and disorder development. Our study provides a meaningful step toward isolating the components of parental monitoring and sources of knowledge that most strongly relate to adolescent cannabis use and the development of disorder symptoms. We built on prior cannabis research by studying a broader age range of youth (ages 12 to 18 years) and evaluating one potential early index of cannabis-related harms. Future research should aim to study a larger sample of adolescents to capture a larger proportion who develop CUD symptoms and also follow these adolescents over time to study predictive relations. Findings suggest that parent-reported monitoring is a unique feature to consider in conjunction with adolescent perceived parenting, and point to targeting parent perceptions of parental monitoring and sources of parental knowledge (especially child disclosure) in adolescent substance use prevention/intervention programs.
"year": 2022,
"sha1": "f387a5a68fca066949a087ddd771ec08c1ec26c9",
"oa_license": "CCBYNCND",
"oa_url": "https://publications.sciences.ucf.edu/cannabis/index.php/Cannabis/article/download/118/74",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7a21fce4b86ec30318e202263be1db6ea85202d5",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
High-fidelity optical diffraction tomography of multiple scattering samples
We propose an iterative reconstruction scheme for optical diffraction tomography that exploits the split-step non-paraxial (SSNP) method as the forward model in a learning tomography scheme. Compared with the beam propagation method (BPM) previously used in learning tomography (LT-BPM), the improved accuracy of SSNP maximizes the information retrieved from measurements, relying less on prior assumptions about the sample. A rigorous evaluation of learning tomography based on SSNP (LT-SSNP) using both synthetic and experimental measurements confirms its superior performance compared with that of the LT-BPM. Benefiting from the accuracy of SSNP, LT-SSNP can clearly resolve structures that are highly distorted in the LT-BPM. A serious limitation for quantifying the reconstruction accuracy for biological samples is that the ground truth is unknown. To overcome this limitation, we describe a novel method that allows us to compare the performances of different reconstruction schemes by using the discrete dipole approximation to generate synthetic measurements. Finally, we explore the capacity of learning approaches to enable data compression by reducing the number of scanning angles, which is of particular interest in minimizing the measurement time.
Introduction
Quantitative-phase imaging (QPI) enables the measurement of the phase-contrast information of transparent samples such as biological cells. QPI contrast is generated from the refractive index (RI) contrasts within and around a sample. Because this contrast mechanism is endogenous, quantitative-phase information does not require external labeling, such as immunostaining, which may perturb the sample. QPI contains the coupled information of sample thickness and RI contrast. Optical diffraction tomography (ODT) provides the 3D RI distribution of a sample by combining multiple 2D QPI measurements from various illumination angles 1,2. Reconstructed tomograms provide structural information that has been extensively utilized to study hematology 3,4, morphological parameters 5, and biochemical information 6, as summarized in several review papers 2,7-9. In ODT, the way in which multiple 2D measurements are combined into unified 3D information is critical. Under the assumption of a weakly scattering sample, the Wolf transform 10 has been widely used. Depending on how the 2D projections are processed, we obtain either the Born or Rytov approximations for the Wolf transform 1. Each method has its limitations 11, but the Rytov approximation is known to be more appropriate than the Born approximation for many biological applications 12. However, when a sample is thicker and more complex, the Rytov approximation is no longer valid. This limits the usefulness of ODT for imaging complex samples.
Recently, methods have emerged to overcome the limitations of the Born and Rytov approximations by taking multiple scattering into account 13-20. It was shown using Mie theory 20 that learning tomography (LT) 14,21, an approach that exploits the beam propagation method (BPM) as the forward model to capture multiple scattering, has superior performance compared with that of the conventional imaging method based on the Rytov approximation. We refer to it as the LT-BPM. LT uses a forward model that divides 3D samples into multiple slices followed by slice-by-slice propagation. Due to this multislice modeling, the resulting structure is similar to that of a neural network, and we can use the error back-propagation algorithm to calculate the gradient. The BPM consists of two steps: non-paraxial diffraction followed by phase modulation. The diffraction step used in the BPM assumes $k_0 n(x, y, z) \approx k_0 n_0$, where $k_0$ is the free-space wavenumber, $n_0$ is the RI of the medium, and $n(x, y, z)$ represents RI variations. In addition, the phase modulation step uses a distance, $dz/\cos\theta$, to modulate the phase throughout propagation, given the propagation step ($dz$) and the illumination angle ($\theta$). However, for thicker and more complex samples, as light propagates through the sample, multiple diffracted beams are generated, and it is not valid to use the single value $dz/\cos\theta$ to represent optical path lengths. The deviation from the fixed distance $dz/\cos\theta$ increases with the illumination angle due to the nature of the cosine function 22.
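To make the two-step structure explicit, a minimal NumPy sketch of one BPM slice update follows (square grid, angular-spectrum diffraction in the homogeneous background). All names and parameter choices are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def bpm_step(u, dn_slice, dz, wavelength, n0, dx):
        """One BPM slice: diffraction in the background medium, then phase modulation."""
        k0 = 2 * np.pi / wavelength
        f = np.fft.fftfreq(u.shape[0], d=dx)
        kx, ky = np.meshgrid(2 * np.pi * f, 2 * np.pi * f, indexing="ij")
        kz = np.sqrt((k0 * n0) ** 2 - kx ** 2 - ky ** 2 + 0j)  # evanescent waves decay
        u = np.fft.ifft2(np.fft.fft2(u) * np.exp(1j * kz * dz))  # non-paraxial diffraction
        return u * np.exp(1j * k0 * dn_slice * dz)               # thin phase screen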
In this paper, we show that the accuracy of LT reconstructions of a 3D object is increased when we use the split-step non-paraxial (SSNP) method rather than the BPM. We refer to it as LT-SSNP. The SSNP method exploits not only the field but also the derivative of the field along the optical axis to model the propagation 23,24. While the BPM requires the approximation $k_0 n(x, y, z) \approx k_0 n_0$ to decouple diffraction from phase modulation, SSNP does not require this approximation, benefiting from propagating the derivative of the field at the same time. Phase modulation affects the derivative, which is used concurrently in the next diffraction step. LT-SSNP uses the same iterative scheme as the LT-BPM. To fairly assess LT-SSNP and compare it with the LT-BPM, synthetic measurements are generated using Mie theory and the discrete dipole approximation (DDA). For spherical and cylindrical objects, Mie theory provides the analytical solution to the Helmholtz equation 25. Therefore, the solution of Mie theory takes into account multiple scattering. Here, we also use the DDA to simulate light scattering by an arbitrarily shaped sample to generate more complex synthetic data. The DDA is a general method for calculating the scattering and absorption caused by an arbitrarily shaped sample represented by finite discrete dipoles 26. These dipoles react not only to incident light but also to one another, which places the resulting fields under high orders of scattering. It has been shown that the DDA works well for samples whose RI values fairly match those of the surroundings, such as biological cells in a liquid medium 27. Therefore, we use Mie theory for multiple cylinders and the DDA for a cell phantom, as well as a cluster of 15 red blood cells (RBCs). After generating synthetic measurements by using either Mie theory or the DDA, the LT-BPM and LT-SSNP are used to reconstruct the 3D RI of each sample, and the accuracy of each reconstruction is evaluated quantitatively.
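For contrast with the BPM sketch above, one SSNP slice can be written so that the field and its axial derivative are propagated together. This is a schematic reading of the split-step decomposition described here, not the authors' code; evanescent components should be filtered in practice.

    import numpy as np

    def ssnp_step(u, du_dz, dn_slice, dz, wavelength, n0, dx):
        """One SSNP slice: exact homogeneous propagation of (u, du/dz), then scattering."""
        k0 = 2 * np.pi / wavelength
        f = 2 * np.pi * np.fft.fftfreq(u.shape[0], d=dx)
        kx, ky = np.meshgrid(f, f, indexing="ij")
        kz = np.sqrt((k0 * n0) ** 2 - kx ** 2 - ky ** 2 + 0j)
        kz = np.where(kz == 0, 1e-12, kz)                 # guard the 1/kz term
        U, dU = np.fft.fft2(u), np.fft.fft2(du_dz)
        # Exact solution of d^2U/dz^2 = -kz^2 U over one step dz
        c, s = np.cos(kz * dz), np.sin(kz * dz)
        U, dU = c * U + (s / kz) * dU, -kz * s * U + c * dU
        u, du_dz = np.fft.ifft2(U), np.fft.ifft2(dU)
        # The slice's RI contrast enters only through the axial derivative
        n = n0 + dn_slice
        du_dz = du_dz - k0 ** 2 * (n ** 2 - n0 ** 2) * u * dz
        return u, du_dz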
In this analysis, we include an investigation of the performance of each algorithm with respect to regularization. The iterative reconstruction scheme used for both the LT-BPM and LT-SSNP minimizes a cost function that comprises two terms: data fidelity and regularization. The data fidelity term is defined by whether the forward model applies either the BPM or the SSNP, and the regularization term introduces prior knowledge about the sample characteristics such as edge sparsity and non-negativity. The relative importance of the two terms in the cost function is controlled by the regularization parameter. We compare the LT-BPM and LT-SSNP by using varying regularization parameters with the goal of minimizing the influence of the regularization term so that the results are primarily based on the forward model rather than on prior knowledge. For the simulations described, we confirm that LT-SSNP shows lower dependency on the regularization parameter due to the accuracy of SSNP. In other words, the use of a more accurate forward model permits LT-SSNP to extract more information from the measurements and to rely less on regularization. More importantly, for highly aggregated samples subject to significant multiple scattering, LT-SSNP allows individual objects and structures to be clearly distinguished, while this observation cannot be made when using the LT-BPM.
We validate the proposed method by using experimental ODT data from a yeast cell and from HCT116 human colon cancer cells. To image biological cells with fine details, it is critical to reduce the influence of the regularization term, as high regularization not only smooths out the imaging artifacts but also useful information, leading to deterioration in the quality of the reconstruction. Tomograms of a yeast cell reconstructed by using LT-SSNP show successful results with high quality even with a very low regularization parameter, while the LT-BPM fails to recover fine details within and around the cells. In the case of experimental measurements of biological cells, the true RI distribution is not known, which prevents the direct assessment of the accuracy of the various ODT methods. To overcome this issue, we generate two sets of semisynthetic measurements by using the DDA for each of the RI reconstructions from the LT-BPM and LT-SSNP. A comparison of the discrepancies between the semisynthetic and experimental measurements reflects the proximity of each solution to the real RI values.
Finally, we explore the capacity of LT-SSNP to produce accurate reconstructions with a reduced number of illumination angles 28,29. This is of particular interest because the number of scanning angles is directly related to the measurement time. A comparison of each reconstruction method for a varying number of scanning angles indicates that learning approaches provide a dramatic improvement over conventional methods. Overall, the more accurate forward model used in LT-SSNP translates to excellent results even with low regularization and a small number of illumination angles.
Results
In this section, we compare the LT-BPM and LT-SSNP, which belong to the same family of LT reconstruction schemes, except for the forward models, namely, the BPM and the SSNP, respectively. LT minimizes a cost function that consists of two terms:

$$\hat{x} = \arg\min_{x \in \mathcal{P}} \sum_{l=1}^{L} \left\lVert S_K^{(l)}(x) - y_K^{(l)} \right\rVert_2^2 + \tau \mathcal{R}(x) \tag{1}$$

where the first term is the data fidelity term and $\mathcal{R}$ is the 3D total variation (TV) 30 regularization term that imposes edge sparsity on the solution. The relative importance between the two terms is controlled by the regularization parameter, $\tau$. $y_K^{(l)} \in \mathbb{C}^M$ denotes the experimental measurements at the Kth slice for each illumination angle $l$, and $L$ is the total number of angles. $S_K^{(l)}(x)$ represents the estimate by a forward model (either the BPM or the SSNP) at the Kth slice, which is the last slice of the volume, to be compared with $y_K^{(l)}$ given a current solution, $x \in \mathbb{R}^N$. $\mathcal{P} \subset \mathbb{R}^N$ is a convex set that imposes a non-negativity constraint. In the supplementary section, we describe the calculation of the gradient for SSNP. Once we calculate the gradient of the data fidelity term in Eq. (1), the optimization scheme uses the fast iterative shrinkage-thresholding algorithm (FISTA) 31 as explained in ref. 21 for 3D isotropic TV regularization, with eight randomly chosen angles in each iteration.
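A compact sketch of the outer FISTA loop used by both LT variants, with the forward-model gradient and the TV proximal operator left as callables; this is a schematic, not the MATLAB implementation described in the Methods.

    import numpy as np

    def fista(grad_f, prox_tv, x0, gamma, tau, n_iter=200):
        """Minimize D(x) + tau*TV(x) over x >= 0 with FISTA acceleration."""
        x, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(n_iter):
            x_new = prox_tv(y - gamma * grad_f(y), gamma * tau)  # gradient + TV prox
            x_new = np.maximum(x_new, 0.0)                       # projection onto P
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)        # momentum step
            x, t = x_new, t_new
        return x

Here grad_f would backpropagate the mismatch through the chosen multislice model (BPM or SSNP) for the eight randomly chosen angles, and prox_tv would run the inner TV iterations (20 in all cases reported here).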
Multiple cylinders by using Mie theory
We applied the LT-BPM and LT-SSNP to a highly scattering simulated sample consisting of a 3 × 3 grid of cylinders. Each cylinder is 6 μm in diameter with an RI of 1.05, immersed in air. The center-to-center distance is 9 μm. We varied the regularization parameter to investigate the accuracy of the forward model for each algorithm. The results are presented by mapping the difference between the reconstructed tomogram for each method and the known solution, as shown in Fig. 1a. The LT-BPM shows many artifacts inside the cylinders and smearing of the RI in the region between the cylinders. These forward-model artifacts cannot be eliminated even by increasing the regularization parameter; regularization only smooths out the overall reconstruction. In contrast, LT-SSNP clearly distinguishes each cylinder without interstitial artifacts even with the weakest regularization parameter tested, that is, 0.25τ = 0.01. Interestingly, increasing the regularization parameter to 4τ = 0.16 reduces the reconstruction quality when using the LT-SSNP algorithm. The total Error, defined as

$$\mathrm{Error} = \frac{\lVert x_{\mathrm{recon}} - x_{\mathrm{true}} \rVert_2}{\lVert x_{\mathrm{true}} \rVert_2} \tag{2}$$

where $x_{\mathrm{recon}}$ is the reconstructed RI contrast from the medium RI and $x_{\mathrm{true}}$ is the ground-truth RI contrast, was also calculated as a function of the iteration number, as shown in Fig. 1b. Figure 1b displays the Error plots of the LT-BPM and LT-SSNP by using the regularization parameter that produced the lowest Error value for each algorithm: 4τ = 0.16 for the LT-BPM and τ = 0.04 for LT-SSNP. This analysis quantitatively confirms the better accuracy of LT-SSNP. In the case of multiple cylinders, it is critical to model the distortions in the wavefront (phase modulation) introduced by preceding objects, which determine the illumination on subsequent objects. We further analyzed this scenario by varying the number of layers of cylinders and summarized the results in the supplementary material.
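A direct transcription of the Error metric, assuming the relative L2 form reconstructed in Eq. (2):

    import numpy as np

    def total_error(x_recon, x_true):
        """Relative L2 error between reconstructed and ground-truth RI contrast."""
        return np.linalg.norm(x_recon - x_true) / np.linalg.norm(x_true)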
RBC cluster using discrete dipole approximation
To investigate the performance of each algorithm with highly scattering samples in 3D, we performed a similar test on a simulated cluster of RBCs. The shape of a single RBC is sketched in Fig. 2a, while the organization of the cluster is shown in Fig. 2b. Reconstructions were performed by using various regularization parameters; for each algorithm, we show only the reconstruction obtained with the regularization parameter that gives the lowest Error: 8τ = 0.2 for the LT-BPM and τ = 0.025 for LT-SSNP. In Figs. 2c and 3, different slices (xy, yz, and xz) of the 3D RI distributions resulting from each method are presented, along with the difference map with respect to the ground truth. Both the LT-BPM and LT-SSNP show better reconstructions than those based on the Rytov approximation, which is expected since Rytov does not consider multiple scattering. Comparing the LT-BPM and LT-SSNP, we can see that the RI tomogram resulting from LT-SSNP shows clearer and more accurate reconstructions of each RBC, producing homogeneous RI distributions within each RBC.
Cell phantom using discrete dipole approximation
To evaluate the LT-BPM and LT-SSNP algorithms on a sample whose RI values are not homogeneous and which contains fine details, we generated a synthetic cell phantom. The phantom contains four different RI values corresponding to the cytoplasm, nucleus, nucleolus, and lipids 32. Synthetic measurements were made by using the DDA in the same manner as for the RBCs. Again, we present for each algorithm only the results obtained with the regularization parameter that gives the lowest Error. Fine structures appear distorted in the reconstruction produced by the LT-BPM. By contrast, the LT-SSNP not only distinguishes the shapes of fine structures but also correctly positions them along the optical axis.
Experimental validation using a yeast cell
To validate the relative performances of the LT-BPM and LT-SSNP on experimental data, we acquired ODT images of a yeast cell. Again, we evaluated different regularization parameters for the reconstructions obtained by using the LT-BPM and LT-SSNP. Figure 4 shows the reconstruction results for a slice close to the image plane. For both the LT-BPM and LT-SSNP, the high regularization parameter 4τ = 0.1 results in too much smoothing, and it becomes difficult to resolve fine details. Therefore, it is necessary to reduce the regularization parameter. However, in the case of the LT-BPM, lowering the value of the regularization parameter introduces artifacts similar to those present in the simulation results in the previous section. By contrast, the LT-SSNP can reconstruct fine details without introducing strong artifacts. Therefore, we used τ = 0.025 for the LT-BPM and τ/4 = 0.00625 for LT-SSNP and further analyzed the sample for different z planes, as shown in Fig. 5. Since we used higher regularization for the LT-BPM, we can clearly see that images tend to be smoothed out and fine details are lost, as indicated by the red arrows in Fig. 5. By contrast, the LT-SSNP reveals structures that are not observable in the Rytov and LT-BPM reconstructions. In addition, even with the higher regularization, the LT-BPM still shows several artifacts, as indicated by the black arrows in Fig. 5.
A serious limitation for quantification of the reconstruction accuracy for real biological samples such as this yeast cell is the fact that the true RI distribution is unknown. However, we were able to further evaluate the differences between the LT-BPM and the LT-SSNP by using semisynthetic measurements generated with the DDA. While we generated synthetic measurements from synthetic samples in all previous cases, here the RI reconstructions obtained by using the LT-BPM and the LT-SSNP served as samples for the DDA to generate semisynthetic measurements. The projection error, that is, the difference in phase information between the experimental data and these simulated measurements, reflects how close the solution is to the true RI distribution, as shown in Fig. 6a. Figure 6b maps the 2D projection error for two randomly selected angles as well as the average across the full set of angles for each algorithm. In the case of the LT-BPM, clear differences are observed, whereas the LT-SSNP shows remarkable consistency with the experimental measurements. We quantified the mean projection error (radians/pixel) for each method and used this metric to compare the accuracy of the LT-SSNP with that of the LT-BPM (Fig. 6c). The average projection error across all angles was 65% lower for the LT-SSNP than for the LT-BPM.
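A minimal reading of the projection-error metric (mean absolute phase difference per pixel, averaged over angles) follows; the exact definition used for Fig. 6c is an assumption here.

    import numpy as np

    def mean_projection_error(phase_exp, phase_sim):
        """phase_exp, phase_sim: (n_angles, Ny, Nx) retrieved-phase stacks, in radians."""
        per_angle = np.mean(np.abs(phase_exp - phase_sim), axis=(1, 2))
        return per_angle, per_angle.mean()   # per-angle errors and the overall average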
Data compression demonstrated on experimental data: HCT116 cells
The tomographic reconstruction based on the Wolf transform and the Rytov approximation directly maps multiple 2D measurements into the 3D Fourier space. Therefore, any missing information in the measurements directly deteriorates the final reconstruction. The LT-SSNP, however, is an iterative reconstruction scheme. The iteration begins with an initial guess (usually based on the Rytov approximation), and the solution is updated based on the error gradient calculated by using the forward model. In addition, prior knowledge about the sample is imposed on the current guess during the iterative process. Therefore, even if the measurements are underdetermined due to missing data, learning approaches can fill in some of the missing information. This idea was validated by reducing the number of illumination angles used for each method. The experimental data used for this investigation were ODT images of a pair of HCT116 human colon cancer cells. These cancerous epithelial cells contain information in structures that are small relative to the size of the cell and highlight the importance of reconstructions that can capture fine details. Reconstructions were performed by using Rytov, linear tomography 20, and the LT-SSNP with different numbers of projection angles (45, 24, 12, and 4) uniformly spaced in the range from 0 to 360°. The linear tomography method uses the same iterative reconstruction scheme as the LT-SSNP, except with single scattering as the forward model. For the quantitative analysis, we also compare the structural similarity index (SSIM) 33 of reconstructions from compressed measurements with the full-measurement case, namely, 360 angles at the focal plane. The results, plotted in Fig. 7, show a dramatic improvement in the reconstruction quality for linear tomography and the LT-SSNP because the two methods iteratively fill in the components missing from the measurements, although they use different forward models. In the case of the HCT116 cells, Rytov produces fairly good reconstructions that reveal intracellular structures with the full 360 projections, despite the underestimation due to the missing-cone problem. The Rytov reconstructions, on the other hand, rapidly deteriorate as the number of illumination angles decreases. Compared with Rytov and linear tomography, the LT-SSNP is more robust to a reduced number of projections, providing reconstructions with only four scanning angles of nearly the same quality as reconstructions using the full 360-angle data, as confirmed by the SSIM in Fig. 7b. We believe that the LT-SSNP benefits from both the iterative scheme and an accurate forward model. In addition, we further tested the compression using the cell phantom, which has higher RI contrasts; the results are provided in the supplementary material.
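For the quantitative comparison, the SSIM of each compressed-measurement reconstruction against the full 360-angle reconstruction can be computed as follows (scikit-image; a sketch with hypothetical variable names):

    from skimage.metrics import structural_similarity

    def ssim_vs_full(recon_subsampled, recon_full):
        """SSIM of a reduced-angle reconstruction against the 360-angle reference."""
        data_range = recon_full.max() - recon_full.min()
        return structural_similarity(recon_subsampled, recon_full,
                                     data_range=data_range)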
Discussion
In this study, we have proposed a new tomographic reconstruction algorithm, the LT-SSNP, which is based on the SSNP forward model, for imaging complex, highly scattering samples with fine details. By benefiting from the accuracy of the SSNP, the LT-SSNP extracts the maximum amount of information from measurements rather than relying on prior assumptions and generalizations about the sample structure. The LT-SSNP was quantitatively evaluated and compared with the previous algorithm, the LT-BPM, by using synthetic measurements. These synthetic measurements with a known solution were generated by using Mie theory for multiple cylinders, and the DDA for an arbitrarily shaped cluster of RBCs and a cell phantom.
In the case of multiple cylinders, the LT-SSNP shows clear reconstruction of each sample without introducing artifacts. The more interesting point is that the LT-SSNP does not require strong regularization. This is because the SSNP forward model is accurate enough that regularization is not necessary to compensate for poor data fidelity, while the LT-BPM could not properly carry out the reconstruction even with high regularization. For the RBC cluster in 3D, the LT-SSNP returns more homogeneous distributions even with a lower value of the regularization parameter than that of the LT-BPM. This fact is critical when imaging complex samples because too much regularization smooths out fine structures and makes them impossible to resolve. The cell phantom simulation confirms the performance of the LT-SSNP on a sample with high-resolution information. The LT-SSNP is more accurate and permits the use of a lower regularization parameter, which allows details of the 3D refractive index to be identified without artificially being smoothed out by regularization.
Importantly, the added capabilities of the LT-SSNP are dramatic for imaging biological samples containing information across many scales, as confirmed by applying it to tomographic images of a yeast cell. The averaged phase-difference map represents how close the reconstruction using each method is to the real sample. In contrast to the averaged phase-difference map of the LT-BPM, which produces many discrepancies inside the sample, that of the LT-SSNP shows consistency with the experimental measurements. The numerical evaluation shows that the LT-SSNP produces a 65% reduction in the projection error compared with that of the LT-BPM.
[Fig. 6 caption: Evaluation of LT algorithms by using semisynthetic error estimation. (a) Overall scheme of semisynthetic measurement generation by using the DDA. (b) Phase-difference maps for two randomly selected angles and the average for all angles; the color bar is in radians. (c) Calculation of the projection error in retrieved-phase information from experimental measurements and semisynthetic data.]
Furthermore, we explored the capacity of learning approaches to enable data compression by reducing the number of scanning angles. The LT-SSNP shows a dramatic improvement in image quality from a small number of illumination angles compared with the conventional direct inversion based on the Rytov approximation. Even with a low number of projections, the LT-SSNP benefits from its weak dependence on the regularization parameter.
Simulation
We used Mie theory to derive the field scattered by multiple cylinders (2D) 34 . A total of 101 illumination angles uniformly distributed between −45° and 45° were used. To perform a deeper assessment of the LT-BPM and LT-SSNP algorithms, we also tested synthetic measurements of arbitrarily shaped samples: an RBC cluster and a cell phantom.
For the RBC simulations, the discrete dipole approximation 26,35 was applied to an RBC cluster, in which the surface of each RBC is defined by an implicit equation in ρ and z, where ρ is the radius in cylindrical coordinates (ρ² = x² + y²) and S, P, Q, and R are parameters derived from d, h, b, and c shown in Fig. 2a, respectively. In this paper, d, h/d, b/d, and c/d were set to 7.7 μm, 0.3542, 0.1752, and 0.6196, respectively, as suggested in ref. 36 . We refer interested readers to previous studies 36,37 for a more complete presentation of the DDA simulation of an RBC. By using a single simulated RBC, a cluster consisting of 15 identical RBCs was generated, as shown in Fig. 2b. In addition, we generated a synthetic cell phantom with four different RI values corresponding to the cytoplasm, nucleus, nucleolus, and lipids 32 . To derive the scattered field from the cluster and the cell phantom, the samples were scanned by using 40 uniformly distributed illumination angles on a circle with an incident angle of 45°. For every simulation mentioned above, the sample with an RI of n was immersed in air, and the wavelength used was 600 nm. This is equivalent to a case in which the RI of the medium is n₀ and a sample with an RI of n × n₀ is illuminated at a wavelength of 600 × n₀ nm. The number of dipoles per wavelength for both simulations was set to 12. Table 1 summarizes the numerical and experimental parameters used for the simulations as well as for the experiments.
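The circular illumination scan described above is straightforward to generate numerically; a minimal sketch (names are ours, not the authors' code) producing the 40 wave vectors for a 45° cone of plane-wave illuminations could look like this:

```python
import numpy as np

def cone_illumination_kvectors(n_angles=40, incidence_deg=45.0,
                               wavelength_um=0.6, n_medium=1.0):
    """Wave vectors for plane waves uniformly distributed on a circle,
    all tilted by the same incidence angle from the optical (z) axis."""
    k = 2 * np.pi * n_medium / wavelength_um            # wavenumber in medium
    theta = np.deg2rad(incidence_deg)                   # polar angle of cone
    phi = 2 * np.pi * np.arange(n_angles) / n_angles    # azimuthal scan
    kx = k * np.sin(theta) * np.cos(phi)
    ky = k * np.sin(theta) * np.sin(phi)
    kz = k * np.cos(theta) * np.ones_like(phi)
    return np.stack([kx, ky, kz], axis=1)               # shape (n_angles, 3)
```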
Experiments
The experiments were performed by using a conventional optical diffraction tomography configuration in which a spatial light modulator was used to control the illumination angle. A total of 360 holograms were recorded for each sample in a circular pattern with 1° resolution at an incidence angle of 35°. Additional details about the optical setup and sample preparation are provided in the supplementary section.
Semisynthetic simulation
The semisynthetic measurements were calculated by using the reconstruction results acquired from the LT-BPM and the LT-SSNP as samples for the DDA. The size of the dipole was set to λ/(12n₀) = 0.033 μm, where λ = 0.532 μm is the wavelength of the laser and n₀ = 1.338. Both values were taken from the experiments. The grid size of the reconstructions from the LT-BPM and the LT-SSNP was 99 nm. The reconstruction results were interpolated to a grid in which one pixel was the size of a dipole. Then, we quantized the RI values as round((n_recon/n₀) × 1000)/1000, where n_recon denotes the reconstructed RI values. Simulations were performed for 160 nonoverlapping angles, which were taken from the experiments.
Reconstruction algorithm
We implemented the algorithms by using custom scripts in MATLAB R2018a (MathWorks Inc., Natick, MA, USA) on a desktop computer (Intel Core i7-6700 CPU, 3.4 GHz, 32 GB of RAM). To accelerate the computation, a graphics-processing unit (GPU, GeForce GTX 1070) with custom-made functions based on the compute unified device architecture (CUDA) was utilized. The gradient calculated from a data fidelity term D(x) is ∂D(x)/∂x, the amplitude of which is proportional to the amplitude of D(x). The LT-BPM and the LT-SSNP use different data fidelity terms: the LT-BPM penalizes the difference in the fields u(x,y,z), whereas the LT-SSNP requires differences in both u(x,y,z) and its derivative du(x,y,z)/dz. Calibration is therefore necessary so that the two methods use comparable optimization parameters. The FISTA requires two parameters: the step size (γ) and the regularization parameter (τ). The calibration of those parameters can be performed by calculating the ratio C between ||u(x,y,z)||₂² and ||u(x,y,z) + du(x,y,z)/dz||₂². We approximated this as the average value of (1 + ik_z)² over the illumination k_z values, which corresponds to the case in which u(x,y,z) is replaced with a plane wave, e^{i(k_x x + k_y y + k_z z)}. Therefore, the LT-BPM, which uses the parameters γ and τ, can be directly compared with the LT-SSNP, which uses the parameters γ/C and τ × C. For convenience, we labeled the figures according to the parameters used for the LT-BPM. The actual parameter values for the LT-SSNP can be easily calculated given C, which is provided in Table 1. The total number of iterations used in the FISTA is also provided in Table 1. Twenty iterations were used for the TV optimization step in all cases.
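As a rough numerical illustration of this calibration (the function name is ours, and we assume k_z is expressed in the normalized units of the implementation; the authors' exact normalization may differ), the constant C can be estimated from the illumination geometry alone:

```python
import numpy as np

def calibration_ratio(kz_values):
    """Estimate C as the average of |1 + i*kz|^2 over the illumination
    axial frequencies, i.e. the plane-wave ratio between the two data
    fidelity norms, used to rescale the FISTA step size (gamma/C) and
    regularization parameter (tau*C)."""
    kz = np.asarray(kz_values, dtype=float)
    return float(np.mean(np.abs(1.0 + 1j * kz) ** 2))
```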
Overall scheme of the learning tomography
Both algorithms (the LT-BPM and the LT-SSNP) start from measured electric fields (including both amplitude and phase information) obtained from the holographic data. An initial guess of the RI distribution is obtained by using the Rytov tomographic reconstruction method. By using either the BPM or the SSNP as the forward model, the scattered field is estimated for plane-wave illumination propagating through this initial guess. The square of the difference between the estimated and the measured fields is the cost function, which is minimized by adjusting the index values contained in the forward model through the FISTA. At the same time, an intermediate step of regularizations such as smoothness and non-negativity is included. This process is repeated until the total cost function converges.
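Schematically, the loop reads as below. This is a bare-bones sketch of the scheme just described, with placeholder callables (forward, adjoint, rytov_init, tv_denoise) standing in for the actual implementations, and with the FISTA momentum term omitted for brevity:

```python
import numpy as np

def learning_tomography(fields, illums, forward, adjoint,
                        rytov_init, tv_denoise, gamma, tau, n_iter=200):
    """Generic learning-tomography loop: gradient steps on the
    field-mismatch cost, followed by TV and non-negativity steps.
    All callables are placeholders for the model-specific code."""
    dn = rytov_init(fields, illums)                 # initial RI-contrast guess
    for _ in range(n_iter):
        grad = np.zeros_like(dn)
        for illum, u_meas in zip(illums, fields):
            residual = forward(dn, illum) - u_meas  # estimated minus measured
            grad += adjoint(dn, illum, residual)    # backpropagated gradient
        dn -= gamma * grad                          # data-fidelity step
        dn = tv_denoise(dn, weight=tau)             # smoothness regularization
        dn = np.maximum(dn, 0.0)                    # non-negativity of contrast
    return dn
```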
Split-step non-paraxial method
In this section, we briefly describe the SSNP in 3D 23,24 , which is the physical forward model used in the LT-SSNP. Bhattacharya and Sharma 38 implemented this method by using a matrix formalism for wave propagation in 3D. Here, we describe a fast Fourier transform implementation for more efficient use of memory.
The propagation of a scalar wave u(x,y,z) through a medium n(x,y,z) in 3D must satisfy the following wave equation:

  ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² + k₀² n²(x,y,z) u(x,y,z) = 0    (4)

where k₀ = 2π/λ is the free-space wavenumber for a given wavelength λ in a vacuum. Eq. (4) can be written in matrix form

  dv(x,y,z)/dz = H(x,y,z) v(x,y,z)    (5)

where

  v(x,y,z) = [ u(x,y,z) ; ∂u(x,y,z)/∂z ]    (6)

and

  H(x,y,z) = [ 0 , 1 ; −(∂²/∂x² + ∂²/∂y² + k₀² n²(x,y,z)) , 0 ]    (7)

When we consider an inhomogeneous sample immersed in a homogeneous medium, n₀, it is possible to split the matrix H into two terms that correspond to diffraction and phase modulation. Note that no approximation is assumed up to this point. We refer interested readers to the supplementary section for a detailed explanation.
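A compact NumPy sketch of one split step makes the diffraction/phase-modulation split concrete. This is our illustrative reimplementation under the equations above, not the authors' MATLAB/CUDA code; the homogeneous part of H is applied exactly in Fourier space, and the index contrast enters as a kick on du/dz:

```python
import numpy as np

def ssnp_step(u, du, n_slice, n0, k0, dz, dx):
    """One z-step of a split-step non-paraxial (SSNP) propagator on the
    state pair (u, du) = (u, du/dz)."""
    kx = 2 * np.pi * np.fft.fftfreq(u.shape[1], d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(u.shape[0], d=dx)
    kz2 = (k0 * n0) ** 2 - kx[None, :] ** 2 - ky[:, None] ** 2
    kz = np.sqrt(kz2.astype(complex))      # evanescent components -> imaginary
    U, D = np.fft.fft2(u), np.fft.fft2(du)
    # exp([[0,1],[-kz^2,0]] dz) = [[cos(kz dz),  sin(kz dz)/kz],
    #                              [-kz sin(kz dz), cos(kz dz)]]
    c = np.cos(kz * dz)
    s = dz * np.sinc(kz * dz / np.pi)      # sin(kz dz)/kz, finite at kz = 0
    u_new = np.fft.ifft2(c * U + s * D)
    du_new = np.fft.ifft2(-kz2.astype(complex) * s * U + c * D)
    # Phase-modulation (scattering) kick from the index contrast:
    du_new -= k0 ** 2 * (n_slice ** 2 - n0 ** 2) * u_new * dz
    return u_new, du_new
```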
Author's contributions J.L. carried out the algorithm modeling and computations. A.A. built the optical setup and carried out the optical experiments. E.A. prepared the samples. D.P. supervised the project. All authors contributed to the discussion and wrote the paper. | 2019-09-11T13:44:01.077Z | 2019-09-11T00:00:00.000 | {
"year": 2019,
"sha1": "54f4d7ec273f22b338d67d043df3d2ca5a7f2c2a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41377-019-0195-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "68d19e0b11e9ccab2222863ec7359e34464f4503",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
252911688 | pes2o/s2orc | v3-fos-license | Participation rate in cervical cancer screening in general practice related to the proximity of gynecology care facilities: A 3 year follow-up cohort study
Cervical cancer screening (CCS) by Pap tests is mainly performed by gynecologists in France, but also by general practitioners (GPs) and midwives. The screening uptake is insufficient to reduce the incidence of cervical neoplasms. Our aim was to investigate the association between screening rates in patients listed with GPs and the distance between GPs' offices and gynecology facilities. The population of 345 GPs, and their 93,918 female patients eligible for screening over 3 years (2013–2015), were derived from the Health Insurance claim database. We estimated the socioeconomic level of the geographical area of GPs' offices using the European Deprivation Index (EDI). The proximity of gynecology facilities was calculated by computing their distance from GPs' offices (allowing the proximity of gynecology facilities to be adjusted for the EDI and for the performance of smears by the GP). The number of gynecologists within 5 km of a GP's office was associated with the CCS rate, which increased by 0.31% for every unit increase in the density of gynecologists within 5 km (p < 0.0001). The close proximity of gynecology facilities was not significantly associated with screening uptake among female patients when the office of the GP where they were registered was settled in a deprived area.
Introduction
According to the Global Cancer Observatory (GLOBOCAN) from 2018 estimating cancer data from 185 countries, cervical cancer (CC) was the fourth most common cancer in women worldwide, with a global age-standardized incidence rate (ASIR) of 13.1/100,000 women. This ASIR varied widely among countries ranging from <2 to 75/100,000 (1). In Europe between 2012 and 2018, the ASIR of CC varied from 13.4 to 13.9/100,000, and in France from 8.0 to 8.4/100,000, and the age-standardized mortality rate (ASMR) from 2.6 to 3.2/100,000, showing an increase after four decades of decrease (2,3). In France, there were 2,920 new cases of CC and 1,117 related deaths in 2018 (4). In Northern France, the incidence rate is 10% higher compared to the country average (4).
CC is always preceded by neoplastic lesions with a long-lasting, persistent evolution before reaching a cancerous stage. This offers the opportunity to prevent cancer by screening and early intervention. The classical screening test is the Papanicolaou test (Pap test) by cytologic examination of cervical smears, which requires a gynecological examination. To implement cervical cancer screening (CCS), French health authorities recommend a Pap smear every 3 years in women between 25 and 29 years of age, after two normal initial annual Pap smears. Since 2019, the same authorities recommend an HPV test every 5 years between 30 and 65 years of age; in the case of a positive test, a cytology must be performed. In the case of negative cytology, screening must occur again the next year following the same procedure (5). In France in 2017, CCS was "opportunistic" except in 13 departments testing an experimental organized screening. The screening participation rate does not reach the 80% target recommended in the guidelines for women in the target ages: screening is insufficient for 51.6% of women and too frequent for 40.6% (5).
In high income countries, insufficiently screened women are mainly those who do not use the services of gynecologists for cultural or economic reasons: low level of education or income [consultations with a gynecologist being more expensive than those with a general practitioner (GP)], women with no children, having no partner or being post-menopausal (6). Most of these women have at least one encounter with their GP over 3 years. In France, 80% of targeted women have previously chosen to be screened by a gynecologist but their numbers are drastically decreasing (7). In French Flanders, 53.1% of GPs and more recently midwives also perform this procedure (8). The performance of smears by the GP or the female gender of the GP, described as positive factors for participation in CCS, do not increase the rates significantly (9). Socioeconomic environmental factors like the European Deprivation Index (EDI) appear significantly and independently associated with these rates, women dwelling in deprived areas being more often insufficiently screened or not screened at all (10). Another factor described as positive for participation in CCS is the proximity of the office of a gynecologist (11).
Our interest was to investigate the effect of the close proximity of the office of a gynecologist on the CCS participation rates. In our former publications (8-10), we acknowledged as main limitations a follow up period of 2 years and not controlling for the influence of the gynecology care facilities. These elements are considered in this paper.
Study design
As the recommended interval between two CCS smears at that time (2017) was 3 years, a cohort study was undertaken based on a 3 year retrospective follow up of 93,918 female patients aged from 25 to 65 years and their 345 GPs, coupled with a telephone survey.
Setting
This study took place in primary care in French Flanders (Northern France). Data were collected from 2013/01/01 to 2015/12/31 from the Information System of the main mandatory Health Insurance claim database (SIAM) of French Flanders (CPAM). Telephone surveys with all the practicing GPs registered with the CPAM were carried out.
Participants
Participants were the GPs listed on the registers of the CPAM. Inclusion required that the GPs were practicing in primary care over the 3 year period selected. GPs were excluded if they had a practice outside of primary care or <100 female patients declared on their patient lists, ruling out GPs with complementary medicine practices (for example, homeopathy or acupuncture), practices other than primary care (for example, sonography and angiology), and GPs with an unbalanced practice (recently established or nearing retirement). GPs who retired during the follow up period and those who refused to answer the telephone surveys were also excluded.
For the included GPs, we considered their female patient population aged from 25 to 65 years eligible for cervical cancer screening under French guidelines.
Variables
The main outcome was the cervical cancer screening participation rate in the eligible female patient population of included GPs, measured through the health insurance fund's reimbursement of cytological examinations of cervical samples. Working with claim databases in which patients are anonymized and not traceable for regulatory and ethical reasons (we only know their gender, their age between 25 and 65 years, the designation of their GP, and the reimbursement of a Pap smear), it was not possible to compute the distance between the dwelling place of patients and the offices of gynecologists. However, most patients are registered on the patient list of their closest settled GP and share the same environmental characteristics (10). As a surrogate for the distance between the dwelling place of patients and the offices of gynecologists, we computed as our proximity indicator the density of gynecologists' offices around GPs' offices within 5, 10, 20, and 40 km. Thus, the predictor was the distance between the office of a gynecologist and each GP office. This variable was computed using geo-tracking of GP offices and the gynecologists' offices.
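The density indicator can be computed directly from geocoded office coordinates; a minimal sketch (field names hypothetical) using great-circle distances:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def gyn_density_by_band(gp_latlon, gyn_latlons, bands=(5, 10, 20, 40)):
    """Count gynecologists' offices in successive distance bands
    (0-5, 5-10, 10-20, 20-40 km) around one GP office."""
    d = haversine_km(gp_latlon[0], gp_latlon[1],
                     gyn_latlons[:, 0], gyn_latlons[:, 1])
    edges = (0,) + tuple(bands)
    return {f"{lo}-{hi} km": int(np.sum((d > lo) & (d <= hi)))
            for lo, hi in zip(edges[:-1], edges[1:])}
```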
The confounding variables on the GP level were the gender of the GP (recovered from the SIAM database) and the performance of vaginal samplings (as a binary variable) in the GP office based on telephone surveys as described in a former paper (8).
The European Deprivation Index (EDI) (12) was the socioeconomic effect indicator utilized. The EDI is an ecological marker reflecting the individual deprivation experience of the general population in an area, based on the census. The determination of the EDI started from the construction of an individual deprivation indicator associated with both objective and subjective poverty, following the identification of the basic needs of people. This first part was undertaken using the European survey specifically dedicated to the study of deprivation (EU-SILC: European Union-Statistics on Income and Living Conditions), since there is no gold standard for deprivation. It was then necessary to identify and dichotomize the variables available and coded in a similar way both at the individual level (EU-SILC) and in the census data. Variables associated with the individual deprivation indicator were then selected and weighted by multivariate logistic regression. The regression coefficients associated with these variables in the final model then became the weights of these 10 variables measured at the aggregate level in the ecological index: overcrowding, no access to a system of central or electrical heating, non-home owner, unemployment, foreign nationality, no access to a car, unskilled worker-farm worker, household with more than six persons, low level of education, single parent household. The EDI is then defined as the weighted sum of these 10 variables quantifying fundamental basic needs associated with both objective and subjective poverty, normalized to the national average and usually divided into quintiles (national or regional). Areas of reference were the smallest available statistical census units in France (IRIS), allowing for an infra-municipal study scale. Each GP surgery was assigned its IRIS, and the EDI of the corresponding IRIS was computed. Elsewhere (10), we have demonstrated the strong association between the EDI and the CCS rate. The EDI has a mediation effect on the CCS uptake.
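Conceptually, the index is just a weighted sum of aggregated census shares; a schematic sketch follows, in which both the variable names and the weights dictionary are placeholders, since the published coefficients are not reproduced here:

```python
import numpy as np

# Ten dichotomized census variables per area (IRIS), as population shares.
EDI_VARIABLES = ["overcrowding", "no_central_heating", "non_home_owner",
                 "unemployment", "foreign_nationality", "no_car",
                 "unskilled_worker", "household_gt6", "low_education",
                 "single_parent"]

def edi_score(area_shares, weights):
    """EDI as the weighted sum of the ten deprivation variables, where the
    weights come from a logistic regression of individual deprivation
    (EU-SILC) on the same dichotomized variables."""
    x = np.array([area_shares[v] for v in EDI_VARIABLES])
    w = np.array([weights[v] for v in EDI_VARIABLES])
    return float(x @ w)
```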
Bias
The number of patients managed by the GP was not considered as we have demonstrated that it is not associated with the CC screening rate (8). The age of the GP has not been considered though it appears to be associated with the screening rate, as it is linked to the age of the patients, and young female patients are more likely to participate in cervical cancer screening compared to older patients (13,14). Another reason is that young GPs are more often of female gender compared to older GPs, and the performance of smears is associated with the gender of the GP as demonstrated earlier (8), though without influence on CCS uptake in a multivariate analysis. The gender of the GP and the performance of smears therefore seemed to be sufficient substitution variables.
Study size
This study was implemented on a complete population basis without sampling.
Statistics and analysis
Continuous quantitative variables are expressed as mean ± standard deviation (SD) and median [interquartile range (IQR)], and categorical variables are expressed as frequencies and percentages. In this study, there were two hierarchical levels for the data: the individual GP level (GP's gender, performance of smears, and the outcome "cervical cancer screening participation rate among the GP's listed eligible female patients"), nested in the geographical level (the EDI and the number of gynecologists at a given distance), as the patients of GPs practicing in the same area (IRIS) share common characteristics. The association between the CCS rate and the distance from gynecologists' offices was analyzed using a hierarchical generalized linear mixed model. This statistical model takes into account the hierarchical structure of the data. The analysis was performed without and with adjustment for the characteristics of the GPs and for the socioeconomic level considered as a mediator (EDI).
All statistical tests were two-sided and performed at the 0.05 level. Data were analyzed using SAS software version 9.4 (SAS Institute, Cary, NC).
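An analogous two-level model can be specified with open-source tooling; for instance, a random-intercept model in Python's statsmodels (column names hypothetical; this mirrors rather than reproduces the authors' SAS analysis):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gp_level_data.csv")  # one row per GP, with its IRIS id

# CCS rate explained by gynecologist density bands, GP characteristics and
# EDI, with a random intercept per geographical unit (IRIS).
model = smf.mixedlm(
    "ccs_rate ~ gyn_0_5km + gyn_5_10km + gyn_10_20km + gyn_20_40km"
    " + gp_female + gp_performs_smears + edi",
    data=df,
    groups=df["iris"],
)
result = model.fit()
print(result.summary())
```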
Results
Of the 410 GPs registered on the CPAM of Flanders, 52 were excluded as they had <100 female patients on their patient lists, six because they retired before the end of the study period, five because they refused to answer the telephone survey and two because they planned to suspend their activity as primary care practitioners, resulting in 343 included GPs (Figure 1).
Among the 343 GPs participating in this study, 269 (78.4%) were men, and 182 GPs (53.0%) performed smears. Characteristics of the listed patients per GP are described in Table 1, which shows that the mean screening participation rate for female patients from 25 to 65 years during the 3 years was 50.1% (SD: 7.5%).
The mean number of gynecologists within 5 km of GP surgeries was 5.4 (SD 5.6, median 5, IQR 0-11); between 5 and 10 km it was 3.1 (SD 6.2, median 1, IQR 0-3); between 10 and 20 km it was 15.2 (SD 21.0, median 7, IQR 0-20); and between 20 and 40 km it was 30.8 (SD 28.1, median 15, IQR 10-42) (Figure 2). The association between the cervical screening rate and the distance from gynecology care facilities is shown in Table 2. The table presents the unadjusted model, the model adjusted for GP gender and performance of smears by the GP, and the model adjusted for GP gender, performance of smears by the GP, and EDI. All models show the impact of the density of gynecologists within specified distances from the GP offices on the CCS rate, with the greatest impact being the density within 5 km. The density of gynecologists within 20-40 km of GP surgeries had a smaller but significant positive coefficient.
We found a significant association between the density of gynecologists within 5 km of the GP's office and the cervical cancer screening participation rate after adjustment for these GP characteristics and the EDI, with the cervical screening rate increasing by 0.31% with every unit increase in the density of gynecologists within 5 km. When not adjusting for EDI, the density of gynecologists within 5 km of GP surgeries had no significant effect on the screening rate.
The density of gynecologists between 20 and 40 km also had a significant effect, with the cervical screening rate increasing by 0.09% with every unit increase in the density of gynecologists between 20 and 40 km. After adjusting for the GP's gender, the practice of Pap smears by the doctor, and the EDI, the association remained significant, though the effect size was small.
Main findings
In the analysis adjusting for EDI, we found that the density of gynecologists within 5 km of GP surgeries had the most significant positive regression coefficient with the CCS rate. We also found that the density of gynecologists within 20-40 km of GP surgeries had a smaller but still significant positive coefficient.
The higher the density of gynecologists within 5 km of GP surgeries, the higher the CCS rate: for each supplementary gynecologist, the screening rate improved by 0.31%. When not adjusting for EDI, the density of gynecologists within 5 km of GP surgeries had no significant effect on the screening rate, reflecting the major influence of socioeconomic determinants on screening behavior. Thus, in disadvantaged areas (like French Flanders: EDI of 2.3 compared with the mean EDI of 0 for France), a higher number of gynecologists does not increase the screening rate in the overall population unless the influence of the deprivation factor is removed. This reflects the fact that women from deprived areas are not likely to be managed by gynecologists, while women from more favored areas are more likely to be (15,16).
Study strengths and limitations
The claim database of the CPAM of Flanders is reliable and consistent, and the data extracted from this database over a duration of 3 years are considered trustworthy. A participation rate of 98% of the targeted GPs allows us to consider that our study was based on an entire, unsampled population. Including 345 GPs, their almost 94,000 female patients eligible for CCS, and the 149 gynecologists in the area who may have been consulted by these patients confers solid internal validity on this study.
Studies exploring the association between the density of gynecological care facilities and the CCS participation rate as the main outcome are scarce. To our knowledge, no one has previously explored this association based on the ground distance between GPs and gynecologists. Our results match another French countryside study highlighting the same association by another method (11), strengthening the external validity of our findings.
In our previous publications, we acknowledged as limits a follow up of only 2 years (as CCS used to be triennial in France) and no consideration of gynecology care facilities as confounding factors (8)(9)(10). These limits have been addressed in the current paper.
This study only investigates the association between CCS rates and the distance from gynecologist offices to GP offices. The global screening rate in French Flanders of 50.2% is lower than the national rate of 62.3% (range 41.6-72.5%) (17). The density of GPs in French Flanders was slightly lower than in the rest of France (13 vs. 16/10 000) and is even lower now (retirement of GPs from the baby-boom generation). However, the density of gynecologists (2.7/10 000) in this area was not lower than in the rest of France, which does not explain under-screening and our findings regarding the highlighted association.
There are many different compulsory health insurance regimes in France depending on the occupational sector of the insured persons. We based our study on the claim database of the CPAM of Flanders, representing 80% of insured persons. This means that we missed some occupational sectors, like teachers or farmers. This can possibly be considered a selection bias, though there is no reason that the 20% of the population we missed substantially differs from the general population, as described in other contexts (18). This does not diminish the external validity of our main result.
Comparison to literature
The only former publications investigating this association are the above cited French study carried out by Araujo in 2010 (11), and another French study published by Barré in 2017 (19), which found that a lower CCS participation rate was associated with a lower density of gynecologists in the residence area, matching our findings. A third study, carried out by Grillo in 2012 in Paris, did not find any significant association between the density of GPs and gynecologists in the residence area of women and the CCS participation rates (20). However, no adjustment was performed to correct for the influence of the overall deprivation rate and the geographical area studied was smaller with more opportunities for public transport.
Profound changes in the mindset of women influenced by the social pressure of their deprived neighborhood will be necessary to enhance participation in CCS. The proximity of care facilities has little influence on enhancing screening participation in deprived areas unless community oriented primary care reaches out to concerned people (21-23). Education is probably the main solution to solve the lost opportunity associated with underscreening in deprived areas (24).
Conclusion
Adjusting for our deprivation indicator (EDI), the density of gynecology care facilities within 5 km of a GP surgery, and to a lesser extent within 20-40 km of a GP surgery, was significantly associated with a higher CC screening rate. When the effect of deprivation on the screening participation rate is removed by adjusting the model, the density of gynecology care facilities is linked to an increase in the CCS participation rate, implying a potential decrease in CC. However, this effect is not noticeable when this adjustment is not made, probably because women dwelling in deprived areas do not make use of the services offered by gynecologists. The reasons women are not screened are complex, and this certainly explains why medical demography alone cannot resolve inequalities and social disparities in participation in screening. This seemed to be confirmed by our models, despite their adjustment using the EDI (an aggregate index quantifying fundamental basic needs associated with both objective and subjective poverty). The current deployment of midwives in primary care practices might be a response to this situation, which will have to be confirmed by further studies.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by Ethics Committee North West III of Caen under the reference 2015-23, on 2016/03/02. The patients/participants provided their written informed consent to participate in this study.
Funding
This was an ancillary study of the PaCUDAHL-Gé trial sponsored by the University Hospital of Lille, which evaluates women's interest in cervical cancer screening using a device from their GP for self-collection of vaginal samples and HPV testing. | 2022-10-17T14:11:02.812Z | 2022-10-17T00:00:00.000 | {
"year": 2022,
"sha1": "c638f08ab7d2751e089df5e35f524d1a9ff3598f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "c638f08ab7d2751e089df5e35f524d1a9ff3598f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
47331475 | pes2o/s2orc | v3-fos-license | Experiments to increase the used energy with the PEGASUS railgun
The French-German Research Institute (ISL) has several railguns installed; the largest of these is the PEGASUS accelerator. It is a 6 m long, 4 × 4 cm² caliber distributed energy supply (DES) railgun with a 10 MJ capacitor bank attached as its energy supply. In the past, this installation was used to accelerate projectiles with a mass of about 300 g to velocities up to 2500 m/s. In the ongoing investigation, the aim is to accelerate heavier projectiles to velocities above 2000 m/s. For this, a new type of projectile including a payload section was developed. In this paper, the results of the experiments with payload projectiles using a primary energy between 3.8 MJ and 4.8 MJ are discussed.
I. INTRODUCTION
In the military domain, there are two main applications for a railgun. Both make use of the superior muzzle velocity achievable compared with explosively driven guns. A long range electromagnetic artillery system can cover ranges up to 400 km. Such a system would accelerate a heavy projectile with a mass of up to 60 kg to a muzzle velocity of 2500 m/s [1]. Alternatively, one can use the railgun to accelerate lighter projectiles in rapid succession to bring down approaching missiles in a last-line-of-defense scenario. Both scenarios share the requirement of a large muzzle velocity, but they place very different demands on the needed power levels and the design of the railgun itself.
At the ISL, PEGASUS is the largest railgun installation. This railgun has a primary energy storage of 10 MJ and an overall efficiency of about 30%. With the aim of reaching the desired 2500 m/s, the maximum total weight of the launch package has to stay below 1 kg. Starting from this, different payload projectiles in the mass range of 600 g to 700 g were developed and tested. In a first series of investigations, a monolithic payload projectile was launched with primary energies ranging from 2.6 MJ to 3.7 MJ, reaching velocities of up to 1560 m/s. To make better use of the primary energy and to investigate the innovative concept of a separating projectile, a projectile split into two parts was designed. While the armature of the monolithic projectile consists of 4 rows of brushes in the rear half of the projectile, the separating projectile can leave the part of the projectile body with the 3 rearmost brush rows behind. At the same energy level of 3.8 MJ, the payload reached a velocity of 1825 m/s. A more detailed description of these experiments can be found in [2]. Here, further modifications of the separating projectile and the experimental results with this type of projectile are described.
II. PEGASUS RAILGUN
The PEGASUS railgun was built in 1998 with the goal of accelerating projectiles with masses of about 1 kg to velocities above 2000 m/s in the medium caliber range (40 mm to 50 mm). Initially, it was equipped with a glass-fiber and carbon-fiber wound barrel with a caliber of 50 mm [3]. Figure 1 shows the PEGASUS installation with the currently mounted barrel, in use since 2002 [4]. This barrel is 6 m long and has a square caliber of 40 mm. A total of up to 10 MJ of electrical energy can be stored in the 200 capacitor modules visible in the background of the railgun. PEGASUS is a distributed energy supply (DES) system connected to 13 banks with 16 capacitor modules each (the last one with only 8). Banks no. 1 and no. 2 are connected to the breech; the other banks are connected to current injection points distributed along the first 3.6 m of the barrel length. For the purpose of the experiments performed for this investigation, 4 capacitor modules in each of the banks no. 8 to no. 12 are disconnected. This reduces the capacity of these banks by 25% and the total capacity by 10%. Each capacitor module can store up to 50 kJ and deliver a maximum current of 50 kA. The modules are equipped with a high-voltage capacitor, a thyristor, a crowbar diode and a pulse forming coil [5]. The electrical connection of each capacitor module to the accelerator is made with a coaxial cable. Due to the DES scheme, the maximum total current that can be delivered to the gun is approx. 2 MA. The cylinder on the left hand side of figure 1 is a 7 m long catch tank, equipped with several flash x-ray tubes, Doppler radar devices and the possibility to mount several high-speed cameras. At the end of the free-flight phase in the cylinder, the projectile is stopped using steel plates.
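The stored-energy bookkeeping above is easy to verify; a small worked example (ours, for illustration only, using the numbers from the text):

```python
# Capacitor-bank bookkeeping for PEGASUS (figures taken from the text).
MODULE_ENERGY_KJ = 50                    # max energy per capacitor module
modules_per_bank = [16] * 12 + [8]       # 13 banks, the last one with 8 modules

total_modules = sum(modules_per_bank)                        # 200
total_energy_mj = total_modules * MODULE_ENERGY_KJ / 1000    # 10.0 MJ

# For this investigation, 4 modules are disconnected in banks 8-12:
disconnected = 4 * 5                                         # 20 modules
bank_reduction = 4 / 16                                      # 25% per affected bank
total_reduction = disconnected / total_modules               # 10% of total capacity

print(total_modules, total_energy_mj, bank_reduction, total_reduction)
```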
III. PEGASUS STANDARD PROJECTILE
For several years, projectiles used with the PEGASUS railgun have been made out of glass fiber reinforced plastic (GRP) equipped with copper-fiber brush armatures. The GRP body of the projectile is lightweight, electrically insulating and relatively easy to manufacture. In combination with the copper brush armature, the current is confined to well defined paths through the projectile. In combination with appropriate sensors, this allows for the investigation of the current distribution between different rows of brushes under launch conditions [6], [7]. The standard type of projectile, which was used in many investigations with PEGASUS, consists of 8 brushes arranged in 4 rows. The total mass for this type of projectile is approx. 300 g. The body of these projectiles is fabricated using industrial pre-impregnated E-glass laminate. With these projectiles it was possible to reach velocities of about 2200 m/s; for higher velocities, delamination and break-up of the GRP was observed [8]. One possibility to increase the velocity potential of plastic projectile bodies is to improve the stability against hard acceleration by using innovative materials. Experiments using a body composed of alternating layers of GRP and carbon fiber materials allowed a velocity well above 2500 m/s to be reached [9]. In figure 2 the current and muzzle voltage traces for this shot are shown. The DES system of PEGASUS allows a rather flat current pulse to be generated. Here, the current driving the projectile is just below 1.1 MA. The muzzle voltage trace shows that excellent sliding conditions existed until approx. 2 ms. At that time the voltage increases rapidly, indicating the onset of transition. The peaks in the muzzle voltage signal, clearly seen in the rising part of the graph (between 2 ms and 3.5 ms), are explained by inductive effects connected to the current injection locations of the DES system of the launcher [10]. Usually, in experiments with PEGASUS, the plasma after transition appears between one rail and the armature only, while the other side of the projectile is pressed heavily against the opposite rail. As long as the plasma is connected to the brushes and confined in the small space between the projectile surface and the rails, there is still a well defined current path, and the clear pattern of inductive peaks as seen in the figure is visible. Underlying the peaks, there is a nearly linear rise of the muzzle voltage. This effect can be explained by an increase in the resistance seen by the current flowing through the plasma and the copper brushes. Contributing to this resistance are the increase of the resistivity of the copper brushes due to the rising temperature and possibly a dilution effect of the plasma due to the rapid movement of the projectile.
IV. PAYLOAD PROJECTILES
In figure 3, two different types of payload projectiles are shown. The projectile on the left hand side has a monolithic design, with a total mass of about 625 g. The rear half of the 140 mm long projectile is identical to the standard projectile and carries 8 copper brushes in 4 rows. The forward part is equipped with 8 copper cylinders as payload. To reach higher velocities at the same electrical energy distributed to the launcher, the separating projectile seen on the right of figure 3 was introduced. The basic idea behind this type of projectile is that during the acceleration the brushes are eroded row-wise from the rear to the front. Once the 3 rearmost brush rows have lost electrical contact to the rails, this part is left behind and only the payload section continues to be accelerated. The total mass and the dimensions for this projectile are the same as for the monolithic design. The forward payload part with the 2 booster brushes weighs approx. 400 g. In [2], experiments with the monolithic projectile are described. A muzzle velocity of 1560 m/s was reached using 3.8 MJ of electrical energy. Accelerating a separating projectile using the same electrical energy resulted in a velocity of the payload section of 1825 m/s, an increase of 16% compared with the monolithic projectile design.
V. EXPERIMENTS WITH SEPARATING PROJECTILES
After the introduction of the separating projectile, several experiments with PEGASUS were performed to investigate the behavior of this type of projectile. The primary electrical energy was increased from 3.8 MJ to 4.8 MJ, and the brush diameter of the two booster brushes was changed from 8 mm to, finally, 10 mm. The key parameters for the shot series are itemized in table I. Figure 4 shows the current and the muzzle voltage traces for shot no. 181. In addition to this, the velocity as derived from measurements of the projectile passage using B-dot sensors is shown using dots. The line labeled "act. int." represents a value that is proportional to the action integral (∫I² dt). The current reaches its maximum value of 1.2 MA at 1.8 ms. The current drop after 3.5 ms is caused by the previously mentioned capacity reduction of 25% for the banks no. 8 to no. 12. The muzzle voltage distribution shows that the projectile experienced excellent sliding contact until 3.4 ms, when transition occurred. The voltage rises quickly to a plateau of about 500 V from 3.8 ms until the shot-out of the payload part at 5.8 ms. The voltage trace of this shot looks different from the trace of the standard projectile shot shown in figure 2. The inductive peaks and the linearly rising slope are not seen. The absence of the inductive peaks indicates that the current is spread out over a larger area compared with the standard projectile shot. The large rise of the muzzle voltage at 5.8 ms marks the shot-out of the payload part of the projectile. Inspecting this peak closely reveals that the peak is actually a double peak. The earlier sub-peak can be explained by current flowing through the copper cylinders (the payload) in the front of the projectile. Later, at 6.6 ms, a further peak appears. At that time, the rear part of the projectile leaves the barrel. The velocity of the projectile is measured using B-dot probes distributed along the barrel. Figure 4 shows that the measured velocity follows the action integral until approx. 4 ms. After this time, the velocity rises more strongly than the action integral. This can be understood as the point in time when the front part separates from the rear part, thus reducing the mass to be accelerated. For this shot, the velocity reached by the payload part is 1825 m/s. The x-ray picture in figure 5 (left) shows the payload part of the projectile during its free-flight in the blue catch tank. The movement is from left to right. The GRP body shows signs of delamination, and the ends of the copper cylinders are exposed. They show clear signs of current conduction during the shot, thus supporting the discussion from above. In the next experiment, shot no. 182, the primary energy was increased to 4.2 MJ. The measured end-velocity of 1880 m/s was less than expected. The free-flight x-ray picture in figure 5 (right) shows that the booster brush is nearly fully eroded, indicating the development of a stronger plasma than in shot no. 181. As a consequence of these data, the decision was made to increase the booster brush diameter to 9 mm and redo the experiment. The result of the acceleration of this modified projectile is shown in figure 6. The muzzle voltage graph shows about the same behavior as in shot no. 181. After transition at 3 ms, a plateau of approx. 500 V is reached. Muzzle exit of the payload part is at about 5.5 ms. The measured velocity is 1945 m/s.
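The velocity estimate from the action integral follows from the railgun force law F = ½L′I², giving v(t) = (L′/2m)∫I² dt for a constant inductance gradient L′; a short sketch (the L′ value is a typical textbook figure, not taken from the paper):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def velocity_from_action_integral(t, current, m_kg, L_prime=0.5e-6):
    """Railgun velocity estimate v(t) = (L'/2m) * integral of I^2 dt.
    Assumes a constant projectile mass, i.e. valid only before the
    payload part separates from the rear part."""
    action = cumulative_trapezoid(current**2, t, initial=0.0)  # A^2 * s
    return L_prime / (2.0 * m_kg) * action

# A measured (B-dot) velocity rising above this curve marks the moment
# the payload part separates and the accelerated mass drops.
```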
Inspecting the trace proportional to the action integral and comparing it with the measured velocity of the projectile shows that the separation happened at approx. 3.3 ms, just after current injection no. 7. In shot no. 181, the projectile separated after injection no. 9. A further increase of the primary energy to 4.8 MJ in shot no. 184 resulted in the current and muzzle-voltage measurements shown in figure 7. A peak current of more than 1.35 MA is reached, but transition sets in early, at 2.5 ms. In addition to this early onset, the voltage rises to more than 800 V at about 3 ms. This behavior is different from the behavior seen in shots no. 181 and no. 183. The maximum measured velocity is reached at 4.5 ms, before the projectile leaves the barrel. After this, the projectile decelerates until shot-out. Figure 9 (left) shows a strong material loss of the projectile body due to delamination. The copper cylinders had contact with the rails and were most likely conducting current. It is visible that the booster brush is moving out of its hole. Other x-ray pictures showing the top view confirm that the other brush was already lost before the x-ray picture was taken. The movement and loss of the brush(es) is most likely the cause of the large values of the muzzle voltage and the deceleration of the projectile in the final phase of the launch process. As the increase of the booster brush diameter was successful for shot no. 183, this strategy was attempted again, and the diameter was increased to 10 mm. Figure 8 shows the key parameter traces of this shot. The current is again at about 1.35 MA peak value. At 2.75 ms transition sets in, and until 3.2 ms the trace closely resembles the standard projectile behavior shown in figure 2. After 3.2 ms an approximate plateau of 600 V to 620 V is reached. The plateau is followed by a peak at 4.8 ms and shot-out at 5.25 ms. The split of the shot-out peak indicates that there was current conduction in the copper cylinders. The rear part of the projectile leaves the barrel at about 5.9 ms, producing a peak in the muzzle voltage trace. The measured end-velocity for this shot is 2170 m/s. The trace for the action integral compared with the measured velocity shows that separation occurred at about 3.5 ms. This is again after current injection no. 7. The x-ray picture in figure 9 (right) shows that the GRP holding the booster brushes broke during acceleration, thus indicating that the GRP holding the brushes is no longer able to cope with the forces during acceleration. For this reason, a further increase in primary energy with this type of projectile was not attempted.
In figure 4 (shot no. 181), the muzzle voltage peak caused by the exit of the projectile from the barrel showed a sub-structure. This sub-structure was interpreted as a sign of current conduction through the copper cylinders. Support for this hypothesis comes from the x-ray pictures, which show exposed copper cylinders with signs of mechanical wear from sliding along the rails. In some of the figures one can also see copper evaporating from the hot copper cylinders. As an example, this can be seen in figure 5 (left). For the projectiles used here, the copper cylinders are 16 mm apart (in the direction of flight). The same distance holds for the brushes as well. When a current-carrying short circuit element like a brush or copper cylinder leaves the rails during shot-out, a plasma bridge will develop. This bridge connects the rail ends and the short circuit element. If another short circuit element is still in between the rails, the plasma bridge will eventually die out and the current will retract to this element. In figure 10, the muzzle voltage trace from shot no. 183 is shown, enlarged around the time of the shot-out of the payload part of the projectile. Inspecting the rising slope from approx. 5.5 ms on, several small peaks on top of the slope are visible. The peaks are labeled with numbers 1 to 5. In the upper left corner, the x-ray picture of the projectile for shot no. 183 is shown. The direction of flight is from left to right. From right to left, the 4 rows of copper cylinders (labeled 1 to 4) and the booster brush (labeled 5) are visible. The muzzle voltage trace can be explained as follows. The payload part of the projectile approaches the muzzle while the current drops rapidly (see figure 6). At the time of shot-out, the current has approached a value of about 150 kA. While most of the current is flowing through the brush, some current is distributed among the 8 copper cylinders. When the cylinders of row number 1 leave the rails, a plasma bridge is established and the resistance increases. At the same time, the additional area bounded by the plasma bridge is filled by the magnetic field, thus generating an inductive effect. Due to this, the muzzle voltage rises. When the peak value of peak number 1 is reached, the bridge breaks and the current switches back to the remaining current path still in the barrel. During the time the current switches to the rearward short circuit path, the muzzle voltage drops. Then the subsequent rows disconnect one after the other from the rails, and the muzzle voltage peaks 2 to 4 are generated. When finally the booster brush leaves the rails, the plasma bridge has to carry the full remaining current. There is a slight change in slope when the brush has fully ceased to make contact with the rails, as the plasma bridge, with a different resistance, then takes over. This point is labeled with number 5 in the figure. After this time, a rising resistance due to the growing length of the plasma bridge continues to generate a rising muzzle voltage. At one point in time, the current drops faster than the resistance rises. For this shot, this happens at approx. 5.6 ms. In table II, the distances corresponding to the peak-to-peak time differences for a projectile with a velocity of 1945 m/s are calculated. It is seen that the calculated values range between approx. 17 mm and 21 mm. The actual distance from one cylinder to the next cylinder (or brush) is 16 mm.
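The conversion behind table II is simply Δd = v·Δt; for illustration (the peak times below are placeholders, since the exact values are given in the table, not in the text):

```python
import numpy as np

v = 1945.0                                    # payload velocity in m/s (shot 183)
peak_times_ms = np.array([5.500, 5.509, 5.519, 5.528, 5.538])  # placeholders

dt = np.diff(peak_times_ms) * 1e-3            # s between consecutive peaks
distances_mm = v * dt * 1e3                   # implied spacing along projectile
print(distances_mm)                           # compare with the 16 mm layout
```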
The hypothesis for the slightly larger derived value is that the contact is not a solid copper-to-copper contact, but instead a plasma connection extending into the space around the cylinder. Overall, the muzzle voltage behavior in combination with the x-ray picture gives evidence that, for this shot, the copper cylinders took part in establishing a short circuit route between the rails and were carrying current.
[Table II. The times for the peaks labeled 1 to 5 in figure 10 are listed. For two consecutive peaks, the time difference is calculated and converted to a distance by using the projectile velocity.]
In table I, 2 shots are listed with a primary energy of 4.8 MJ. The difference between these 2 shots is the diameter of the brush. While the first shot (no. 184) had a brush diameter of 9 mm, shot no. 185 had this value increased to 10 mm. The kinetic energy of the payload part (approx. 400 g) was 663 kJ and 942 kJ, respectively. Shot no. 185 has a payload-part kinetic energy that is 279 kJ larger. In figure 11, the power acting on the rail-armature-rail interface is drawn. To suppress short-time fluctuations as seen, for example, in the muzzle voltage trace, the curve was smoothed. Until 2.5 ms, the onset of transition for shot no. 184, both curves are identical. The electrical power converted to heat during the acceleration at the rail-armature-rail contact element is very low; basically all the available electrical power at the rails is converted into acceleration of the projectile. After transition, the heating power reaches values of up to 830 MW for shot no. 184 and 660 MW for shot no. 185. Integrating the power over time gives the amount of energy that is converted into heat. This is 1.49 MJ and 1.19 MJ for shot no. 184 and shot no. 185, respectively. The difference is 300 kJ, very close to the 279 kJ difference in kinetic energy. This means that the additional available energy in shot no. 185 was converted to 90% into kinetic energy of the payload part of the projectile.
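The heat-versus-kinetic-energy bookkeeping can be reproduced from the recorded traces; a sketch (array names are illustrative), approximating the contact heating power as the product of muzzle voltage and current:

```python
import numpy as np

def contact_heat_and_ke(t, muzzle_voltage, current, m_payload, v_muzzle):
    """Energy converted to heat at the rail-armature-rail contact,
    approximated as the time integral of U_muzzle * I, compared with
    the payload kinetic energy 1/2 m v^2."""
    heat_j = np.trapz(muzzle_voltage * current, t)
    ke_j = 0.5 * m_payload * v_muzzle**2
    return heat_j, ke_j

# From the text, shot 184 vs 185: KE 663 kJ vs 942 kJ (delta 279 kJ),
# heat 1.49 MJ vs 1.19 MJ (delta 300 kJ) -> roughly 90% of the extra
# energy in shot 185 went into payload kinetic energy.
```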
VI. SUMMARY AND CONCLUSION
During an investigation spanning several years, different payload projectile types were designed and tested. In an earlier report [2], the experiments with a monolithic projectile and the development path toward the separating projectile were described. Here, experiments with a separating projectile type were presented. A series of 5 shots was made, and for each shot the parameters current, muzzle voltage, velocity and action integral were recorded. An inspection of the muzzle voltage traces of these shots revealed a very different behavior compared with the muzzle voltage of the standard projectile with the same number of copper brushes. If the contact of the brushes to the rails is not disturbed by strong erosion or loss of brushes, the voltage rises quickly to a plateau after transition, while for the standard projectile the voltage continues to increase. For shot no. 183, an enlargement of the shot-out muzzle voltage peak showed that the payload copper cylinders do take part in the sharing of the current. In addition to this, traces of smoke from molten copper, produced by the heat developed through current conduction and mechanical friction, can be identified in some of the free-flight x-ray pictures, pointing to the same conclusion. As the energy was increased, it was necessary to enlarge the brush diameter from 8 mm to 10 mm. This indicates that there is a positive effect of a larger armature mass being available. One clear problem at the current PEGASUS railgun for experiments with velocities above 2000 m/s is the delamination of the GRP projectile body, seen in the x-ray photographs in the region where the payload is mounted, and the inability of the GRP to adequately support the pushing booster brushes. As a consequence of this, the next design of the payload projectile will make use of mechanically tougher materials. The currently preferred candidate material is aluminum. | 2017-02-18T19:41:52.295Z | 2013-06-16T00:00:00.000 | {
"year": 2013,
"sha1": "9c238880551a79d1b4741e52f394b3025235579c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1402.6094",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1eb1f1031f5cde1502b8b7318c7772856e67180b",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
86083152 | pes2o/s2orc | v3-fos-license | Dequalinium-induced Protofibril Formation of α-Synuclein*
α-Synuclein is the major constituent of Lewy bodies, a pathological signature of Parkinson disease, found in the degenerating dopaminergic neurons of the substantia nigra pars compacta. Amyloidosis generating the insoluble fibrillar protein deposition has been considered to be responsible for the cell death observed in the neurodegenerative disorder. In order to develop a controlling strategy toward the amyloid formation, 1,1′-(1,10-decanediyl)-bis-[4-amino-2-methylquinolinium] (dequalinium) was selected and examined in terms of its specific molecular interaction with α-synuclein. The protein was self-oligomerized by dequalinium, which gave rise to the ladder formation on N-[2-hydroxy-1,1-bis(hydroxymethyl)ethyl]glycine/SDS-PAGE in the presence of a coupling reagent of N-(ethoxycarbonyl)-2-ethoxy-1,2-dihydroquinoline. The double-headed structure of dequalinium with the two cationic 4-aminoquinaldinium rings was demonstrated to be critical for the protein self-oligomerization. The dequalinium-binding site was located on the acidic C-terminal region of the protein with an approximate dissociation constant of 5.5 μM. The protein self-oligomerization induced by the compound has resulted in the protofibril formation of α-synuclein before it has developed into amyloids. The protofibrils were demonstrated to affect the membrane intactness of liposomes, and they have also been shown to influence cell viability of human neuroblastoma cells. In addition, dequalinium treatment of the α-synuclein-overexpressing cells exerted a significant cell death. Therefore, it is pertinent to consider that dequalinium could be used as a molecular probe to assess toxic mechanisms related to the amyloid formation of α-synuclein. Ultimately, the compound could be employed to develop therapeutic and preventive strategies toward α-synucleinopathies including Parkinson disease.
α-Synuclein is the major constituent of Lewy bodies, a pathological hallmark of Parkinson disease (PD), found within the dopaminergic cells in the substantia nigra pars compacta (1)(2)(3)(4). The physiological function of the protein is virtually undefined, although its involvement in synaptic plasticity has been suggested (5). α-Synuclein is the first protein shown to be genetically linked to PD. Three independent missense mutations of the gene were isolated from a few pedigrees of familial PD (6-8). Two of those resulted in substitutions of alanine at either residue 30 or residue 53 with proline and threonine (A30P and A53T), respectively (6,7). The third mutation showed a drastic substitution from acidic glutamate at residue 46 to a basic lysine residue (E46K) (8). In addition, triplication of the α-synuclein gene was observed in an American family of mixed northern European origin with autosomal dominant young onset PD (9). Experimentally, overexpression of α-synuclein in mice and flies caused behavioral deficits reminiscent of human PD with selective degeneration of dopaminergic neurons (10,11). These facts clearly indicate that α-synuclein is a pathological component of PD. The protein is also related to other neurodegenerative disorders, collectively known as α-synucleinopathies, including Alzheimer disease (AD), dementia with Lewy bodies, and multiple system atrophy (12,13).
α-Synuclein has been known to be a "natively unfolded" protein (14,15). When incubated in vitro, however, the protein tends to aggregate and form the fibrils known as amyloids, in which the unstructured protein has aligned to form a cross-β-sheet conformation (16-18). The underlying mechanism for the amyloid formation remains to be clarified. Amyloid formation of the mutant form A53T was facilitated as compared with the wild-type α-synuclein, whereas the other, A30P, was not as effective (19-21). It has been debated whether the amyloids are responsible for the cell death observed in the degenerative diseases. It was suggested that oligomeric intermediates obtained prior to the amyloid formation would be a culprit for the toxicity by possibly forming amyloid pores on membranes (22)(23)(24). On the other hand, various forms of protein aggregates from granular to filamentous structures were hypothesized to exert different potentials for the cell death, depending on their morphologies (25)(26)(27).
α-Synuclein can be divided into three regions in its primary structure (28,29). The N-terminal region (residues 1-60) has been demonstrated to form amphipathic α-helices upon membrane interaction (30), suggesting that the protein could be involved in synaptic plasticity by participating in membrane dynamics (31). Since all of the amino acid substitutions observed in the mutant forms are exclusively localized in this region, deficits in its membrane interaction could have pathological implications, if any (32). The hydrophobic middle segment (residues 61-95) is also known as the non-Aβ component of AD amyloid, since the peptide fragment has been found in the senile plaques of AD as the second major constituent next to the primary amyloid β/A4 protein (Aβ) (33). The protein ends with the acidic C terminus (residues 96-140), which is the most variable region among the synuclein isoforms including β- and γ-synucleins (5). This protein has been demonstrated to experience multiple ligand interactions. This property could be due to its "natively unfolded" structure, which is prone to be stabilized upon various ligand interactions (34). In particular, we have been interested in screening for α-synuclein-interactive small chemicals that could eventually be used to control the amyloidosis. Recently, phthalocyanine tetrasulfonate, previously proposed as an antiscrapie agent (35), was shown to interact with α-synuclein via selective binding to the acidic C terminus, and it prevented the cytotoxicity in SH-SY5Y cells caused by the overexpression of α-synuclein in the presence of a proteasomal inhibitor, lactacystin (36). The copper complex, phthalocyanine tetrasulfonate-Cu²⁺, on the other hand, influenced the protein to be self-oligomerized by interacting with the N-terminal region and stimulated the amyloid formation (36). Coomassie Brilliant Blue G and R augmented the protein aggregation of α-synuclein and gave rise to two different shapes of the aggregates, worm-like and filamentous structures, respectively (26). Their interactions were independent of the acidic C terminus. Eosin was another specific dye for the α-synuclein interaction, which led to facilitated protein self-oligomerization and fibrillization (37). Pesticides such as rotenone, paraquat, and dieldrin also enhanced the fibrillization of α-synuclein (38-40). Other chemicals including trimethylamine N-oxide and diethyl dithiocarbamate also induced the protein to be self-interactive (41).
1,1′-(1,10-Decanediyl)bis-[4-amino-2-methylquinolinium] (dequalinium (C10-DQ)) is an amphipathic cation that contains two cationic aminoquinaldinium rings separated by a 10-carbon methylene bridge (Fig. 1) (42). Dequalinium, previously used as a topical antimicrobial agent, exhibited anti-tumor activity by selectively accumulating in mitochondria (43). The compound has been demonstrated to interact with several enzymes such as mitochondrial F1-ATPase, protein kinase C, calmodulin-dependent phosphodiesterase, and the calcium-activated K⁺ channel (44-48). In this report, the dequalinium interaction of α-synuclein has been investigated in terms of its effects on the induction of protein self-oligomerization and the stabilization of the protofibrils. Molecular details of the interaction, therefore, could provide a means to assess toxic mechanisms related to the amyloid formation of α-synuclein.
Analysis of Self-oligomerization of α-Synuclein in the Presence of Dequalinium and Its Analogues-α-Synuclein (5 μM) was preincubated with various concentrations of C10-DQ and C14-DQ in 20 mM Mes, pH 6.5, for 30 min at 37°C. The protein was also incubated with dequalinium analogues such as C4-DQ, C6-DQ, and C8-DQ along with quinaldine, quinaldinium iodide, and decamethonium bromide at a fixed concentration of 1.5 mM. Dequalinium and its analogues were prepared in 50% ethanol. The dequalinium-induced protein self-oligomerization was also carried out with various α-synuclein-related proteins such as β-synuclein, α-syn97, α-syn61-140, and the two mutant forms (A53T and A30P) at a concentration of 5 μM. Following the addition of 0.3 mM EEDQ originally stocked in Me₂SO, the chemical cross-linking reactions proceeded for another 1 h at 37°C while keeping the concentration of the organic solvent less than 10% in the final mixtures (53). The reactions were terminated with a Tricine/SDS-PAGE sample buffer consisting of 8% SDS, 24% glycerol, 0.015% Coomassie Blue G, and 0.005% phenol red in 0.9 M Tris-Cl, pH 8.45, by mixing at a 1:1 (v/v) ratio. After boiling for 5 min, the samples were analyzed with a precast gel for 10-20% Tricine/SDS-PAGE, and the ladder formation was visualized with the silver staining procedure of Morrissey (54).
Analysis of Protein Aggregation-Protein aggregation of α-synuclein was monitored with either turbidity and/or thioflavin-T binding fluorescence (55,56). α-Synuclein (1 mg/ml) was incubated with either various concentrations of dequalinium or its analogues (C4, C6, and C10) at a fixed concentration of 50 μM in 20 mM Mes, pH 6.5, at room temperature under continuous shaking at 200 rpm with an orbit shaker (Red Rotor; Hoefer Scientific Inc.). Turbidity was estimated by measuring the absorbance of the incubation mixture at 405 nm. Amyloid formation of α-synuclein was evaluated with thioflavin-T binding fluorescence at 485 nm with excitation at 440 nm. During the incubation, aliquots (20 μl) were combined with 5 μM thioflavin-T in 50 mM glycine, pH 8.5, to a final volume of 100 μl. The fluorescence was measured with an FL500 Microplate Fluorescence Reader (Bio-Tek Instruments). The protein aggregates were visualized with a transmission electron microscope (H7100; Hitachi). Aliquots (5 μl) of the aggregates were adsorbed onto a carbon-coated copper grid (300-mesh) and air-dried for 1 min. After negative staining with 2% uranyl acetate for another 1 min, the aggregates were observed with the electron microscope (27). For the analysis with an atomic force microscope (AFM; XE-150, PSIA), an aliquot (5 μl) was placed on freshly cleaved mica (thickness 0.3 mm). Following adsorption of the protein aggregates (1-2 min), the droplet was displaced with 100 μl of Millipore-filtered water. After removing excess water with a filter paper, the aggregates were examined with AFM.
Cytotoxicity-Human dopaminergic neuroblastoma cells (SH-SY5Y) were grown in Dulbecco's modified Eagle's medium containing 50 units/ml penicillin and 50 μg/ml streptomycin supplemented with 10% fetal bovine serum in 5% CO₂ at 37°C. Cells were cultured to 80% confluence on a 60-mm culture dish and subjected to transient transfection with a mammalian expression vector (pcDNA 3.0) containing the human cDNA of α-synuclein. The vector (2 μg) was mixed with Lipofectamine Plus™ reagent according to the manufacturer's procedure and added onto the culture dish in the presence of serum-free medium. The transfection was carried out for 3 h at 37°C under humidified 5% CO₂ and 95% air. After a change to the medium containing 50 units/ml penicillin, 50 μg/ml streptomycin, and 10% fetal bovine serum, the cells were further incubated for 24 h. Following trypsinization with 1 ml of trypsin-EDTA for 1 min at 37°C, the cells were plated onto a 24-well plate (2 × 10⁵ cells/well). Overexpression of α-synuclein inside the cells was examined with Western blotting. The cell lysates were prepared with 1% Triton X-100 in 50 mM Tris-Cl, pH 7.5, containing 150 mM NaCl, 1 mM EDTA, 1 mM dithiothreitol, 0.1 mM phenylmethylsulfonyl fluoride, 10 μM leupeptin, 1 μg/ml aprotinin, and 10 μM pepstatin A. The extracts were subjected to 15% SDS-PAGE, and the gels were transferred to a polyvinylidene difluoride membrane and incubated with the monoclonal antibody to α-synuclein (1:2,000). Following horseradish peroxidase-conjugated anti-mouse IgG secondary antibody treatment, the bands were visualized on x-ray film by using the ECL system (50).
The effect of dequalinium on the α-synuclein-overexpressing SH-SY5Y cells was examined. The cells were plated in a 24-well plate at 1.5 × 10⁵ cells/well. After cell growth reached 80-90% confluence, the human cDNA of α-synuclein within pcDNA3.0 and a mock plasmid were separately transfected into the cells at 0.5 μg each in the presence and absence of 0.5 μM dequalinium, and incubated for 24 h at 37°C. Cell survival was estimated with the tetrazolium salt extraction method (57) and the trypan blue exclusion assay. Since living cells with active mitochondria cleave the tetrazolium ring into a visible dark blue formazan reaction product, 3-[4,5-dimethyldiazol-2-yl]-2,5-diphenyltetrazolium bromide (MTT) was added to the culture medium at a final concentration of 1 mg/ml and incubated for 3 h at 37°C. To measure the absorbance at 570 nm, the MTT extraction buffer containing 20% SDS and 50% N,N-dimethylformamide, pH 4.7, was added to each sample (400 μl into 400 μl of medium in a 24-well plate including 80 μl of 5 mg/ml MTT) to dissolve the formazan grains. The cell survival was monitored with the absorbance at 570 nm by taking the extraction buffer containing the medium as a blank. In addition, trypan blue exclusion of the viable cells was estimated with 0.4% trypan blue by mixing with the cells at a 1:1 (v/v) ratio. Stained cells were counted in a hemocytometer under an inverted microscope. All of the data related to the cytotoxicity were obtained from two sets of three separate experiments.
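The survival readout of this protocol reduces to a blank-corrected absorbance ratio; the following minimal sketch shows that calculation with invented values (the function name and numbers are hypothetical, not taken from the paper).

```python
# Hypothetical sketch of the MTT viability calculation: blank-corrected A570
# of a treated well expressed as a percentage of the mock-transfected control.
def mtt_viability(a570_sample, a570_control, a570_blank=0.0):
    return 100.0 * (a570_sample - a570_blank) / (a570_control - a570_blank)

# Illustrative values only (cf. the ~54% viability reported under "Results")
print(mtt_viability(0.54, 1.00))  # -> 54.0
```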
Miscellaneous-The dissociation constant between α-synuclein and dequalinium was obtained by incubating α-synuclein at 8.7 μM with various concentrations of dequalinium (0-25 μM) in 20 mM Mes, pH 6.5, for 30 min at room temperature in a final reaction volume of 200 μl. Protein-bound dequalinium was separated from the unbound form by using a centrifuge column procedure (Penefsky column) packed with Sephadex G-25 (coarse) (58). Preswollen gel packed in a 3-ml syringe was compressed via centrifugation at 600 × g (HA1000-3 by Hanil Industrial Co., Incheon, Korea) for 1 min. The reaction mixture was loaded on top of the dehydrated gel and centrifuged for an additional 1.5 min at the same speed. The amounts of protein in the collected samples were quantified with a micro-BCA assay, which gave a protein recovery of 52% on average. The amount of dequalinium was estimated by measuring the absorbance at 327 nm with an extinction coefficient of ε = 2.80 × 10⁴ M⁻¹ cm⁻¹. A saturation curve was drawn between the amounts of total dequalinium and its protein-bound form. The dissociation constant between α-synuclein and dequalinium was obtained from a double-reciprocal plot of the saturation curve.
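For readers reproducing this analysis step, the sketch below fits a single-site binding model through the double-reciprocal (1/bound versus 1/free) linearization described above; the data points are invented placeholders, not the paper's measurements, and the variable names are assumptions.

```python
import numpy as np

# Placeholder binding data (uM): total dequalinium added and protein-bound
# amounts recovered from the centrifuge column (A327 / epsilon).
total_uM = np.array([2.5, 5.0, 10.0, 15.0, 20.0, 25.0])
bound_uM = np.array([1.1, 1.9, 3.0, 3.6, 4.0, 4.3])

free_uM = total_uM - bound_uM
# Single-site model: bound = Bmax * free / (Kd + free)
# Double-reciprocal form: 1/bound = (Kd/Bmax) * (1/free) + 1/Bmax
slope, intercept = np.polyfit(1.0 / free_uM, 1.0 / bound_uM, 1)
Bmax = 1.0 / intercept
Kd = slope * Bmax
print(f"Bmax ~ {Bmax:.1f} uM, Kd ~ {Kd:.1f} uM")  # same order as the 5.5 uM reported
```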
Structural alterations of α-synuclein in the presence of various concentrations of dequalinium were monitored with CD spectroscopy (JASCO J-715) after 30 min of preincubation of α-synuclein (2 μM) with dequalinium in 20 mM potassium phosphate, pH 7.5, at 20°C. The structural transition of the protein upon liposome treatment was also observed with CD spectroscopy in the presence and absence of 10 μM dequalinium. The contents of secondary structures were evaluated with the program provided by the manufacturer.
Liposomes were prepared with a lipid mixture of phosphatidic acid and phosphatidylcholine at a mass ratio of 1:1. A total of 5 mg of the mixture was dissolved in 1 ml of chloroform and dried under nitrogen gas with vortexing to increase the surface area of the resulting lipid film. After treating the film at 60°C for 30 s, 20 mM sodium phosphate (pH 7.2) was added to 1 ml, and the mixture was vortexed vigorously in the presence of glass beads. Following sonication of the sample in the absence of the beads, the resulting cloudy solution was subjected to repeated extrusion (15-17 times) through a 0.1-μm membrane equipped in the mini-extruder (Avanti Polar Lipids Inc.) at 50-60°C. The resulting large unilamellar liposomes were separated from free lipids with Sephacryl S-200 size exclusion chromatography. Morphological alterations of the liposomes were examined with TEM or AFM under various experimental conditions.
All of the statistical analyses were performed with analysis of variance and Duncan's multiple range test using XLSTAT-Pro. The limit of statistical significance was set at a p value of <0.05.
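As a minimal illustration of the stated analysis, the one-way ANOVA step can be reproduced as follows; Duncan's multiple range test is not available in SciPy, so only the ANOVA is shown, and the three groups are invented placeholders.

```python
from scipy import stats

# Placeholder readings for three treatment groups (illustrative only)
g1 = [0.98, 1.02, 0.95]
g2 = [0.80, 0.77, 0.83]
g3 = [0.55, 0.52, 0.58]

F, p = stats.f_oneway(g1, g2, g3)
print(F, p < 0.05)  # significant at the p < 0.05 threshold used in the paper
```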
Self-oligomerization of α-Synuclein in the Presence of Dequalinium-
The dequalinium interaction of α-synuclein was evaluated in terms of whether it induced protein self-oligomerization in the presence of 0.3 mM EEDQ as a coupling reagent (53). As shown in Fig. 2A, the protein was self-oligomerized by dequalinium from 50 μM at a molar ratio of 1:10 (α-synuclein/C10-DQ), which appeared as ladders on 10-20% Tricine/SDS-PAGE. The ladder formation became stronger as the dequalinium concentration was raised to 1.5 mM at a ratio of 1:300. On the other hand, the self-oligomerization induced by a dequalinium analogue with a longer carbon chain of C14 linker (C14-DQ) responded abruptly to the chemical concentration and exerted a biphasic response. The ladder formation appeared suddenly at 150 μM C14-DQ and started to disappear from 500 μM (Fig. 2B). This abnormal behavior could be due to possible cooperative interaction of the compound with the protein and subsequent nonspecific protein aggregation, which might suppress the ladder formation on the gel at the higher concentrations of C14-DQ. The phenomenon of ligand-induced self-oligomerization of α-synuclein has been demonstrated to be a highly selective process between the protein and various small chemicals, such as eosin, erythrosine B, Coomassie Brilliant Blue (G and R), and the copper-containing phthalocyanine tetrasulfonate, in addition to Aβ25-35 and the divalent copper ion (26,36,37,45,51,53). The dequalinium interaction of α-synuclein, therefore, could also be considered another highly selective process of molecular recognition.
In order to demonstrate the molecular selectivity, the chain length between the two quinaldinium rings in dequalinium (C10-DQ) was varied from 4 carbon units (C4-DQ) to 6 (C6-DQ) and 8 (C8-DQ) carbon units. These dequalinium analogues with different chain lengths were synthesized and employed to induce the protein self-oligomerization of α-synuclein. At a fixed chemical concentration of 1.5 mM, the ladder formation appeared from C8-DQ as the chain extended toward C10-DQ (Fig. 3A). Since quinaldine, quinaldinium iodide, and decamethonium bromide (Fig. 1) did not cause the self-oligomerization (Fig. 3B), the double-headed structure with the two cationic quinaldinium rings optimally separated by a hydrocarbon chain of appropriate length turned out to be a prerequisite for the selective α-synuclein interaction leading to the protein self-oligomerization.
Binding between Dequalinium and α-Synuclein-The specific interaction site of dequalinium on α-synuclein responsible for the protein self-oligomerization was pursued by employing various isoforms of α-synuclein, including β-synuclein, C-terminally truncated α-syn97, N-terminally truncated α-syn61-140, and the two mutant forms A53T and A30P. The ladder formation was observed with all of the isoforms except the C-terminally truncated α-syn97 (Fig. 4), indicating that the dequalinium binding site should be localized on the acidic C-terminal region. Ionic interaction between the negatively charged acidic region and the cationic quinaldinium rings might be one of the critical factors for the selective interaction. Since β-synuclein, which contains an acidic C terminus similar to that of α-synuclein but with a distinctive amino acid composition, still gave rise to the same self-oligomerization in the presence of dequalinium, a pivotal role of the charged interaction was also appreciated (Fig. 4).
In order to evaluate the molecular affinity, α-synuclein was incubated with various concentrations of dequalinium, and the protein-bound compound was separated from its free form by employing the centrifuge column packed with Sephadex G-25. By plotting a saturation curve and its corresponding double-reciprocal plot (Fig. 5), an approximate dissociation constant between α-synuclein and dequalinium was obtained as 5.5 μM, indicating that the molecular interaction exhibits considerable affinity. This binding of dequalinium caused a prominent effect on the structure of α-synuclein, as shown in Fig. 6. Structural alteration of α-synuclein upon the dequalinium treatment was examined with CD spectroscopy. The minimum ellipticity at 197 nm increased as the dequalinium concentration was raised to 20 μM in the presence of 2 μM α-synuclein (Fig. 6). Dequalinium itself, however, was not optically active. These data indicated that the protein appeared to experience a significant structural transition, reducing the content of random coil with a tendency to increase β-sheet structure.
Protein Aggregation of α-Synuclein in the Presence of Dequalinium-Protein aggregation of α-synuclein was monitored with turbidity by observing absorbance at 405 nm as well as thioflavin-T binding fluorescence at 485 nm with excitation at 440 nm. As the dequalinium concentration increased (1.4, 14, and 140 μM), the final turbidities were also increased from the control, with prominent shortening of the lag phases of the aggregation kinetics of α-synuclein (Fig. 7A). Intriguingly, however, the final amyloid formation evaluated with the thioflavin-T binding fluorescence instead decreased as the dequalinium concentrations were increased, although the lag phases were consistently reduced in the presence of the compound (Fig. 7B). The increased fluorescence, especially during the initial period of the aggregation in the presence of dequalinium, was found to be due to the granular protein aggregates of α-synuclein that have often been observed within the protofibril fraction (data not shown). Although thioflavin-T binding fluorescence has been employed to monitor amyloid formation in general, the amyloids are not the exclusive materials for the dye to bind. In fact, the thioflavin-T binding fluorescence was routinely observed even with only the granular forms of α-synuclein aggregates (27). These observations indicated that dequalinium could facilitate the protein aggregation by inducing the granular aggregates, whereas the final amyloid formation was somewhat suppressed. The results showing enhanced protein aggregation with reduced amyloid formation were consistently observed when other dequalinium analogues were employed to induce the protein aggregation. As the length of the hydrocarbon chain was varied among C4, C6, and C10 at a fixed concentration of 50 μM, the final turbidity was increased with an apparent reduction in the lag period (Fig. 7C), whereas the final thioflavin-T binding fluorescence obtained in the presence of the analogues was lower than that of the aggregates prepared with α-synuclein alone (Fig. 7D). These data suggest that dequalinium could induce the oligomeric intermediates of α-synuclein via the facilitated protein self-oligomerization, whereas their subsequent progress into the final amyloids has been somewhat prevented.
Induction of the Protofibrils of α-Synuclein by Dequalinium-The oligomer-inducing property of dequalinium was investigated by examining the protein aggregates of α-synuclein collected at a stationary phase of the aggregation kinetics following a sufficient period of incubation. In the absence of dequalinium, only the fibrillar protein aggregates were observed (Fig. 8A). Granular forms that appeared as "white circles" with an average diameter of about 40 nm, however, were present in the aggregates when dequalinium was included in the incubation mixtures at three different concentrations of 1.4, 14, and 140 μM (Fig. 8, B-D). These "white circles" have been frequently observed during various protein aggregation studies with α-synuclein in the presence of various chemical ligands (27,36). The protein aggregates obtained in the absence and presence of the compound were also observed with AFM. Whereas α-synuclein itself produced the fibrillar aggregates as an exclusive product (Fig. 8E), dequalinium (50 μM) also gave rise to small granules (~40 nm diameter) in addition to incomplete fibrillar structures (Fig. 8F). By measuring various granular structures revealed under either TEM or AFM, their average diameters were estimated as 42 ± 6 and 39 ± 9 nm, respectively. Protofibrils, on the other hand, were defined as the node-containing chainlike structures with an average width of 43 ± 4 nm and a length shorter than 500 nm. The protofibrils appeared to result from lateral associations of the granular structures. Intriguingly, the average width of the amyloid fibrils was reduced to 35 ± 11 nm from that of the protofibrils. In addition, the node structures in the protofibrils were smoothed in the amyloids. Unfortunately, the morphological relationship between the protofibrils and the amyloids cannot be evaluated until the currently unknown association mechanisms from the protofibrils to the amyloids have been identified. Since the granular aggregates were observed along with the protofibrils in the presence of dequalinium (Fig. 8F), they could be considered oligomeric intermediates that might develop into amyloids via the protofibril formation (23,24). Dequalinium, therefore, could be considered an inducer and/or stabilizer of the oligomeric intermediates that partly prevents the subsequent fibrillization of α-synuclein.
The dequalinium-mediated oligomeric intermediate formation of α-synuclein was examined with size exclusion chromatography on Sephacryl S-200 (Fig. 9). The protein aggregation of α-synuclein (2 mg/ml) was performed in the absence and presence (5 and 50 μM) of dequalinium (Fig. 9, inset). Prior to the incubation, the protein solutions were filtered through a 0.22-μm syringe filter to remove any high molecular weight species. It was confirmed that these protein samples did not contain any materials eluted at the void volume of the size exclusion chromatography. Following 24 h of further incubation after the kinetics reached their stationary phases, as indicated with arrows in the inset (Fig. 9), the aggregates were separately subjected to the gel filtration chromatography. The monomeric α-synuclein was hardly observed in the three samples. The aggregates prepared with α-synuclein alone exhibited the shortest peak at the void volume, where the oligomeric forms were expected to be eluted. Intriguingly, the oligomer-containing peak increased as the dequalinium level was raised.
In the presence of 50 μM dequalinium, the highest peak due to the oligomeric protein aggregates was obtained at the void volume, although this condition caused the most rapid protein aggregation with the highest final turbidity (Fig. 9). These data clearly indicated that dequalinium favored the oligomer formation and also prevented further development into the amyloids.
The oligomers of α-synuclein obtained with dequalinium were collected from the size exclusion chromatography and examined under AFM. The image contained the aggregates of not only granular structures but also protofibrils in which small round-shaped granules were serially aligned to form a chainlike structure (Fig. 10, A and B). Since the protofibril fraction was suggested to contain the amyloid pores, which could affect membranes, liposomes were prepared and treated with the protofibrils in order to check their influence on membrane intactness. Generally, the liposomes treated with the protofibrils experienced drastic shrinkage in size when compared with a control (Fig. 10, C and D). On average, the diameters of the liposomes were reduced by 36%, from 81.1 to 51.6 nm, in the absence and presence of the protofibrils, respectively. In addition, some of the boundaries of the protofibril-treated liposomes were disrupted, resulting in diffused structures, as indicated with arrows (Fig. 10D). The membrane-disrupting effect of the protofibrils was also evaluated with live cells (SH-SY5Y) by observing the penetration of trypan blue into the cells. The protofibril-treated cells were stained by the dye in a higher proportion when compared with the control cells treated with monomeric α-synuclein (Fig. 10, E and F). The data indicated that those protofibrils could affect cell viability by possibly influencing the membrane intactness.
Dequalinium-induced Cytotoxicity of the α-Synuclein-overexpressing Cells-Cell death of the SH-SY5Y cells overexpressing α-synuclein was examined in the presence of dequalinium to show that the membrane-disrupting toxic activity of the oligomeric intermediates could be reflected within the cell. The cells were transfected with pcDNA containing the human α-synuclein gene in the presence and absence of dequalinium and incubated for 24 h at 37°C within a humidified CO₂ (5%) incubator. The expression of the protein was confirmed with Western blot. Cell survival was monitored with both MTT and trypan blue exclusion assays. Compared with the control in which a mock plasmid was transfected, the α-synuclein overexpression slightly affected the cell viability, by 11%, based on the results of the MTT assay (Fig. 11). In the presence of 0.5 μM dequalinium, however, the α-synuclein-overexpressing cells were demonstrated to experience significant cell death, with viability dropping to 54% from the 77% of the control. The cell death assessed with the trypan blue exclusion also showed that dequalinium caused significant death of the α-synuclein-overexpressing cells, by 26% (Fig. 11), which correlated well with the death evaluated by the MTT assay. Taken together, the data suggested that the overexpressed α-synuclein could exhibit its toxic effect in the presence of dequalinium, presumably by forming the protofibrils and affecting membrane stability, although the possibilities that the reagent could affect intracellular ATP generation and phosphorylation levels by directly inhibiting F₀F₁-ATPase or protein kinase C, respectively, should not be completely excluded (45,46,60).
Since both dequalinium and α-synuclein have been considered to influence membranes via a common amphipathic nature, their individual and mutual effects on the membranes were investigated with AFM and CD spectroscopy. During 12 h of incubation with dequalinium at 10 μM, the liposomes appeared significantly distorted in their morphologies examined under AFM (Fig. 12, A and B). Coexistence with α-synuclein (1 μM), however, restored the liposomes to their original shapes (Fig. 12, A and C). This observation indicated that the dequalinium interaction with the lipid membranes could be prevented by α-synuclein via selective interaction between the molecules. In addition, the membrane interaction of α-synuclein was also demonstrated to be modulated by the compound. The random-structured protein was confirmed to show a significant structural transition, increasing its α-helical content upon the liposome interaction (Fig. 12D). Intriguingly, the altered structure of α-synuclein in the presence of the membranes was shown to experience a dramatic structural transition upon the dequalinium treatment back to the original spectrum, with an increased minimum ellipticity around 197 nm (Fig. 12D), indicating that the membrane-bound α-synuclein could be removed and subsequently influenced by the compound. These morphological and structural analyses indicated that the individual potentials of α-synuclein and dequalinium to interact with lipid membranes were abolished via their specific intermolecular interactions, which could lead to the eventual protofibril formation.
Since the protofibrils could be formed inside the cells overexpressing α-synuclein in the presence of dequalinium and they have been shown to exert cytotoxicity in vitro, the compound could be used as a molecular probe to assess the toxic mechanism of α-synuclein during the amyloidosis.
DISCUSSION
α-Synuclein experienced protein self-oligomerization in the presence of dequalinium. The double-headed structure of the compound, with two cationic 4-aminoquinaldinium rings separated by a hydrophobic hydrocarbon chain of 10 carbon units, appeared to be critical for the self-oligomerization. Its interaction with α-synuclein was localized on the acidic C-terminal region. It is not clear, however, whether the two cationic quinaldinium rings could occupy two separate sites on one protein or bridge between two proteins. Its mode of interaction could depend on the chain length as well. The chain, however, cannot be extended too long, because the additional hydrophobicity would cause rather abnormal protein self-interaction, as observed with C14-DQ. Alternatively, a certain solution structure of dequalinium prior to its protein engagement could influence the molecular interaction with α-synuclein (42).
α-Synuclein has been suspected to have certain local structures that accommodate the structural characteristics of dequalinium, although the protein has been known to exist in a "natively unfolded" state (14,15). This suggestion has been supported by various studies employing small chemicals responsible for specific α-synuclein interactions (26,36,37,51,53,62). Although the structural plasticity of this unstructured protein would play a key role in interacting with various small ligands, the protein might not exist in a fully extended, completely denatured state. The CD spectrum of α-synuclein showing predominant random coil structure does not necessarily mean that the protein would not form any local structures distinct from either α-helix or β-pleated sheet. Recently, it was demonstrated that α-synuclein could exist in a stable compact structure in addition to the elongated state, depending on pH (63,64). When the protein recognizes its specific ligand, however, it would exhibit structural plasticity to accommodate the ligand. It is this altered structure of α-synuclein that could develop into the final protein aggregates in the presence of the specific ligand. As a result, the final protein aggregates of this single, so-called "natively unstructured" protein could end up in various forms of amyloids depending on the ligands applied (26,27,36). Our data indicated that α-synuclein experienced a structural transition upon the dequalinium treatment, which is necessary for the intermolecular interaction leading to the protein aggregation.
It has been controversial whether intracellular overexpression of α-synuclein actually causes cell death under cultured conditions (65-68). Although it might depend on the cell types studied, our transient α-synuclein overexpression did not affect the SH-SY5Y cells. An additional executor, such as lactacystin as a proteasome inhibitor, was required in order to exert cell death (36). In this study, dequalinium has been shown to cause the death of the α-synuclein-overexpressing SH-SY5Y cells, presumably via direct interaction with the protein, in the absence of an additional executor. However, since the compound itself could also affect cell survival by influencing various critical enzymes, such as F₀F₁-ATPase and protein kinase C, or by direct accumulation inside mitochondria responsible for an elevated level of reactive oxygen species, the dequalinium effect on the cytotoxicity should not be attributed solely to the protofibril formation of α-synuclein inside the cell. More direct evidence is awaited for the dequalinium interaction of α-synuclein inside the cell leading to the intracellular protein aggregation.
Based on this study, dequalinium has been proposed to act as an inducer and/or stabilizer of the protofibrils of α-synuclein. Induction of stable α-synuclein oligomers has also been reported in other studies. Methionine oxidation and nitration of α-synuclein were shown to inhibit the fibrillization of α-synuclein by forming soluble oligomers (69,70). Concerning the pathological significance of the soluble oligomers as intermediates during amyloidosis, the soluble Aβ oligomers have recently acquired more attention than the Aβ deposits as a pathological component responsible for the neuronal cell death and subsequent memory loss observed in early AD (71). However, the possibility that dequalinium could disaggregate preformed fibrillar protein aggregates into protofibrils cannot be completely excluded, as demonstrated with rifampicin and baicalein (72,73). Nonetheless, the protofibril-based toxic mechanisms could be evaluated with the dequalinium-induced protein aggregation. Membrane instability of the plasma membrane and other intracellular organelles could be part of the toxic causes observed with the protofibrils, as shown in this study. Alternatively, the membrane itself could influence amyloidogenic proteins to become more pathogenic (74). The protofibrils could be formed inside the cells overexpressing α-synuclein in the presence of dequalinium via their mutual intermolecular interaction, which could suppress their individual potentials to interact with membranes. Taken together, it is pertinent to consider that dequalinium could be useful as a chemical probe to understand the molecular mechanisms of the cell death caused by the protofibrils and/or amyloids of α-synuclein. In addition, the data provided in this study emphasize the protofibrils, not the final amyloids, of amyloidogenic proteins as a real culprit for the neuronal cell death observed in various neurodegenerative disorders (59,61). | 2019-03-30T13:09:08.280Z | 2006-02-10T00:00:00.000 | {
"year": 2006,
"sha1": "309c3d53cc4590f3dfe798277c0ddcf9d5b6456f",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/281/6/3463.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "fcbd0ee7a28b98429b4c0e1200bedc6e90090d82",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
55624226 | pes2o/s2orc | v3-fos-license | Thinning of Concentric Circular Antenna Arrays Using Improved Binary Invasive Weed Optimization Algorithm
This study presents a novel optimization algorithm based on invasive weed optimization (IWO) for reduction of the maximum side lobe level (SLL) with a specific half power beam width (HPBW) of thinned large multiple concentric circular arrays of uniformly excited isotropic elements. IWO is a powerful optimization technique for many continuous problems, but it does not work well for discrete problems. In this paper, the authors propose an improved binary IWO (IBIWO) for pattern synthesis of thinned circular arrays. The thinning percentage of the array is kept equal to or more than 50%, and the HPBW is kept equal to or less than that of a fully populated, uniformly excited, half-wavelength-spaced concentric circular array of the same number of elements and rings. Simulation results are compared with previously published results of DE, MPSO, and BBO to verify the effectiveness of the proposed method for concentric circular arrays.
Introduction
Circular antenna arrays are of considerable interest in a variety of applications including radar, sonar, and mobile and commercial satellite communication systems [1]. A circular array consists of a number of elements, usually omnidirectional, arranged on a circle, and can be employed for beamforming in the azimuth plane, for example, at the base stations of mobile radio communication systems [2-4]. It is a very popular type of antenna array because it has several advantages over other array geometries, such as all-azimuth scan capability (it can perform a 360-degree scan around its center) while the beam pattern is kept invariant [5]. Moreover, circular arrays are less sensitive to mutual coupling compared with linear and rectangular arrays since they do not have edge elements [1]. A concentric circular antenna array (CCAA), which contains many concentric circular rings of different radii and numbers of elements, has several further advantages, including flexibility in array pattern synthesis and design in both narrowband and broadband beamforming applications [2-4]. The CCAA is also used in direction-of-arrival applications since it provides almost invariant azimuth angle coverage.
The uniform concentric circular antenna array (UCCA) is one of the most important types of CCAA, in which the interelement spacing in each individual ring is kept at almost half a wavelength and all the elements in the array are uniformly excited. Uniform antenna arrays exhibit high directivity; however, they usually suffer from a high side lobe level [1]. To reduce the SLL, the array can be made aperiodic by altering the positions of the antenna elements while keeping uniform amplitude excitations. The other method to reduce the SLL is to use an equally spaced array with a radially tapered amplitude distribution [3,4]. However, uniform excitation is desired to reduce the complexity of designing a feed network.
The process is known as array thinning and is widely employed to reduce the SLL of antenna arrays with a large number of elements. Array thinning is the removal of some elements from a uniformly spaced or periodic array to achieve a desired radiation pattern. Thinning not only reduces the SLL but also brings down the cost, weight, and power consumption by decreasing the number of radiating elements. However, the synthesis of an antenna array is a tough challenge that cannot be solved by analytical methods. There are various global optimization algorithms for thinning, such as genetic algorithms (GA) [6], particle swarm optimization [7], differential evolution (DE) [8], and biogeography-based optimization (BBO) [9]. Haupt used GA for thinning of linear arrays and a center-element-fed concentric ring array antenna to reduce the SLL [10,11]. An orthogonal GA was proposed by Zhang et al. for pattern synthesis of a thinned linear array [12]. Ghosh and Das utilized DE with global and local neighborhood (DEGL) for thinning a planar circular array [13]. Thinning of a planar concentric circular array for SLL reduction using a modified PSO (MPSO) algorithm was proposed by Pathak et al. [14]. Chatterjee et al. compared the performance of MPSO and the gravitational search algorithm for thinning of a scanned concentric ring array [15]. BBO was utilized by Singh and Kamal for thinning large multiple concentric circular ring arrays [16]. Wang et al. proposed a chaotic binary PSO (CBPSO) algorithm for the synthesis of thinned linear and planar antenna arrays [17]. Basu and Mahanti introduced the firefly algorithm (FFA) for thinning of a concentric two-ring circular array antenna [18]. Chatterjee et al. compared the performance of binary FFA and binary PSO for thinning a planar concentric ring antenna array to minimize the SLL in a number of predefined φ-planes [19].
In this paper, we present a method of optimization of uniformly spaced concentric circular arrays using an improved binary variant of the recently proposed metaheuristic algorithm called invasive weed optimization. The IWO algorithm has found successful application in many electromagnetic problems, such as the design of printed Yagi antennas [20], E-shaped MIMO antennas [21], multifeed reflector antennas [22], broadband patch antennas [23], conformal phased arrays [24], circular antenna arrays [25], and so forth. Results obtained using IWO for these problems are encouraging. However, the nature of the reproduction operators in classical IWO limits its application to certain problem classes, such as discrete optimization problems. In this paper, an improved binary version of the IWO algorithm (IBIWO) is proposed for large multiple concentric ring arrays of isotropic elements for SLL reduction while at the same time keeping the half power beam width (HPBW). The thinning percentage of the array is kept above 50%, and the HPBW is kept equal to or less than that of the fully populated array with the same number of elements and rings. The same problem has been dealt with by DEGL, MPSO, and BBO, respectively. To the best of our knowledge, IWO has not been utilized for the thinning of concentric circular arrays before. Simulation results obtained are compared with the above algorithms.
The rest of the paper is organized as follows. A formulation of the thinned CCAA pattern synthesis as an optimization task is discussed in Section 2. Section 3 gives a comprehensive overview of the proposed IBIWO algorithm. Section 4 presents the simulation results, and conclusions are presented in Section 5.
Thinned Planar Circular Array
Thinning an array means turning off some radiating elements in a uniformly spaced or periodic array in order to generate a pattern with low side lobe levels. Typical applications for thinned arrays include satellite-receiving antennas that operate against a jamming environment, ground-based high-frequency radars, and the design of interferometer arrays for radio astronomy. In this work, we assume that the positions of the elements are fixed and that each element can have only two states: either "on" or "off." An antenna in the "on" state contributes to the total array pattern. On the other hand, an antenna is in the "off" state if the element is either passively terminated to a matched load or open-circuited, and hence it does not contribute to the total array pattern. Thinning an array to produce low side lobes is much simpler than the more general problem of nonuniform spacing of the elements, since nonuniform spacing has an infinite number of possibilities for the placement of the elements.
In a CCAA, the elements are arranged in such a manner that all antenna elements are positioned on multiple concentric circular rings, which vary in radius and in number of elements. Figure 1 shows the general configuration of a CCAA with M concentric circular rings, where the m-th (m = 1, 2, ..., M) ring has a radius r_m and the corresponding number of elements is N_m. Assuming that all the elements are isotropic sources, the far-field pattern of this array can be written as

AF(θ, φ) = Σ_{m=1}^{M} Σ_{n=1}^{N_m} I_mn exp[ j k r_m sin θ cos(φ − φ_mn) ],    (1)

and the normalized absolute power pattern P(θ, φ) in dB can be expressed as follows:

P(θ, φ) = 20 log10( |AF(θ, φ)| / |AF(θ, φ)|_max ),    (2)

where k = 2π/λ, r_m = N_m d_m / (2π) is the radius of the m-th ring, d_m is the interelement arc spacing of the m-th ring, φ_mn = 2πn/N_m is the angular position of the n-th element of the m-th ring, I_mn ∈ {0, 1} is the (on/off) excitation amplitude of the n-th element of the m-th ring, and θ and φ are the zenith and azimuth angles, respectively. All the elements have the same excitation phase of zero degrees.
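To make the computation concrete, the following is an illustrative Python sketch (not the authors' Matlab code) that evaluates the normalized power pattern of a thinned ten-ring CCAA with N_m = 8m elements per ring, as used in the design examples of Section 4; the function name, the on/off encoding, and the fixed 0.5λ spacing are assumptions.

```python
import numpy as np

def ccaa_pattern_db(theta, phi, on, d=0.5, M=10):
    """Normalized power pattern (dB) of a thinned CCAA with N_m = 8m elements.

    theta : array of zenith angles (rad); phi : azimuth angle (rad)
    on    : list of lists, on[m-1][n] in {0, 1} marks element states
    d     : interelement arc spacing in wavelengths
    """
    k = 2.0 * np.pi                      # wavenumber, with lengths in wavelengths
    af = np.zeros_like(theta, dtype=complex)
    for m in range(1, M + 1):
        Nm = 8 * m
        rm = Nm * d / (2.0 * np.pi)      # ring radius from the arc spacing
        for n in range(Nm):
            if on[m - 1][n]:
                phimn = 2.0 * np.pi * n / Nm
                af += np.exp(1j * k * rm * np.sin(theta) * np.cos(phi - phimn))
    p = np.maximum(np.abs(af), 1e-12)    # clip to avoid log of zero at nulls
    return 20.0 * np.log10(p / p.max())

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
on_all = [[1] * (8 * m) for m in range(1, 11)]       # fully populated array
pattern = ccaa_pattern_db(theta, 0.0, on_all)
print(pattern.max())                                  # 0 dB at the main beam
```

The peak SLL entering the design objective is then simply the largest pattern value outside the main-lobe region.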
Invasive Weed Optimization Algorithms
3.1. Continuous IWO. Invasive weed optimization (IWO) is a metaheuristic algorithm that mimics the colonizing behavior of weeds. The IWO algorithm may be summarized in four steps; more details can be found in [26].
(I) Initialization: an initial population of solutions is randomly produced in the given D-dimensional search space.
(II) Reproduction: in this step the parent weed produces seeds depending on its own fitness, as well as the colony's lowest and highest fitness.
(III) Spatial distribution: the generated seeds are randomly scattered over the D-dimensional search space by perturbing them with normally distributed random numbers with zero mean and a variable variance. The standard deviation for a particular iteration can be given as

σ_cur = ((iter_max − iter)^n / (iter_max)^n) (σ_initial − σ_final) + σ_final,    (3)

and the position of a new seed is obtained by adding a perturbation drawn from N(0, σ_cur) to the position of its parent, where iter and iter_max are the current iteration and the maximum number of iterations, σ_initial and σ_final are the initial and final standard deviations, σ_cur is the standard deviation of the current iteration, and n is the nonlinear modulation index.
(IV) Competitive exclusion: the new seeds grow into flowering weeds and are placed together with the parent weeds in the colony. A competition is adopted to limit the maximum number of plants in a colony: weeds with the worst fitness are eliminated until the maximum number of weeds in the colony is reached. Steps (I) to (IV) are repeated until the maximum number of iterations has been reached.
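A minimal sketch of the reproduction and spatial-distribution steps just listed is shown below, assuming the usual nonlinear decay of the standard deviation and a linear fitness-to-seed-count mapping; the helper names and numerical parameters are illustrative only.

```python
import numpy as np

def sigma_current(it, it_max, sigma_init, sigma_final, n=3):
    # Nonlinearly shrinking standard deviation over the iterations (Eq. 3)
    return ((it_max - it) ** n / it_max ** n) * (sigma_init - sigma_final) + sigma_final

def num_seeds(cost, cost_worst, cost_best, s_min=0, s_max=5):
    # Fitter weeds (lower cost) produce more seeds, interpolated linearly
    frac = (cost_worst - cost) / (cost_worst - cost_best + 1e-12)
    return int(s_min + frac * (s_max - s_min))

rng = np.random.default_rng(1)
parent = rng.random(10)                              # a weed in a 10-D space
sigma = sigma_current(it=30, it_max=150, sigma_init=1.0, sigma_final=0.01)
seed = parent + rng.normal(0.0, sigma, size=parent.shape)   # spatial dispersal
print(sigma, num_seeds(0.4, cost_worst=1.0, cost_best=0.1))
```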
3.2. Binary IWO.
The representation used in IWO is a real-valued vector. We make some improvements to classical IWO to deal with discrete problems. Firstly, an appropriate coding method is utilized to represent the actual problem.
In general, the weed in BIWO is replaced by a binary coding sequence in which each bit takes the value 1 or 0. Secondly, a binary space-spread mechanism must be employed. In the IWO algorithm, the generated seeds are randomly scattered over the D-dimensional search space by perturbing them with normally distributed random numbers with zero mean and a variable standard deviation, but this continuous spread method does not make sense for a binary coding sequence. We therefore adopt a mutation mechanism to create seeds in BIWO. Each bit in the parent weed is mutated with a probability calculated by the sigmoid function

p = 1 / (1 + e^(−d)),

where d is the spread distance generated by the normally distributed function N(0, σ_cur). Once the mutation probability of each bit in a weed has been calculated, a uniformly distributed random number in the range [0, 1] is generated for each bit; the bit is flipped when the random number is smaller than its mutation probability and is kept unchanged otherwise (a sketch of this step is given together with the adaptive dispersion mechanism below).

3.3. Improved Binary IWO. As mentioned in Section 3.1, σ_cur defines the exploration and exploitation abilities of the algorithm and acts as both the diversification and the intensification component of BIWO; it has a great effect on the final solutions. Good diversification brings the final solution near the global optimum: the algorithm uses this component to identify the most promising regions where the global optimum may lie. Good intensification helps the algorithm exploit these promising regions to find the global optimum; it increases the convergence speed of the algorithm and yields better final solutions. Hence, it is very important to keep an efficient balance between the diversification and intensification of the algorithm. However, BIWO uses a fixed σ_cur to produce the seeds of each weed and therefore suffers from a lack of fine balance between exploration and exploitation.
In order to overcome the drawbacks of BIWO, an adaptive dispersion mechanism is integrated into the BIWO algorithm. In this mechanism, the σ_cur of the current generation is distributed linearly among the weeds, such that the weed with the highest fitness receives the lowest standard deviation and the weed with the lowest fitness receives the highest, which can be represented by

σ_i = (i / N_sum) σ_cur,    (7)

where i is the rank index of the weeds in the colony sorted according to their fitness (best first), σ_cur can be calculated by (3), N_sum is the total number of weeds in the current generation, and σ_i is the standard deviation used by the i-th weed to produce seeds. Hence, plants with lower fitness still have the chance to produce good seeds in the current generation. In addition, this concept increases the diversification of the algorithm, so the search space is explored effectively.
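The sketch below combines the two discrete-IWO ingredients just described: the rank-based dispersion of the reconstructed equation (7) and the sigmoid-based bit mutation of Section 3.2. Both the exact flipping rule and the linear rank scaling are reconstructions and should be read as assumptions, not as the paper's verbatim algorithm.

```python
import numpy as np

def adaptive_sigmas(costs, sigma_cur):
    # Rank-based dispersion (reconstructed Eq. (7)): best weed -> smallest sigma
    order = np.argsort(costs)                  # best (lowest cost) weed first
    n = len(costs)
    sigmas = np.empty(n)
    for rank, idx in enumerate(order, start=1):
        sigmas[idx] = (rank / n) * sigma_cur
    return sigmas

def biwo_seed(parent_bits, sigma_i, rng):
    # Sigmoid-based bit mutation: flip bit j when a uniform draw falls below
    # the sigmoid of its normally distributed spread distance d_j
    d = rng.normal(0.0, sigma_i, size=parent_bits.shape)
    p_mut = 1.0 / (1.0 + np.exp(-d))
    flip = rng.random(parent_bits.shape) < p_mut
    return np.where(flip, 1 - parent_bits, parent_bits)

rng = np.random.default_rng(0)
sigmas = adaptive_sigmas(np.array([3.0, 1.0, 2.0]), sigma_cur=0.6)  # [0.6, 0.2, 0.4]
seed = biwo_seed(np.ones(440, dtype=np.int64), sigmas[1], rng)      # fittest weed's seed
print(sigmas, int(seed.sum()))
```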
Design Examples
In this work, an array of ten concentric circular rings is considered. The m-th ring has 8m isotropic elements spaced uniformly, where m represents the ring number counted from the innermost ring. The total number of elements is 440; such a fully populated array is shown in Figure 2. Two instances are considered, similar to those reported in [13,14,16]. The objective function combines the maximum side lobe level with penalty terms enforcing the beam width and thinning targets, where SLL_max is the value of the maximum side lobe level, HPBW_o and HPBW_d are the obtained and desired half power beam widths, respectively, and N_off,o and N_off,d are the obtained and desired numbers of switched-off elements. In this section, we use two thinned-array cases to evaluate the capability and versatility of the proposed algorithm. In all the examples, the array with all elements "on" is used as the initial solution. All simulations are conducted in a Windows 7 Professional OS environment using 12-core Intel Xeon(R) processors at 3.33 GHz with 72 GB RAM, and the codes are implemented in Matlab 7.10.
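A hedged sketch of such a penalty-form fitness is given below; the weights w1 and w2 and the exact penalty shape are illustrative assumptions, not the values used in the paper.

```python
# Hypothetical penalty-form fitness for the thinning problem: minimize the
# peak SLL while enforcing the HPBW target and a thinning floor.
def fitness(sll_max_db, hpbw_obt, hpbw_des, n_off_obt, n_off_des,
            w1=10.0, w2=1.0):
    pen_bw = w1 * max(0.0, hpbw_obt - hpbw_des)       # penalize a widened beam
    pen_off = w2 * max(0.0, n_off_des - n_off_obt)    # penalize too few "off" elements
    return sll_max_db + pen_bw + pen_off              # lower is better

print(fitness(-30.18, 4.6, 4.7, 231, 220))            # -> -30.18 (no penalties)
```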
Both cases are optimized by IBIWO and BIWO. The stopping criterion for IBIWO and BIWO is the maximum number of iterations. Because of their random nature, all the experiments were run independently 25 times with 150 iterations each. The parameters taken for IBIWO and BIWO are as follows (Table 1).
Case One. In this case, the interelement arc spacing in all the rings is fixed at 0.5λ. The objective is to find the optimal set of on and off elements that will generate a pencil beam in the vertical plane, keeping the HPBW unchanged, fixing the number of switched-off elements at 220 or more, and reducing the maximum SLL further.
Figure 3 shows the best pattern obtained by the IBIWO, and the results are compared with those of the fully populated (all elements turned on) uniform array. From Figure 3, we notice that the SLL_max of the fully populated array is −17.37 dB, and the SLL_max is lowered to −30.18 dB after thinning by IBIWO. The HPBW of the best pattern achieved by IBIWO is 4.67°, which is nearly equal to that of the fully populated array.
To further verify the performance of the IBIWO, its results are compared with those of the DEGL, MPSO, BBO, and BIWO thinned arrays, given in Table 2. The results used for comparison are taken from references [13,14,16]; the maximum number of iterations of these algorithms is equal to 150. The maximum SLL achieved by IBIWO is −30.18 dB and the BW is 4.6° while 231 elements are switched off. The maximum SLLs obtained by MPSO, DEGL, BBO, and BIWO are −23.22 dB, −21.91 dB, −26.55 dB, and −28.59 dB, respectively. Hence, the maximum SLL achieved by IBIWO is lower by 6.96 dB, 8.27 dB, 3.63 dB, and 1.59 dB than the maximum SLL obtained by the MPSO, DEGL, BBO, and BIWO thinned arrays, respectively. Evidently, the IBIWO provides a better SLL. The number of switched-off elements achieved by IBIWO is more than that obtained by DEGL, BBO, and BIWO, except for MPSO. The thinned antenna array obtained by IBIWO is shown in Figure 4.
The radiation patterns of MPSO, BBO, BIWO, and IBIWO are also compared in Figure 5. The optimal amplitude excitations obtained by IBIWO are shown in Table 3. The convergence characteristics of IBIWO and BIWO are shown in Figure 6, which shows that the convergence speed of the proposed algorithm is faster than that of BIWO.
Case Two. In this case, the interelement arc spacing is made uniform and the same in all rings but is not fixed at 0.5λ. The objective is again to find the optimal set of on and off elements that will generate a pencil beam in the vertical plane, keeping the HPBW unchanged, fixing the number of switched-off elements at 220 or more, and reducing the maximum SLL further. The interelement arc spacing is allowed to vary over a range whose lower bound is 0.5λ. The parameters for BIWO and IBIWO are the same as in the previous example. The best result obtained by IBIWO is listed in Table 4.
Figure 7 shows the best pattern with optimized spacing obtained by the IBIWO, and the results are compared with those of the fully populated (all elements turned on) uniform array.
To further verify the performance of the IBIWO, its results are compared with those of the fully populated uniform array and of the DEGL, MPSO, BBO, and BIWO thinned arrays, given in Table 4. The results used for comparison are taken from the references, with the maximum number of iterations of these algorithms equal to 150. In this case, the maximum SLL achieved by IBIWO is −31.17 dB with a BW of about 4°, lower than the maximum SLL obtained by the fully populated array and by the DEGL, MPSO, BBO, and BIWO thinned arrays (the margin over BIWO being 2.42 dB). The number of switched-off elements achieved by IBIWO is more than that of the other four algorithms. The thinned antenna array obtained by IBIWO is shown in Figure 8.
The radiation patterns of MPSO, BBO, BIWO, and IBIWO are also compared in Figure 9. The optimal amplitude excitations obtained by IBIWO are shown in Table 5.
The convergence characteristics of IBIWO and BIWO are shown in Figure 10, which shows that the convergence speed of the proposed algorithm is faster than that of BIWO.
From the above results for the synthesis of thinned concentric multiple-ring antenna arrays, it can be clearly observed that the proposed algorithm with adaptive dispersion strikes a good balance between local search and global exploration. In both cases, IBIWO achieves better solutions than the algorithms above.
Conclusions
In this paper, an improved binary IWO is proposed for thinning large concentric ring arrays of isotropic elements to generate a pencil beam in the vertical plane with maximum SLL reduction. The obtained pattern has a half power beam width very close to the value of a fully populated array of the same size and shape and yet has a lower SLL.
Two cases are discussed in this paper: one generates a thinned array with fixed interelement arc spacing, and the second with optimized interelement arc spacing. Both cases aim to obtain a thinned array with minimum SLL and a thinning percentage of 50% or more. Comparisons of the IBIWO with other techniques (DEGL, MPSO, BBO, and BIWO) show the efficiency of the proposed technique.
Figure 1: Multiple concentric circular rings array of isotropic antennas.
Figure 2: Ten-ring concentric circular ring array of isotropic antennas.
Figure 3: The pattern of the best thinned array obtained by the IBIWO algorithm and the fully populated array (fixed spacing d = 0.5λ).
Figure 5: Radiation pattern for the ten-ring antenna array obtained using IBIWO, compared with the results achieved by MPSO, BBO, and BIWO (fixed spacing d = 0.5λ).
Figure 7: The pattern of the best thinned array obtained by the IBIWO algorithm and the fully populated array (optimized spacing).
Figure 9: Radiation pattern for the ten-ring antenna array obtained using IBIWO, compared with the results achieved by MPSO, BBO, and BIWO (optimized spacing).
Table 2: Results obtained by different algorithms with fixed spacing d = 0.5λ.
Table 4: Results obtained by different algorithms with optimized spacing.
Table 5: Excitation amplitude distribution using IBIWO with optimized spacing. | 2018-12-08T02:01:43.007Z | 2015-03-05T00:00:00.000 | {
"year": 2015,
"sha1": "5ebf059c3a51e19e5181b164018483f8a7fa1498",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2015/365280.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5ebf059c3a51e19e5181b164018483f8a7fa1498",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
229493358 | pes2o/s2orc | v3-fos-license | Validation of a simplified model for liquid propellant rocket engine combustion chamber design
The combustion phenomena inside the thrust chamber of the liquid propellant rocket engine are very complicated because of the different paths of the elementary processes. In this paper, the characteristic length (L*) approach for combustion chamber design is discussed and compared to the effective length (Leff) approach. First, both methods are introduced and then applied to real LPREs. The effective length methodology is introduced starting from the basic model up to the development of the empirical equations that may be used in the design process. The classical L* procedure was found to overestimate the required cylindrical length, in addition to its inherent shortcoming of not giving insight into where to move to enhance the design. The effective length procedure was found to be accurate within ±10%.
Introduction
The combustion phenomena inside the thrust chamber of the liquid propellant rocket engine (LPRE) are very complicated because of the different paths of the elementary processes (Figure 1) [1]. Previous research from the same group studied the vaporization-controlled model for liquid propellant thrust chamber design. It was concluded that for some propellant combinations, i.e., LOX/Kerosene, vaporization is actually the rate-limiting process and can be used to find the minimum length required for the combustion chamber; it was also found that for other propellant combinations the chemical reactions are so slow that vaporization is no longer the rate-limiting process [2]. In this paper, the L* approach is discussed and applied to real case studies; next, the effective length approach is introduced. This effective length design methodology is developed using the previously developed vaporization-limited model. Similarly, the effective length approach is applied to the same known liquid propellant motors in order to validate the applicability of the effective length and L* approaches.
The characteristic length approach is a classical approach to designing an LPRE combustion chamber. This approach depends on tabulated values for L* (Table 1) [3], which is defined as the ratio between the volume of the combustion chamber and the nozzle critical area, equation (1) [4]:

L* = V_c / A_cr,    (1)

The chamber volume is the volume enclosed between the injection head and the nozzle critical section, as indicated in Figure 2. The characteristic length is not a physical length but a parameter that depends on the perfection of the combustion processes.
L* = V_c / A_cr, where V_c = volume of the chamber (m³) and A_cr = critical cross-section area (m²). Experimental data for the characteristic length are obtained through several tests using motors with different cylindrical lengths. The measurements involve the characteristic velocity efficiency, defined as the ratio between the experimental and theoretical characteristic velocities (equation 2) [3]. The characteristic velocity is the ratio of the product (chamber pressure × nozzle throat area) to the mass flow rate of gases leaving the nozzle. These parameters can be measured to determine the experimental characteristic velocity, while the theoretical one can be obtained from any thermochemical calculation software for each motor. Similar curves can be obtained with this method by changing the propellant type, injection system, combustion chamber geometry (contraction area ratio), or the mixture ratio. From these curves, the optimum cylindrical length can be determined for the combustion chamber, as shown in table 1 [3].
Figure 1. Processes inside the thrust chamber of an LPRE, modified from reference [1]. The procedure starts by selecting a value of L* and then finding the volume of the chamber. This volume is used to evaluate the chamber length depending on the volume shape (spherical, cylindrical, or ellipsoidal). Table 2 shows the data for 6 actual LPREs, shown in figures 4-9; the table summarizes the main data needed for the analysis in this section, namely the contraction area ratio a_c (ratio of the combustion chamber area to the critical area), convergent angle θ, critical area A_cr, and cylindrical length L_c. These values were obtained from available documentation. The results of applying the L* procedure are shown in table 3, choosing values of L* = 1.5 m and 2.5 m. These are the lower and upper values for the combination LOX/kerosene given in table 1. It is clear from the last three columns that the actual engines do not comply with the recommended ranges of L*. For the engine RD-0110, the actual length is far below the length calculated using the lowest value of L*. This is also true for the engine LR79-NA-11. This discrepancy motivates the need for a different procedure for designing a liquid propellant combustion chamber, one that accounts for the underlying processes in order to find more accurate values and to account for both operating conditions and design parameters.
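To make the classical procedure concrete, the short Python sketch below applies equation 1 and then a cylindrical-volume approximation; the throat diameter and contraction ratio are invented for illustration (they are not data from table 2), and neglecting the convergent-section volume is our simplification, not part of the source procedure.

```python
import math

def chamber_length_from_Lstar(L_star, A_cr, a_c):
    """Classical L* sizing: V_c = L* * A_cr (equation 1), then treat the
    chamber as a pure cylinder of cross-section a_c * A_cr to estimate its
    length. The convergent-section volume is neglected (our simplification)."""
    V_c = L_star * A_cr           # chamber volume enclosed down to the throat
    return V_c / (a_c * A_cr)     # cylindrical length estimate

# Hypothetical engine: 0.20 m throat diameter, contraction ratio 2.5
A_cr = math.pi * 0.20 ** 2 / 4
for L_star in (1.5, 2.5):         # quoted lower/upper L* bounds for LOX/kerosene
    L_c = chamber_length_from_Lstar(L_star, A_cr, a_c=2.5)
    print(f"L* = {L_star} m -> L_c ~ {L_c:.2f} m")
```

The spread between the two L* bounds already illustrates why the tabulated approach gives only a coarse envelope rather than a design point.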
Model of relevant chamber parameters
The model used to develop the equations needed for the new methodology is discussed in detail in [2]. The main ideas are reviewed in the following. The model assumes a one-dimensional steady flow in which propellant vaporization is the rate-controlling process, considering reaction and mixing to be infinitely fast processes. In addition, there is no reaction in the liquid phase, and secondary breakup is neglected. Applying these assumptions results in the governing equations (gas velocity, droplet motion, and vaporization); for the nomenclature and more details, interested readers can rely on reference [2]. The equations can be used to find the length required to vaporize the less-volatile component, considering that the reaction takes place in the gas phase between the more volatile component and the amount vaporized from the less-volatile one. The objective of this research is to develop an empirical equation relating the required length to the operating parameters (combustion pressure P_c and injection temperature T_in), the geometric parameter (contraction ratio a_c), the spray parameter (mass median diameter MMD), and the standard deviation σ_LN. The target is to illustrate the effect on the length required to vaporize 95% of the propellant when using conditions other than those applied in a test case. The test matrix used for n-heptane as a typical hydrocarbon propellant considers the variation of injection temperature T_in, injection velocity V_in, MMD, σ_LN, P_c, and a_c, as shown in table 4. The length required to vaporize a given fraction of the total mass of the spray is proportional to the 1.46 power of the mass median droplet size, a result which is intuitively accepted, as the larger the size of a droplet, the longer the length it needs to vaporize.
Geometric standard deviation effect
For a higher standard deviation, the percentage of the spray vaporized and the vaporization rate near the injector increase because of the large number of small droplets, while the required length to vaporize the propellant is longer because of the larger droplets. The length required to vaporize a given fraction of the total mass of the spray is proportional to the 0.77 power of the geometric standard deviation.
Injection droplet velocity effect
The maximum vaporization rate is obtained with a low initial velocity. The higher the initial velocity, the more mass is vaporized before the minimum point is reached. However, the larger the velocity, the longer the spray travels before complete vaporization. This yields the result that the length required to vaporize a given percentage of the total mass is proportional to the 0.86 power of the initial velocity.

Chamber pressure effect

The higher the chamber pressure, the higher the vaporization rate and the amount vaporized in a given length. The length required to attain a given high percentage of mass vaporized is inversely proportional to the 0.54 power of the chamber pressure. This would not be obtained if the d²-law of Spalding [11] were used. The pressure dependence is introduced via the droplet equation and the vaporization equation. Experimentally, it was found that the burning rate dependency on pressure is related to the fuel (0.25 for furfuryl alcohol, 0.4 for benzene, 0.37 for hydrazine in air, and 1 for hydrazine decomposition) [12].

Contraction area ratio effect

The contraction area ratio is the ratio of the cylindrical chamber cross-section area to the nozzle throat area. A small contraction ratio yields high gas velocities at the end of the combustion chamber, which encourages vaporization. The higher the contraction ratio, the more mass is vaporized before the minimum point is reached. A reduction in contraction ratio decreased the length of the combustor needed for complete vaporization. This effect, however, was not constant throughout the vaporization period. During the initial vaporization phase, an inverse effect of contraction ratio was noticed. The inversion occurs in the region where the combustion gas velocity begins to exceed the drop velocity. This analytical result is similar to the experimental effect, where efficiency improves with an increase in contraction ratio in the low-efficiency region and the inverse effect becomes evident as efficiency increases [13]. The calculation shows that the length required to vaporize 95% of the mass is proportional to the 0.48 power of the contraction ratio.
Droplet temperature effect
A higher initial liquid temperature is moderately beneficial, as the droplet spends less time reaching the wet-bulb temperature, which is simply the droplet temperature at which all heat flux goes into vaporizing the droplet with no further heating or droplet temperature increase. The calculation shows that the length required to vaporize a given percentage of the mass is inversely proportional to the 0.51 power of the initial droplet temperature. In summary, it is concluded that the chamber length required for a given percentage of propellant vaporized increases with larger drop sizes and higher injection velocity, and decreases with higher final gas velocity, higher chamber pressure, and higher initial temperature.
Empirical correlation
The above discussion can be used to derive a correlation relating the chamber length required to vaporize 95% of the propellant. The resultant correlation for heptane can be expressed, up to an empirical constant, as the proportionality L95 ∝ MMD^1.46 · σ_LN^0.77 · V_in^0.86 · P_c^-0.54 · a_c^0.48 · T_in^-0.51. The correlation has an error band of ±10% in comparison with the data from the solution of the differential equations as in [2]. This correlation may be compared with the correlation given in [14], which expressed the empirical equation for a geometric standard deviation of 2.3 only. From the above equations, it is evident that the data can be expressed by an "effective length" rather than the actual length, a length that combines the actual length with the design and operating parameter dependences identified above (equation 9). The advantage of the effective length over the actual length is that it combines the actual length with the design and operating parameters; it can be considered a generalized or universal length. The effective length can be related to the percentage of propellant vaporized, as in the following section.
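Because the multiplicative constant of the correlation is not reproduced in this excerpt, the Python sketch below applies only the stated power-law exponents, expressing the 95%-vaporization length of a candidate design relative to a hypothetical reference design; all numerical reference values are invented for illustration.

```python
# Scaling form of the 95%-vaporization length correlation for heptane,
# using only the exponents stated in the text. The multiplicative constant
# is unknown here, so lengths are expressed as ratios to a reference design.
EXPONENTS = {
    "MMD":   1.46,   # mass median droplet diameter
    "sigma": 0.77,   # geometric standard deviation of the spray (sigma_LN)
    "V_in":  0.86,   # injection velocity
    "P_c":  -0.54,   # chamber pressure
    "a_c":   0.48,   # contraction area ratio
    "T_in": -0.51,   # initial droplet (injection) temperature
}

def length_ratio(case: dict, ref: dict) -> float:
    """L_case / L_ref for 95% vaporization under the stated power laws."""
    r = 1.0
    for key, exp in EXPONENTS.items():
        r *= (case[key] / ref[key]) ** exp
    return r

# Hypothetical reference design and a variant with a coarser spray and
# higher chamber pressure:
ref  = dict(MMD=100e-6, sigma=2.3, V_in=30.0, P_c=5.0e6, a_c=2.5, T_in=350.0)
case = dict(ref, MMD=150e-6, P_c=7.0e6)
print(f"required length changes by a factor of {length_ratio(case, ref):.2f}")
```

Working in ratios sidesteps the missing constant while preserving exactly the parameter dependences the text reports.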
The relation between mass vaporized and effective length
The variation of the percentage of mass vaporized vs. the effective length for different parameters is shown in figure 10. From that figure, it is clear that the data stack in a central stream except for a very large change in standard deviation (the outlier dashed line in the figure). Nevertheless, this outlier arises only in very low-efficiency motors, those with a low percentage of mass vaporized. Taking advantage of this stacking, a generalized equation for the variation of the effective length with the percentage of mass vaporized can be fitted (equation 10).
The Methodology
The methodology is described in the flow chart shown in figure 11 and can be summarized in the following steps:
• Start by targeting a characteristic velocity efficiency (defined in equation 2).
• Find the effective length using equation 10.
• The effective length equation represents a relation between the chamber length, operating parameters, and injection system design.
• From injection system cold flow tests or from empirical correlations found in the literature, find the MMD and injection velocity for the injector elements.
• From the regenerative cooling or other cooling system constraints, find the injection temperature.
• Choose the operating pressure according to structural constraints or feeding cycle constraints.
• Use equation 9 to find the desired cylindrical length; this length satisfies the target characteristic velocity efficiency and takes into account the operating and design constraints.
Figure 11. Flow chart for the proposed methodology.
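Read as a pipeline, the flow chart might be skeletonized as below. Since equations 9 and 10 are not reproduced in this excerpt, the two inner functions are hypothetical placeholders to be filled in from the source paper; the step comments simply mirror the list above.

```python
def effective_length(eta_cstar):
    """Placeholder for equation 10: target characteristic-velocity
    efficiency -> required effective length (form not reproduced here)."""
    raise NotImplementedError("substitute equation 10 from the source paper")

def cylindrical_length(L_eff, MMD, V_in, T_in, P_c, a_c):
    """Placeholder for equation 9: effective length plus operating and
    design parameters -> actual cylindrical chamber length."""
    raise NotImplementedError("substitute equation 9 from the source paper")

def design_chamber(eta_cstar, MMD, V_in, T_in, P_c, a_c):
    # Steps 1-2: target efficiency -> required effective length (eq. 10).
    L_eff = effective_length(eta_cstar)
    # Steps 3-5: MMD and V_in come from injector cold-flow tests or
    # correlations; T_in from cooling-system constraints; P_c from
    # structural or feed-cycle constraints; a_c from the chamber layout.
    # Step 6: eq. 9 returns the cylindrical length meeting the target.
    return cylindrical_length(L_eff, MMD, V_in, T_in, P_c, a_c)
```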
Application
The effective length methodology is applied to the same test case mentioned in table 1. The data needed to find the MMD for the injection head are shown in table 11. The temperature of the injected liquid depends on the layout of the cooling achieved by the fuel or oxidizer. In some cases the temperature is known; otherwise, it is assumed to be around 350-400 K. The MMD is calculated for a swirl injector [2,15] and for an impingement injector [2].
The details of the injector and injection head for the cases studied are shown in figures 13-18, and the calculations of the needed data are summarized in table 12. The classical L* methodology is applied, in addition to the effective length methodology applied with different target characteristic velocity efficiencies. Table 13 and figures 19-20 show the results. The comparison between the classical L* approach and the real data is shown in figure 19. As mentioned before, none of the engines complies with the recommended range of L*. However, applying the effective length approach with different target characteristic velocity efficiencies (92.5%, 95%, and 97.5%) shows far better agreement with the real values (figure 20). It is clear from that figure that the effective length methodology agrees better with the real data, with the powerful advantage that it can offer insight into how to change the design parameters to fit the design within a certain geometrical envelope, which is an inherent shortcoming of the classical L* methodology. However, the proposed methodology carries the inherent assumption of vaporization-limited engines, which is accurate for a well-mixed propellant with a large difference in volatility. This assumption is not valid for engines in mixing-limited situations [16], such as in the gas generator, for low-cost engines with high-flow-rate injectors [17], or in cases where the propellants have similar volatility, such as the combination MMH/NTO; this is left for future work. Figure 20. Comparison between the Leff procedure and the actual data.
Conclusion
The complete modelling of processes inside the liquid propellant thrust chamber is a formidable task. However, the insightful engineer can rely on the concept of the "rate-limiting process" in order to find | 2020-11-19T09:13:31.161Z | 2020-11-18T00:00:00.000 | {
"year": 2020,
"sha1": "82c608b5180f829fea3012561c463aa68638b54e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/973/1/012003",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "60fba3393bbc27f00a2ef1e69be2ae9c0a8006d0",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
3565486 | pes2o/s2orc | v3-fos-license | Prevalence of infant sneezing without colds and prediction of childhood allergy diseases in a prospective cohort study
Background Allergy sensitization may begin during the perinatal period, but predicting allergic diseases in infancy remains difficult. This study attempted to identify early predictors of childhood allergy diseases in a prospective cohort study. Materials and Methods In a prospective birth cohort study in southern Taiwan, located in a subtropical region, questionnaire surveys of sneezing or cough without colds at 6 and 18 months of age were recorded, and the correlation with allergy diseases was assessed at 3 and 6 years of age. Results A total of 1812 pregnant women and 1848 newborn infants were prenatally enrolled, and 1543, 1344, 1236, and 756 children completed the follow-up at ages 6 months, 18 months, 3 years and 6 years, respectively. The prevalence of infant sneezing without colds at 6 and 18 months of age was 30.3% and 19.2%, respectively. The prevalence of infant cough without colds at 6 and 18 months of age was 10.6% and 5.7%, respectively. Infant sneezing without colds at 18 months of age was significantly correlated with atopic dermatitis, allergic rhinitis and asthma at 6 years of age. Infant cough without colds at 18 months of age significantly predicted asthma but not atopic dermatitis or allergic rhinitis at 6 years of age. Conclusions Infant sneezing without colds predicted all allergy diseases at 6 years of age in a subtropical country. This highlights a potential non-invasive clue in a subtropical region for the early prediction, treatment and prevention of childhood allergy diseases in infancy.
INTRODUCTION
The prevalence of childhood allergy diseases has increased worldwide in recent decades [1]. Many studies have shown that allergy diseases are complex diseases related to gene-environment interactions, which makes the early prediction and prevention of allergy diseases difficult. Traditionally, parental atopy history has been the most important predictor of the development of allergy diseases in offspring [2,3]. However, recent evidence suggests that early life infections [4] and a green environment (forest and agricultural land) within 5 kilometers [5] have different impacts on the development of childhood asthma. Children born by cesarean section have an increased likelihood of asthma at 36 months of age, and the association is stronger among children of nonatopic mothers [6]. Elevated total immunoglobulin E (IgE), frequent respiratory infections, and parenting difficulties in the first year of life were associated with asthma at 3 and 6 years of age [7].
To observe and identify risk factors for early prediction and prevention of childhood allergic diseases, we conducted a birth cohort study in Kaohsiung, Taiwan, as reported previously [8][9][10][11]. We found that elevated maternal but not paternal total IgE levels correlated with elevated infant IgE levels and infant atopy [8]. Atopic disease in a mother increases the risk of atopic eczema in her child but is a poor predictor of atopic eczema [8].
Breastfeeding, Cesarean section, and the use of curtains and/or air filters affect the development of atopic dermatitis (AD), allergic rhinitis (AR) and asthma (AS) in offspring of nonatopic parents [12]. These results suggest that perinatal factors in addition to parental inheritance have an impact on the development of childhood allergic diseases and that it is important to identify an early infant predictor to facilitate the prediction and prevention of childhood allergy diseases.
Recently, the hygiene hypothesis has suggested that infants with less exposure to microorganisms in the environment are more susceptible to the development of allergic diseases [13,14]. By contrast, Nja et al. [15] indicated that a history of lower respiratory infections in infancy was a risk factor for asthma in school-age children. Moreover, children with asthma or other atopic diseases are more susceptible to infections [16]. Whether increased infections in infancy cause asthma or infant occult asthma or allergic sensitization causes increased infections remains controversial. Our previous analysis demonstrated that the gene-environment interaction on allergic sensitization begins in the perinatal stage [9,10]. Infant wheezing combined with other risk features was recently used to predict persistent asthma before 3 years of age [17][18][19]. However, wheezing is related to transient or non-atopic wheezing in two thirds of infants with wheezing episodes [20,21]. Our previous study also showed that frequent infant wheezing was not correlated with allergy sensitization but was correlated with Clara cell protein 10 (CC10) expression [22]. Based on these findings, we attempted to investigate whether early infant sneezing or infant cough without colds was correlated with perinatal conditions of parents or infants and thus could be used to predict childhood allergy diseases. Accordingly, this study acquired information on early (6 months) and late (18 months) infant sneezing or cough without colds to predict the development of allergic rhinitis (AR), atopic dermatitis (AD) and asthma (AS) at 3 and 6 years of age.
Demographic data of parents and their offspring in the birth cohort
In this study, 1848 newborns were prenatally recruited, of which 1543, 1344, 1236, and 756 children completed the follow-up at the age of 6 months, 18 months, 3 years and 6 years, respectively ( Figure 1). In total, 1000, 849, 852, and 737 children underwent blood tests to measure total IgE and specific IgE levels with parental permission at the follow-up visits at 6 months, 18 months, 3 years and 6 years of age, respectively. The cohort population was 53.3% male, 6.5% preterm (< 37 weeks gestational age), 28.2% Cesarean section, 54.5% maternal allergy history, 44.4% paternal allergy history, 26.5% maternal allergy disease, and 36.7% paternal allergy disease. The mothers in this cohort population reported greater allergy history but less definite allergy disease as defined by allergic disease history with elevated IgE levels ≥ 100 kU/l. There were no significant differences in the demographic data of the participants who did or did not complete the 6-year follow-up, as described elsewhere [11].
Prevalence of infant sneezing or cough without colds
We collected data on early and late infant sneezing or cough without colds at 6 and 18 months of age, respectively. The prevalence of early infant sneezing without colds was 30.3%, including occasional episodes in 25.8% and frequent episodes in 4.5%. The prevalence of early infant cough without colds was 10.6%, including occasional episodes in 9.7% and frequent episodes in 0.9% (Table 1A). The prevalence of late infant sneezing without colds was 19.2%, including occasional episodes in 16.4% and frequent episodes in 2.8%. The prevalence of late infant cough without colds was 5.7%, including occasional episodes in 4.8% and frequent episodes in 0.9% (Table 1B).
Late but not early infant sneezing is associated with allergy diseases at 3 or 6 years of age
Analyses were next performed to assess whether early sneezing at 6 months or late sneezing at 18 months was a predictor of allergic diseases at 3 and 6 years of age by Chi-square for trend analysis in the comparison among the 3 subgroups (no symptom, occasional or frequent). Early infant sneezing at 6 months of age was not significantly associated with atopic dermatitis, allergic rhinitis and asthma at 3 and 6 years of age (Table 2A). Late infant sneezing at 18 months of age was significantly associated with atopic dermatitis (p = 0.002) and allergic rhinitis (p < 0.001) but not asthma (p = 0.843) at 3 years of age (Table 2B). Remarkably, late infant sneezing without colds was significantly associated with atopic dermatitis (p = 0.018), allergic rhinitis (p = 0.012) and asthma (p = 0.001) at 6 years of age (Table 2B).
We also assessed whether early or late infant cough without colds at 6 months or 18 months of age predicted allergic diseases at 3 and 6 years of age. Early infant cough without colds was significantly associated with atopic dermatitis (p = 0.008) but not allergic rhinitis or asthma at 3 years of age; late infant cough without colds was significantly associated with asthma but not AR or AD at 6 years of age (p = 0.022) (Table 2B).
Late infant sneezing without colds is correlated with parental and childhood aeroallergen sensitization
To investigate the perinatal factors and childhood allergen sensitization associated with late infant sneezing or cough without colds at 18 months, we performed a Chi-square analysis for trend to identify predictors. Late infant sneezing without colds was significantly associated with maternal allergy disease (p = 0.020), paternal allergy disease (p = 0.028) and childhood aeroallergen sensitization (p = 0.023) but not frequent URIs (≥ 3 times) (p = 0.051) (Table 3A). By contrast, late infant cough without colds at 18 months of age was significantly associated with frequent URIs (≥ 3 times) (p = 0.016) and childhood aeroallergen sensitization (p = 0.011) but not maternal or paternal allergy disease (Table 3B). Tobacco smoke exposure (TSE) was not significantly associated with infant sneezing without colds or infant cough without colds. High rates (62.4-90.9%) of frequent URIs (≥ 3 times) were reported across the different groups of infant sneezing or cough without colds, suggesting that early infant allergic symptoms with cough or sneezing could potentially be recalled as common colds by parents.
Prediction of childhood allergic diseases by late infant sneezing or cough without colds
We next used a 2 × 3 table to assess the relative risk (RR) of the allergy diseases of AD, AR and AS at 3 and 6 years of age based on late infant sneezing or cough without colds at 18 months. Sneezing described as "occasional" was associated with a significantly higher predictive risk of AD (p = 0.007, RR = 1.872) and AR (p = 0.005, RR = 1.893) but not AS at 3 years of age. Similarly, sneezing described as "frequent" was associated with a significantly higher predictive risk of AD (p = 0.032, RR = 2.610) and AR (p = 0.003, RR = 3.410) but not AS at 3 years of age (Table 4, upper panel). By contrast, only sneezing described as "frequent" was associated with a significantly higher predictive risk of AD (p = 0.007, RR = 3.367) and AS (p = 0.002, RR = 3.815) but not AR (p = 0.090, RR = 2.263, 95% CI 0.880-5.821) at 6 years of age (Table 4, lower panel). Late infant cough described as "occasional" was associated with a significant predictive risk of AD (p = 0.002, RR = 2.904) but not AR or AS at 3 years of age (Table 5, upper panel). By contrast, late infant cough described as "occasional" was associated with a significant predictive risk of AS (p = 0.026, RR = 2.306) but not AD or AR at 6 years of age (Table 5, lower panel). The results of these analyses suggest that infant sneezing without colds described as "frequent" is more predictive of different allergy diseases at 3 and 6 years of age, whereas infant cough without colds described as "occasional" is predictive of AD at 3 years of age and AS at 6 years of age.
DISCUSSION
This study revealed that infant sneezing without colds with morning or night symptoms at 18 months of age significantly predicted all allergy diseases at 6 years of age. Infant cough without colds with morning or night symptoms at 18 months of age only predicted childhood asthma but not atopic dermatitis or allergic rhinitis at 6 years of age. These results highlight the potential for early prediction of allergy diseases during infancy based on sneezing or cough without colds.
It remains difficult to predict or diagnose allergic rhinitis and asthma in infants or toddlers. Historically, infant and childhood allergy diseases have been predicted based on parental atopic disease [2,3]. However, allergy diseases are complex diseases that cannot be accurately predicted by inheritance. Many studies have recognized that infant wheezing in combination with parental atopy (asthma or eczema) and infant risk features (eosinophilia, eczema, allergen sensitization) can predict later asthma before 3 years of age [17][18][19][20]. Recently, Pescatore et al. [23] developed a simple asthma prediction tool with 10 non-invasive symptoms and signs for preschool children who wheeze or cough and observed that sex, age, wheezing without colds, wheezing frequency, activity disturbance, shortness of breath, exercise-related and aeroallergen-related wheezing/coughing, eczema, and parental history of asthma/bronchitis predicted asthma in school-age children in a birth cohort in the United Kingdom. This tool was shown to be useful for the prediction of childhood asthma in high-risk toddlers in a subsequent German birth cohort study [24]. This novel 10-item asthma prediction tool is non-invasive and simple but too long to remember and record in clinical practice. Moreover, the tool emphasizes 4 types of wheezing features (wheezing frequency, exercise-related wheezing, wheezing without colds and aeroallergen-related wheezing) among the 10 non-invasive symptoms [23], which may be redundant.
In fact, infant wheezing can be classified into 3 populations: transient, nonatopic and allergy-related wheezing. Most wheezing during the first 3 years of life is related to transient wheezing and non-atopic wheezing, and allergy-related infant wheezing occurs in less than one-third of the population exhibiting wheezing [17][18][19][22]. These observations were replicated in the Copenhagen (COPSAC2000) birth cohort, which revealed that a global assessment of significant lung symptoms in the first 3 years of life is a better predictor of asthma than an assessment of wheezing [21]. In the present study in a subtropical country where house dust mite-mediated allergic sensitization is dominantly prevalent, we have demonstrated that infant sneezing without colds with morning or night symptoms significantly predicted all allergy diseases at 6 years of age, whereas infant cough without colds only significantly predicted asthma and not atopic dermatitis or allergic rhinitis. Coughing and sneezing are common cold symptoms and thus have not been used as early predictors of allergic diseases in questionnaires in many birth cohort studies. Although we included cough or sneezing without colds in the morning or at night in the proposed questionnaire, we observed that infant sneezing without colds at 18 months of age was a better predictor of allergy diseases at 6 years of age than infant cough without colds. This result may have been obtained because morning or night cough without colds is more likely to be confounded with common colds, whereas morning or night sneezing without colds is more likely to be related to allergic symptoms. This possibility is further supported by our data indicating that infant sneezing without colds was significantly associated with parental allergy disease and the child's aeroallergen sensitization, whereas infant cough without colds was significantly associated with URIs but not parental allergy disease (Table 3).
The prevalence of infant sneezing without colds at 6 months of age was 30.3% and decreased to 19.2% at 18 months of age, whereas infant sneezing without colds at 18 months of age predicted allergic diseases better than that at 6 months of age, suggesting that infant sneezing without colds at younger than 6 months of age is more related to nonallergic trigger(s) while that at 18 months of age is more related to allergic trigger(s). Few studies have demonstrated that infant sneezing is associated with or predicts childhood allergy diseases. In this study, we attempted to differentiate whether the frequency of sneezing or cough without colds predicted allergy diseases at 6 years of age. We observed that infant sneezing reported as "frequent" at 18 months of age significantly predicted atopic dermatitis and allergic rhinitis at 3 years of age, whereas sneezing reported as "frequent" was the best predictor of asthma at 6 years of age. By contrast, the 13 subjects in the "frequent" infant cough without colds group did not provide enough power for statistical analysis; however, the 34 to 127 subjects in the "occasional" cough without colds groups were sufficient for analyses of the prediction of childhood asthma at 6 years of age. Moreover, the populations with AD, AR and AS might overlap each other; however, we did not analyze the multiplex interactions (sneezing and/or cough as frequent, occasional, or no symptom vs. AD, AR and AS) because the subgroup sizes were not adequate for such analyses.
The prevalence of allergic rhinitis (AR) at 6 years of age, when defined by symptoms of easy sneezing and/or itching eyes for longer than 2 weeks in the past six months together with a physician diagnosis of AR, was 55.6%, 67.5% and 76.0%, respectively, in infants with no sneezing, occasional sneezing and frequent sneezing at 18 months of age. This prevalence is higher than the values (30.8-50.7%) defined by physician-diagnosed rhinitis with detection of specific aeroallergen IgE in our previous study [12]. We did not use the latter definition in this study because we measured specific IgE to only 2 common aeroallergens (house dust mite and cockroach).
The strength of this study is the longitudinal birth cohort design with a large sample size. We were able to include parental and infant factors for the prediction of allergic diseases in children at 3 and 6 years of age. It was also effective to define sneezing or cough without colds as morning and/or night symptoms in the questionnaires at 6 months and 18 months of age. The responses of parents with respect to sneezing without colds may be inconsistent depending on the seasons in which they returned to the outpatient clinics for follow-up. Fortunately, this birth cohort was conducted in Taiwan, a subtropical island, where perennial AR and AS are predominantly associated (> 90%) with house dust mite and/or German cockroach sensitization and pollen-related AR and AS are rarely observed [11,22,25,26]. Thus, the prediction of AR and AS by infant sneezing may be limited to regions in which house dust mite-mediated perennial AR and AS are prevalent. Other limitations of this study include the following: 1) the relatively high retention rates at birth, 18 months and 3 years of age (> 75%) but lower retention rate at 6 years of age may have affected the power of the prediction of allergic diseases in children at 6 years of age; 2) although infant cough without colds (no symptom, occasional or frequent) in the Chi-square for trend analysis revealed a significant trend in the prediction of AS, the sizes of the populations with AD, AR or AS in the group with symptoms ≥ 3 days per week were small (between 7 and 13), suggesting that the statistical power is insufficient for the inference; 3) using questionnaires completed by parents to collect allergic symptoms may introduce recall bias, although we reached a group consensus to collect the data by differentiating common colds, with a course of 5 to 7 days of continual daily symptoms, from infant sneezing or cough without colds occurring in the morning and/or at night for more than 2 weeks in the last 6 months. In future studies, we should refine a quantitative scale in an app (application software) for the collection of data on infant sneezing or cough without colds, such as the "Mymee software" at https://qz.com/507727/a-man-whorecorded-his-every-sneeze-for-five-years-might-have-afix-for-your-pollen-allergy/, built to support iterative health monitoring for each individual, to validate the early symptoms of allergy sensitization and allergy diseases.
In summary, we defined and differentiated common colds, as a course of 5 to 7 days of continual daily cough or sneezing symptoms, from infant sneezing or cough without colds, as morning/night sneezing or cough for more than 2 weeks in the past 6 months, in a subtropical country to present a better predictor of childhood allergic diseases in infancy. This study highlights the potential for early prediction of allergy diseases during infancy based on non-invasive symptoms. We demonstrated that infant sneezing without colds at 18 months of age could predict all allergic diseases at 6 years of age, whereas infant cough without colds could predict only AS because of the lower rate of infant cough without colds. Moreover, we found that allergic rhinitis appears to develop earlier than asthma, because the prevalence of infant sneezing at 18 months of age was much higher than that of cough without colds and the prevalence of AR at 3 years of age was also much higher than that of AS. Based on this birth cohort study in a subtropical region where perennial allergy diseases are prevalent, the march of allergies from AD, AS to AR might be modified to AD, AR to AS. Further studies are needed to determine whether a better design that keeps track of infant sneezing or cough without colds in an app on a weekly basis may predict childhood AR and AS in countries where seasonal and/or perennial allergy diseases are prevalent.
Study design and subjects
This study is part of a longitudinal birth cohort study conducted in Kaohsiung, Taiwan, as reported previously [8][9][10][11]. A total of 1812 pregnant women and 1848 newborn infants were prenatally recruited after informed consent was obtained. A research nurse was trained to explain the purpose of this study to eligible pregnant women when they visited obstetric clinics. Upon recruitment, information on parental allergic history such as asthma, allergic rhinitis, and/or atopic dermatitis was acquired from the questionnaire, and parental blood tests to determine IgE levels were performed in the second or third trimester upon consent. At delivery, the type of delivery, prematurity defined as a gestational age of < 37 weeks, and gender were recorded. Infants and children were followed at 6 months, 18 months, 3 years and 6 years of age. Blood tests for total IgE and specific IgE levels were also performed at the clinical follow-ups. The study protocol was reviewed and approved by the Institutional Review Board committee of Chang Gung Memorial Hospital.
Questionnaires for collecting information on early infant sneezing or cough without colds
Infants at 6 and 18 months of age were followed in pediatric clinics, where questionnaires were administered to their parents to collect information on frequency (0, 1-2 or > 2 times in the past 6 months) of URIs, frequency (no symptoms, occasional (< 3 days a week) or frequent (≥ 3 days a week)) of sneezing without colds with morning and/or night onset, and frequency (no symptoms, occasional (< 3 days a week) or frequent (≥ 3 days a week)) of cough without colds with morning and/or night onset.
Questionnaires for clinical allergy diseases at 3 and 6 years of age
Children at 3 and 6 years of age were assessed for allergy diseases in pediatric clinics, and parents were asked 1) whether their child had AD (chronic or relapsing eczema) lasting longer than 2 weeks in the past six months and had been diagnosed with AD by a physician; 2) whether their child had AR symptoms of easy sneezing and/or itching eyes for longer than 2 weeks in the past six months and had been diagnosed with AR by a physician; and 3) whether their child had more than three asthmatic wheezing episodes and had been diagnosed with AS by a physician. The content of the questionnaire reports was verified by an allergist at follow-up in the pediatric clinics.
Detection of total IgE and specific allergen IgE levels
Sera were collected from parents and their offspring, centrifuged at 3000 rpm (1500 g) for 15 minutes, and stored at −80°C until analysis. Serum total IgE levels and specific IgE antibodies in the blood of children at 3 and 6 years of age were determined by a full-range total IgE detection system to measure total IgE and specific IgE levels to egg white (f1), cow's milk (f2), peanut (f13), shrimp (f24), house dust mite (d1), and German cockroach (i6) (Pharmacia & Upjohn Diagnostics AB, Uppsala, Sweden). The specific IgE profiles to these 6 common allergens were assessed based on a pilot study of 100 random samples to identify common allergens, which were defined by greater than 5% allergen sensitization (specific IgE ≥ 0.35 KU/l) in the population studied [8,11]. Aeroallergen sensitization was defined by specific IgE to aeroallergen (house dust mite or German cockroach) ≥ 0.35 kU/L and food allergen sensitization was defined by specific IgE to egg white, cow's milk, peanut or shrimp ≥ 0.70 kU/L. Allergy disease of the parents was defined by the presence of allergy disease history together with a total IgE level ≥ 100 kU/L [8,11].
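The sensitization definitions in this subsection reduce to simple threshold checks. The Python sketch below encodes exactly the stated cutoffs (specific IgE ≥ 0.35 kU/L for aeroallergens, ≥ 0.70 kU/L for food allergens, and allergy history plus total IgE ≥ 100 kU/L for parental allergy disease); the record layout and function names are our own illustration, not part of the study's software.

```python
# Sensitization definitions from this subsection, coded as predicates.
AERO = ("house_dust_mite", "german_cockroach")          # d1, i6
FOOD = ("egg_white", "cow_milk", "peanut", "shrimp")    # f1, f2, f13, f24

def aeroallergen_sensitized(specific_ige: dict) -> bool:
    # Any aeroallergen-specific IgE >= 0.35 kU/L
    return any(specific_ige.get(a, 0.0) >= 0.35 for a in AERO)

def food_sensitized(specific_ige: dict) -> bool:
    # Any food-allergen-specific IgE >= 0.70 kU/L
    return any(specific_ige.get(f, 0.0) >= 0.70 for f in FOOD)

def parental_allergy_disease(history: bool, total_ige: float) -> bool:
    # Parental allergy disease = allergy history AND total IgE >= 100 kU/L
    return history and total_ige >= 100.0

child = {"house_dust_mite": 1.2, "egg_white": 0.4}      # hypothetical values
print(aeroallergen_sensitized(child), food_sensitized(child))  # True False
```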
Data analysis and statistics
Data were coded in an Excel file with a series of labels for demographic, clinical and laboratory data but not participants' names. Allergic symptoms and allergy diseases were presented as percentages and subjected to Chi-square for trend analysis for comparison among the 3 subgroups (no symptom, occasional or frequent). Relative risks with 95% CI values were assessed by a Poisson regression model and used to predict the development of childhood allergy diseases based on infant sneezing or cough without colds. The populations with AD, AR and AS might overlap each other, so ideally we would have analyzed the effects of infant sneezing or cough without colds on the comorbidities among AD, AR and/or AS; however, we did not analyze the multiplex interactions (sneezing and/or cough as frequent, occasional, or no symptom vs. AD, AR and AS) because the subgroup sizes were not adequate for such analyses. For all statistical computations, SPSS for Windows, version 17.0 (Chicago, IL, USA), was used. A p value of ≤ 0.05 was considered statistically significant.
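As a rough illustration of the two analyses named here, the sketch below implements a Cochran-Armitage-style chi-square test for trend across the three ordered symptom groups and a relative risk with a log-scale 95% CI. The study itself used SPSS and a Poisson regression model for the RRs, so this standard-library version is only a stand-in, and all counts are invented.

```python
import math

def trend_test(cases, totals, scores=(0, 1, 2)):
    """Cochran-Armitage-style test for trend across ordered groups
    (no symptom / occasional / frequent)."""
    N, R = sum(totals), sum(cases)
    p = R / N
    T = sum(s * (r - n * p) for s, r, n in zip(scores, cases, totals))
    var = p * (1 - p) * (sum(n * s * s for s, n in zip(scores, totals))
                         - sum(n * s for s, n in zip(scores, totals)) ** 2 / N)
    z = T / math.sqrt(var)
    return z, math.erfc(abs(z) / math.sqrt(2))      # two-sided p-value

def relative_risk(a, n1, c, n0):
    """RR of disease in an exposed group vs. the 'no symptom' reference,
    with an approximate 95% CI computed on the log scale."""
    rr = (a / n1) / (c / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    lo, hi = (rr * math.exp(k * 1.96 * se) for k in (-1, 1))
    return rr, (lo, hi)

# Hypothetical counts: allergic disease among no/occasional/frequent sneezers
z, pval = trend_test(cases=(30, 25, 8), totals=(300, 120, 20))
print(f"trend z = {z:.2f}, p = {pval:.4f}")
print(relative_risk(a=25, n1=120, c=30, n0=300))
```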
Author contributions
The study was conceived by KDY, HCK and TYH. KDY, CCW and MCL analyzed the data and wrote the manuscript. CYO, JCC, CAL, HCK, TYH, CLW and HC followed up the cohort population, collected data during the cohort follow-up, and commented on the manuscript and interpretation. | 2018-04-03T00:35:58.150Z | 2015-07-18T00:00:00.000 | {
"year": 2017,
"sha1": "a82abb6b12b72ab5d417bbec081179dbc963a07c",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=22338&path[]=70678",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a82abb6b12b72ab5d417bbec081179dbc963a07c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
262174447 | pes2o/s2orc | v3-fos-license | Risk Factors of Restenosis After Full Endoscopic Foraminotomy for Lumbar Foraminal Stenosis: Case-Control Study
Objective To investigate risk factors associated with postoperative restenosis after full endoscopic lumbar foraminotomy (FELF) in patients with lumbar foraminal stenosis (LFS). Methods A single-center, retrospective case-control study was conducted on patients diagnosed with foraminal stenosis who underwent FELF between August 2019 and April 2022. The study included 56 patients, comprising 18 cases and 38 controls. Clinical data, radiologic assessments, and surgical types were compared between the groups. The cutoff values of radiologic parameters that differentiate the 2 groups were investigated. Results No significant difference in age, sex distribution, or presence of adjacent segment disease or grade I spondylolisthesis was observed between the groups. Cases had a higher degree of disc wedging angle (DWA) (3.0°±1.1° vs. 0.5°±1.4°, p < 0.001), larger coronal Cobb angle (CCA) (8.8°±5.1° vs. 4.7°±2.5°, p = 0.004), and smaller segmental lumbar lordosis (SLL) than controls (11.0±7.4 vs. 18.0±5.4, p = 0.001). Optimal cutoff values for DWA, CCA, and SLL were estimated as 1.8°, 7.9°, and 17.1°, respectively. A significant difference in surgical types was observed between cases and controls (p = 0.004), with the case group having a higher proportion of patients undergoing discectomy in addition to TELF. Conclusion The study identified potential risk factors for restenosis after FELF in patients with LFS, including higher DWA, larger CCA, and smaller SLL. We believe that discectomy should be performed with caution during FELF, as it can lead to subsequent restenosis.
INTRODUCTION
Lumbar foraminal stenosis (LFS) is responsible for approximately 8% to 11% of degenerative lumbar spine diseases. 1 As the clinical characteristics of this condition have become more widely recognized, the importance of surgical intervention has grown. Surgical treatments for LFS can generally be categorized into neural decompression and fusion procedures. Fusion is often necessary due to the frequent association of intervertebral foraminal stenosis with various degenerative changes. However, elderly patients or those with chronic diseases face a higher risk of postoperative complications related to general anesthesia and blood loss, leading to an increased preference for minimally invasive treatment methods in recent years.
Consequently, endoscopic surgery, a form of minimally invasive spine surgery, has undergone significant advancements over the past 2 decades, providing a beneficial treatment option for elderly patients at high risk for complications related to general anesthesia. 2,3 Historically, the use of endoscopic spine surgery was limited to the lumbar spine, but through the efforts of various researchers, its applicability has expanded to include the cervical and thoracic spine as well as a broader range of pathologies. 4-6 Recent studies have even reported successful endoscopic surgical outcomes for tumor lesions, comparable to traditional surgery. 7 One of the most significant achievements of endoscopic surgery is its ability to largely replace fusion procedures in the treatment of intervertebral foraminal stenosis.
Nonetheless, it is uncertain whether the duration and durability of symptom relief and the risk of stenosis recurrence with endoscopic decompression alone of the intervertebral foramen can be compared to the fusion technique, which offers both neural decompression and structural stabilization. 8 This represents a key challenge that the endoscopic approach needs to address. Therefore, this study aims to examine the risk factors associated with postoperative restenosis, a primary reason for reoperation following full endoscopic lumbar foraminotomy (FELF) in patients with LFS. To the best of our knowledge, this is the first study to compare restenosis cases after FELF to a control group.
Study Populations
Prior to the start of this study, the Institutional Review Board of Chosun University Hospital approved the research design (CHOSUN 2023-04-029). This retrospective study was conducted at a single center on patients diagnosed with foraminal stenosis who underwent FELF for foraminal stenosis from August 2019 to April 2022. Participants were selected based on the following inclusion criteria: (1) 18 years of age or older with a diagnosis of symptomatic moderate or severe intervertebral foraminal stenosis, 9 (2) clear evidence of foraminal stenosis observed on lumbar magnetic resonance imaging (MRI) with corresponding lower extremity radicular pain, (3) symptoms not improving after at least 3 months of nonsurgical treatment.
Exclusion criteria were as follows: (1) inconsistency between MRI findings and symptoms, (2) patients in whom low back pain was the main symptom rather than lower extremity radicular pain, (3) presence of spondylolytic spondylolisthesis of grade 2 or higher, (4) presence of segmental instability, (5) coexistence of severe grade 3-4 central spinal canal stenosis, 10 (6) coexistence of other pathological conditions such as infection, trauma, or tumors, (7) cases of lumbar disc extrusion or sequestration.
Selection of Cases and Control Group
The definition of restenosis after FELF in this study, which served as the criterion for selecting the case group, was as follows: (1) improvement in lower extremity radicular pain for at least one month after FELF surgery, (2) recurrence of lower extremity radicular pain in the same location within 1 year after surgery, (3) radiologic confirmation of foraminal stenosis recurrence at the same site as the initial surgery. The control group criteria were set as having no clinical symptom recurrence and no radiological evidence of restenosis for at least one year after FELF surgery. To increase the statistical power of the independent variables, a control group twice the size of the case group was collected.
Clinical Data Collection
Basic clinical indicators of all selected patients, including age, sex, surgical level, and follow-up period, were collected. Clinical outcomes were assessed using the visual analogue scale (VAS) preoperatively, postoperatively, and at the 6-month follow-up. In terms of surgical technique, whether the basic TELF procedure was performed and whether additional discectomy was performed were recorded.
Radiologic Assessment
To evaluate the relationship between imaging findings and restenosis, a range of baseline radiologic parameters was obtained from preoperative static and dynamic plain radiographs of all study participants in both the case and control groups. The differences between the groups were then analyzed using statistical methods. Parameters that showed significant differences between groups were analyzed to calculate the receiver operating characteristic curve, area under the curve (AUC), optimal cutoff value, sensitivity, and specificity using statistical software. The respective measurement methods are as follows (Fig. 1): (1) Total lumbar lordotic angle: angle measured between the upper endplate of L1 and the upper endplate of S1; (2) Segmental lordotic angle (SLA): angle measured between the upper endplate of the upper vertebra and the lower endplate of the lower vertebra; (3) Coronal Cobb angle (CCA): angle measured between the most tilted top vertebra and the most tilted bottom vertebra; (4) Disc height (DH): half the sum of the anterior and posterior heights of the disc; (5) Foraminal height (FH): distance between the pedicles; (6) Disc wedging angle (DWA): angle between the inferior endplate of the upper vertebra and the superior endplate of the lower vertebra; a positive value indicates disc wedging toward the side of the lesion, while a negative value indicates disc wedging on the side opposite the lesion; in the case of the L5/S1 level, it is the angle between the upper endplate of L5 and the line connecting the tops of the bilateral sacral alae; 11 (7) Dynamic SLA: difference in SLA between the flexion and extension postures.
Statistical evaluations were conducted using R ver. 4.2.2 (R Foundation for Statistical Computing, Vienna, Austria). All variables underwent descriptive statistical analysis, and appropriate statistical methods were used for comparisons between groups when necessary. For continuous variables, the normality of values was assessed for each variable, and either the Welch t-test or the Wilcoxon rank-sum test was used for analysis, depending on the appropriate method. Categorical variables were analyzed using either the Pearson chi-square test or the Fisher exact test, after checking the expected frequencies in the contingency table cells. A statistical significance level of 0.05 was set for evaluating significance.
Surgical Procedure
Intramuscular midazolam (0.05 mg/kg) and intravenous fentanyl (0.8 μg/kg) were administered as preoperative treatments, and the patient remained conscious and underwent surgery under local anesthesia with a transforaminal epidural block. All patients were placed in the prone position on a radiolucent operating table, with the knees and hips slightly flexed to reduce the lumbar lordotic curve. All procedures were performed via a full endoscopic transforaminal approach, and an out-in technique was used to minimize exiting nerve root (ENR) irritation in the narrowed intervertebral foramen and facilitate resolution of the circumferential stenosis. 12 For sufficient decompression of the ENR, the extent of bone work was planned through preoperative MRI, computed tomography, and x-ray evaluation, and the need for bone work on the superior articular process as well as the isthmus, the lower pedicle of the upper body, and the upper endplate of the lower body was determined preoperatively. 13 If preoperative examination revealed a vertical or circumferential foraminal stenosis, removal of the protruding disc and osteophyte was planned. 1 After removing the protruding disc and osteophytes, if disc reprotrusion was expected, interbody discectomy was performed (Fig. 2). Surgical procedures were thus classified into 2 types: (a) TELF without discectomy, (b) TELF with discectomy.
(a) TELF without discectomy: Exposure of the ENR was confirmed after sufficient bone resection and removal of soft tissue such as the ligamentum flavum, foraminal ligament, and fatty tissue. The ENR was then retracted to remove the protruding disc and osteophyte, if deemed necessary by the preoperative imaging evaluation. (b) TELF with discectomy: After procedure (a), interbody disc removal was performed.
Illustrative Case
A 62-year-old male patient presented with left leg radiating pain from the lateral thigh down to the anterolateral leg, with no history of previous back surgery. Preoperative MRI revealed narrowing of the left L3/4 intervertebral foramen (Fig. 3A); the preoperative x-ray showed an increased CCA, decreased total lumbar lordosis and SLA, and disc wedging toward the lesion side (positive DWA) with overall DH reduction at L3/4 (Fig. 3D, E). Postoperative MRI showed dilatation of the L3/4 foramen (Fig. 3B), but approximately 6 months later, due to recurrent left lower extremity radicular pain, an MRI scan was performed and the L3/4 foramen was found to be narrowed again (Fig. 3C).
Patient Characteristics
A total of 56 patients were included in this case-control study, comprising 18 cases and 38 controls. The average age was 68.9 years for cases and 65.6 years for controls, with no significant difference in age (p = 0.3384) or sex distribution (p = 1.000) between the 2 groups. In the case group, the most common surgical levels were L4-5 and L5-S1, followed by L3-4, while in the control group, the most common surgical level was L5-S1, followed by L4-5 and L3-4. There was no significant difference in the presence of adjacent segment disease or grade I spondylolisthesis between the groups. Preoperative and postoperative VAS scores were comparable between the 2 groups. However, VAS scores at the 6-month follow-up were significantly higher in cases compared to controls (5.8 ± 1.1 vs. 2.2 ± 0.9, p < 0.001). The follow-up period was longer for cases than controls (22.3 ± 6.8 months vs. 17.2 ± 3.5 months, p = 0.007). The mean time to recurrence for cases was 7.4 ± 2.4 months. Complications were rare, with only 1 case each of dysesthesia and motor weakness in the case group. In the control group, dysesthesia occurred in 2 patients, and no motor weakness was reported. Other than these, there were no major complications. These findings are summarized in Table 1.
Radiologic Parameters
Baseline radiologic parameters were compared between the case and control groups (Table 2). There were no significant differences between the groups in DH, FH, total lumbar lordosis angle, or dynamic segmental angle. However, cases had a higher degree of DWA (3.0° ± 1.1° vs. 0.5° ± 1.4°, p < 0.001) and a larger CCA (8.8° ± 5.1° vs. 4.7° ± 2.5°, p = 0.004) than controls. The SLA was significantly smaller in cases compared to controls (11.0° ± 7.4° vs. 18.0° ± 5.4°, p = 0.001) (Fig. 4). The cutoff values for the radiologic parameters significantly associated with restenosis, which yielded the best sensitivity and specificity, were identified using R statistical software and are presented in Table 3. For the DWA, a cutoff of 1.8° resulted in a sensitivity and specificity of 94.4% and 92.1%, respectively, with an AUC of 0.969. The CCA had a cutoff value of 7.9°, yielding a sensitivity of 55.6% and a specificity of 92.1%, with an AUC of 0.757. For the SLA, a cutoff value of 17.1° provided a sensitivity of 83.3% and a specificity of 60.5%, with an AUC of 0.775.
Surgical Types
Table 4 shows the comparison of surgical types between the case and control groups in detail. In the case group, the distribution of surgical types was as follows: TELF without discectomy (12 patients, 66.7%) and TELF with discectomy (6 patients, 33.3%). In the control group, the distribution was: TELF without discectomy (36 patients, 94.8%) and TELF with discectomy (2 patients, 5.2%). The case group had a higher rate of patients undergoing discectomy in addition to TELF than the control group; in the control group, most patients underwent TELF without discectomy. With an odds ratio of 0.12, not performing discectomy was found to be protective against the development of restenosis. This difference was statistically significant (p = 0.017), indicating an association between surgical type and the risk of restenosis after endoscopic foraminotomy for LFS.
DISCUSSION
Apart from foraminal lesions, simple central stenosis, disc herniation, and other lesions without accompanying instability can typically be treated initially using nonfusion techniques. Unstable LFS with segmental instability, however, is commonly addressed with fusion techniques as the gold standard. In contrast, stable LFS without segmental instability has traditionally been treated with fusion techniques, and it is the primary target subset in minimally invasive spine surgery to be replaced by nonfusion surgery. Consequently, numerous authors have sought to perform foraminotomy without fusion using both microscopic and endoscopic methods, achieving significant results. 18-21 Foraminotomy for LFS offers the benefits of motion preservation and reduced concern about adjacent segment degeneration, but it also appears to have disadvantages compared to fusion. According to the review of Ju and Lee 22 on complications following foraminotomy, the primary complications include recurrent stenosis, incidental durotomy, motor weakness, and dysesthesia. Technical issues, such as the surgeon's level of experience, may cause complications other than recurrent stenosis, but recurrent stenosis can occur even when complete nerve decompression is achieved during surgery. This complication should therefore be treated as a problem inherent in the FELF surgical method itself, relating to its reliability and durability. It has been a significant concern for endoscopic surgeons because of limitations such as the inability to entirely replace fusion surgery: cases of restenosis require additional revision fusion surgery. Thus, identifying the risk factors for restenosis is a crucial first step in refining FELF into a more reliable and robust surgical technique.
In our study, preoperative baseline radiologic parameters indicated that DH was lower and FH was higher in the case group compared to the control group; DWA and CCA at the surgical level were angled toward the lesion; total lumbar lordosis angle and SLA were reduced; and the dynamic segmental angle was high. Among these parameters, DWA, CCA, and SLA were found to be statistically significant risk factors, with DWA being the strongest. Additionally, SLA showed a more significant difference between the groups than the total lumbar lordosis angle, suggesting that local alignment deserves more attention than global alignment. Data published by Haimoto et al., 23 studying risk factors for restenosis after microscopic foraminotomy, reported similar findings, with DWA being the most statistically significant risk factor. Some differences in statistical significance between the 2 studies may be due to the difference in power related to the number of samples. The data of Yamada et al. 24 on recurrence after microscopic decompression for LFS identified degenerative lumbar scoliosis as a significant risk factor, suggesting that coronal alignment significantly affects restenosis. These findings are similar to those of this study. However, we found no other significant differences in risk factors attributable to technical differences between microscopic and endoscopic techniques for LFS. When performing TELF, the rationale for conducting additional discectomy in the interbody space alongside TELF for protruded discs is to prevent the potential risk of postoperative disc material leakage and symptom recurrence. However, our data showed that the rate of TELF without discectomy in the control group was significantly higher, and the rate of TELF with discectomy in the case group was higher than in the control group, with an odds ratio of 0.12 and a statistically significant difference. This result suggests that additional discectomy of the interbody space, which has been performed based on surgeon preference to prevent postoperative recurrence of foraminal disc herniation, is a risk factor for recurrence. While preemptively performing discectomy can remove nucleus pulposus that may herniate later, it can also lower the DH on the ipsilateral side. Therefore, from the pathophysiological perspective of foraminal stenosis, it may lead to vertical or circumferential stenosis and cause symptoms to recur. 1 As a result, it is essential to achieve neural decompression through proper bone work, soft tissue removal, and careful resection of the protruding disc and bony spur. However, it is recommended to avoid performing discectomy in the interbody space.
This study is a retrospective case-control investigation with a limited sample size, which implies that the evidence is relatively weak. To obtain stronger evidence, it would be essential to validate these findings in larger, prospective studies.
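For context on Table 3 below: cutoff values of continuous radiologic parameters are commonly derived by maximizing the Youden index on an ROC curve. A minimal sketch, with simulated placeholder data standing in for per-patient DWA values and restenosis labels:

```python
# Derive a cutoff for one radiologic parameter via the Youden index.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
dwa = np.concatenate([rng.normal(8, 3, 40),    # controls (no restenosis)
                      rng.normal(12, 3, 20)])  # cases (restenosis)
restenosis = np.array([0] * 40 + [1] * 20)

fpr, tpr, thresholds = roc_curve(restenosis, dwa)
youden = tpr - fpr
cutoff = thresholds[np.argmax(youden)]
print(f"Recommended cutoff (max Youden index): {cutoff:.2f} degrees")
```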
Fig. 2. Endoscopic view of surgical procedure (TELF with fragmentectomy for left foraminal stenosis at the L4/L5 level). (A) Placement of a bevel-ended working sheath on the left L5 base of the superior articular process (SAP). (B) Bone work on the SAP using an endoscopic burr. (C) Bone work on the isthmus, lower part of the upper pedicle, and selectively on the lower body upper endplate. (D, E) Removal of the ligamentum flavum and foraminal ligament to expose the exiting nerve root (ENR). (F) Selective removal of the protruding disc and osteophyte after retracting the ENR. (G, H) Exposure of the axillary side of the dural sac and decompression of the ENR from medial to lateral. SAP, superior articular process; Lt., left.
Table 2. Comparisons of baseline radiologic parameters between case and control groups
Table 4. Comparison of surgical types between case and control groups
Table 3. Recommended cutoff values of radiologic parameters significantly associated with restenosis | 2023-09-24T15:04:23.569Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "47a98e75faeb97b755be776199e3df1f592fc2a0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "616566dd80455478e3ba5d343775a9a2a5f1173b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
118973136 | pes2o/s2orc | v3-fos-license | Selection Rule for Electromagnetic Transitions in Nuclear Chiral Geometry
In order to find the selection rules that can be applied to the electromagnetic transitions when the chiral geometry is achieved, a model for a special configuration in triaxial odd-odd nuclei is constructed which exhibits degenerate chiral bands with a sizable rotation. A quantum number obtained from the invariance of the Hamiltonian is given and the selection rule for electromagnetic transition probabilities in chiral bands is derived in terms of this quantum number. Among the available candidates for chiral bands of odd-odd nuclei, in which the near degeneracy of two $\Delta I = 1$ bands is observed, the measured electromagnetic properties of the two bands in $^{128}_{55}$Cs$_{73}$ and $^{126}_{55}$Cs$_{71}$ are consistent with the rules, while those of $^{134}_{59}$Pr$_{75}$ and $^{132}_{57}$La$_{75}$ are not.
I. INTRODUCTION
The possible occurrence of chirality in nuclear structure was pointed out more than ten years ago [1]. Since then, the observation of two almost degenerate ∆I=1 bands possibly with the same parity has been reported, especially in the A ≈ 130 odd-odd nuclei. Though the observed near degeneracy of the two bands is a primary indication of chiral geometry, this geometry can be pinned down in a more definitive way if electromagnetic transition probabilities expected for the chiral bands are experimentally confirmed.
Chirality in triaxial nuclei is characterized by the presence of three angular-momentum vectors, which are generally noncoplanar and thereby make it possible to define chirality.
An example is shown in Fig. 1a. When chiral geometry is realized, the two observed chiral-degenerate states are written as
$$|I_\pm\rangle = \frac{1}{\sqrt{2}}\left(|I_L\rangle \pm |I_R\rangle\right), \tag{1}$$
where the left- and right-handed geometry states are written as $|I_L\rangle$ and $|I_R\rangle$, respectively. If there were no tunneling between the R and L systems, the energies of the states associated with opposite handedness would be degenerate, and one obtains $\langle I_L | E2 | I_R \rangle \approx 0$ and $\langle I_L | M1 | I_R \rangle \approx 0$ for states with $I \gg 1$, where the electric-quadrupole and magnetic-dipole operators are denoted by E2 and M1, respectively. If ≈ in the above expressions is replaced by =, the two intra-band or inter-band transitions, or the two static moments, that correspond to each other within the observed pair of chiral bands are equal. The correspondence is illustrated in Fig. 1b. This is a trivial and straightforward rule in the case of ideal chiral bands. However, we want to take one step further with the selection rules.
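To make the correspondence rule explicit, here is a short worked derivation; it assumes only the symmetric/antisymmetric combinations written in Eq. (1) and the vanishing L-R overlaps stated above (the notation $|I'\rangle$ for the final state is ours):

```latex
% Sketch: equality of corresponding matrix elements in ideal chiral pair bands.
% Assumes |I_\pm> = (|I_L> \pm |I_R>)/\sqrt{2} and <I'_L|O|I_R> ~ 0 for O = E2, M1.
\begin{align}
\langle I'_\pm \,|\, O \,|\, I_\pm \rangle
  &= \tfrac{1}{2}\bigl(\langle I'_L|O|I_L\rangle + \langle I'_R|O|I_R\rangle\bigr)
   \pm \tfrac{1}{2}\bigl(\langle I'_L|O|I_R\rangle + \langle I'_R|O|I_L\rangle\bigr) \nonumber \\
  &\approx \tfrac{1}{2}\bigl(\langle I'_L|O|I_L\rangle + \langle I'_R|O|I_R\rangle\bigr).
\end{align}
```

Since the same right-hand side is obtained for both signs, the matrix elements that correspond to each other in the two bands coincide, which is exactly the trivial rule illustrated in Fig. 1b.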
II. MODEL, QUANTUM NUMBER AND SELECTION RULE
A limiting case of the particle-rotor model is considered, which may be applicable to the majority of odd-odd nuclei in which the observation of the near degeneracy of two ∆I = 1 bands has so far been reported. 1 In our opinion, an application of this special theoretical limit outweighs the loss of generality.
The model consists of a triaxially deformed core with γ = 90° coupled to one proton particle and one neutron hole in the same single-j-shell. 2 Taking the long, short, and intermediate axes of the triaxial body as the 1-, 2-, and 3-axes, the rotational Hamiltonian of the core is written as
$$H_{\text{core}} = \sum_{k=1}^{3} \frac{R_k^2}{2\mathcal{J}_k}, \qquad \mathcal{J}_k = \frac{4}{3}\,\mathcal{J}_0 \sin^2\!\left(\gamma - \frac{2\pi k}{3}\right), \tag{2}$$
where R expresses the core angular momentum and the γ-dependence of the hydrodynamical moments of inertia is assumed. The γ-dependence is approximately supported also by microscopic numerical calculations of moments of inertia. In the case of a single-j-shell configuration, the triaxially quadrupole-deformed potential can be written for γ = 90° as
$$V_p \propto + \left(j_{p1}^2 - j_{p2}^2\right) \tag{3}$$
for the proton particle and as
$$V_n \propto - \left(j_{n1}^2 - j_{n2}^2\right) \tag{4}$$
for the neutron hole, using the fact that the one-particle matrix elements of $(Y_{22} + Y_{2-2})$ are proportional to those of $(j_1^2 - j_2^2)$. In Eqs. (3) and (4), $j_{p1}$ and $j_{p2}$ ($j_{n1}$ and $j_{n2}$) denote the components of the proton (neutron) angular-momentum operator $j_p$ ($j_n$) along the 1- and 2-axes, respectively. The proportionality constants in (3) and (4), which are linear in the quadrupole deformation parameter β, are positive and exactly the same if protons and neutrons are in the same single-j-shell. It is noted that in the respective Hamiltonians of the proton particle, the neutron hole, and the core, the energetically preferred directions of the relevant angular momenta are $j_p$ // ±2-axis, $j_n$ // ±1-axis, and R // ±3-axis.
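As a quick numerical check of why R prefers the 3-axis, one can evaluate the assumed hydrodynamical γ-dependence of the moments of inertia at γ = 90° (a worked-arithmetic sketch; the overall scale $\mathcal{J}_0$ is irrelevant here):

```latex
% J_k \propto \sin^2(\gamma - 2\pi k/3), evaluated at \gamma = 90^\circ:
\begin{align}
\mathcal{J}_1 &\propto \sin^2(90^\circ - 120^\circ) = \sin^2(-30^\circ) = \tfrac{1}{4}, \nonumber\\
\mathcal{J}_2 &\propto \sin^2(90^\circ - 240^\circ) = \sin^2(-150^\circ) = \tfrac{1}{4}, \nonumber\\
\mathcal{J}_3 &\propto \sin^2(90^\circ - 360^\circ) = \sin^2(-270^\circ) = 1 .
\end{align}
```

Because $H_{\text{core}} = \sum_k R_k^2/(2\mathcal{J}_k)$, the rotational energy is lowest when R is aligned with the axis of the largest moment of inertia, i.e., the intermediate (3-) axis, in agreement with the preferred direction R // ±3-axis quoted above.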
However, one must look for the energy minimum for a given total angular momentum I, where I = R + j_p + j_n. Thus, the relative direction between the three vectors depends on the magnitude of I. (See Fig. 3.)

1 The contents of the present section are based on the work published in Ref. [2].
2 The "rotation with γ = 90°" is equivalent to the "γ = −30° rotation" in the Lund convention, except that the intermediate axis is the quantization axis (taken to be the 3-axis).
(ii) Invariance under the operation A, which consists of: (a) a rotation exp(i(π/2)R_3) about the 3-axis, combined with (b) an exchange of the valence proton and neutron.
Denoting the operator that exchanges valence proton and valence neutron by C, we assign the values C = +1 and C = −1 to the components of the intrinsic neutron-proton wave functions which are symmetric and antisymmetric under the operation C, respectively.
Eigenstates of the total Hamiltonian have a quantum number A = ±1, irrespective of whether the chiral geometry is achieved or not. The possible combinations of R 3 and C for a given value of A are shown in Table 1.
For E2 transitions we take into account only the collective part, namely only the core contribution. Then, in order to obtain non-zero B(E2) values, we must have both ∆C = 0 and ∆R_3 ≠ 0. The former is required since the neutron and proton are spectators under E2 transitions, while the latter is required since E2 matrix elements with ∆R_3 = 0 vanish for the shape of γ = 90°. Since the E2 operator can make only |∆R_3| ≤ 2, the above condition of both ∆C = 0 and ∆R_3 ≠ 0 leads to
$$A \rightarrow -A \tag{5}$$
when we examine the contents of the eigenstates with a given A value shown in Table 1.
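The step from the two conditions to Eq. (5) can be spelled out if one assumes (as the structure of Table 1 suggests) that the eigenvalue of A on a component with quantum numbers (R_3, C) is the product of the rotation phase and the exchange parity:

```latex
% Assumed eigenvalue structure of the operation A on a component (R_3, C):
\begin{equation}
A = e^{\,i\frac{\pi}{2}R_3}\, C .
\end{equation}
% With \Delta C = 0 and \Delta R_3 = \pm 2 (the only non-zero values allowed
% for E2 among the combinations of Table 1):
\begin{equation}
A_{\rm final} = e^{\,i\frac{\pi}{2}(R_3 \pm 2)}\, C
             = -\,e^{\,i\frac{\pi}{2}R_3}\, C = -A_{\rm initial} .
\end{equation}
```

That is, every allowed E2 transition flips the sign of A, which is the selection rule of Eq. (5).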
The M1 transition operator in the particle-rotor model is written as [4]
$$(M1)_\mu = \sqrt{\frac{3}{4\pi}}\left[(g_p - g_R)\,\vec{j}_p + (g_n - g_R)\,\vec{j}_n\right]_\mu \mu_N . \tag{6}$$
If we take, for example, $g_\ell = g_\ell^{free}$, $g_s^{eff} = 0.6\,g_s^{free}$, and $g_R = 0.5$, we obtain
$$(g_p - g_R) \approx -(g_n - g_R) .$$
Then, we obtain B(M1) ≈ 0 for ∆C = 0, since the M1 operator is almost antisymmetric under the exchange of the valence proton and valence neutron. Since the M1 operator can make only |∆R_3| ≤ 1, the requirement ∆C ≠ 0 again leads to A → −A, as seen by examining the contents of eigenstates with a given A value in Table 1.
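The near-antisymmetry of the M1 operator can be checked with back-of-envelope numbers; the following is a sketch assuming Schmidt-type g factors for the h_11/2 orbital (j = ℓ + 1/2, ℓ = 5), which are not quoted explicitly in the text:

```latex
% g_j = g_l + (g_s - g_l)/(2j) for j = l + 1/2, with g_s^{eff} = 0.6 g_s^{free}:
\begin{align}
g_p &= 1 + \frac{0.6 \times 5.586 - 1}{11} \approx 1.21, &
g_n &= 0 + \frac{0.6 \times (-3.826)}{11} \approx -0.21, \nonumber\\
g_p - g_R &\approx +0.71, &
g_n - g_R &\approx -0.71 \quad (g_R = 0.5).
\end{align}
```

Hence (M1) ∝ 0.71 (j_p − j_n) to good accuracy, which is antisymmetric under the exchange of the valence proton and neutron and explains why B(M1) ≈ 0 for ∆C = 0.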
The static quadrupole moment of a triaxially deformed core with γ = 90° vanishes. On the other hand, the static magnetic moment in the present model is not negligible, since the magnetic-moment operator can be written as
$$\vec{\mu} = \left[g_R \vec{I} + (g_p - g_R)\,\vec{j}_p + (g_n - g_R)\,\vec{j}_n\right]\mu_N$$
and has an extra term $g_R \vec{I}$ compared with the M1 transition operator in (6). Now, when bands are, by definition, arranged so that ∆I = 2 E2 transitions are enhanced and always allowed within respective bands, the sign of A in a given band must change at every increase of I by 2, as illustrated in Fig. 2a. And, from the arguments described in the previous paragraphs, the quantum number A of the final state must have a sign different from that of the initial state in order to have strong E2 or M1 transitions. The consequence of the present selection rules with chiral geometry is illustrated in Fig. 2a.
We have chosen the particle-rotor Hamiltonian so that, if at all possible, the chiral geometry may easily appear at moderate values of I. So, next, we numerically diagonalize our particle-rotor Hamiltonian, taking j = h_11/2 for both the valence neutron and the valence proton. The relation between the three angular momentum vectors, R, j_p, and j_n, in the lowest-lying states for a given I is illustrated in Fig. 3. At small rotation it is energetically cheaper for the vector R to be placed on the plane specified by j_p and j_n when a given value of I has to be constructed. See the left figure of Fig. 3. On the other hand, since at very high spins the rotational energy becomes dominant, R points in the direction of the 3-axis to save energy, and both j_p and j_n also start to rotationally align in the direction of the 3-axis.
See the right figure of Fig. 3. Consequently, the chiral geometry may be expected only at moderate rotation. This is what is seen in the calculated level scheme and transitions shown in Fig. 2b.
It is noted that the invariance of the total Hamiltonian and the selection rule for electromagnetic transitions described above also apply in the presence of pair correlation that is treated in the BCS approximation.
III. COMPARISON WITH EXPERIMENTAL DATA AND PERSPECTIVES
There are a series of odd-odd nuclei in the A ≈ 130 region [7] in which the energies of two ∆I = 1 rotational bands are observed to be roughly degenerate (namely, up to a few hundred keV energy difference), though in fact the equality of spin-parity has hardly been experimentally proved in any of those nearly degenerate states. Until several years ago the observed nearly degenerate pair bands in 134Pr were indeed supposed to be the best example of chiral pair bands. However, the observed B(E2; I → I − 2) values of the two intra-band transitions, which should be equal in the case of chiral pair bands, turned out to differ by at least a factor of two [5,6]. Moreover, the measured M1 transitions strongly violated the selection rule [2] described in the present paper. Thus, the pair bands in 134Pr are no longer supposed to be an example of chiral pair bands. But if so, one may ask: if the observed bands in 134Pr are not chiral pair bands, what are they? This is an even more interesting question.
Among the odd-odd nuclei in the A ≈ 130 region there are at present two nuclei, 128Cs and 126Cs, in which not only are the pair bands nearly degenerate, but the measured electromagnetic transitions [8,9] also seem to be in agreement with the present selection rule.
Deviations of the actual situation in nuclei from the simple assumptions made in the present model may modify the selection rule described in this paper. Nevertheless, the present selection rule should serve as a starting point for the study of more complicated nuclear systems. In this connection, confirmation of chiral geometry in odd-A nuclei such as the Nd isotopes, in which j_p in the present model is replaced by the rotationally aligned angular momentum of the S-band, is strongly desired. Thus, the selection rule for electromagnetic transitions in that kind of chiral geometry in odd-A nuclei should be worked out so that it can be used for examining available experimental data.

Fig. 3. [caption fragment] ...states for a given total angular momentum I, where I = R + j_p + j_n. In the figure, j_n and j_p can be exchanged, for example. In the respective Hamiltonians of the neutron hole, the proton particle, and the core, the energetically preferred directions are such that j_n, j_p, and R point to the ±1 (long), ±2 (short), and ±3 (intermediate) axes, respectively. | 2011-02-01T14:19:59.000Z | 2011-02-01T00:00:00.000 | {
"year": 2011,
"sha1": "7d2ed0f20d8cf73c2dd52f3a8df7ead403dfcf42",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1102.0163",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7d2ed0f20d8cf73c2dd52f3a8df7ead403dfcf42",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
258585280 | pes2o/s2orc | v3-fos-license | Noble Metal Ion‐Directed Assembly of 2D Materials for Heterostructured Catalysts and Metallic Micro‐Texturing
Assembling 2D‐material (2DM) nanosheets into micro‐ and macro‐architectures with augmented functionalities requires effective strategies to overcome nanosheet restacking. Conventional assembly approaches involve external binders and/or functionalization, which inevitably sacrifice 2DM's nanoscale properties. Noble metal ions (NMI) are promising ionic crosslinkers, which can simultaneously assemble 2DM nanosheets and induce synergistic properties. Herein, a collection of NMI–2DM complexes are screened and categorized into two sub‐groups. Based on the zeta potentials, two assembly approaches are developed to obtain 1) NMI‐crosslinked 2DM hydrogels/aerogels for heterostructured catalysts and 2) NMI–2DM inks for templated synthesis. First, tetraammineplatinum(II) nitrate (TPtN) serves as an efficient ionic crosslinker to agglomerate various 2DM dispersions. By utilizing micro‐textured assembly platforms, various TPtN–2DM hydrogels are fabricated in a scalable fashion. Afterward, these hydrogels are lyophilized and thermally reduced to synthesize Pt‐decorated 2DM aerogels (Pt@2DM). The Pt@2DM heterostructures demonstrate high, substrate‐dependent catalytic activities and promote different reaction pathways in the hydrogenation of 3‐nitrostyrene. Second, PtCl4 can be incorporated into 2DM dispersions at high NMI molarities to prepare a series of PtCl4–2DM inks with high colloidal stability. By adopting the PtCl4–graphene oxide ink, various Pt micro‐structures with replicated topographies are synthesized with accurate control of grain sizes and porosities.
Introduction
2D materials (2DMs), or nanosheets, are a class of nanomaterials that have drawn extensive attention due to their extraordinary chemical, physical, and mechanical properties, together with their chemical tunability and processable nature. [1] 2DM nanosheets have been utilized as promising building-block units for the fabrication of higher-dimensional structures, which have been applied to catalysts, [2] electrochemical energy storage devices, [3] water desalination, [4] wearable electronics, [5] and soft robots. [6] However, during the self-assembly processes, 2DM nanosheets are prone to re-stack or aggregate due to their strong van der Waals forces, which decreases the accessible surface areas of the assembled 2DM structures and sacrifices the nanoscale properties of the 2DM units. [7] To date, many fabrication strategies have been adopted to assemble 2DM nanosheets into higher-dimensional structures (e.g., membranes, foams, aerogels) by incorporating molecular, [8] polymeric, [9] or nanoparticle additives. [10] In this work, noble metal ions were used to direct the assembly of 2DM nanosheets. First, tetraammineplatinum(II) nitrate (TPtN) and tetraamminepalladium(II) nitrate (TPdN) were able to serve as efficient ionic crosslinkers to quickly agglomerate various 2DM dispersions. By utilizing micro-textured GO assembly platforms, various TPtN-2DM and TPdN-2DM hydrogels were fabricated in a facile and scalable fashion. Afterward, the TPtN-2DM hydrogels were freeze-dried and thermally reduced to synthesize various Pt-decorated 2DM aerogels (Pt@2DM). The Pt@2DM heterostructures demonstrated high, substrate-dependent catalytic activities and promoted different reaction pathways in the hydrogenation of 3-nitrostyrene. Second, PtCl4 and AuCl3 were incorporated into 2DM dispersions at high ion molarities to produce a series of PtCl4-2DM and AuCl3-2DM inks, which were able to maintain high colloidal stability. By adopting the PtCl4-GO inks, a graphene-templated synthetic route was demonstrated to fabricate various Pt replicas with accurate control of grain sizes and porosities.
Molecular Interactions of Noble Metal Ions (NMI) and 2D-Material (2DM) Nanosheets
In this work, three representative 2DM nanosheets, including GO, montmorillonite (MMT), and Ti3C2Tx MXene, were adopted, and their TEM images are shown in Figure S1 (Supporting Information). The average diameters of the GO, MMT, and MXene nanosheets were characterized to be ≈2 µm, ≈500 nm, and ≈2 µm, respectively. As shown in Figure 1a, the GO, MMT, and MXene dispersions demonstrated average zeta potentials of <−30 mV, originating from the oxygen-containing and/or other functional groups with strong negative dipoles on the surfaces of the 2DM nanosheets (e.g., COOH and OH groups on GO; OH groups on MMT; F, O, and OH groups on MXene). [16] The three kinds of 2DM dispersions were then mixed with various NMI solutions, and the resultant NMI-2DM complexes showed different colloidal behaviors with varying zeta potentials (Figure 1a,b; the NMI concentrations were kept at 5 mM). By setting two thresholds of >−5 mV and <−25 mV, these NMI-2DM complexes were categorized into two subgroups (Table 1). First, the TPtN-2DM and TPdN-2DM complexes showed immediate and irreversible agglomeration with average zeta potentials >−5 mV. On the other hand, the PtCl4-GO, PtCl4-MMT, AuCl3-GO, and AuCl3-MMT complexes maintained high colloidal stability, most of which exhibited average zeta potentials <−25 mV. Furthermore, it is worth mentioning that Ti3C2Tx MXene nanosheets were oxidized in contact with K2PdCl4, PdCl2, AuCl3, and KAuCl4, and noble metal nanoparticles formed on the 2DM surfaces, as supported by the X-ray diffraction (XRD) spectra in Figure S2 (Supporting Information). Several articles have focused on the oxidation of MXene nanosheets in O2-saturated water, [17] but the time scale of that oxidation was at the week/month scale, much slower than the NMI-induced MXene oxidation (at the minute scale).
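The two-threshold screening above maps directly onto a simple decision rule; a minimal sketch (the zeta values here are illustrative placeholders, not measured data):

```python
# Categorize NMI-2DM complexes by average zeta potential, per the two
# thresholds used in the text (>-5 mV: agglomeration; <-25 mV: stable ink).
NMI_2DM_ZETA_MV = {
    ("TPtN", "GO"): -3.0,     # agglomerates -> gelation route
    ("TPdN", "MMT"): -4.5,    # agglomerates -> gelation route
    ("PtCl4", "GO"): -32.0,   # colloidally stable -> ink route
    ("AuCl3", "MMT"): -28.0,  # colloidally stable -> ink route
}

def classify(zeta_mv: float) -> str:
    if zeta_mv > -5.0:
        return "NMI-induced gelation (ionic crosslinking)"
    if zeta_mv < -25.0:
        return "stable NMI-2DM ink (templated synthesis)"
    return "intermediate / evaluate case by case"

for (nmi, dm), zeta in NMI_2DM_ZETA_MV.items():
    print(f"{nmi}-{dm}: {zeta} mV -> {classify(zeta)}")
```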
From the screening results above, it was clear that TPtN and TPdN had stronger intermolecular interactions with 2DM nanosheets and destabilized the resultant complexes, which can be explained as follows. The tetraamine groups of TPtN and TPdN are both strong hydrogen-bond acceptors and donors, which can interact with the hydrogen-bond donors (e.g., COOH and OH groups on GO; OH groups on MMT; OH groups on MXene) and hydrogen-bond acceptors (F and O groups on MXene) on the surfaces of 2DM nanosheets. [9a] Additionally, the colloidal stability of NMI-2DM complexes is highly sensitive to the electrostatic interactions between 2DM nanosheets and NMI components. TPtN and TPdN are prone to dissociate their nitrate groups, forming [Pt(NH3)4]2+ and [Pd(NH3)4]2+ complex cations, [18] respectively. On the other hand, AuCl3 and PtCl4 are known to form [AuCl3(OH)]− and [PtCl4(OH)]−, respectively. When different noble metal salts were added into the 2DM dispersions, the colloidal stability varied based on the overall net charges of the NMI-2DM complexes. For instance, when TPtN was added, the negative charges of the 2DM nanosheets were screened by the [Pt(NH3)4]2+ cations, and the TPtN-2DM complexes exhibited nearly zero net charges and irreversible agglomeration. On the other hand, the addition of PtCl4 resulted in the complexation of 2DM nanosheets and [PtCl4(OH)]−, both of which exhibited negative charges and preserved high colloidal stability. [15,19]

Table 1. Summary of average zeta potentials of NMI-2DM complexes. The NMI-2DM complexes with average zeta potentials >−10 mV are suitable for the NMI-induced 2DM gelation approach. The NMI-2DM complexes with average zeta potentials <−25 mV are suitable for the NMI-2DM ink approach. (*): Unfavorable oxidation of MXene nanosheets or in situ formation of noble metal nanoparticles is observed in the K2PdCl4-MXene, PdCl2-MXene, AuCl3-MXene, and KAuCl4-MXene complexes. (**): Although the average zeta potentials of the NMI-MMT complexes were slightly lower than −10 mV, freestanding MMT hydrogels were clearly observed.

As both TPtN and TPdN served as effective ionic crosslinkers, low critical coagulation concentrations (CCCs) were observed, testing as low as 100 µM for the TPtN-2DM and TPdN-2DM complexes. As shown in Figures 2-4, an NMI-induced gelation method was developed to fabricate various NMI-2DM hydrogels and aerogels in a scalable manner without using any polymeric binders. On the other hand, the CCCs of the PtCl4-2DM and AuCl3-2DM complexes were much higher, at >10 mM. As shown in Figure 5, a variety of NMI-complexed 2DM inks demonstrated high colloidal stability even at high ion molarities; these were adopted for templated synthesis to enable microstructural metallic texturing with controllable topographies, grain sizes, and porosities. Figure 2a illustrates the NMI-induced 2DM gelation method used to fabricate various NMI-2DM hydrogels and aerogels, which involved four major steps: i) loading of NMI solutions onto crumple-like GO topographies, ii) upward diffusion of NMI into a deposited 2DM nanosheet suspension, iii) formation of NMI-2DM hydrogels, and iv) production of NMI-2DM aerogels via freeze drying.
Mechanically-Driven Fabrication of Crumple-Like GO Topographies as Versatile Assembly Platforms
To create the crumple-like GO topography, the GO dispersion was vacuum-filtered into a planar GO membrane, which was then transferred onto a thermally responsive polystyrene substrate (i.e., shrink film). The thermally responsive substrate was previously treated with atmospheric plasma to increase the surface hydrophilicity and enhance the adhesion energy between the GO membrane and the polystyrene substrate. [20] As shown in Figure S3 (Supporting Information), by adjusting the areal loadings of GO nanosheets from 2.7 µg cm−2 to 1.8 mg cm−2, the thicknesses of the planar GO membranes were controlled from 0.02 to 10 µm, respectively. Afterward, the GO-coated shrink film was heated above the glass-transition temperature (Tg) of polystyrene (i.e., 100 °C) for 5-10 min, and the pre-strain was released to contract the underlying substrate to ≈20% of its original dimensions. [21] By harnessing interfacial instability during substrate contraction, the upper-layer GO membrane was mechanically deformed into an out-of-plane architecture that displayed chaotic crumpling patterns. By controlling the thicknesses of the GO membranes from 0.02 to 10 µm, the average size of the crumpling patterns shifted from 3.0 × 1.0 µm2 to 0.5 × 0.2 mm2 (top-down scanning electron microscope (SEM) images in Figure 2b), and the average height of the out-of-plane features varied from 6.0 µm to 1.8 mm (confocal laser images in Figure 2c), respectively.
As shown in the cross-sectional SEM image in Figure 2d, the mechanically deformed GO layer was conformally attached to the shrunken polystyrene substrate, and the higher-dimensional GO microstructure exhibited numerous valley-like microchannels that enable high-capacity water adsorption. [22] By extracting the creases of the crumple-like GO topographies from their top-down SEM images, their respective skeleton images are shown in Figure S4 (Supporting Information), and the crease-length distributions are summarized in Figure 2e. As the thicknesses of the GO membranes were controlled from 0.02 to 10 µm, the average crease length increased from 0.04 to 4.3 µm, respectively.
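The crease extraction just described can be reproduced with standard image-analysis tools; a hypothetical sketch using scikit-image (the file name and the µm-per-pixel calibration are assumptions):

```python
# Skeletonize a top-down SEM image of GO crumples and estimate crease lengths.
import numpy as np
from skimage.io import imread
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize
from skimage.measure import label, regionprops

img = imread("go_crumples_sem.png", as_gray=True)  # placeholder path
binary = img > threshold_otsu(img)                 # assume bright creases
skeleton = skeletonize(binary)

# Approximate each connected skeleton branch's length by its pixel count,
# scaled by an assumed calibration.
UM_PER_PX = 0.05
lengths = [r.area * UM_PER_PX for r in regionprops(label(skeleton))]
print(f"{len(lengths)} creases, mean length ~ {np.mean(lengths):.2f} um")
```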
Furthermore, the water adsorption capacities of various crumple-like GO topographies (at different GO thicknesses) were characterized, with planar GO membranes used as control experiments. As shown in Figure 2f, the crumple-like GO topographies with interconnected water microchannels demonstrated 1.5-2 times higher water uptake than their planar counterparts. When the GO thickness increased from 0.02 to 10 µm, the average water uptake of the crumple-like GO topographies increased from 1.2 to 11.3 mg cm−2, respectively. When the GO thickness reached 5 µm, the resultant GO topography showed the highest water uptake of ≈13 mg cm−2 (equivalent to 14.7 g of adsorbed water per g of GO coating), which was close to some reported polymeric hydrogels. [23] It is worth mentioning that, although a high water uptake of 11.3 mg cm−2 was observed in the 10-µm-thick case, its water uptake was inconsistent due to heterogeneous substrate deformation (Figure S5, Supporting Information).
Facile Fabrication of NMI-2DM Hydrogels and Aerogels via Assembly Platforms
Benefitting from superior water uptakes (≈13 mg cm −2 ), the crumple-like GO topographies (5-µm-thick) were able to serve as versatile assembly platforms, enabling facile fabrication of NMI-2DM hydrogels and aerogels without using polymeric binders. We chose GO nanosheets to create crumpled topographies primarily because of their inherent hydrophilicity. As shown in Figure S6 (Supporting Information), planar GO, MXene, and MMT membranes exhibited static contact angles of water (θ w ) below 90°, indicating hydrophilic surfaces. First, the crumple-like GO topographies were loaded with NMI solution (e.g., TPtN). Afterward, various 2DM dispersions, including GO, MXene, and MMT, were individually deposited onto the TPtN-loaded GO topographies, during which TPtN was gradually diffused out to form various TPtN-2DM hydrogels. As shown in Figure S7 (Supporting Information), all of the crumple-like GO, MXene, and MMT topographies were able to serve as efficient assembly platforms for creating thick TPtN-GO hydrogels.
As shown in Figure 3a-c, interference reflection microscopy (IRM) was employed to visualize the colloidal interactions between GO nanosheets and two NMI solutions, PtCl4 and TPtN. IRM utilizes a narrowband LED light source (511 nm here) along with placing the nanosheets close to an interface with a refractive-index mismatch to generate a common-path interferometer, which significantly enhances the image contrast and enables direct visualization of single layers of 2D materials (e.g., GO nanosheets). As shown in Movie S1 (Supporting Information) and Figure 3a, when a dilute dispersion of GO nanosheets was dropped onto the water-loaded assembly platform, the GO nanosheets (with lateral dimensions of ≈2 µm) continuously flowed through the interconnected microchannels. As shown in Movie S2 (Supporting Information) and Figure 3b, when the GO dispersion was dropped onto the PtCl4-loaded assembly platform (0.1 M PtCl4 solution), a few GO nanosheets were observed to attach onto the platform surface. As shown in Movie S3 (Supporting Information) and Figure 3c, when the GO dispersion came into contact with the TPtN-loaded assembly platform (1 mM TPtN solution), instantaneous coagulation of GO nanosheets was clearly observed. The results indicate that TPtN acted as an efficient ionic crosslinker and effectively screened the electrostatic double layers of the GO nanosheets. As shown in Figure 3d and Figure S8 (Supporting Information), by following the NMI-induced gelation method, a thick, homogeneous, freestanding layer of TPtN-GO hydrogel was obtained on the assembly platform, while the PtCl4-GO complex remained fluid-like and overflowed the assembly platform. Upon 24 h of air drying, the TPtN-GO film was readily detachable as a flat, freestanding film (Figure S9, Supporting Information).
By utilizing the crumple-textured assembly platforms, a variety of NMI-GO complexes were prepared, including TPtN-GO, PtCl4-GO, and K2PtCl4-GO, and their dynamic rheological behaviors were characterized. As shown in Figure 3e, the storage modulus (G′) and loss modulus (G″) of a TPtN-GO hydrogel were characterized to be 5.0 × 10^4 and 2.5 × 10^4 Pa (over a frequency range of 0.1 to 100 s−1), respectively. In comparison, the G′ and G″ of the PtCl4-GO and K2PtCl4-GO complexes were both 100 times lower. Figure 3f further demonstrates that the TPtN-GO hydrogel (G′ = 5.2 × 10^4 Pa) was stiffer than the TPtN-MXene (G′ = 3.7 × 10^4 Pa) and TPtN-MMT hydrogels (G′ = 1.0 × 10^4 Pa), which can be rationalized as follows. First, as GO and MXene possess rich oxygen-containing and/or other functional groups with strong negative dipoles (COOH and OH groups on GO; F and O groups on MXene), [16a,b,d] the coordinate bonds between tetraammineplatinum ions and these 2DM nanosheets were relatively strong, causing the hydrogels to exhibit a higher G′. In comparison, MMT nanosheets possess only some hydroxyl groups, so the TPtN-MMT hydrogel exhibited a lower G′. [16c] Second, the platinum cations interacted with GO's aromatic frameworks through cation-π attractions, which further strengthened the molecular interactions and thus led to the highest G′. [24] Next, a freeze-drying step was conducted to transform the TPtN-2DM hydrogels into aerogels. As shown in the SEM images in Figure 3g-i, all the TPtN-2DM aerogels exhibited cellular microstructures with aligned open pores, templated by ice crystals during the freezing step. The pore-size distribution profiles of the three TPtN-2DM aerogels were analyzed via ImageJ. As illustrated in Figure S10 (Supporting Information), all of the TPtN-2DM aerogels had macroscale pores, with pore areas ranging from 5 to 500 µm2 (TPtN-GO aerogel), 5 to 800 µm2 (TPtN-MMT aerogel), and 5 to 1400 µm2 (TPtN-MXene aerogel). Different 2DM nanosheets led to different pore sizes due to differences in aspect ratio, hydrophilicity, and surface charge, all of which are critical during the lyophilization process. [25] As shown in the insets of Figure 3g-i, the energy-dispersive X-ray spectroscopy (EDS) maps confirmed that TPtN was uniformly distributed across the microstructures of the TPtN-2DM aerogels. In Figure 3j, the XRD spectra reflected that, within the TPtN-GO aerogels, cubic NMI crystals (i.e., TPtN) were detected with low-intensity peaks at 17.5°, 24.2°, 33.2°, 37.1°, 40.8°, 50.5°, 53.4°, and 64.2°, while the (002) peak of the GO multilayers remained at 11.6°. Afterward, XPS was employed to investigate the oxidation state of the platinum in the TPtN-GO aerogels. As shown in Figure 3k, the characteristic peaks representing Pt 4f7/2 and Pt 4f5/2 were observed at 73.0 and 76.2 eV, respectively, indicating that the platinum ions remained in the 2+ oxidation state within the 2DM aerogels. [26] Figure S11 (Supporting Information) shows the survey scan of the TPtN-GO aerogels. By increasing the GO thickness from 1 to 5 µm, the crumple size of the resulting GO topography increased from 20 × 15 to 100 × 75 µm2, and the water adsorption capacity increased from 6.8 to 13.7 mg cm−2, respectively. During the NMI-induced 2DM gelation process, the crumple-like GO topography with the higher adsorption capacity was able to load more NMI solution and yield NMI-2DM hydrogels/aerogels with higher metallic loadings.
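The ImageJ pore analysis has a direct scripted analogue; a hypothetical sketch (the file name, the dark-pore assumption, and the pixel-area calibration are placeholders):

```python
# Pore-area distribution of an aerogel cross-section from a binarized SEM image.
import numpy as np
from skimage.io import imread
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

img = imread("aerogel_cross_section.png", as_gray=True)  # placeholder path
pores = img < threshold_otsu(img)       # assume pores appear dark

UM2_PER_PX2 = 0.25                      # assumed pixel-area calibration
areas = np.array([r.area * UM2_PER_PX2 for r in regionprops(label(pores))])
areas = areas[areas >= 5.0]             # discard sub-5-um^2 specks, as in the text
print(f"{areas.size} pores, areas {areas.min():.0f}-{areas.max():.0f} um^2")
```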
As shown in Figure S12 (Supporting Information), by using a crumple-like assembly platform with the GO thickness of 5 µm, the resulting TPtN-GO aerogel demonstrated a TPtN loading of 33.5 wt.%, higher than the one fabricated using the platform with the GO thickness of 1 µm (20.8 wt.%).
Growth of Pt Nano-Clusters/Sheets on 2DM Aerogels as Heterostructured Catalysts
Next, the TPtN-GO, TPtN-MMT, and TPtN-MXene aerogels were thermally reduced at 250 °C in forming gas (5% H2 in N2), and Pt nano-clusters/sheets were grown in situ on the 2DM surfaces (abbreviated as Pt@2DM). Concurrently, the GO nanosheets were reduced into reduced GO (rGO). The SEM image of a Pt@rGO aerogel (after the thermal annealing process in forming gas) is shown in Figure S13a (Supporting Information). Additionally, the pore-size distribution profiles of the TPtN-GO aerogel (before annealing, Figure S10a, Supporting Information) and the Pt@rGO aerogel (after annealing, Figure S13b, Supporting Information) are compared, and both aerogels show similar pore-size distribution profiles with pore areas ranging from 5 to 500 µm2. These results indicate that the annealing step (and the reduction of TPtN) did not cause any significant microstructural contraction or pore shrinkage. Figure 4a-c shows the high-angle annular dark field (HAADF) scanning transmission electron microscope (STEM) images of Pt nanoparticles deposited on the 2DM aerogels, with the size distributions of the Pt nanoparticles summarized in Figure 4d. All Pt@2DM heterostructures contained similar Pt loadings, estimated to be ≈9.5 wt.%. The Pt@rGO heterostructures primarily consisted of small Pt nanoclusters with an average diameter of 1.6 nm, each containing several to a dozen Pt atoms. On the other hand, the Pt@MMT and Pt@MXene heterostructures contained ultrathin Pt nanosheets with average diameters of 3.7 and 4.2 nm, respectively. It is worth noting that some larger Pt nanosheets (with diameters up to 15 nm) were found on the MXene surfaces. As further evidenced in the HAADF STEM images (inset of Figure 4b) and fast Fourier transform analyses (Figure S14, Supporting Information), the Pt nanosheets on MMT had uniform thicknesses, supported by constant image contrast across the Pt nanosheets. While the exact thickness was not known, the observation of bright single Pt atomic columns on top of the nanosheets indicated that the Pt nanosheets were likely single nanometers in thickness. Similarly, the Pt nanosheets on MXene were a few nanometers thick but possessed multiple layers of Pt that successively decreased in diameter, similar to the Tower of Hanoi. [27] As all Pt@2DM aerogels were prepared through the same processes, these TEM images indicate that the Pt morphology was controllable by altering the underlying 2DM substrate, with the different surface functional groups and substrate compositions impacting the growth of the Pt nano-clusters/sheets. Hard and soft acid and base theory has previously been used to rationalize the coordination of metal atoms on 2DM substrates and the effect of 2DM substrates on the observed Pt morphology. [28] Pt metal is a soft acid that most readily receives electrons from and binds most strongly to soft bases, such as thiolates, some halogens, and hydrocarbons; it less readily receives electrons from hard bases like hydroxide, carboxylate, and fluoride, which are weaker-binding ligands. Based on the Pt morphologies observed by HAADF STEM, the binding strength between Pt atoms and the 2DM substrates followed the order Pt@rGO > Pt@MMT > Pt@MXene. The absence of Pt nanocrystals on rGO in favor of amorphous nanoclusters suggests that rGO-bound Pt atoms are bound most strongly among the three 2DM substrates. rGO consisted of a mixture of graphene regions, holes, and oxygen-containing defective regions, [29] where oxygen is mainly present in the graphene lattices in the form of C-O-C bonds. [30]
Prior work has shown that weakly bound electrons from defects in the graphene lattice and oxygen-containing moieties act as soft bases and therefore as coordination sites for metal atoms. [31] On the other hand, MMT is decorated with hydroxyl functional groups, [16c] which are hard bases that do not bind Pt atoms as strongly. The weaker binding strength between Pt and MMT allowed Pt atoms to crystallize and form 2D nanosheets. Finally, the MXene nanosheets were predominantly functionalized by fluorine groups, [16b,d] which have a larger chemical hardness than hydroxyl groups and therefore bound Pt atoms most weakly. Consequently, Pt nanoparticles began to take on some 3D character in the Pt@MXene aerogel by displaying multi-tier plate structures.
With distinct 2DM substrates and Pt morphologies, the three Pt@2DM heterostructures displayed high, substrate-dependent catalytic activities. To determine the activities of the Pt@2DM catalysts, the liquid-phase hydrogenation of 3-nitrostyrene (3-NS) was selected as a probe reaction (Figure 4e), containing parallel reaction pathways to produce 3-ethylaniline (3-EA) via the reaction intermediates 3-ethylnitrobenzene (3-ENB) and 3-vinylaniline (3-VA). As shown in Figure S15 (Supporting Information), control tests were performed using rGO, MMT, and MXene aerogels (also reduced at 250 °C in forming gas) in the absence of Pt catalysts. Over the same reaction timeframe (1 h), the MMT and rGO catalysts demonstrated low 3-NS conversions of ≈10% with high selectivity toward the 3-ENB product, while the MXene catalyst showed a higher conversion of ≈30% with high selectivity toward the 3-VA product, consistent with experimental observations in the literature. [32] Those studies determined that the surface C-Ti-Ox groups and the layered structure of MXenes facilitate the mass-transfer and adsorption-desorption processes that are responsible for the catalytic performance of Ti3C2Tx. [32a] In the presence of Pt catalysts, as shown in Figure 4g, the Pt@MXene, Pt@MMT, and Pt@rGO heterostructures demonstrated much higher catalytic activities, with reaction rates of 674, 298, and 98 mol 3-NS mol Pt−1 h−1, respectively. The dominating reaction pathway over these catalysts is the hydrogenation of the vinyl group (C=C) to form the 3-ENB and 3-EA products.
Both the Pt morphologies and the 2DM substrates had substantial impacts on the catalytic activities of the Pt@2DM heterostructures. Taking the Pt particle-size effect first, prior work reports a size dependence in which the maximum activity appears at ≈1.0 nm Pt particle sizes. [33] Although the width of the Pt nanosheets on the MXene substrate would be expected to lead to low catalytic activity, the thickness of the Pt nanosheets (≈1.0 nm) seems to be the dominating factor that enabled the exceptionally high activities in the hydrogenation of 3-NS. The 2DM substrates also had critical effects on the activities of the Pt@2DM catalysts. After the thermal treatment at 250 °C, the MXene surfaces preserved a large number of electron-rich functional groups (e.g., -F, Ti-O), as evidenced in the XPS spectra (Figure S16a, Supporting Information). In contrast, the oxygen-containing groups of the GO nanosheets were mostly removed after thermal treatment (Figure S16b, Supporting Information). As illustrated in Figure 4f, the electronegativity of the MXene surfaces can attract the positively charged vinyl groups of 3-NS, favoring the hydrogenation of 3-NS at the surface/edge sites of the Pt nanosheets via hydrogen spillover effects. [34] On the other hand, the rGO surfaces lack prominent electronegative groups, so the adsorption of 3-NS to either the rGO substrate or the Pt nanoclusters was weak, retarding the catalytic activity in the hydrogenation of 3-NS.
With distinct Pt morphologies and 2DM electronegativities, the three Pt@2DM catalysts promoted different reaction pathways in the hydrogenation of 3-NS. To understand the observed reaction selectivity, the adsorption behavior of 3-NS was interpreted via 1) the possible adsorption modes of 3-NS and 2) the adsorption strengths of the two functional groups of 3-NS (C=C and NO2). First, the adsorption mode of 3-NS is sensitive to the Pt structure. [35] As the dimensions of Pt nanocrystals decrease, the adsorption of the vinyl group (C=C) can transition from the ethylidyne mode on threefold sites, to the di-σ mode on twofold sites, to the π-bonded mode on isolated single atoms, with the adsorption strengths ranked as ethylidyne mode > di-σ mode > π-bonded mode. [36] When the Pt particle size is small enough, such as single atoms, the supported Pt catalysts do not adsorb unsaturated vinyl groups, which leads to a high selectivity toward hydrogenation of the NO2 group in 3-NS. [37] In contrast, the adsorption of the nitro group (NO2) is not structure-dependent because NO2 has an end-on geometry, and monodentate adsorption (rather than bidentate adsorption) is always preferred on the catalyst. Regarding electronic properties, the vinyl group is electron-rich, whereas the nitro group is electron-deficient; thus nucleophilic sites on the catalyst will favor the adsorption of the nitro group while repelling the vinyl group, and vice versa. [35] When the conversion of 3-NS reached 80%, the percentages of the various products, including 3-ENB, 3-VA, 3-EA, and intermediate products, were measured (Figure 4g). First, for the Pt@MXene catalysts, MXene's high electronegativity encouraged interactions between the vinyl groups of 3-NS and the multi-tier Pt nanosheets. Upon adsorption, 3-NS can lie flat on the Pt nanosheets and fully hydrogenate. Figure S17 (Supporting Information) records the time profiles of the products over the Pt@MXene catalysts; the reaction pathway from 3-NS to 3-VA was strongly suppressed, showing 0% 3-VA after 10 min of 3-NS hydrogenation. Second, for the Pt@MMT catalysts, MMT's mild electronegativity allowed either the vinyl or the nitro group of 3-NS to adsorb on the Pt nanosheets followed by hydrogenation, thus yielding equal amounts of 3-VA and 3-ENB among the products. Last, for the Pt@rGO catalysts, the vinyl group of 3-NS adsorbed in either the di-σ or the π-bonded mode, indicating that 3-NS did not lie flat on the Pt surfaces for full hydrogenation. The residual electron pairs remaining on rGO after thermal reduction would repel the negatively charged NO2 group of 3-ENB, making it hard to re-adsorb on the Pt nanoclusters for further hydrogenation.

Pre-Complexed NMI-GO Inks for Microstructural Noble Metal Texturing

Figure 5a shows the complexation method used to prepare a series of NMI-2DM inks with high colloidal stability, including PtCl4-GO, AuCl3-GO, and PtCl4-MMT inks (digital photos in Figure S18, Supporting Information). Specifically, the PtCl4-GO ink was selected for the templated synthesis of microstructural Pt topographies. By mixing a GO dispersion with a PtCl4 solution at different concentrations (ranging from 0.1 to 50 mM), various PtCl4-GO inks were prepared and deposited onto shrink films followed by overnight drying. The ink-coated substrates were then heated above the Tg of polystyrene (≈100 °C) to induce thermal contraction. By harnessing surface instability, the upper PtCl4-GO coatings were mechanically deformed into isotropic crumples and became microstructural templates. Upon annealing and calcination at high temperatures, PtCl4 was thermally reduced to Pt nanoparticles within the rGO multilayers, followed by interlayer nanoparticle assembly. After the carbonaceous components were completely combusted at 800 °C, the resultant Pt replicas were able to replicate the crumple-like topographies of the PtCl4-GO templates.
The success of this complexation method relied heavily on the high colloidal stability of the PtCl4-GO inks. As shown in Figure 5b, the zeta potentials of the TPtN-GO, PtCl4-GO, and K2PtCl4-GO complexes were characterized over a wide range of NMI molarities (from 0.1 to 75 mM). Compared to the TPtN-GO complexes, which quickly flocculated, the PtCl4-GO and K2PtCl4-GO complexes remained fluid-like at all tested molarities (up to 50 mM). To avoid the complication of K+ counterions, the PtCl4-GO complex was adopted for the templated synthesis of Pt replicas. Similar studies were conducted on the AuCl3-GO and PdCl2-GO complexes (Figure S19, Supporting Information). As the PtCl4 molarity in the PtCl4-GO inks increased, the Young's moduli of the PtCl4-GO coatings increased accordingly (Figure S20, Supporting Information), and the crumpling wavelengths of the PtCl4-GO templates after thermal contraction increased respectively (Figure S21, Supporting Information).
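For context on the modulus-wavelength trend, a commonly invoked scaling from film-on-substrate wrinkling mechanics (general background, not a relation derived in this work) predicts that a stiffer coating on the same substrate wrinkles at a longer wavelength:

```latex
% Classical wrinkling wavelength for a stiff film (thickness h, plane-strain
% modulus \bar{E}_f) on a compliant substrate (plane-strain modulus \bar{E}_s):
\begin{equation}
\lambda \;\approx\; 2\pi h \left( \frac{\bar{E}_f}{3\,\bar{E}_s} \right)^{1/3},
\qquad \bar{E} = \frac{E}{1 - \nu^2} .
\end{equation}
```

The cube-root dependence on the film modulus is qualitatively consistent with the observation that stiffer PtCl4-GO coatings yield larger crumpling wavelengths.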
After the annealing and calcination processes in air, the crumple-like topographies were transcribed from the PtCl4-GO templates to the Pt replicas. As shown in the thermal gravimetric analysis (TGA) profiles in Figure S22 (Supporting Information), the GO template partially decomposed at 150 °C and provided a spatially confined environment for the growth and assembly of Pt nanoparticles. At 450 °C, the rGO templates began to oxidize into CO and CO2. As the temperature was elevated to 800 °C, the rGO template was completely removed, leaving a freestanding Pt replica with replicated chaotic crumples (Figure 5e). It is worth mentioning that the presence of Pt nanoparticles slightly enhanced the carbon oxidation rates of the PtCl4-GO sample, consistent with experimental observations in the literature. [38] The TGA profile of an AuCl3-GO template is shown in Figure S23 (Supporting Information); a similar profile is observed in comparison with PtCl4-GO, but the presence of Au nanoparticles does not enhance the carbon oxidation rates. High-temperature annealing and calcination of the PtCl4-GO templates yielded metallic Pt replicas, and three diffraction peaks at 40.3°, 46.7°, and 67.9° were observed in the XRD analysis (Figure 5c), which corresponded to the (111), (200), and (220) reflections, respectively, consistent with the face-centered cubic structure of platinum (JCPDS Card 04-0802).
The characteristic crumpling features of the Pt replicas were tunable by adjusting the PtCl4 molarities in the PtCl4-GO inks. When the PtCl4-GO template (from the ink at 0.1 mM) was calcined, the resulting Pt product was non-continuous and did not faithfully replicate the isotropic crumples. By increasing the PtCl4 molarities to 1 and 10 mM, the Pt replicas were able to replicate fine microtextural features, while the crumpling wavelengths were reduced to 20%-50% of the wavelengths of the PtCl4-GO templates. When the PtCl4 molarity increased further to 50 mM, the excess noble metal precursors overwhelmed the spatially confining effect and led to the appearance of a bulk metal phase.
To further characterize the underlying metallic nanostructures, we performed TEM analyses on these crumple-like Pt replicas (inset of Figure 5e). As the PtCl4 molarities in the PtCl4-GO inks increased, the porosity of the resulting Pt replicas continued to decrease. The Pt replica (from the ink at 10 mM) consisted of close-packed, interconnected, annealed Pt nanoparticle arrays. These TEM images provided mechanistic insights into the growth and assembly of the Pt replicas. The thermal decomposition of the PtCl4-GO templates initially produces atomically dispersed Pt species or ultrafine clusters that are spatially confined in the 2D gallery spaces. The persistence of the rGO template through most of the annealing process causes preferential in-plane (x-y) mobility of these Pt clusters, so that the growth and annealing of the Pt nanoparticles produce the interconnected, array-like sub-structures that are the basic building blocks of the final Pt replicas. At higher temperatures, the rGO templates begin to decompose, gradually losing the spatial confinement effect that had fully suppressed z-directional expansion. The broad applicability of this approach was further illustrated by extending the templated synthesis to other noble metals (e.g., Au) (Figure S24, Supporting Information). The SEM and TEM images of the microtextured Au replicas are shown in Figure S25 (Supporting Information). Compared to the microtextured Pt replica, the Au replica did not replicate the crumple-like microstructures of the AuCl3-GO templates to the same degree. The dampened replication capability may be due to the fact that Au nanoparticles have a high tendency to fuse together, thus losing some degree of microstructural templating.
Conclusion
In summary, the use of NMI species for 2DM assembly demonstrated several advances toward developing a collection of NMI-2DM complexes that retain the nanoscale functions of the 2DM units and induce synergistic properties. Depending on the resultant zeta potentials, the NMI-2DM complexes could be fabricated into either mechanically robust hydrogels or electrostatically stable inks. First, by utilizing TPtN-loaded assembly platforms, various TPtN-2DM hydrogels and aerogels were produced in a facile manner. Following freeze drying and thermal reduction, various Pt@2DM heterostructures were synthesized and demonstrated high, substrate-dependent catalytic performance for 3-NS hydrogenation. Second, PtCl4 was incorporated into 2DM dispersions to prepare a series of PtCl4-2DM inks with high colloidal stability. By adopting the PtCl4-GO ink, various Pt replicas with replicated topographies were synthesized with accurate control of grain sizes and porosities. It is worth noting that the Pt@MXene aerogels have promising potential in the field of heterostructured catalysts. From our perspective, an ideal hydrogenation catalyst that can achieve high activity and selectivity to 3-VA could be a heterostructure composed of single Pt atoms scattered across a negatively charged MXene substrate. First, when the Pt particle size decreases to only a few atoms, these tiny Pt catalysts become energetically unfavorable for adsorbing the unsaturated vinyl groups, leading to a high selectivity toward hydrogenation of the NO2 group in 3-NS. Second, since the NO2 group is electron-deficient, the MXene substrate should be functionalized with more negatively charged groups to facilitate the electrostatic diffusion of 3-NS to the surfaces of the Pt@MXene catalysts. Future work will involve fine-tuning the Pt shape from 2D nanosheets to single atoms on the MXene surfaces to promote the adsorption of nitro groups over vinyl groups at higher diffusion rates. Additionally, understanding the fundamental interactions between NMI species (and noble metals) and 2DM nanosheets is valuable for environmental applications, including wastewater treatment, [39] and electrochemical energy storage. [40]
Preparation of Ti3C2Tx MXene Nanosheets: Ti3C2Tx MXene nanosheets were prepared according to previous work. [22] LiF (3.0 g) was added to 40 mL of 9.0 M HCl solution under vigorous stirring. After the dissolution of LiF, 1.0 g of Ti3AlC2 MAX powder was slowly added into the HF-containing solution, and the mixture was stirred at 35 °C for 24 h. Next, the suspension was split into two centrifuge tubes, and 5.0 mL of cold HCl solution (2.0 M) was added to each tube. The suspension was then centrifuged at 8000 rpm for 5 min, and the supernatant was replaced with cold HCl solution (2.0 M); this process was repeated three times. Afterward, the solid residue was washed with DI water multiple times until the pH value increased to ≈6.0. Subsequently, 35 mL of DI water was added to each tube to re-disperse the washed solid residue, and the mixture was sonicated for 1 h and centrifuged at 3000 rpm for 30 min. The supernatant was collected as the final dispersion of Ti3C2Tx MXene nanosheets with a concentration of ≈10-12 mg mL−1.
Preparation of MMT Nanosheets: MMT nanosheets were prepared by mixing the as-received MMT crystals in DI water at 10 mg mL −1 followed by ultrasonication for 2 h and continuous stirring for 12 h. The dispersion was then centrifuged at 4000 rpm for 60 min, and the supernatant was collected as the final dispersion of MMT nanosheets with a concentration of ≈7 mg mL −1 . The MMT dispersion was diluted to 5 mg mL −1 for further usage.
Fabrication of Crumple-Like GO Topographies: A clear shrink film was cut into multiple squares with dimensions of 10 × 10 cm2 and cleaned with ethanol. Afterward, the cut shrink film was treated with atmospheric plasma (a Harrick Plasma expanded plasma cleaner composed of a borosilicate glass chamber and a 50-60 Hz, 373 W generator). The chamber pressure was pumped down to and maintained at 0.5-1.5 torr, and high-radio-frequency atmospheric plasma was generated for 5 min. To obtain a planar GO thin film with a controlled thickness varying from 0.2 to 10 µm, the areal loadings of GO nanosheets were adjusted accordingly on a hydrophobic polyvinylidene fluoride (PVDF) membrane (0.22 µm pore, Merck Millipore) followed by vacuum filtration for 8 h. After air drying, the GO membrane was detached from the PVDF membrane in an ethanol bath and transferred onto a plasma-treated shrink film. Afterward, the GO-coated shrink film was heated in an oven at 150 °C for 7 min to induce thermal shrinkage. By harnessing interfacial instability, the upper-layer GO coating was deformed into the crumple-like GO topography, which was utilized as a versatile platform to assemble noble metal ions (NMIs) and 2D-material (2DM) nanosheets into NMI-2DM complexes.
Measurement of Water Storage Volumes on Crumple-Like GO Topographies: The crumple-like GO topographies were first weighed to obtain their dry masses. DI water was then dropped on the GO topographies, and excess water was then removed by tilting the substrate. The water-filled GO topographies were then weighed to obtain their wet masses. The difference between dry and wet masses was regarded as the water storage mass on a crumple-like GO topography. The water storage volumes were obtained by dividing the water storage mass by the water density. The measurement of water storage volumes was repeated multiple times.
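The uptake arithmetic described above reduces to a one-line calculation; a minimal sketch with illustrative numbers:

```python
# Areal water uptake of a crumple-like GO topography from dry/wet masses.
def water_uptake(dry_mg: float, wet_mg: float, area_cm2: float):
    """Return uptake in mg cm^-2 and stored volume in uL
    (density of water taken as 1.0 mg uL^-1)."""
    uptake_mg = wet_mg - dry_mg
    return uptake_mg / area_cm2, uptake_mg / 1.0

# Illustrative example for a 12 cm^2 platform:
areal, volume_ul = water_uptake(dry_mg=5.2, wet_mg=161.2, area_cm2=12.0)
print(f"{areal:.1f} mg cm^-2, {volume_ul:.0f} uL stored")
```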
Fabrication of NMI-2DM Hydrogels and Aerogels: Noble metal salt-2DM complexation hydrogels (abbreviated as NMI-2DM hydrogels) were prepared on the crumple-like GO topographies through a cation-induced gelation process. For each noble metal salt, the corresponding solution was prepared at 0.1 M, and ≈150 µL of it was then deposited on top of a crumple-like GO topography with an area of ≈12 cm2. The metal salt solution was allowed 10 min to fully permeate the microchannels of the GO topography. After the surplus solution was removed, 3.0 mL of 2DM dispersion (at 10 mg mL−1; GO, MXene, or MMT) was deposited on top of the GO topography in a swift motion, forming a thick, uniform NMI-2DM hydrogel layer. The NMI-2DM hydrogel then sat for 4 h to complete the ionic crosslinking reaction. The NMI-2DM hydrogel was then placed into a sealed petri dish and frozen with liquid nitrogen. The frozen sample was lyophilized in a freeze dryer (at −85 °C and 10−3 atm, Labconco FreeZone) for two days, yielding an NMI-2DM aerogel.
In Situ Observation via IRM: In situ observation of NMI-2DM hydrogel formation was performed on an inverted Zeiss optical microscope. A field aperture with a size of 1 mm was placed in the back focal plane of the objective lens to limit the lens's numerical aperture and increase image contrast. A green LED with a wavelength of 511 ± 22 nm was used together with a 20× objective lens. Imaging was performed through a 0.17 mm glass coverslip to generate a common path interferometer, resulting from the light reflection off the glass-water interface.
A crumple-like MMT topography was first created by drop-casting an MMT dispersion onto a plasma-treated shrink film at an areal density of 1.0 mg cm−2. Once air-dried, the planar MMT-coated shrink film was heated in an oven at 150 °C for 7 min to release the pre-strain, contracting the underlying polystyrene substrate, and the upper-layer MMT coating was deformed into the crumple-like topography. The polystyrene substrate was then dissolved in a dichloromethane bath, and the freestanding MMT topography was transferred onto a glass slide. Next, the MMT topography received ≈100 µL of metal salt solution (0.1 M PtCl4 or 1 mM TPtN) or deionized water and was then placed under the IRM. Next, ≈10 µL of GO dispersion (at 1 × 10−3 mg mL−1) was deposited onto the MMT topography, and the IRM video was recorded.
Preparation of Pt-Decorated 2DM Aerogels (Pt@2DM Aerogels): By utilizing the crumple-like GO topographies as the assembly platforms, TPtN-GO, TPtN-MXene, and TPtN-MMT aerogels were obtained and then thermally reduced in a tube furnace (OTF-1200X-S, MTI Corporation). Each TPtN-2DM aerogel was first placed in an alumina crucible inside a quartz tube. The tube furnace was then purged with forming gas (5% of H 2 and 95% of N 2 ) at 160 mL min −1 for 15 min. Last, the TPtN-2DM aerogel was reduced at 250 °C for 2 h, obtaining a Pt@2DM aerogel.
3-Nitrostyrene Hydrogenation Measurement: The catalytic performance of the Pt@2DM aerogels was tested in a 50 mL stainless steel autoclave with an inner Teflon coating. For each test, 10 mg of the respective Pt@2DM aerogel was dispersed in 12 mL of ethanol along with 0.6 mmol of 3-nitrostyrene (3-NS) and 0.24 mmol of 1-butanol as an internal standard. The reactor was then sealed and purged with 4 bar H2 (Airgas, research grade) three times. Next, the reactor was pressurized to 3 bar, heated to 30 °C, and simultaneously stirred at 900 rpm. The products were collected and filtered at pre-determined intervals using a syringe filter. The product solution was analyzed using a gas chromatography instrument (Agilent 7890A) equipped with a methylsiloxane capillary column (HP-1, 50.0 m × 320 µm × 0.52 µm) and a flame ionization detector (FID). The selectivity of the products was calculated based on the moles of product over the moles of reacted 3-NS.
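The activity and selectivity figures follow from simple mole arithmetic; a sketch, in which the converted amounts are assumed placeholders and only the catalyst charge (10 mg, ≈9.5 wt.% Pt) comes from the text:

```python
# Reaction rate (mol_3NS mol_Pt^-1 h^-1) and product selectivity.
M_PT = 195.08  # g/mol

def rate_and_selectivity(n_3ns_reacted_mmol, t_h, n_product_mmol,
                         cat_mg=10.0, pt_wtfrac=0.095):
    n_pt_mmol = cat_mg * pt_wtfrac / M_PT   # mg / (g/mol) = mmol
    rate = n_3ns_reacted_mmol / n_pt_mmol / t_h
    selectivity = n_product_mmol / n_3ns_reacted_mmol
    return rate, selectivity

# Assumed example: 0.48 of 0.6 mmol 3-NS converted in 1 h, 0.30 mmol 3-ENB formed.
rate, sel = rate_and_selectivity(0.48, 1.0, 0.30)
print(f"rate ~ {rate:.0f} mol/mol/h, 3-ENB selectivity ~ {sel:.0%}")
```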
Preparation of Noble Metal Salt-GO Inks (NMI-GO Inks): Various NMI-GO inks were prepared by mixing a GO dispersion (at 10 mg mL⁻¹) with solutions of noble metal salts (0.1 M AuCl₃ or PtCl₄) at different volume ratios. The concentration of noble metal ions was tuned from 0.1 to 50 mM.
Fabrication of Crumple-Like Noble Metal Topographies: The as-prepared NMI-GO ink was first drop-casted onto a plasma-treated shrink film at an areal density of 1 mg cm⁻². Once air-dried, the planar NMI-GO-coated shrink film was placed between two baking sheets and then heated in an oven at 150 °C for 10 min to release the pre-strain and contract the underlying polystyrene substrate. Afterward, the resulting crumple-like NMI-GO topography was calcined at a high temperature in air for 2 h (PtCl₄-GO sample at 800 °C, AuCl₃-GO sample at 700 °C); the heating rate was set to 5 °C min⁻¹. This yielded a crumple-like noble metal topography.
Characterization: The surface morphologies of crumple-like 2DM topographies, noble metal topographies, and NMI-2DM aerogels were investigated using a field emission scanning electron microscope (FESEM, Hitachi SU-70) operating at 10.0 kV for low-, medium-, and high-resolution imaging, equipped with an EDS for elemental analyses. Before SEM imaging, all the samples were sputtered with a layer of AuPd (≈1.0 nm). The 3D representations of crumple-like GO topographies were imaged by a Keyence microscope (VK-X3000 Series). Transmission electron microscopy (TEM) was performed using a JEM 2100 FEG TEM/STEM at an acceleration voltage of 200 kV. The noble metal topographies and NMI-2DM aerogels were suspended in ethanol and sonicated for 30 min, and the dispersions were then dropped on lacey carbon grids for TEM. High-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) imaging was performed with a probe-corrected JEOL JEM-ARM200F TEM at an acceleration voltage of 200 kV. Thermogravimetric analysis (TGA) was carried out on a Shimadzu TGA-50 Series system, with a heating rate of 5 °C min⁻¹ up to 225 °C and 10 °C min⁻¹ thereafter. All of the TGA measurements were carried out under an N₂ flow rate of 60 mL min⁻¹, and the sample masses were controlled to be ≈20 mg.
The crystallinity of the NMI-2DM aerogels and noble metal topographies was identified by XRD using a Bruker D8 Advance instrument with a LynxEye detector and Cu Kα radiation (λ = 1.5418 Å). The rheology of various NMI-2DM hydrogels was evaluated with an AR2000 stress-controlled rheometer between a 20-mm parallel plate and a conical plate at 25 °C. Dynamic frequency sweeps were performed in the linear viscoelastic region of each hydrogel sample. Zeta potentials of various NMI-2DM complexes were measured by a Nano-ZS90 Zetasizer (Malvern Instruments) with DTS1070 capillary cells. XPS was obtained by an X-ray photoelectron spectrometer (Kratos AXIS UltraDLD) using a microfocused Al X-ray beam (100 µm, 25 W) with a photoelectron take-off angle of 90°.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author. | 2023-05-10T15:05:43.685Z | 2023-05-07T00:00:00.000 | {
"year": 2023,
"sha1": "e1e5301dacc18d27b990d23fd03ae213657c8420",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adfm.202215222",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "a3a7ddee2a5ec16d4a75700c419c8b1517944183",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": []
} |
17780762 | pes2o/s2orc | v3-fos-license | Traumatic Brain Injury in Late Adolescent Rats: Effects on Adulthood Memory and Anxiety
The consequences of traumatic brain injury (TBI) sustained during late adolescence (7 weeks old) on spontaneous object recognition memory and on anxiety-like behaviors in the elevated plus maze were tested in rats during adulthood. Testing took place at two different post injury times, in separate groups: three and six weeks, when animals were 10 and 13 weeks old, respectively. The rats were either submitted to controlled cortical impact injury, an experimental model of focal TBI with contusion, or were sham-operated. TBI animals failed to remember the familiar object and had a significantly lower performance than sham-operated animals, indicating memory disruption, when the retention delay was 24 h, but not when it was 3 h. TBI did not have any significant effect on the main anxiety-related behaviors, but it reduced time in the central platform of the elevated plus maze. The effects of TBI on memory and on anxiety-like behaviors were similar at the two post injury times. In both TBI and sham-operated groups, animals tested six weeks after surgery had lower anxiety-related indices than those tested at three weeks, an effect that might be indicative of reduced anxiety levels with increasing age. In summary, focal TBI with contusion sustained during late adolescence led to object recognition memory deficits in a 24-h test during adulthood, but did not have a major impact on anxiety-like behaviors. Memory deficits persisted for at least six weeks after injury, indicating that spontaneous modifications of these functional disturbances did not take place along this time span.
Introduction
Traumatic brain injury (TBI) afflicts millions of people worldwide. In contrast to other kinds of acquired brain damage, which show their highest prevalence in old populations, TBI is the leading source of acquired brain damage in children and youngsters. It causes a large range of deficits, which can be maintained or even aggravated over time, leading to subsequent long-term personal, social and economic burdens (Thurman, 2014). In children and adolescents the prevalence of single and repeated events of mild TBI leading to concussive damage is higher than that of more severe injuries, up to the point that they have been considered a silent epidemic that should be more fully characterized (Petraglia, Dashnaw, Turner, & Bailes, 2014).
However, a review of epidemiological research on pediatric TBI carried out in North America, Europe, Australia and New Zealand indicated that the incidence of hospitalized (and thus often moderate or severe) brain injuries, as well as fatal injuries, consistently peaked among late adolescents (compared to younger ages), and that males had a higher risk of injury than females (Thurman, 2014).
TBI can be roughly classified as either focal or diffuse. Focal damage results from a direct impact to the skull and produces focal contusions as well as hematomas, while diffuse damage, which causes widespread axonal injury, is the result of rapid acceleration-deceleration of the head. It is estimated, though, that 50% or more of patients with moderate-to-severe TBI exhibit a combination of focal and diffuse damage (Andriessen, Jacobs, & Vos, 2010). In both focal and diffuse TBI the mechanisms of injury can be classified as either primary or secondary. Primary injury is a direct consequence of mechanical deformation of brain tissue occurring immediately after being exposed to an external force, giving rise to contusion, bleeding and axonal rupture, with subsequent necrotic neuronal and glial cell death. Secondary injury, which evolves over a period of hours to days, and even months, after the primary insult, is the result of biochemical and physiological events, such as ischemia, inflammation, altered neuronal and glial functions, loss of membrane integrity, etc., that ultimately lead to neuronal cell death (O'Connor, Smyth, & Gilchrist, 2011). In parallel with neurodegenerative changes, other phenomena with potential reparative capacity, such as cell proliferation, neurogenesis, and a wide array of plasticity-related mechanisms, can also be observed (Saha, Jaber, & Gaillard, 2012). The conjunction of long-term neurodegenerative and neuroreparative phenomena leads to dynamic histological and functional changes over time.
Multiple animal models have been designed in order to replicate the diverse physiopathological and functional consequences of human TBI. The most widely used are lateral fluid percussion (LFP), controlled cortical impact (CCI), and weight-drop. In LFP a fluid wave impacts the dura and fills interstitial spaces, causing focal brain injury as well as a certain degree of diffuse injury. In CCI a pneumatic device causes a piston to impact on the dura at a predetermined speed and depth. CCI is considered a model of focal contusion, but it also involves widespread damage to both gray and white matter regions. In weight-drop models, the skull is exposed (with or without a craniotomy) to a free-falling weight. Variations of these models have also been designed in order to model mild concussion and more diffuse patterns of injury. These variations include medial fluid percussion and closed head injury by means of a piston impacting on the skull (and not the dura) or by dropping a weight on a disk cemented onto the skull (Gold et al., 2013; O'Connor et al., 2011; Xiong, Mahmood, & Chopp, 2013).
In general terms, these studies indicate progressive enhancement of lesion severity over time, although evolution profiles may vary depending on the outcome measure targeted (Osier, Carlson, DeSana, & Dixon, 2014).
Temporal evolution of functional deficits has also been studied with several TBI animal models by comparing memory performance at different post injury times. In rodents submitted to TBI at adult ages, persistence of spatial memory deficits has been reported after LFP over periods of weeks (Bramlett, Green, & Dietrich, 1997), and even up to one year (Pierce et al., 1998), but there is one report indicating that the memory deficits found in the first month after the initial damage were no longer present at a later time; this functional recovery might be related to increased neurogenesis and survival of new neurons (Sun et al., 2007). Using a non-spatial memory task, object recognition memory (ORM), persistence of deficits has also been described after closed head injury (Chen et al., 2013; Siopi et al., 2012) and CCI (Darwish et al., 2014). However, there are also instances of partial recovery over time. For example, Tsenter and colleagues (Tsenter et al., 2008) found that ORM deficits induced by closed head injury in mice were lower 28 days after lesion than 3 days post injury. Finally, some reports indicate that memory deficits can show a delayed appearance. For example, Milman and colleagues (Milman, Rosenberg, Weizman, & Pick, 2005) found memory deficits in the water T-maze and passive avoidance in mice submitted to weight drop injury when testing took place 30 and 90 days post injury, but not 7 days post injury.
Besides memory and other cognitive functions, there is also an elevated prevalence of emotional alterations in TBI patients (Malkesman, Tucker, Ozl, & McCabe, 2013), which can significantly impair quality of life. Animal studies on this topic have led to inconsistent results, perhaps as a consequence of differences in TBI models and severity, as well as in other methodological considerations (specific tests used, post injury testing times, etc.). For example, in adult rodents increased anxiety has been reported after LFP (Liu et al., 2010), closed head injury (Meyer, Davies, Barr, Manzerra, & Forster, 2012), and an impact acceleration variation of weight drop injury (Pandey, Yadav, Mahesh, & Rajkumar, 2009), while decreased anxiety levels have been found after CCI (Washington et al., 2012). Finally, lack of changes in anxiety and emotional reactivity has also been described after closed head injury (Siopi et al., 2012). Anyhow, changes in emotional processes can interfere with performance in memory tasks, and this fact underscores the importance of taking measures of emotional reactivity when assessing learning and memory functions.
Age at the time of brain injury is a significant factor affecting long-term functional outcome. On the one hand, it has been described that TBI leads to poorer functional outcome in old rats compared to adult and young rats (Mehan & Strauss, 2012). On the other hand, there is evidence that brain injury in juveniles can lead to increased severity of symptoms, compared to injury in adulthood, probably as a result of disrupted neurodevelopment (Kamper et al., 2013). In spite of that, the number of animal studies on this topic using ages equivalent to human infancy and childhood is relatively scarce (although it has increased in the latter years). This scarcity is even more pronounced for adolescence, a period of high vulnerability to stress and other vital experiences (Lynn & Brown, 2010; Schneider, 2013), so that it has been claimed to be a hole in the animal literature concerning the long-term effects of TBI sustained during this period of life (Hartman, 2011). Moreover, the number of studies that have tested emotional and cognitive function after different post injury times in order to analyze the temporal evolution of these functions after TBI is low. These studies have been mainly carried out with rodents lesioned at ages equivalent to human neonates and toddlers and have used several TBI models, such as CCI (Ajao et al., 2012; Kamper et al., 2013), a modified midline CCI that induces concussion-like injury (Huh, Widing, & Raghupathi, 2008), a closed head CCI (Huh, Widing, & Raghupathi, 2011), or impact acceleration injury by means of weight drop (Adelson et al., 2001). Long-term evolution (up to 12 weeks) of memory deficits has also been examined in rats that sustained CCI injury at an age (4 weeks old) corresponding approximately to late childhood or the beginning of adolescence (Park et al., 2014), but, to our knowledge, studies comparing memory performance and/or emotional disturbances after different long-term post injury times in rodents submitted to TBI during late adolescence are lacking.
To sum up, at any age TBI gives rise to a complex conjunction of long-term neurodegenerative and neuroreparative phenomena, but the former tend to win the battle over the latter if no effective treatments are administered, so that pervasive deficits are common, especially after moderate or severe degrees of injury. Moreover, the neurodegenerative and neuroreparative phenomena initiated by TBI can interfere with normal brain development. For these reasons it was hypothesized that TBI in late adolescent animals would lead to persistent deficits of memory and emotional functions during adulthood. To test this hypothesis we have studied, in male rats, the long-term effects of TBI sustained during late adolescence (7 weeks old) on ORM and anxiety-related behaviors during adulthood at two different post injury times: three and six weeks. TBI was induced by means of CCI, which, as stated before, is considered a model of focal brain damage, affecting mainly cortical and subcortical structures proximal to the impact site, but also producing widespread gray and white matter damage (Budde, Janes, Gold, Turtzo, & Frank, 2011; Hall et al., 2005). In addition, CCI reproduces most (although not all) pathophysiological and functional features of human TBI and allows rather precise control of injury severity (Gold et al., 2013; Xiong et al., 2013). CCI has been widely used in adult rodents and, to a lower extent, in rodents of pediatric ages, and it has been characterized as a useful model of focal experimental TBI in immature rats (Adelson, Fellows-Mayle, Kochanek, & Dixon, 2013). According to other works using the same or similar parameters (Turtzo et al., 2012; Yu et al., 2009), the degree of injury inflicted by the CCI parameters applied can be considered moderate or moderate-to-severe.
Materials and Methods
Ethics and animal welfare. All procedures were performed in compliance with the European Community Council Directive for the care and use of laboratory animals (86/609/EEC), and with the related directive of the Autonomous Government of Catalonia (DOGC 2073 10/7/1995).
Fifty-two Sprague-Dawley albino rats (Prolabor, Barcelona, Spain), six weeks old on their arrival at the laboratory, were used. Upon arrival, they were kept in the quarantine room for one week. Thereafter, they were singly housed in 52 x 28 x 18 cm cages.
The age of the animals at the beginning of the experimental procedures was seven weeks, and their mean initial body weight was 262.77 g (SD ± 27.07). Food and water were available ad libitum. The animals were kept under conditions of controlled temperature (20-22 °C) and humidity (40-70%), and maintained on a 12-h light-dark cycle (lights on at 8:00 a.m.).
Experimental groups
Four groups of animals were used: TBI-3W, Sh-3W, TBI-6W and Sh-6W. These four groups were the result of combining the following two conditions: 1) lesion: TBI (TBI groups) or sham operation (Sh groups), and 2) post-surgery delay: three (TBI-3W and Sh-3W groups) or six weeks (TBI-6W and Sh-6W groups). Assignment of the rats to the groups was random.
Stereotaxic surgery and TBI model (CCI)
For stereotaxic surgery, anesthesia was induced with 5% isoflurane (Forane, Abbot Laboratories, SA, Madrid, Spain) in oxygen (2 l/min) in a Plexiglas chamber (20 x 13 x 13 cm) for 7 min. The animals were then placed in a stereotaxic frame (David Kopf Instruments, Tujunga, USA) and anesthesia was continued by delivering 2% isoflurane in oxygen (1 l/min) through a nose mask. The scalp was incised on the midline and, after the skull was exposed, a craniectomy (4 mm diameter) was performed over the right hemisphere (4.5 mm posterior to Bregma and 3 mm from the midline). The pneumatically operated TBI device (Pittsburgh Precision Instruments, Inc., USA), with a 3 mm tip diameter, impacted the brain at a velocity of 6 m/s, reaching a depth of 2 mm below the dura mater, and remained in the brain for 150 ms. The impactor rod was angled 15° to the vertical to maintain a perpendicular position with reference to the tangential plane of the brain curvature at the impact surface. A transducer connected to the impactor measured velocity and duration to verify consistency. Thereafter, the scalp was sutured. To control postoperative pain, a single 0.2 ml subcutaneous injection of buprenorphine (Buprex, Schering-Plough, SA, Madrid, Spain) was administered.
Animals of the Sh-3W and Sh-6W groups were operated on in a similar way, except that no impact was applied.
Elevated plus maze test
The animals were tested in an elevated plus maze (EPM) either three (TBI-3W and Sh-3W groups) or six (TBI-6W and Sh-6W groups) weeks after being operated.
The EPM (Cibertec S.A., Madrid, Spain) consisted of four black methacrylate arms arranged in the shape of a plus sign. Each arm was 10 cm wide, 49 cm long and elevated 31.5 cm above the ground. The four arms were joined at the centre by a 10 cm x 10 cm square platform. Two of the arms opposite each other had no sides and were open. The other two arms were closed on the sides, with 40 cm high walls, but open on the top. The open arms had 1 cm high edges as a tactile guide to prevent the animals from falling off these arms. The source of light was a light bulb suspended 1.6 m above the centre of the EPM, giving an illumination of approximately 60 lux on the floor of the central platform, 80 lux on the floor of the open arms, and 30 lux on the floor of the closed arms. The rats were placed in the centre of the maze, always facing the same open arm. Each animal was tested for 5 min in a single session. An automated system (Test 4B, Cibertec S.A., Madrid, Spain), consisting of ten pairs of photoelectric cells strategically located in several parts of the apparatus, enabled us to record exploratory behavior in the EPM. The measurements recorded for all the subjects were: time spent in open arms, closed arms and central platform; number of open, closed, and total arm entries; incursions into the end of the open arms; defecations and micturitions; and grooming and rearing episodes. The open arm entries/total arm entries ratio (entries ratio) and the time in open arms/time in all four arms ratio (time ratio) were also calculated for all the subjects. During the EPM session a masking noise was provided by an electric fan. Before the first animal and between subjects, the EPM was carefully wiped with 70% ethanol and dried in order to avoid the presence of olfactory cues.
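The two derived EPM indices translate directly into code; a minimal sketch (the function name is hypothetical; counts and times come from the automated recording system):

```python
def epm_ratios(entries_open, entries_closed, time_open, time_closed):
    """Entries ratio and time ratio as defined for the EPM session.

    entries_*: arm entry counts; time_*: seconds spent in open/closed arms
    (time in the central platform is excluded from the time ratio).
    """
    entries_ratio = entries_open / (entries_open + entries_closed)
    time_ratio = time_open / (time_open + time_closed)
    return entries_ratio, time_ratio

# Example: 6 open and 10 closed entries, 45 s in open vs 200 s in closed arms
print(epm_ratios(6, 10, 45.0, 200.0))  # -> (0.375, ~0.184)
```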
Object recognition memory (ORM).
Object recognition memory procedures were started the day after testing in the EPM. Training was carried out in an open box (65.5 cm width x 65.5 cm length x 35 cm height) made of a conglomerate covered with brown melamine and enclosed in a sound-attenuating cage (72 cm width x 72 cm length x 157 cm height) made from white melamine, and ventilated by an extractor fan. The illumination on the floor of the box was 30 lux. The objects used varied in shape, color and size, and consisted of Lego pieces, a hanger and a drink can. They were fixed to the floor of the box with double-sided adhesive tape so that the rats could not move them. They were not known to have any ethological significance for the rats, and had never been seen by the animals. A prior pilot study had shown that rats of the same strain and age had no spontaneous preference for any of them. The objects for the recognition task were available in duplicate copies. All behavioral sessions were recorded with a video camera mounted above the experimental apparatus and controlled with the video tracking software ANY-Maze (Stoelting Europe, Dublin, Ireland). All the measures were acquired through the ANY-Maze software, except for object exploration, which was scored off-line by a trained observer who was unaware of the treatment condition and the position of novel and familiar objects. To avoid the presence of olfactory cues, the apparatus and objects were thoroughly cleaned with a solution of 70% alcohol in distilled water and dried before the first rat, and after each animal.
To habituate the animals to the experimental box, three habituation sessions were carried out (two on the same day, separated by a 2-h interval, and the third one on the following day). The animals were introduced into the recognition memory box, under the same lighting and sound conditions as during training but without any objects, and were allowed to explore it for 12 min. Total distance moved and number of defecations were recorded.
Neophobia test. In order to habituate the animals to the presence of unknown objects, a so-called neophobia test was carried out 2 h after the last habituation session. An unfamiliar object was exposed in the center of the open box. The animals were placed in the box facing away from the object and allowed to explore for 10 min. Latency of first object exploration, total time exploring the object, and total distance moved were recorded. Throughout the experiment, exploration of an object was defined as directing the nose to the object at a distance ≤ 2 cm or touching it with the nose. Turning around or sitting on the object was not considered exploratory behavior.
Acquisition trial and memory tests. ORM training began the day after the neophobia test. During the acquisition session two identical objects were placed near two adjacent corners of the cage. The rat was placed in the experimental apparatus, facing the center of the opposite wall, and was allowed to explore for 15 min. Two memory tests were carried out, the first one 3 h after the acquisition trial, and the second one 24 h after it (that is, 21 h after the first memory test). In the first retention test, one copy of the object used in the acquisition session (familiar object), as well as a novel object, were placed in the same two corners of the cage as in the acquisition trial. The novel object was presented in the left corner for half of the animals, and in the right corner for the other half. In the second retention test, one copy of the object used in the acquisition session (familiar object) and a novel object (different from the one presented in the first retention test) were also presented. The position of the familiar object was exchanged between the first and the second retention tests. The specific objects used as either familiar or novel were balanced, so that all possible combinations were present in each group. These procedures were intended to reduce potential biases due to preferences for a particular location or for a particular object. Both retention tests had a duration of 5 min.
The variables that were recorded were: time spent exploring each object, latency to first object exploration, total object exploration time, and total distance moved. The identity (novel vs. familiar) of the object that was visited first was also recorded in the retention trials, while the time exploring the object in the left and right corners was also recorded in the acquisition session. To determine the possible existence of a side bias, a left-right ratio [(time exploring the object in the left corner − time exploring the object in the right corner)/total object exploration time] was calculated for the acquisition session. A ratio significantly different from 0 indicates a preference for either the left (when values are positive) or the right (when values are negative) object, while a ratio not differing significantly from 0 indicates a lack of preference for either corner.
Two measures were used to analyze cognitive performance: percent novel object exploration time [(time exploring the novel object / total exploration time) x 100], and discrimination index [(time exploring the novel object − time exploring the familiar object)/total time spent on both objects]. A value significantly higher than chance (50%) for percent novel object exploration time, and higher than 0 for the discrimination index, indicates that the animal devotes a significantly higher amount of time to exploring the novel object than the familiar one. Thus, and since ORM is based on the natural tendency of rats to explore novelty, values significantly higher than 50% and 0, respectively, are considered a good recall of the familiar object, whereas values close to 50% or 0, respectively (i.e., animals exploring both objects similarly), are considered to reflect a lack of recall (Akkerman et al., 2012). Both indices of relative exploration make it possible to adjust for any differences in total exploration time (Akkerman et al., 2012).
A criterion of ≥10 s of exploration during the acquisition session was established for animals to be included in the statistical analyses of ORM performance, since low exploration times may distort encoding processes in this task. This criterion was selected because a methodological study found that the minimal amount of exploration required for reliable discrimination performance was 9-10 s (Akkerman et al., 2012).
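The exploration measures defined above can be sketched in a few lines of code (hypothetical function names; the formulas follow the text verbatim):

```python
def side_bias_ratio(t_left, t_right):
    """Left-right ratio for the acquisition session; 0 indicates no side bias."""
    return (t_left - t_right) / (t_left + t_right)

def orm_indices(t_novel, t_familiar):
    """Percent novel object exploration time and discrimination index."""
    total = t_novel + t_familiar
    pct_novel = 100.0 * t_novel / total              # chance level = 50%
    discrimination = (t_novel - t_familiar) / total  # chance level = 0
    return pct_novel, discrimination

def meets_inclusion_criterion(acquisition_exploration_s, minimum_s=10.0):
    """Rats exploring < 10 s during acquisition are excluded from memory analyses."""
    return acquisition_exploration_s >= minimum_s

# Example: 20 s on the novel object, 10 s on the familiar one
print(orm_indices(20.0, 10.0))  # -> (~66.7, ~0.33): good recall of the familiar object
```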
Brain processing.
Twenty-four h after the second memory test, the animals were euthanized with an overdose of sodium pentobarbital (Dolethal, 200 mg/kg; Vetoquinol SA, Madrid, Spain) and intracardially perfused with 4% paraformaldehyde (PFA; Sigma-Aldrich, Madrid, Spain) in phosphate buffer saline. The brains were then extracted and submerged in a 4% PFA solution, rinsed with phosphate buffer, and submerged in a cryoprotective solution (30% sucrose in phosphate buffer) for 3-4 days at 4 °C. Finally, they were stored at −80 °C.
Nissl staining. Coronal slices, 40 µm thick, were obtained using a cryostat (Shandon Cryotome FSE, Thermo Electron Corporation, Waltham, USA) and mounted on gelatin-coated slides. In order to examine the macroscopic effects of TBI, one out of every ten coronal sections throughout the extent of brain tissue where the lesion cavity was visible was stained with cresyl violet in the animals of the TBI-3W and TBI-6W groups. These sections were digitalized with a scanner (HP Scanjet G4050). Using Fiji image analysis software, the digital images were calibrated, and the areas of the following regions in the hemispheres ipsilateral and contralateral to the cortical impact were measured: lesion cavity, hippocampal formation, and lateral ventricle. For volume calculations, the areas obtained in each slice were multiplied by 0.04 mm (slice thickness) and by 10 (number of sections until the next slice analyzed).
In each section, an interhemispheric ratio score was computed [(ipsilateral area / contralateral area) x 100] for the hippocampal formation and the lateral ventricle. The mean ratio scores across all the sections of each rat were used for a more standardized comparison between the two TBI groups. Given that these ratio scores are expected not to differ significantly from 100 if there is no volume change due to brain damage, and to differ significantly from 100 otherwise, one-sample t-tests were also used to determine whether the interhemispheric ratio scores for each group were statistically different from 100.
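A small sketch of this volumetric bookkeeping (hypothetical function names; areas are assumed to be in mm² from the calibrated Fiji images):

```python
def region_volume(section_areas_mm2, slice_width_mm=0.04, sampling_interval=10):
    """Volume estimate: sum of sampled-section areas x slice width x 1-in-10 sampling."""
    return sum(section_areas_mm2) * slice_width_mm * sampling_interval

def mean_interhemispheric_ratio(ipsi_areas, contra_areas):
    """Per-section (ipsilateral/contralateral) x 100, averaged across sections.

    Values near 100 indicate no lesion-related volume change; < 100 indicates
    shrinkage and > 100 expansion of the ipsilateral structure.
    """
    scores = [100.0 * i / c for i, c in zip(ipsi_areas, contra_areas)]
    return sum(scores) / len(scores)

# Example: three sampled sections of the hippocampal formation
print(region_volume([5.2, 6.0, 5.5]))                       # -> 6.68 mm^3
print(mean_interhemispheric_ratio([4.1, 4.4], [5.0, 5.1]))  # < 100: ipsilateral atrophy
```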
Statistical analyses.
The statistical analyses were carried out with the statistical programming language R (R Development Core Team, 2011) and the support of the graphical user interface Deducer (Fellow, 2012).
Most of the behavioral data were analyzed by means of a linear model analysis of variance (ANOVA) with a full factorial 2x2 design. The two independent variables (factors) were lesion (two categories: TBI and sham) and post-surgery delay (two categories: three and six weeks). For the analyses of variables recorded during the habituation tests, repeated measures linear model ANOVA was used, with three repeated measures (one per habituation session) for the dependent variable. When the conditions for application of linear model ANOVA were not fulfilled, non-parametric tests (Kruskal-Wallis rank sum test, comparing the four experimental groups) were used. Two-sample t-tests were applied for the comparison between the histological data of the two TBI groups, as well as for any other comparisons between two conditions. One-sample t-tests were used when it was required to determine whether mean values per group were statistically different from a given reference value.
Finally, contingency table tests were used, for each retention test, to determine whether there was any significant relationship between the experimental condition and the distribution of proportions of animals visiting the novel or the familiar object in the first place.
Statistical significance was set at the level of P<.05.
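The original analyses were run in R; an equivalent full-factorial 2x2 ANOVA can be sketched in Python with statsmodels. The column names and the simulated data below are purely illustrative and stand in for the real per-rat measurements:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "open_entries": rng.poisson(5, 48).astype(float),  # one row per rat
    "lesion": np.repeat(["TBI", "Sham"], 24),          # factor 1
    "delay": np.tile(np.repeat(["3W", "6W"], 12), 2),  # factor 2
})

# Full factorial model: main effects of lesion and delay plus their interaction
model = smf.ols("open_entries ~ lesion * delay", data=df).fit()
print(anova_lm(model, typ=2))
```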
Elevated plus maze
Table 1 indicates the mean (and SD) values per group for each of the measurements taken in the EPM.
A significant main effect of post-surgery delay was found on open arm entries [F(1,45)=5.26; P=.026] and on entries ratio [F(1,45)=4.619; P=.037], while neither the main factor lesion nor the interaction between the two factors was significant.
Specifically, both open arm entries and entries ratios were higher in the animals in the 6 week condition compared to rats in the 3 week condition, regardless of whether they had sustained TBI or had been sham-operated.
No effect of the main factor lesion was found, except for time in the central platform [F(1,45)=6.92; P=.011]. Specifically, TBI animals remained less time in this EPM location than sham rats. No other significant main effects or interactions were found on the EPM measures.
Object Recognition Memory
Habituation trials. A significant main effect of session was found for total distance moved [F(2,90)=47.07; P<.001], while lesion, post-surgery delay, and their interaction were not significant. Polynomial contrasts indicated that the evolution of distances moved fitted a quadratic function (t=4.39; P<.001), with a sharp decrease from the first to the second habituation session.
Exploration and locomotor activity in the neophobia test. One animal in the Sh-6W group was excluded from the analyses of object exploration latency in the neophobia test because it had null object exploration.
No significant main effects or interaction were found for object exploration time and object exploration latency, while a significant lesion x post-surgery delay interaction was found for total distance moved [F(1,45)=5.50; P=.023]. Analyses of nested effects indicated that, in the sham groups, animals tested three weeks after surgery moved longer distances in the neophobia test than rats tested six weeks after surgery [F(1,46)=5.72; P=.021]. In contrast, post-surgery delay had no effect on the distances moved by TBI animals.
Object exploration and locomotor activity in the acquisition and retention trials.
No significant effect of the main factors and their interaction was found for total exploration time, latency of first object exploration, and total distance moved in the acquisition session. A one-sample t-test (bilateral) indicated that the left-right side ratio of the acquisition session did not differ significantly from 0 in any of the groups; thus, no side bias was found for any of the groups.
With regard to the retention tests, no significant effects were found for distance moved, but there was a significant main effect of lesion on total object exploration during the 3-h retention test, indicating that TBI groups explored significantly less than sham groups [F(1,45)=4.50; P=.039]. TBI animals also tended to explore less than sham rats in the 24-h retention test, but this difference only approached significance (P=.061).
In the 3-h and 24-h retention tests, contingency table tests indicated that there was no significant relationship between the experimental condition and the proportions of animals that first visited either the novel or the familiar object; i.e., these proportions were not statistically different across groups.
Discrimination indices and percent novel object exploration in the retention trials.
According to the established criterion (a minimum of 10 s exploration time in the acquisition session), five subjects were excluded from the analyses of discrimination index and percent novel object exploration time (1 in TBI-3W, 2 in TBI-6W and 2 in Sh-6W groups). The final sample for memory analyses was thus composed of 44 subjects: 9 in TBI-3W, 12 in TBI-6W, 12 in Sh-3W, and 11 in Sh-6W.
With regard to the second (24-h) retention test, one-sample t-tests showed that in the two sham groups percent time exploring the novel object was significantly higher than chance (50%) [Sh-3W: t(11)=8.67; P<.001; Sh-6W: t(10)=4.27; P<.001], and the discrimination index was significantly higher than 0 [Sh-3W: t(11)=8.61; P<.001; Sh-6W: t(10)=4.25; P<.001], indicating a good recall of the familiar object. In contrast, these values did not differ significantly from the chance reference values in either TBI group (TBI-3W and TBI-6W), indicating a lack of recall. Linear model ANOVA showed a significant main effect of lesion on percent time exploring the novel object [F(1,40)=8.97; P=.004] and on the discrimination index [F(1,40)=8.94; P=.004] in the 24-h retention test, while neither post-surgery delay nor the interaction between the two factors was significant. This indicates that TBI groups spent less time exploring the novel object and had a lower discrimination index than sham groups.

Measures of brain damage. Figure 2A depicts the mean interhemispheric ratio scores for the volumes of the hippocampal formation and the lateral ventricle. One-sample t-tests (unilateral) indicated that both TBI groups had interhemispheric ratio scores significantly lower than 100 for the hippocampal formation [TBI-3W: t(9)=−3.64; P=.002; TBI-6W: t(13)=−7.26; P<.001], and significantly higher than 100 for the lateral ventricle [TBI-3W: t(9)=3.41; P=.004; TBI-6W: t(13)=2.97; P=.005]. These data indicate that, in both TBI groups, the volume of the hippocampal formation was significantly reduced, and that of the lateral ventricle significantly expanded, in the hemisphere ipsilateral to the lesion compared to the contralateral hemisphere. Two-sample t-test analyses indicated that there were no significant differences between the two TBI groups in the hippocampal and lateral ventricle ratio scores, or in the mean volume of the lesion cavity.
Discussion
The main results of the present work indicate that, in concordance with our hypothesis, TBI sustained during late adolescence induces severe deficits in ORM during adulthood at two different post injury times (three and six weeks), but only when memory was tested 24 h after the acquisition trial and not when it was tested at 3 h. Specifically, TBI animals failed to remember the familiar object in the 24-h retention test and had a performance in this test that was significantly lower than that of sham-operated rats. Memory deficits were similar in the animals tested three weeks after TBI compared to those tested six weeks post-surgery, suggesting that no spontaneous modifications of TBI-related ORM deficits took place along this time span, which is rather long if we take into account that three weeks of a rat's life during early adulthood are estimated to be roughly equivalent to two years of human life (Sengupta, 2013).
The different outcome of TBI on 3-h vs. 24-h retention may be a consequence of differences in the requirements associated with each test, such as memory load, which is higher in the second retention test, as well as of possible differences in the neural circuitry participating in each test. The involvement of the perirhinal cortex in ORM is not under dispute (Brown, Barker, Aggleton, & Warburton, 2012; Winters, Saksida, & Bussey, 2008), but this structure interacts with the hippocampus, and with other regions in and outside the medial temporal lobe, to contribute to this memory task (Warburton & Brown, 2014). The specific role of the hippocampal formation within this circuit has not been fully elucidated, but it seems to play a more significant role in ORM when the delay between the sample phase and memory tests is increased (Hammond, Tull, & Stackman, 2004), and when spatial requirements are emphasized (Warburton & Brown, 2014). Using the same ORM procedures as in the present work, positive correlations have been found between memory performance in a 24-h retention test (but not in a 3-h retention test) and the number of novel immature neurons (cells double labelled for bromodeoxyuridine and doublecortin) in the dentate gyrus in rats (Jacotte-Simancas et al., 2014). Although correlational analyses do not involve any causal relationship, they nonetheless suggest a higher involvement of the hippocampal formation in the 24-h retention test, at least with the specific procedures used here. However, this is not to say that damage to the hippocampal formation is solely responsible for the ORM deficits found. Damage to other structures such as cortical areas, thalamus and striatum (Zhao, Loane, Murray, Stoica, & Faden, 2012), a reduced number of mature neurons in the perirhinal cortex (Jacotte-Simancas et al., 2014), and a wide variety of pathophysiological and neurochemical events (widespread inflammatory reactions and oxidative stress, demyelination, axonal injury, alterations of several neurotransmitter systems, impaired neuroendocrine function, etc.) (Biegon et al., 2004; Budde et al., 2011; Zhang, Han, Zhang, Sun, & Ling, 2014) may also contribute to ORM deficits after CCI in rodents.
The animals were 7 weeks old at the time of the initial injury. Although a clear definition of adolescence in rats is not available, for male rats this age seems to correspond to late adolescence (McCormick & Mathews, 2010; Schneider, 2013).
During adolescence multiple neurodevelopmental phenomena, such as substantial synaptic pruning in several brain areas, changes in the activity of multiple neurotransmitter systems, etc., take place (Schneider, 2013). Some of these neurodevelopmental processes have been linked to the specific neuroendocrine status associated with this period of life, such as the substantial increase of gonadal steroid hormones and growth hormone (GH), as well as to a differential reactivity of the hypothalamic-pituitary-adrenal (HPA) axis and a higher stress vulnerability compared to other ages (Masel & Urban, 2014; McCormick & Mathews, 2010; Schneider, 2013).
Besides a role in development, gonadal hormones, GH, and hormones of the HPA axis also participate in a variety of emotional and cognitive functions (Masel & Urban, 2014; McCormick & Mathews, 2010; Sisk & Zehr, 2005). In turn, altered neuroendocrine function is common after TBI in humans, including children and adolescents (Masel & Urban, 2014). Animal research has reported reduced levels of GH and testosterone after repeated (but not single) mild closed head injury in adolescent rats (Greco, Hovda, & Prins, 2013, 2014). In adult rats, disrupted hormonal stress responses after mild LFP (Griesbach, Hovda, Tio, & Taylor, 2011), and long-term alterations of the HPA axis, gonadal hormones and GH after CCI (Kasturi & Stein, 2009; Taylor et al., 2008, 2010; Zhang et al., 2014) have been reported. Since no endocrine measurements were done in the present work, an influence of altered hormonal status on the behaviors tested can be neither confirmed nor disregarded. For example, it is known that the stress-related increase of corticosterone has disruptive effects on ORM in male rats, while both estrogens and testosterone exert positive modulatory effects on this task (Luine, 2014).
In turn, GH participates in a wide variety of cognitive and non-cognitive functions, and during adolescence this hormone and its downstream mediator insulin-like growth factor 1 regulate the expression of a wide variety of genes related to brain function (Yan et al., 2011). Zhang and colleagues (Zhang et al., 2014) found that CCI-injured adult male rats had lower levels of overall object exploration in an ORM task, similar to what has been found in the present work. Interestingly, object exploration was increased by GH replacement therapy, but only in GH-deficient rats (which constituted 54.28% of all injured animals). No ORM deficits were found by Zhang and colleagues, probably because memory was only tested after a relatively short delay (1 h). Thus, GH deficiency may contribute to altered levels of object exploration during ORM training, but not be its sole cause. Whether GH deficiency might contribute to the detrimental effects of TBI on ORM at longer testing intervals remains to be tested. Anyhow, the possibility that CCI-related endocrine alterations in rodents may lead to different emotional and cognitive outcomes during adolescence than at other periods of life awaits further investigation.
Several studies have analyzed the temporal evolution of memory deficits after TBI in juvenile rodents by testing memory functions at different post injury times.
Using CCI in rats lesioned at postnatal day 17, no spatial memory deficits were found when animals were tested 30 and 60 days post injury (Ajao et al., 2012), but a longer follow-up study found spatial memory deficits 3 and 5 months post injury that seemed to have resolved by 6 months. Using models of diffuse TBI in rats lesioned at postnatal day 17, persistent memory deficits along three months post injury were reported (Adelson, Dixon, & Kochanek, 2000). In another work, diffuse TBI in rats of the same age was reported to induce deficits in the acquisition of a spatial task when the animals were tested a few days post injury as well as 3 weeks later, but spatial memory was only affected at the latter time period (Huh et al., 2008). Thus, although lingering memory deficits are generally reported after TBI in immature rodents, there are also some instances of late-onset disturbances and/or attenuation of the severity of deficits after a long period of time. With regard to rodents lesioned during adolescence, the majority of studies have tested memory function at a single post injury time (for example, Appelberg, Hovda, & Prins, 2009; Jacotte-Simancas et al., 2014; Mannix et al., 2014; Mehan & Strauss, 2012; Prins, Hales, Reger, Giza, & Hovda, 2010). A recent study examined the long-term evolution of step-down avoidance memory in rats submitted to CCI at 4 weeks of age, which would correspond to late childhood/beginning of adolescence. Memory deficits were found to persist from the first post injury time tested (7 days) to the last testing time, at 12 weeks (Park et al., 2014). The results of the present work indicate that CCI also causes persistent memory deficits in late adolescent rats, since impairment of 24-h ORM was present three weeks after injury and remained unchanged well into adulthood, six weeks after injury. TBI animals showed amounts of locomotion (distances moved) similar to those of sham rats in the ORM cage during the acquisition and retention trials. In contrast, they exhibited lower object exploration times in the retention tests, but not in the neophobia and acquisition sessions. These data might reflect a somewhat reduced exploratory drive after CCI in adult and immature rats, in concordance with other reports (Ajao et al., 2012; Wagner, Postal, Darrah, Chen, & Khan, 2007; Zhang et al., 2014). Since ORM is based on exploratory activity, reduced object exploration during retention might have mediated the ORM deficits. This seems unlikely, though, because TBI rats spent a proportion of time exploring the novel object similar to that of sham animals in the first retention test, in spite of lower overall exploration times. Furthermore, the specific ORM measures used are known to minimize any possible influence of overall object exploration on memory (Akkerman et al., 2012). The ORM deficits cannot be attributed, either, to a side bias (which was not detected in any group) or to any putative influence of the object (familiar or novel) that was visited first in the retention tests on percent time exploring the novel object.
In contrast to the detrimental effects on 24-h ORM, TBI only had minor effects on emotional reactivity. Thus, the EPM measures more directly related to anxiety, such as open arm entries and the time ratio, were not affected by TBI. The only significant difference between TBI and sham groups in the EPM was the finding that TBI animals spent less time in the central platform than sham rats, an effect opposite to a report indicating that male (but not female) preadolescent rats submitted to mild TBI/concussion by means of a modified weight drop injury spent more time in the central platform of an EPM than control rats when tested shortly after injury (Mychasiuk, Farran, & Esser, 2014). The meaning of time in the central platform is not clear, but it has been suggested that this measure may be related to risk assessment and decision making (about whether or not to enter the unsafe areas) (Casarrubea et al., 2013; Cruz, Frei, & Graeff, 1994). Thus, focal TBI with contusion might be associated with a lower risk assessment capacity in the face of new and potentially threatening environments, without any significant alteration of anxiety-like behaviors. Comparison of anxiety-related behaviors after TBI in the animal literature has led to rather inconsistent results, as indicated in the introduction section (Malkesman et al., 2013).
With regard to immature rodents, Kamper and colleagues (Kamper et al., 2013), using rats submitted to CCI at postnatal day 17, failed to detect any change in anxiety-like behavior in the zero maze at any of the post injury testing times (3, 5, and 6 months); however, with the same model increased anxiety was found 60 days post injury, but not earlier (Ajao et al., 2012). This indicates that the effects of TBI on anxiety may vary depending on the time elapsed since injury. In concordance with this, using a model of concussion, Mychasiuk and colleagues found that rats injured at 30 days of age did not differ from shams in time in the open arms of an EPM when testing took place one day after injury (Mychasiuk et al., 2014). In contrast, increased anxiety-like behaviors were found when testing took place 33 days after injury, regardless of whether the animals had received a single concussion or two concussive injuries separated by one month. There were, however, some differences between male and female rats (Mychasiuk, Hehar, van Waes, & Esser, 2015). Overall, these results suggest that anxiety-like behaviors may vary depending on post-surgery delay, as well as on other variables, such as age and sex of the animals, kind of animal model of TBI, amount of prior handling, etc.

Anxiety-like behaviors, while not being influenced by TBI, were affected by post-surgery delay. Thus, in both TBI and sham conditions, animals tested six weeks after surgery (when they were 13 weeks old) showed a higher number of entries into the open arms and a higher entries ratio than animals tested three weeks after surgery (at an age of 10 weeks). This time-dependent effect on anxiety-like behaviors may be indicative of a slight reduction of anxiety with age, a finding which would be concordant with the progressive reduction of anxiety-like behaviors reported from adolescence to young adulthood, and from the latter to middle adulthood, by Lynn & Brown (2010). Additionally, or alternatively, the differences between the two time points might be due to the different lengths of the interval in which rats were left essentially undisturbed, from surgery to testing, rather than to age. The surgery-testing interval also had an effect on locomotion during the neophobia test, where Sh-3W animals moved more than Sh-6W rats, while this effect was not seen in TBI rats. These data are concordant with a report of higher locomotion at postnatal day 72 than at postnatal day 117 in rats introduced for the first time into a cage containing novel objects (Saul et al., 2012), a condition with some similarities to the neophobia test. These results might, therefore, reflect the existence of possible age-related differences in locomotion under certain circumstances in sham-operated rats, which would be blocked by TBI.

No significant differences were found in the gross volumetric histological measures of brain damage between the two TBI groups (and, thus, between the two post injury times examined). Therefore, similar histological outcomes paralleled similar behavioral deficits in both TBI groups. The possibility, however, that differences in other measures related to brain damage may exist cannot be disregarded. Also, differences among groups might have arisen at longer follow-up periods, as has been described after several TBI models in adult and juvenile rodents (Kamper et al., 2013; Osier et al., 2014).
In summary, experimental TBI by means of CCI during late adolescence (7 weeks old) induced ORM deficits when the animals were challenged with a 24-h (but not with a 3-h) retention delay. These deficits were evidenced at the two post injury times examined, three and six weeks, when the animal ages were 10 and 13 weeks old, respectively, indicating persistence of memory disturbances well into adulthood. TBI also had subtle effects on behaviors related to exploratory drive and risk assessment, but did not have a major impact on the main anxiety-like behaviors. Longer follow-up studies should be carried out after late adolescent CCI injury, as well as after other TBI models, to examine whether this behavioral profile is modified at older ages and whether the temporal evolution of memory deficits and emotional reactivity differs depending on the kind of lesion inflicted and its severity.
Figure 1. Performance in the object recognition memory tests: mean (+SEM) percent novel object exploration time.
Figure 2. A. Mean interhemispheric ratio scores (+SEM) for the volume of the hippocampal formation and the lateral ventricle.
Table 1. Mean values (standard deviation) of the measures taken in the EPM for each experimental group. Statistical effects are indicated in the last column. 6W groups > 3W groups: indicates a significant effect of the main factor surgery-testing interval; specifically, the mean pooled values of the TBI-6W and Sh-6W groups were higher than those of the TBI-3W and Sh-3W groups. TBI < Sham: indicates a significant effect of the main factor lesion, with the mean pooled values of the two TBI groups being lower than those of the two sham groups. | 2016-10-07T18:53:55.667Z | 2015-03-02T00:00:00.000 | {
"year": 2015,
"sha1": "886c02d42f93bc4af69bcafdf48d1a72ed971bf7",
"oa_license": "CC0",
"oa_url": "https://ddd.uab.cat/pub/artpub/2015/130914/behneu_a2015m3iENG.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "886c02d42f93bc4af69bcafdf48d1a72ed971bf7",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
235279339 | pes2o/s2orc | v3-fos-license | Identification of modal parameters of response signals in the white noise excitation with harmonics based on cepstrum
Operational Modal Analysis (OMA) makes up for the shortcomings of traditional experimental modal analysis, which requires known excitation-response data: it identifies a structure's modal parameters from the vibration response signal alone. OMA methods mostly assume white-noise or random excitation. For rotating parts, however, the excitation is often not ideal white noise, as it contains harmonic components caused by periodic signals. If the harmonics are not processed, applying operational modal analysis to identify the modal parameters easily produces false modal information. In order to study the identification of modal parameters under excitation containing harmonics, this paper takes a three-degree-of-freedom spring-mass-damper model as the research object, applies white-noise excitation with harmonics, and removes the harmonics by means of cepstrum editing. The Covariance-Driven Stochastic Subspace Identification (SSI-COV) method is then used to obtain the structural modal parameters. The simulation results show that filtering out the harmonic interference components by cepstrum editing before applying operational modal analysis effectively recovers the true modal information.
Introduction
Modal analysis is the process of determining a structure's inherent dynamic characteristics, or modal parameters, such as its natural frequencies, damping ratios, and mode shapes, using vibration system theory. Computational modal analysis uses the finite element method to discretize the vibrating structure, establishes the mathematical model of the eigenvalue problem, and applies various approximate methods to solve for the eigenvalues and eigenvectors of the system, from which modal information such as the natural frequencies is obtained. Experimental approaches can be divided into traditional Experimental Modal Analysis (EMA) and Operational Modal Analysis (OMA) [1], according to whether the excitation is measurable. EMA obtains both the excitation and response signals of the structure through experiments, establishes the input-output model of the system, and extracts the modal parameters from the frequency response function. OMA, by contrast, does not need to collect excitation signals; only the response signals are needed to identify the modal parameters. In recent years, OMA based solely on responses to unknown or ambient excitation has attracted increasing attention at home and abroad and has been widely applied in machinery, civil engineering, and other fields. The need for operational modal analysis based only on response signals first appeared in civil engineering, because exciting large structures such as bridges and buildings with force hammers or exciters at levels far above ambient vibration is difficult and costly. When modal parameters are identified from vibration signals that contain harmonic excitation, the resonance peaks produced by harmonics at frequencies close to the natural frequencies are easily mistaken for natural frequencies of the structure; this quickly generates false modal information and reduces the robustness of modal parameter identification [2]. For example, under machine tool cutting conditions, false modal information prevents accurate dynamic parameters from being obtained, which affects the prediction of chatter boundaries. Therefore, the harmonic interference that arises when identifying modal parameters of rotating machinery in the machining state is a problem that needs to be solved. Methods for handling harmonic interference in operational modal analysis generally fall into two categories [3]: the first improves the operational modal analysis method itself so that it tolerates harmonic interference; the second eliminates the harmonic signal before the operational modal analysis. Brincker [4] proposed in 2000 that, since the probability density curves of a harmonic excitation signal and a pure white-noise excitation signal differ, this statistical characteristic can be used to detect harmonics. Modak [5] used the random decrement method to remove the false modal information caused by harmonics. However, when the harmonic frequency is close to a natural frequency of the system, methods based on statistical characteristics cannot achieve harmonic removal.
Preprocessing the signal before OMA is known as signal preprocessing; examples include the time-synchronous averaging method, the non-parametric removal method, and the cepstrum editing method. Compared with the first two methods, cepstrum editing is the most direct and the lowest in cost.
In this paper, the object studied is a multi-degree-of-freedom vibration system. The primary purpose is to eliminate the harmonic components in the random excitation through cepstrum analysis and to identify the modal parameters of the system with the covariance-driven stochastic subspace method [6]. The results show that cepstrum analysis can effectively remove the harmonic part and avoid false modal information when operational modal analysis is performed.
Cepstral properties
The formation process of the cepstrum is shown in Figure 1. The cepstrum is divided into the complex cepstrum and the real cepstrum. For a time-domain signal, the complex cepstrum is defined as [7]:

$$c_c(\tau) = \mathcal{F}^{-1}\{\ln F(f)\} \qquad (1)$$

In equation (1), $F(f)$ represents the frequency spectrum of the time-domain signal, $A(f) = |F(f)|$ is the amplitude spectrum of $F(f)$, and $\tau$ represents the quefrency of the complex cepstrum. The convolution relationship of a linear system can be converted into an additive form through the cepstrum operation:

$$y(t) = f(t) * h(t) \;\Rightarrow\; c_y(\tau) = c_f(\tau) + c_h(\tau) \qquad (2)$$

In equation (2), $y(t)$ represents the response signal of the linear system, $f(t)$ represents the input signal, and $h(t)$ represents the impulse response function of the transmission path. Performing a Fourier transform on the response signal gives the complex spectrum of the response signal:

$$Y(f) = F(f)\,H(f) \qquad (3)$$

Logarithmic transformation of the complex spectrum followed by an inverse Fourier transform yields the response signal's complex cepstrum. Taking the logarithm of the amplitude spectrum of the signal and then performing the inverse Fourier transform gives the signal's real cepstrum. In the cepstrum domain, the cepstrum of the system's vibration response can therefore be expressed as the sum of the cepstrum of the input signal and that of the impulse response function.
Cepstrum analysis has the characteristics of homomorphic processing. Homomorphic processing is a method that transforms nonlinear problems into linear problems and can separate two signals combined by multiplication or convolution. Since the cepstrum of a periodic signal shows periodic spikes, the periodic signal components can be located from the positions of the peaks in the cepstrum of the vibration signal, whereas the cepstrum of a random excitation signal is random and has no peaks. From this feature, periodic signals can be detected and their frequencies estimated. Periodic and random signals can then be separated through the cepstrum editing process to eliminate the harmonic interference. The editing process of the cepstrum is shown in Figure 2.
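The editing step described above can be sketched in a few lines of code. The following is a minimal Python/NumPy illustration, not the authors' implementation: it computes the real cepstrum of a response signal, zeroes a short lifter window around the quefrency of the detected harmonic spike, and reconstructs the signal with the original phase retained. The function names, the notch width, and the phase-retention choice are assumptions made for illustration.

```python
import numpy as np

def real_cepstrum(x):
    # Real cepstrum: inverse FFT of the log-amplitude spectrum.
    spectrum = np.fft.fft(x)
    return np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real

def cepstrum_edit(x, fs, quefrency, half_width):
    """Remove a harmonic component by notching its cepstral spike.

    x          : response signal (1-D array)
    fs         : sampling frequency [Hz]
    quefrency  : location of the spike [s] (e.g., 0.2 s for a 5 Hz harmonic)
    half_width : half-width of the notch [s]
    """
    n = len(x)
    spectrum = np.fft.fft(x)
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    ceps = np.fft.ifft(log_mag).real
    k = int(round(quefrency * fs))            # spike sample index
    w = int(round(half_width * fs))
    ceps[k - w : k + w + 1] = 0.0             # notch the spike ...
    ceps[n - k - w : n - k + w + 1] = 0.0     # ... and its mirror image
    edited_log_mag = np.fft.fft(ceps).real    # back to the log spectrum
    # Rebuild the spectrum with the edited magnitude and original phase.
    edited = np.exp(edited_log_mag) * np.exp(1j * np.angle(spectrum))
    return np.fft.ifft(edited).real
```

In practice the spike of a periodic component repeats at integer multiples of its quefrency (the rahmonics), so the notch is usually applied at each multiple as well.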
Simulation analysis
To verify the effectiveness of the above method, MATLAB is used to carry out a numerical simulation of the three-degree-of-freedom linear time-invariant vibration system shown in Figure 3. According to Newton's second law, the differential equation of motion of the vibration system is written in matrix form as

$$M\ddot{x} + C\dot{x} + Kx = f(t),$$

where, for masses $m_1, m_2, m_3$ connected by springs $k_1,\ldots,k_4$ and dampers $c_1,\ldots,c_4$,

$$M = \begin{bmatrix} m_1 & 0 & 0 \\ 0 & m_2 & 0 \\ 0 & 0 & m_3 \end{bmatrix},\quad C = \begin{bmatrix} c_1+c_2 & -c_2 & 0 \\ -c_2 & c_2+c_3 & -c_3 \\ 0 & -c_3 & c_3+c_4 \end{bmatrix},\quad K = \begin{bmatrix} k_1+k_2 & -k_2 & 0 \\ -k_2 & k_2+k_3 & -k_3 \\ 0 & -k_3 & k_3+k_4 \end{bmatrix}.$$

A random excitation containing a first-order harmonic is applied to a mass of the system, with the sampling frequency set to 100 Hz and the sampling time to 100 s. Using the state-space method with the corresponding parameters, the system's acceleration response signal is obtained in a Simulink simulation, as shown in Figure 4 (time-domain response signal). In Figure 4 the harmonic part is not visible, so the cepstrum of the response signal is computed. Figure 5 shows the power spectral density function of the response signal; there is a peak at 5 Hz, and direct use of this response signal for modal parameter identification would lead to false modes. Therefore, the harmonics must be eliminated before identifying the modal parameters. Taking the acceleration response signal of the first mass and computing its real cepstrum gives Figure 6. Consistent with the cepstral signature of a periodic signal, a spike corresponding to the harmonic frequency appears at 0.2 s, and this spike is caused by the harmonic excitation. A notch lifter (a cepstrum-domain filter) is then used to remove the harmonic components at the spikes.
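For readers who want to reproduce this kind of simulation outside Simulink, the sketch below assembles a 3-DOF system of the form above and generates an acceleration response to white noise plus a 5 Hz harmonic. All numerical values (masses, stiffnesses, damping, harmonic amplitude) are placeholder assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import lsim, StateSpace

# Placeholder parameters (the paper's values are not given here).
m = np.array([1.0, 1.0, 1.0])
k = np.array([1e4, 1e4, 1e4, 1e4])      # k1..k4
c = np.array([2.0, 2.0, 2.0, 2.0])      # c1..c4

M = np.diag(m)
K = np.array([[k[0] + k[1], -k[1], 0],
              [-k[1], k[1] + k[2], -k[2]],
              [0, -k[2], k[2] + k[3]]])
C = np.array([[c[0] + c[1], -c[1], 0],
              [-c[1], c[1] + c[2], -c[2]],
              [0, -c[2], c[2] + c[3]]])

# First-order state-space form with state z = [x; x_dot].
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [-Minv @ K, -Minv @ C]])
B = np.vstack([np.zeros((3, 3)), Minv])
Ca = np.hstack([-Minv @ K, -Minv @ C])   # acceleration outputs
Da = Minv

fs, T = 100.0, 100.0                     # 100 Hz, 100 s, as in the paper
t = np.arange(0, T, 1 / fs)
f = np.zeros((len(t), 3))
# White noise plus a 5 Hz harmonic applied to the first mass.
f[:, 0] = np.random.randn(len(t)) + 5.0 * np.sin(2 * np.pi * 5.0 * t)

sys = StateSpace(A, B, Ca, Da)
_, acc, _ = lsim(sys, f, t)              # acc: acceleration responses
```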
The power spectral density of the edited response signal shows that the harmonic components are effectively removed. Finally, the covariance-driven stochastic subspace method of operational modal analysis is used to identify the modal parameters from the edited response signal. From the above simulation results, it is concluded that the random-excitation response obtained after removing the harmonics allows the modal parameters of the vibration system to be identified effectively by the SSI-COV method. For a response signal obtained from white-noise excitation containing harmonics, the method proposed in this paper therefore provides a convenient route to modal parameter identification.
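The SSI-COV step itself can be summarized compactly. The following is a textbook-style sketch of covariance-driven stochastic subspace identification, not the implementation used in the paper; the number of block rows `i` and the model order are user choices, and in practice a stabilization diagram is used to separate physical poles from spurious ones.

```python
import numpy as np

def ssi_cov(y, fs, i=20, order=6):
    """Covariance-driven SSI (sketch).
    y: (channels, samples) response matrix, fs: sampling rate [Hz],
    i: number of block rows, order: state-space model order."""
    l, n = y.shape
    # Output covariances R_k = E[y(t+k) y(t)^T], k = 0 .. 2i-1.
    R = [y[:, k:] @ y[:, :n - k].T / (n - k) for k in range(2 * i)]
    # Block Toeplitz matrix with block (p, q) = R_{i+p-q}.
    T = np.block([[R[i + p - q] for q in range(i)] for p in range(i)])
    U, s, _ = np.linalg.svd(T)
    Oi = U[:, :order] * np.sqrt(s[:order])        # observability matrix
    # State matrix from the shift structure of the observability matrix.
    A = np.linalg.pinv(Oi[:-l, :]) @ Oi[l:, :]
    lam = np.linalg.eigvals(A).astype(complex)    # discrete-time poles
    mu = np.log(lam) * fs                         # continuous-time poles
    freqs = np.abs(mu) / (2 * np.pi)              # natural frequencies [Hz]
    zeta = -mu.real / np.abs(mu)                  # damping ratios
    idx = np.argsort(freqs)                       # poles come in conjugate pairs
    return freqs[idx], zeta[idx]
```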
Conclusion
This paper studies modal parameter identification under harmonic excitation: cepstrum editing is used to remove the harmonic part of the vibration response signal, and the covariance-driven stochastic subspace method is then used for modal parameter identification. The nature of the cepstrum and the process of cepstrum analysis are introduced in detail. The advantages of this harmonic removal method are high robustness and low cost. After the harmonic part is removed, the stochastic subspace method, which assumes random excitation, is applied for modal parameter identification, and it effectively recovers the vibration system's true modal information. The simulation results demonstrate the validity of operational modal analysis based on cepstrum editing, so the method has good engineering application value. | 2021-06-03T00:24:04.810Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "f25883f90490366e3aaec9e43725954807a76577",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1846/1/012009",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "f25883f90490366e3aaec9e43725954807a76577",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
4383872 | pes2o/s2orc | v3-fos-license | Long non-coding RNA BC168687 small interfering RNA reduces high glucose and high free fatty acid-induced expression of P2X7 receptors in satellite glial cells
Purinergic signaling contributes to inflammatory and immune responses. The activation of the P2X purinoceptor 7 (P2X7) in satellite glial cells (SGCs) may be an essential component in the promotion of inflammation and neuropathic pain. Long non-coding RNAs (lncRNAs) are involved in multiple physiological and pathological processes. The aim of the present study was to investigate the effects of a small interfering RNA for the lncRNA BC168687 on SGC P2X7 expression in a high glucose and high free fatty acids (HGHF) environment. It was demonstrated that BC168687 small interfering (si)RNA downregulated the co-expression of the P2X7 and glial fibrillary acidic protein and P2X7 mRNA expression. Additionally, HGHF may activate the mitogen-activated protein kinase signaling pathway by increasing the release of nitric oxide and reactive oxygen species in SGCs. Taken together, these results indicate that silencing BC168687 expression may downregulate the increased expression of P2X7 receptors in SGCs induced by a HGHF environment.
Introduction
Type 2 diabetes mellitus (T2DM) is a prevalent endocrine and metabolic disease. Changes in lifestyle and acceleration of the aging process have contributed to the increasing prevalence of T2DM. It is a chronic non-communicable disease that particularly affects those with cardiovascular or cerebrovascular diseases (1)(2)(3). In addition, diabetic neuropathy may occur, which involves the excessive excitation of primary afferent receptors and central neurons, leading to pain and other adverse effects (4). The activation of satellite glial cells (SGCs) has been reported to be an essential factor in several experimental models of pain (5)(6)(7). Hyperglycemia and dyslipidemia are hallmark features of pre-diabetes (8,9). Obesity-associated dysregulation of glucose and lipid metabolism has been associated with diabetes, and high blood sugar and free fatty acids (FFA) in serum are thought to contribute to the development of neurological disorders (10,11). Thus, inducing cell injury with a high glucose and high free fatty acid (HGHF) environment may effectively model the conditions of neurological disorders in T2DM (12,13).
Adenosine 5'-triphosphate (ATP) is an important messenger that is involved in numerous processes, including the transmission of pain signals. It may also act as an acute pro-inflammatory danger signal and a crucial mediator of neuroinflammation. In an environment of inflammation or stress, levels of extracellular ATP (eATP) rapidly approach near millimolar levels and become the main stimulation of pro-inflammatory pathways (14). Subclasses of purinergic 2 (P2) receptors include P2X and P2Y. P2X receptors, particularly the P2X purinoceptor 7 (P2X 7 ), are strongly associated with immunity and inflammation (14). P2X 7 receptors are highly expressed in immune cells and are activated as a result of pro-inflammatory cytokine release (15). In SGCs, eATP may activate the P2X 7 receptor, thus possibly contributing to the development of chronic inflammatory disease (16).
Long non-coding RNAs (lncRNAs) are non-protein-coding RNA transcripts >200 nucleotides in length. Increasing evidence has highlighted the role of lncRNAs in physiology and disease (17,18). LncRNAs are involved in diverse regulatory processes, including the alteration of chromatin and transcriptional state, nuclear architecture, splicing and mRNA translation (19,20). LncRNA BC168687 is evolutionarily conserved across numerous species and significantly increased levels have been detected in the dorsal root ganglion (DRG) of type 2 diabetic rats (21). Therefore, BC168687 was selected for examination. The present study revealed that lncRNA BC168687 small interfering RNA (siRNA) may downregulate P2X 7 receptor expression induced by a HGHF environment in primary cultured SGCs.
Materials and methods
Primary culture. The present study was approved by the Ethical Committee of Nanchang University (Nanchang, China) and animals were treated according to the Guidelines for the Care and Use of Animals (22). Fetal Sprague-Dawley rats (n=6; male; 7-9 g) were obtained from the Laboratory Animal Science Department of Nanchang University (Nanchang, China). All rats were housed in clean, standard metabolic cages and kept at a constant temperature of 37˚C with 35-65% humidity. The rats were kept in a 12 h light/dark cycle and had free access to food and water. On the third day, rats were anesthetized using ether. The DRGs of fetal rats were extracted with microforceps and rapidly transferred into Dulbecco's modified Eagle Medium/F12 (DMEM/F12) medium (HyClone; GE Healthcare Life Sciences, Logan, UT, USA) and incubated at 4˚C for 30 min prior to the next step. Following the detachment of redundant fibers with ophthalmic forceps, the DRGs were incubated with collagenase type III (0.1 mg/ml; Beijing Solarbio Science and Technology, Ltd., Beijing, China) for 15 min at 37˚C. The collagenase was removed by centrifugation at 168 x g for 5 min and DRGs were pre-incubated with 0.25% trypsin-EDTA (0.5 mg/ml; Beijing Solarbio Science and Technology, Ltd.) in a cell incubator for 35-40 min at 37˚C. DMEM/F12 containing 10% fetal bovine serum (FBS; Biological Industries, Kibbutz Bei-Haemek, Israel) was subsequently used to terminate enzymatic digestion.
The DRGs were blown gently using sterile disposable pipettes before being passed through a cell strainer (aperture, 70 µm; 200 mesh). Glial cells (5x10 5 cells/ml) were inoculated on polylysine-coated coverslips into 24-well plates to obtain cell climbing slides. SGCs were purified from glial cells by replacing the medium twice every 24 h. The purified SGCs were sustained in DMEM/F12 containing 10% FBS (Biological Industries), 100 U/ml penicillin and 100 mg/ml streptomycin sulfate at 37˚C in a humidified incubator with 5% CO 2 . To imitate hyperglycemia and dyslipidemia, 40 mM D-glucose (Beijing Solarbio Science and Technology, Ltd.) and 0.60 mM FFAs were added to DMEM/F12 medium. FFAs were a mixture of oleate (Sigma-Aldrich; Merck KGaA, Darmstadt, Germany) and palmitate (Sigma-Aldrich, Merck KGaA) at a 2:1 ratio (w/w) (23,24). In addition, 20 mM D-Mannitol (Beijing Solarbio Science and Technology) was added into DMEM/F12 as an isotonic control.
Cell viability test. The viability of SGCs was analyzed with the TransDetect Cell Counting kit (CKK; Beijing TransGen Biotech, Co., Ltd., Beijing, China). A suspension of 100 µl SGCs (5x10 3 cells/ml) from each group was placed onto a 96-well microplate. Each group was tested in triplicate. Following culture of SGCs at the different concentrations of D-glucose and FFA for 72 h, 10% CCK diluent was added to each well. Cells were subsequently maintained in a cell incubator for 2 h. A wavelength of 450 nm was used to detect the absorbance using a multimode plate reader. The data was analyzed with GraphPad Prism v6.0 (GraphPad Software Inc., La Jolla, CA, USA).
RT-qPCR. RNA was extracted with Transzol Up (Beijing TransGen Biotech, Ltd.) and reverse transcribed at 37˚C for 1 h using a RevertAid™ First Strand cDNA Synthesis kit (Thermo Fisher Scientific, Inc., Waltham, MA, USA). The concentration of cDNA for each group was detected to be ~4x10 3 ng/µl using the NanoDrop2000 (Thermo Fisher Scientific, Inc.). The primer sequences used (Sangon Biotech Co., Ltd., Shanghai, China) were as follows: BC168687 forward, 5'-GGA CAA GTC CTT AGC CAT GC-3' and reverse, 5'-CAA CAC CGT TGG ATC CTT CT-3'; P2X 7 forward, 5'-GCA CGA ATT ATG GCA CCG TC-3' and reverse, 5'-CCC CAC CCT CTG TGA CAT TC-3'; and β-actin forward, 5'-CCT AAG GCC AAC CGT GAA AAG A-3' and reverse, 5'-GGT ACG ACC AGA GGC ATA CA-3' . RT-qPCR was performed using the SYBR Premix Ex Taq (Takara Biotechnology Co., Ltd., Dalian, China) and the StepOnePlus™ Real-Time PCR system (Thermo Fisher Scientific, Inc.). The thermo cycling conditions were as follows: 95˚C for 30 sec, 60˚C for 15 sec, 95˚C for 15 sec, 60˚C for 1 min and 95˚C for 15 sec. The melting curve was used to determine the amplification specificity and results were analyzed using the StepOnePlus Real-Time PCR system. The average threshold cycle (Cq) value for the target minus the average value for β-actin was used to calculate the ∆Cq value (∆Cq=Cq target-Cq reference). The ∆∆Cq value was calculated as follows: ∆Cq test sample-∆Cq calibrator sample. The relative quantity (RQ) of the gene expression was calculated using the following equation: 2 -ΔΔCq (21).
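As a concrete illustration of the 2^-ΔΔCq calculation described above, the short sketch below computes relative quantities from raw Cq values. The sample Cq numbers are invented for illustration only; they are not data from this study.

```python
import numpy as np

def relative_expression(cq_target, cq_ref, cq_target_cal, cq_ref_cal):
    """2^-ddCq relative quantification.
    cq_target, cq_ref         : Cq values of target and beta-actin (test sample)
    cq_target_cal, cq_ref_cal : the same for the calibrator (control) sample."""
    d_cq_test = np.mean(cq_target) - np.mean(cq_ref)   # dCq = Cq(target) - Cq(reference)
    d_cq_cal = np.mean(cq_target_cal) - np.mean(cq_ref_cal)
    dd_cq = d_cq_test - d_cq_cal                       # ddCq = dCq(test) - dCq(calibrator)
    return 2.0 ** (-dd_cq)                             # RQ = 2^-ddCq

# Invented example values (triplicate Cq readings):
rq = relative_expression([24.1, 24.3, 24.2], [18.0, 18.1, 17.9],
                         [26.0, 25.9, 26.1], [18.1, 18.0, 18.2])
print(f"relative quantity: {rq:.2f}")   # > 1 means up-regulation vs control
```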
Immunocytochemistry. Immunocytochemistry was performed with the SPlink Detection kit (cat no. SP-9001; OriGene Technologies, Inc., Beijing, China) and the working solutions provided by the manufacturer were used if not otherwise specified. Cell climbing slides (diameter, 8 mm) were removed from DMEM/F12 and washed three times in PBS and subsequently fixed in 4% paraformaldehyde (Beijing Solarbio Science and Technology, Ltd.) for 15 min at room temperature. Following three washes with PBS, slides were blocked with normal goat serum at 37˚C for 30 min. The slides were washed in PBS and incubated with P2X 7 primary antibody (cat no. APR-004-AO; 1:200; Alomone Labs, Jerusalem, Israel) overnight at 4˚C. Slides were washed in PBS and incubated with Biotin labeled goat anti-rabbit IgG polymer secondary antibody for 15 min at 37˚C. Slides were washed again with PBS prior to incubation with alkaline phosphatase-labeled streptavidin for 15 min at 37˚C. Slides were subsequently stained with 3,3'-diaminobenzindine solution (OriGene Technologies, Inc.) at room temperature for 10 min and sealed by neutral balsam (OriGene Technologies, Inc.). The expression of P2X 7 receptors was visualized with a fluorescence inverted microscope (magnification, x200) and the integrated optical density (IOD) of the P2X 7 receptors was calculated using Image-Pro Plus v6.0 (Media Cybernetics Inc., Rockville, MD, USA).
Measurement of intracellular nitric oxide (NO) and ROS.
Intracellular NO was measured using the nitrate reductase method (21). The NO concentration in each group was calculated according to the formula provided in the Nitric Oxide Assay kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). The IOD values of each group were calculated using a multimode plate reader.
Intracellular ROS levels were inspected with the ROS Assay kit (Nanjing KeyGen Biotech Co., Ltd., Nanjing, China). Following the removal of DMEM/F12 from SGCs, diluted 2',7'-dichlorofluorescin diacetate (DCFH-DA; 1:1,000; 10 µM; cat no. S0033, Beyotime Institute of Biotechnology) was added. Samples were placed into 24-well plates and incubated for 20 min at 37˚C, then washed three times with serum-free DMEM/F12. The fluorescence density was detected at an excitation wavelength of 488 nm and an emission wavelength of 525 nm with the multimode plate reader.
Statistical analysis. Results are presented as the mean ± standard error. GraphPad Prism v6.0 (GraphPad Software Inc.) and Image-Pro Plus v6.0 (Media Cybernetics Inc.) were used to perform the statistical tests. The unpaired Student's t-test was used when comparing two groups, and one-way analysis of variance with the Bonferroni correction was used for multiple comparisons. P<0.05 was considered to indicate a statistically significant difference.
Results

Screening of D-glucose and FFA concentrations. The effect of D-glucose concentration on the viability of SGCs is presented in Fig. 1A. The viability of SGCs in a FFA environment was also significantly decreased at FFA concentrations ≥0.60 mM: Control, 0.49±0.02; isotonic control, 0.49±0.01; 0.15 mM, 0.48±0.01; 0.30 mM, 0.47±0.00; 0.60 mM, 0.46±0.01; 1.2 mM, 0.44±0.01; and 2.4 mM, 0.44±0.02 (Fig. 1B). Based on the aforementioned results, 40 mM D-glucose and 0.6 mM FFA were selected as the final concentrations to produce a HGHF environment.

P2X7 receptor and lncRNA BC168687 expression in SGCs in a HGHF environment. The relative expression level of BC168687 was determined using RT-qPCR (Fig. 2A) and the expression of P2X7 receptors was analyzed with immunocytochemistry (Fig. 2B and C). The results indicated that the relative expression levels of BC168687 and P2X7 were higher in the HGHF environment compared with the control group (Fig. 2A and B; P<0.01).
BC168687 siRNA downregulates the expression of P2X7 and GFAP in SGCs. Following transfection of the siRNAs into SGCs for 72 h, P2X7 and GFAP expression was detected by western blot analysis (Fig. 4A and B). The relative expression of P2X7 in each group was as follows: Control, 0.50±0.02; HGHF, 0.65±0.01; HGHF+BC168687si, 0.43±0.03; HGHF+NCsi, 0.63±0.02; and HGHF+VD, 0.67±0.01. The difference between the HGHF+BC168687si and HGHF groups was statistically significant (P<0.001). Expression of P2X7 protein and GFAP in the HGHF group was significantly increased compared with the control group (P<0.01). Compared with the HGHF group, P2X7 protein and GFAP expression levels were significantly decreased in the HGHF+BC168687si group (P<0.01). No significant differences were observed among the HGHF, HGHF+NCsi and HGHF+VD groups. Therefore, BC168687 siRNA may attenuate the upregulation of the P2X7 receptor and GFAP induced by a HGHF environment in SGCs.
P2X 7 and GFAP co-expression is induced by a HGHF environment in SGCs.
Immunofluorescence was used to detect the co-expression of P2X7 receptor and GFAP in SGCs (25). The co-expression quantities of the P2X7 receptors and GFAP in the five groups were detected following 72 h of siRNA transfection, based on the co-localization of P2X7 and GFAP in SGCs (Fig. 5). Compared with the control group, the P2X7 receptor and GFAP co-expression quantities were increased in the HGHF group. The co-expression quantities of the HGHF+BC168687si group were decreased compared with the HGHF group. No apparent difference was observed among the HGHF, HGHF+NCsi and HGHF+VD groups. Therefore, it was inferred that BC168687 siRNA may reduce the P2X7 receptor upregulation induced by a HGHF environment.
BC168687 siRNA reduces the upregulation of p-ERK1/2 expression induced by a HGHF environment in SGCs.
The expression level of phosphorylated (p-)ERK1/2 protein in SGCs was detected by western blot analysis. The relative expression levels of p-ERK1/2 protein in each group were as follows: Control, 0.50±0.02; HGHF, 0.58±0.02; HGHF+BC168687si, 0.52±0.10; HGHF+NCsi, 0.65±0.03; and HGHF+VD, 0.60±0.01. Compared with the control group, p-ERK1/2 protein expression in the HGHF group was significantly increased, and the expression level of p-ERK1/2 protein in the HGHF+BC168687si group was significantly decreased compared with the HGHF group (Fig. 6A; P<0.01). No significant difference was observed among the HGHF, HGHF+NCsi and HGHF+VD groups. Based on these results, it was concluded that BC168687 siRNA was able to reduce the upregulation of p-ERK1/2 signaling induced by a HGHF environment in SGCs.

Effect of BC168687 siRNA on ATP levels in SGCs induced by a HGHF environment. As a proinflammatory mediator released from SGCs, ATP contributes to the initiation and maintenance of neuropathic pain (26). The results revealed that the concentrations of ATP (pM) in each group were as follows: Control, 63.33±11.55; HGHF, 140±17.32; HGHF+BC168687si, 76.67±5.77; and HGHF+NCsi, 176.67±35.12. ATP levels in the HGHF group were significantly increased compared with the control group (Fig. 6B; P<0.01), and the levels of ATP in the HGHF+BC168687si group were significantly decreased compared with the HGHF group (P<0.01). There was no significant difference between the HGHF and HGHF+NCsi groups.
Effect of BC168687 siRNA on NO and ROS levels in SGCs induced by a HGHF environment. NO and ROS are oxidative injury factors released from SGCs that are also considered to contribute to the initiation and maintenance of neuropathic pain (27)(28)(29). NO levels in each group are presented in Fig. 7A; the difference between the HGHF+BC168687si and HGHF groups was statistically significant (P<0.01). Intracellular ROS levels were measured by fluorescence density. The results of the ROS assay were as follows: Control, 2,655±243.98; HGHF, 3,394±141.74; HGHF+BC168687si, 2,807±58.03; and HGHF+NCsi, 3,642±213.18 (Fig. 7B). NO and ROS levels in the HGHF group were significantly increased compared with the control group (P<0.01). NO and ROS levels in the HGHF+BC168687si group were significantly decreased compared with the HGHF group (P<0.01). There was no significant difference between the HGHF and HGHF+NCsi groups.
Discussion
Compared with short-chain ncRNAs, including microRNAs, siRNAs and Piwi-interacting RNAs, lncRNAs account for the majority of ncRNAs that regulate biological mechanisms and processes (30,31). They participate in the regulation of transcription and intracellular signal transduction pathways, including those involved in organism development (32). Therefore, dysregulated lncRNA expression may contribute to the development of numerous human diseases (33)(34)(35). P2X 7 receptors are expressed in SGCs and studies have demonstrated that P2X 7 receptors contribute to neuropathic pain (36)(37)(38)(39). High levels of glucose and FFAs have been identified as a primary cause of nervous system dysfunction in diabetes (8,13). NcRNAs lack the ability to encode proteins, but possess regulatory functions, and are involved in almost all physiological and pathological processes (30,31,(40)(41)(42). The present study demonstrated that BC168687 expression in SGCs in a HGHF environment group was significantly increased compared with the control group. P2X 7 receptor expression was also upregulated in SGCs in a HGHF environment, inferring the involvement of BC168687 in pathological processes mediated by P2X 7 receptors in SGCs.
The P2 receptor family is comprised of ligand-gated ion channel P2X receptors and G-protein coupled P2Y receptors (43). Autocrine release of ATP by glial cells activates P2X 7 receptors and may amplify pain signals through a cascade reaction (44)(45)(46)(47). Thus, inhibiting P2X 7 receptors may relieve inflammatory and chronic neuropathic pain (37,39). The present study demonstrated that a HGHF environment increased ATP release in SGCs and BC168687 siRNA was able to decrease this release. P2X 7 mRNA and protein expression in SGCs in the HGHF group was significantly increased compared with the control group. Expression of P2X 7 mRNA and protein was significantly decreased in the HGHF+BC168687si group compared with the HGHF group, suggesting that BC168687 is associated with the upregulation of P2X 7 receptors observed in the HGHF group.
The increasing incidence of T2DM, along with its comorbidities, makes it urgent to understand the pathogenesis and regulatory mechanisms of the disease. The specific involvement of lncRNAs in diabetes is unclear (48,49). Diabetes is a chronic inflammatory disease, and the expression of P2X7 receptors may be upregulated by inflammatory injury (25,50). In an inflammatory state, ATP can be released from sensory neurons and SGCs in an autocrine or paracrine fashion and activate P2 receptors (7,51). Excessive P2X7 receptor excitation by ATP can promote the opening of plasma membrane pores and may increase the release of pro-inflammatory cytokines, including interleukin-1β, interleukin-6 and tumor necrosis factor-α (37,52). These cytokines further induce glial cells to release pro-inflammatory mediators and exacerbate neuronal damage (14,45).
Oxidative stress is one of the important factors leading to diabetic neuropathy. Along with ATP, NO and ROS are released from glial cells and contribute to chronic neuropathic pain in diabetes (27,53,54). The present study indicated that HGHF significantly increased the release of NO and ROS, and these levels were decreased in the HGHF+BC168687si group. This suggests that BC168687 contributes to the pathological processes involving P2X 7 receptors, leading to neuropathic and peripheral inflammatory pain.
The mitogen-activated protein kinase (MAPK) signaling pathway is involved in cell proliferation, differentiation and adaptation, and may also contribute to the development of neuronal injury and disease. The MAPK family contains p38 MAPKs, ERK and c-Jun N-terminal kinases (55,56). The signaling pathways of MAPKs are crucial to signal transduction between neurons and glial cells, both of which are also essential for persistent pain (57,58). However, different MAPKs have distinct actions in glial cells following injury (58). Active ERK1/2 signaling occurs between the nucleus and the cytoplasm, and ROS is able to influence the ERK MAPK signaling pathway through phosphorylation (57,59). In the present study, an upregulation of p-ERK1/2 signaling was observed in SGCs in the HGHF group. Thus, it may be inferred that the ERK MAPK signaling pathway is involved in the aberrant activation of SGCs in a HGHF environment. Overall, it was concluded that BC168687 may be involved in the increased expression of P2X7 receptors in SGCs in a HGHF environment, and BC168687 siRNA may have the potential to alleviate diabetic neuropathic pain mediated by P2X7. These findings suggest BC168687 siRNA as a novel treatment for P2X7-associated diseases, including diabetic neuropathic pain. Further research to elucidate the specific mechanisms of BC168687 siRNA is required. | 2018-04-03T00:00:38.419Z | 2018-02-13T00:00:00.000 | {
"year": 2018,
"sha1": "59f924c113ce4d00bf1c4899929a897bc0d709e6",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/mmr.2018.8601/download",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "59f924c113ce4d00bf1c4899929a897bc0d709e6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
270424288 | pes2o/s2orc | v3-fos-license | Mustard ( Brassica rapa ) and Eggplant ( Solanum melongena ) Cultivation in Agrivoltaic System in Coastal Area, Case Study of Baron Technopark, Yogyakarta
Agrivoltaics is a system that combines plant cultivation with photovoltaic installations. Opportunities for deploying this system in Indonesia are still considerable, including in the Gunungkidul area of Yogyakarta. This study aimed to determine the growth response and yield of mustard greens and eggplant in the agrivoltaic system in Baron, Gunungkidul, Yogyakarta, as a preliminary study of the use of photovoltaics for plant cultivation activities. The research was conducted at a photovoltaic installation in the Baron Techno Park area, BRIN, Gunungkidul Regency, Yogyakarta. An unpaired t-test design was used, comparing the growth and yield of mustard greens and eggplant under and outside the solar panels. Because the growth of the mustard and eggplant plants showed quite high variation between rows below and outside the panels, row position was also used as a treatment in this research. The results showed that mustard greens and eggplants can be grown in agrivoltaic systems. The location of the plants (below or outside the panel) did not affect the growth and yield of mustard plants, while it did affect eggplant. The row position inside and outside the panel also affects the growth and yield of mustard greens and eggplant. Mustard and eggplant plants planted outside the panels were harvested 15 days earlier than those under the panels.
INTRODUCTION
Food and energy are crucial aspects that require sustainable support to ensure optimal utilization for the betterment of society. The cultivation system primarily focuses on producing food and horticultural crops. In traditional and modern agricultural practices, energy plays a significant role in various forms (Garcia & Callejo, 2022). Energy is essential for land processing, maintenance, and harvesting. Additionally, agricultural cultivation indirectly generates energy through biomass production. Biomass is derived from plants that convert sunlight into plant material through photosynthesis (McKendry, 2002). Consequently, biomass is a vital energy source for humans and falls under renewable energy sources. Renewable energy will play a crucial role in meeting the escalating energy demands in a sustainable and environmentally friendly manner. Alongside biomass, solar power derived from sunlight is the most abundant and readily available source among all renewable energy sources (Chamara & Beneragama, 2020). Therefore, energy acts as the connecting link between agricultural crop cultivation and solar power.
Agrivoltaics is a method that integrates plant cultivation with solar power generation in the same location (Santra et al., 2018). According to Kostik et al. (2020), an agrivoltaic system can use land efficiently by producing food, fuel, or energy. Solar panels must be installed over a considerable area to harness energy from a solar power system. The placement of these panels may impact the microclimatic conditions of the region, leading to an increase in temperature beneath the solar panels. However, having plants underneath or in between the solar panels can help mitigate this rise in temperature. Agrivoltaic systems offer advantages in terms of temperature regulation for both the solar panels and the plants (Garcia & Callejo, 2022). Plants engage in evapotranspiration, which indirectly contributes to lowering the temperature beneath the solar panels. Conversely, installing solar panels can decrease evapotranspiration in plants, thereby reducing water loss in both plants and soil.
Agrivoltaics has not gained significant traction in Indonesia, unlike in several other Asian countries such as Japan, India, and Malaysia, where Agrivoltaic projects have been previously implemented (Maity et al., 2023).
Similarly, research and publications on agrivoltaics or using solar panels/energy for plant cultivation in Indonesia are scarce. However, Hidayanti et al. (2019) successfully employed solar panels in a hydroponic system for plant cultivation, while Budiyanto et al. (2022) utilized solar power for hydroorganic rice farming. Consequently, further research is imperative to explore the potential of agrivoltaics, particularly in the context of plant cultivation.
The selection of plants in an agrivoltaic system involves various considerations. While almost all plants can be utilized, including horticultural crops, food crops, and fruits, certain factors must be considered. For instance, vines can be planted vertically in an agrivoltaic system using poles on solar panels (Malu et al., 2017). The height of the plants is an important consideration when choosing plant types for an agrivoltaic system to ensure they do not obstruct the solar panels, particularly in the case of vegetable plants (Santra et al., 2018). Gafford et al. (2019) conducted a study using three plant varieties from the Solanaceae family, namely tomatoes, jalapeño peppers, and chiltepin peppers, in an agrivoltaic system. Chamara & Beneragama (2020) mentioned that food crops and vegetables can be cultivated in agrivoltaic systems, especially those that can adapt to limited sunlight. Even rice plants, which typically require full sunlight, can be grown using an agrivoltaic system by carefully considering the shape and arrangement of the solar panels (Gonocruz et al., 2021).
In this study, the selection of mustard greens and eggplant was based on several considerations. Both plants have relatively large or broad leaves that can maximize the process of photosynthesis, thus influencing the reduction of temperature beneath the solar panel. Mustard greens have a relatively short growing period, enabling quick yield, while eggplants have a plant height that is compatible with the size of the solar panels installed in the research location. Eggplants have a longer production lifespan and can be harvested multiple times. Additionally, the region is characterized by karst topography from Gunungsewu, which poses challenges for plant cultivation. Wahyuhana and Sukmawati (2019) noted that the southern coast of Gunungkidul Regency exhibits unique topographic features, mainly undulating hills on karst formations. This geographical setting can lead to uneven water distribution, potentially resulting in drought and decreased land productivity. Winarno et al. (2003) mentioned that staple food crops like cassava, rice, corn, and peanuts are commonly grown in the area through intercropping practices. However, the cultivation of horticultural plants, such as vegetables, is limited due to suboptimal land and environmental conditions for plant development.
Therefore, this research was conducted to understand the growth response and yield of mustard greens and eggplants in the Agrivoltaic system in Baron, Gunungkidul, Yogyakarta, as an initial study of solar panel utilization for farming activities.
MATERIAL AND METHODS
The study was conducted in the Baron Techno Park area, which is situated in Ngresik, Kanigoro, Saptosari District, Gunungkidul Regency, Yogyakarta (Figure 1). The research site experiences varying environmental conditions and is prone to extreme conditions due to its location in Baron, on the southern coast of Java (Figure 2). The initial investigation was conducted in early 2022 (Ahmad et al., 2022), and the research activities were carried out from May to October 2022. The research utilized green mustard seeds (caisim) of the Shinta F1 variety, eggplant seeds of the kopek type (long round fruit with a colored blunt tip) of the Yuvita F1 variety, manure, liquid organic fertilizer, NPK fertilizer, insecticide (decis), herbicide, 20 cm diameter polybags (for mustard greens), and 30 cm diameter polybags (for eggplant). The equipment employed included pot trays for seeding activities, hoes, and spatulas for planting, sprinklers and water hoses for maintenance/watering, cutting scissors and scales for harvesting, rulers, tape measures, calipers, and leaf area meters for measuring leaf area.
The experimental method used in the research involved an unpaired T-test design to compare the growth and yield of mustard greens and eggplant planted in polybags under and outside the solar panels (Figure 3). The research data obtained were analyzed using the Independent Sample T-Test for data within and outside the panel. Furthermore, Analysis of Variance (ANOVA) was employed to analyze the plant row data within and outside the panel, with a significance level of 5%. In cases where significant differences were observed from the ANOVA results, further testing was conducted using Duncan's Multiple Range Test (DMRT). The statistical analysis was performed using the IBM SPSS Statistics 22 application.
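To make this analysis pipeline concrete, the sketch below shows how the comparisons described above could be run in Python with SciPy: an independent-samples t-test for under- vs outside-panel data and a one-way ANOVA across rows. It is an illustrative stand-in for the authors' SPSS workflow, with invented sample numbers; Duncan's multiple range test is not available in SciPy, so Tukey's HSD is shown instead as a commonly available post-hoc substitute.

```python
import numpy as np
from scipy import stats

# Invented example data: mustard fresh shoot weight (g) per plant.
under_panel = np.array([54.2, 60.1, 58.7, 55.9, 61.3])
outside_panel = np.array([74.9, 72.4, 83.2, 84.9, 78.0])

# Independent-samples t-test (under vs outside the panel), alpha = 0.05.
t_stat, p_val = stats.ttest_ind(under_panel, outside_panel)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# One-way ANOVA across planting rows (invented row data).
row1, row2, row3 = [52, 55, 54], [60, 62, 59], [71, 70, 73]
f_stat, p_anova = stats.f_oneway(row1, row2, row3)
print(f"F = {f_stat:.2f}, p = {p_anova:.4f}")

# Post-hoc pairwise comparison if the ANOVA is significant
# (Tukey HSD shown here in place of DMRT).
if p_anova < 0.05:
    print(stats.tukey_hsd(row1, row2, row3))
```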
The study encompassed a comprehensive four-stage process: preparation, seeding and planting, maintenance, and harvesting. The initial stage involved meticulous preparation. The designated area was carefully cleared of weeds, both beneath and outside the panels, utilizing herbicide to ensure a conducive research environment. For mustard greens, a plant spacing of 25 cm within rows and 40 cm between rows was employed, resulting in five plants per panel. Conversely, a plant spacing of 50 cm within rows and 80 cm between rows was adopted for eggplant plants, yielding three rows per panel. The chosen planting distances were tailored to accommodate the solar panel structures utilized in the study. The planting medium consisted of a mixture of soil and manure in a 1:1 ratio.
The second phase involves the process of seeding and planting. Initially, mustard and eggplant seeds are sown before being transplanted into polybags. Following this, the third phase focuses on maintenance. Regular watering is conducted daily, either in the morning or evening, with the amount of water determined by the growth stage of the plants. Regarding fertilizers, the study employed NPK fertilizer and liquid organic fertilizer. For mustard plants, 3 g/plant of NPK fertilizer is administered twice, specifically at the 2-week and 4-week marks after planting. This dosage aligns with the findings of Letahiit et al. (2022). On the other hand, eggplant plants receive a dosage of 20 g/plant of NPK fertilizer, also given twice, at the 2-week and 8-week intervals after planting. This dosage is based on the research conducted by Raksun et al. (2019), who concluded that a dosage of 20 g per plant resulted in optimal growth and yield for eggplant plants. Liquid organic fertilizer is applied simultaneously with watering at a dosage of 1 ml/L, as indicated on the packaging label. Manual methods are employed to control plant pests, such as removing weeds surrounding the plants and physically removing visible pests. Towards the end of the study, numerous eggplant plants were attacked by pests, prompting the use of decis pesticide through spraying at the recommended dosage of 2 ml/L stated on the packaging. Additionally, eggplant plants require support to ensure upright growth and minimize physical damage caused by wind; stakes are installed 3 weeks after planting. Lastly, pruning eggplant plants involves removing water shoots, leaving the fourth and subsequent shoots intact. This practice aims to maintain a balanced nutritional state, shape the plant canopy, and facilitate maintenance tasks.
The fourth phase in the agricultural process is the harvest. The timing of harvesting mustard and eggplant plants is not synchronized, as it depends on the readiness of the plants for harvesting. Mustard plants cultivated outside the panel are harvested at 35 days after planting (DAP), whereas those grown under the panel are harvested at 45 DAP. Harvesting mustard greens involves uprooting the entire plant, including its roots. On the other hand, eggplant harvesting is carried out in multiple stages, with intervals of 3-7 days between each stage. The harvested eggplant fruits are easily identifiable due to their vibrant and shiny appearance. To harvest the eggplants, the fruit stalk is cut 2 cm above the base using scissors. Eggplant plants cultivated outside the panel are ready for harvesting at 50 DAP, while those grown under the panel are harvested at 65 DAP. The varying sunlight received by the plants at different times during the research (Figure 5) is responsible for the differing growth and yields of mustard greens and eggplants, whether they are located under or outside the panel.
RESULT AND DISCUSSION
The research findings generally indicate that plants grown outside solar panels exhibit better growth and yield outcomes than those grown under the panels, particularly eggplant plants.
This disparity is believed to be linked to the impact of shading on plant growth. It is widely recognized that certain plant species are more tolerant to shade than others. Numerous research studies have been conducted to assess the influence of shading on various plant species across different seasons. The biomass generated through photosynthesis in shade-tolerant and shade-intolerant plants tends to increase as the level of shading decreases (Touil et al., 2021). Gonocruz et al. (2021) determined that the optimal shading level for rice plants in agrivoltaic systems is between 27-39%, while there is limited literature available on shading requirements for vegetable plants like mustard greens and eggplants. The biomass produced by plants is essential for their growth and development processes, ultimately leading to agricultural products such as leaves in mustard plants and fruits in eggplant plants.
Mustard and eggplant plants are cultivated in polybags arranged in rows with specific spacing. By adjusting the plant spacing, the growth of the plants can be enhanced, leading to improved physical quality of mustard greens. According to Nugraha et al. (2021), optimal plant height and number of leaves can be achieved by maintaining a planting distance of 40 x 40 cm for mustard greens under organic cultivation. Similarly, it is necessary to adjust the spacing between polybags for eggplant plants. The regulation of plant spacing is crucial as it affects the plant's ability to absorb sunlight, CO2, water, and nutrients. When each plant receives an adequate amount of these resources, competition among plants can be minimized. Nainggolan et al. (2019) found that the best yields of eggplants were obtained when the plant spacing was set at 60 x 70 cm. In the research conducted, the spacing of 25 x 40 cm for mustard greens and 50 x 80 cm for eggplant was chosen to accommodate the solar panel structures used.
The research utilized a planting medium of soil and manure in a 1:1 ratio. Incorporating manure into the planting medium enhances the nutrient composition, encompassing both macro and micronutrients.
Additionally, manure plays a crucial role in boosting water retention capacity, soil microbial activity, and cation exchange capacity, and in enhancing soil structure (Anjarwati et al., 2017). This choice was made due to the low fertility levels of the soil sourced from the research site, as indicated by the soil sample analysis results (Table 1). Seeding should be conducted before planting to ensure the production of healthy, adaptable seedlings upon transfer to the planting site. The age of the transplanted seedlings plays a crucial role in their adaptability and growth rate. Seedlings that are too young or undersized may adapt poorly to the environment, while those transplanted too late, although larger, may not have sufficient time during the production stage to exploit optimal environmental conditions, leading to suboptimal outcomes (Setyoaji & Setiawan, 2021). Research findings (Ramli et al., 2017) indicated that seedlings aged 1 week post-sowing exhibited the highest plant height, leaf count, and yield. In the present study, mustard seedlings were transferred to polybags when they were approximately 2 weeks old or had 2-3 leaves. Buhaerah & Kuruseng (2016) concluded that the most suitable age for transplanting eggplant plants is 2 weeks after planting. However, in the present study, eggplant seedlings were moved to polybags when they were around 4 weeks old or had 4-5 leaves. The appropriate timing for transplanting is influenced by the plant species, variety, environmental factors, and cultivation methods (Ramli et al., 2017).
Growth and Yield of Mustard Crops
Mustard plants generally thrive well in the agrivoltaic system, although there are slight differences in growth and yield between plants under the panels and those outside. The analysis of the growth and yield data of mustard plants is presented in Table 2 (figures followed by the same letter in a row show no difference based on the independent t-test). Table 2 shows that only the average number of leaves exhibits a significant disparity between mustard greens cultivated under and outside the panel. Mustard greens grown under the panels demonstrate a slightly higher number of leaves than those grown outside the panels. However, despite having varying average values, other parameters such as plant height, fresh shoot weight, and root fresh weight do not exhibit statistically significant differences. This result suggests that mustard plants possess solid adaptability for agrivoltaic systems due to their shade tolerance. Mustard plants can thrive in shaded areas or with 3-5 hours of sunlight daily; additionally, they can withstand temperatures as high as 35°C and as low as -3°C. These research findings differ slightly from a study conducted by Kumpanalaisatit et al. (2022), which indicated that, overall, the growth of pakcoy mustard greens outside the panel (control) was superior to that under the panel in terms of stem diameter, number of leaves, leaf size, and plant fresh weight, although no difference was observed in plant height. Another study, on lettuce plants cultivated under an agrivoltaic system versus a control (non-agrivoltaic) system, revealed an equal number of leaves at 30 DAP, but from 37 to 58 DAP a noticeable difference appeared: the control treatment exhibited more leaves than the agrivoltaic system (Zheng et al., 2021).
The growth of mustard plants is more consistent outside the panels than under the panels. As a result, harvesting is conducted at different times: mustard plants outside the panel are harvested earlier, at 35 DAP, while those under the panel are harvested at 45 DAP. The disparity in mustard plant growth outside and below the panel is illustrated in Figure 6. This discrepancy is believed to be attributable to the amount of sunlight the plants receive. A low-light environment hinders the growth rate of lettuce plants: the number and width of lettuce leaves vary at different stages of plant development. At 30 DAT there is no difference in the number of lettuce leaves, but after 37 DAT the agrivoltaic system shows a lower number of lettuce leaves than the control group (non-agrivoltaic system). Furthermore, the width of new lettuce leaves shows contrasting outcomes after 44 DAP, with the control treatment displaying superior results compared to the agrivoltaic system (Zheng et al., 2021). The results presented in Table 3 indicate no significant difference in the growth and yield of mustard plants between rows located below the panel. Conversely, Table 4 illustrates variations in the growth and yield of mustard plants among rows situated outside the panel, except for the plant height parameter (in Tables 3 and 4, figures followed by the same letter in a row show no difference based on the DMRT test at the 5% level). Notably, mustard plants in row 5 exhibit the most favorable growth and yield outcomes. This observation suggests that row 5 may receive the highest amount of sunlight, as depicted in Figure 7 and Figure 8. These observations support the notion that high-intensity sunlight directly influences plant photosynthesis, particularly in leaf organs. However, it is essential to note that this study's measurement of leaf area was limited to selected mustard leaf samples without distinguishing between those planted under or outside the panel (Table 5). Leaf area data are crucial for assessing the photosynthetic activity of plants. Nevertheless, the measurement of plant photosynthetic activity, which is closely linked to plant physiological processes, was not conducted in this study. Plant photosynthesis is ultimately quantified by measuring the biomass produced by the plant, which reflects the photosynthesis results. According to Weraduwage et al. (2015), leaf area growth plays a significant role in light interception and is vital in determining plant productivity through various physiological processes, particularly leaf photosynthesis.
Consequently, mustard plants can generally be cultivated using an agrivoltaic system, either under or outside the panels, without adversely affecting their growth and yield. However, mustard plants grown outside the panel reach harvest faster than those grown under the panel: mustard greens cultivated outside the panel are typically harvested at 35 DAP, whereas those grown under the panel are harvested at 45 DAP.
Eggplant Growth and Yield
Eggplant plants can generally grow well in the agrivoltaic system, although growth and yield differ between eggplant plants under the panel and those outside the panel. The results of data analysis on the growth and yield of eggplant plants can be seen in Table 6 and Figure 9. These results are consistent with previous agrivoltaic studies on tomato (Al-Agele et al., 2021), rice (Gonocruz et al., 2021), and mustard greens cultivation (Kumpanalaisatit et al., 2022). In the case of tomatoes, plants under the panel displayed the lowest fruit yield compared to those in the open as a control group. Rice plants grown under an agrivoltaic system demonstrated satisfactory growth and yields even at shade levels ranging from 17% to 39%. Mustard plants also exhibited enhanced growth and yield in the control treatment (outside the panel) when contrasted with those under the panel. This observation is further supported by Weselek et al. (2019), who noted that solar panels in agrivoltaic systems not only reduce the duration of sunlight exposure for plants but can also alter microclimatic conditions beneath the panels, such as increased air and soil temperatures, particularly in regions with high solar radiation levels. In certain instances, these conditions may have adverse effects on plants, such as diminishing the quality of potato tubers.
The growth and yield of eggplants between the rows inside and outside the panel also exhibit differences (see Tables 7 and 8; numbers followed by the same letter in a row show no difference based on the DMRT test at the 5% level).
Tables 7 and 8 indicate that the placement of plants inside and outside the panel significantly impacts the growth and yield of eggplant plants, except for the stem diameter parameter. Eggplant plants in row 1 within the panel exhibited the most favorable growth and yield. Conversely, eggplant plants outside the panel in rows 3 and 2 displayed superior growth and yield compared to row 1. This phenomenon is attributed to the positioning of the plants that receive the highest amount of sunlight (Figure 10). These findings align with previous research on tomato cultivation utilizing an agrivoltaic system (Al-Agele et al., 2021). In that study, tomatoes were planted in rows treated as individual plots, and over time tomato plants in each row exhibited varying growth characteristics: those in rows receiving more sunlight demonstrated enhanced growth. However, the orientation of the panels in this study differs from that used in the tomato cultivation study (Al-Agele et al., 2021). Specifically, the height of the panels on the southern side was greater than on the northern side in this study, whereas the opposite was the case in the tomato plant study. Consequently, the southern rows exhibited superior growth and yield in this study, whereas in the tomato study the northern rows showed better growth and yield (Weselek et al., 2019).
At the end of the research, around early October 2022, almost all eggplant plants were infested by the flea beetle pest (Epilachna spp.), also known locally as the outing-outing pest. The pest attacks the leaves and fruits, resulting in eggplant fruits that cannot be harvested (Figure 11c). The pest damages the plants and decreases production by disrupting plant growth and development. The flea beetle is one of the main pests that attack eggplant plants; it damages the plants by eating the epidermis layer on the underside of the leaves, leaving the upper part intact, so that the affected leaves become skeletonized and dry like a net (Figure 11b) (Apriliyanto & Setiawan, 2019). This pest actively attacks in the morning, its activity decreases during the day, it becomes active again in the afternoon, and it reduces activity at night (Arsi et al., 2022). The infestation is suspected to be due to the eggplant plants growing larger without being thinned out, causing the plants to overlap and increasing humidity around the plants, which can attract pests and diseases. The intensity of pest attacks can be influenced by planting distance, plant maintenance, plant age, and environmental conditions such as temperature and humidity (Arsi et al., 2022). Typically, eggplants can be cultivated using the agrivoltaic system, either within or outside the panel structure. However, the growth and outcomes of eggplants vary depending on their placement: eggplants planted outside the panel are ready for harvest earlier, at 50 DAP, whereas those grown within the panel structure commence their harvest at 65 DAP.
CONCLUSION
In an agrivoltaic system, both mustard and eggplant plants can be grown. Mustard plants can be positioned under or outside the panels without impacting their growth and yield. On the other hand, while eggplant plants can be placed inside the panel, their growth and yield may not be as optimal as when they are placed outside the panel.
Figure 2. Research site
Figure 3. Plan of the research site
Figure 4. Research flow diagram
Figure 5. Varying sunlight received by plants at different times during the research (image data collected in early September 2022)
Figure 6. Mustard plants at 30 DAP
Figure 7. Plan of mustard rows outside the panel
Figure 10. Row position of eggplant plants inside and outside the panel
Figure 11. Pest infestation on eggplant plants at the end of the study
Table 1. Analytical results of soil samples used in the study
Table 2. Growth and yield of mustard under and outside panels
Table 3. Growth and yield of mustard under panel
Table 4. Growth and yield of mustard outside panel
Table 5. Sample data of mustard leaf area
Table 6. Growth and yield of eggplant under and outside panels
Table 7. Eggplant growth and yield under panel
Table 8. Eggplant growth and yield outside panel | 2024-06-13T15:24:37.293Z | 2024-05-31T00:00:00.000 | {
"year": 2024,
"sha1": "03ff93ebe2f4002a5f4d3311b76d0858d5a6c3a5",
"oa_license": null,
"oa_url": "https://doi.org/10.36378/juatika.v6i2.3570",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "012980e84863dde364b462bc7c4f73247512c465",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": []
} |
244213022 | pes2o/s2orc | v3-fos-license | Supercapacitor Assisted Hybrid PV System for Efficient Solar Energy Harnessing
In photovoltaic (PV) systems, the maximum power point (MPP) is tracked by matching the load impedance to the internal impedance of the PV array through adjustment of the duty cycle of the associated DC-DC converter. Researchers have tried to improve the efficiency of these converters by improving the performance of the power stage, while limited attention has been given to alternative methods. This article describes a novel supercapacitor (SC) assisted technique to enhance the efficiency of a PV system without modifying the power stage of the charge controller. The proposed system is an SC-battery hybrid PV system in which an SC bank is coupled in series with a PV array to enhance the overall system efficiency. A prototype of the proposed system, with SC-assisted loss circumvention embedded in a DC microgrid application, is detailed in the article and showed that the average efficiency of the PV system is increased by 8%. The article further describes the theoretical and experimental investigation of the impedance matching technique for the proposed PV system, explaining how to adapt typical impedance matching for maximum power transfer.
Introduction
Renewable energy sources are key to meeting the energy demands of the 21st century. Solar photovoltaic (PV) technology is one of the most rapidly growing renewable energy generation methods used to harness the sun's energy. During the last decade, there has been a massive decrease in energy generation costs through PV technologies compared to other energy generation methods [1]. Even though solar PV technologies have many advantages, the efficiency of conventional solar panels is still low compared to other energy harnessing methods. At present, conventional monocrystalline silicon solar panels have a maximum efficiency of around 25% [2], wasting 75% of the solar irradiation incident on the panels. For this reason, it is essential to find an efficient way to utilize the energy generated by solar panels.
Standalone (off-grid) PV systems are used in rural areas where the national electricity grid is not accessible. These systems mainly consist of a PV array, charge controller and battery bank, as shown in Figure 1. Over the years, several types of standalone PV systems were introduced with novel power management and control strategies [3-5], although most of them share the same hardware. The charge controller is the most crucial part of a standalone PV system because it must extract, deliver and store maximum energy from the PV array into the battery bank to optimize the system efficiency. Modern commercial maximum power point tracking (MPPT) charge controllers employ switch-mode DC-DC converters to extract the maximum available power from the PV array while stepping the PV array voltage up or down to the required voltage. The maximum power point (MPP) of the PV array is achieved by matching the instantaneous internal impedance of the PV array (under varying solar irradiance) to the load impedance by varying the duty ratio of the control pulse width modulation (PWM) signal of the DC-DC converter. This is accomplished by locating the MPP of the PV array using various MPPT algorithms. Constant voltage, current sweep, perturb and observe (P&O) and incremental conductance are the most common MPPT algorithms [6-10]. Conventional MPPT charge controllers have an average conversion efficiency of around 90% [11-13]. Most household electrical appliances require AC electricity. In order to drive AC loads, the system must employ a DC-AC inverter to convert the battery bank's DC voltage into the required AC line voltage. Therefore, the overall system's end-to-end (source-to-load) efficiency while driving AC loads will be much lower than that of the charge controller alone. The recent development of supercapacitor (SC) technologies has enabled the use of SCs as energy storage devices for standalone PV systems [14,15]. Compared to batteries, SCs have higher power density and longer cyclic life, making them better suited for self-sustainable and low-maintenance systems [16-21]. When an empty capacitor charges to a voltage V by delivering charge Q, the capacitor stores (1/2)QV of energy while wasting the same amount of energy in the total loop resistance, irrespective of the value of the loop resistance [22]. This is a 50% energy loss compared with an electrochemical battery, which stores an energy amount of QV for the same delivered charge. In addition, the authors of [20] have studied the possibilities for impedance matching of a typical standalone PV system by connecting an SC bank at the output of the DC-DC converter, replacing the battery bank. Results indicate that tracking the MPP using the impedance matching technique is extremely difficult for such a system because of the rapid variation of the impedance across the SC bank with its state of charge (SOC). However, the authors of [23] have proposed a novel method of connecting an SC bank into an RC circuit in which the SC is connected in series with a useful resistive load, leading to a higher end-to-end efficiency of the overall system. Reference [24] shows an application of this theory, which uses a buck converter with a battery bank as the useful load in the capacitor charging loop to enhance system efficiency for a PV system. This article discusses the design and development of a novel hybrid PV system consisting of an SC bank series-coupled with a PV array and an MPPT charge controller.
The charge controller is connected to a battery bank, and the combination acts as the useful resistive load of the capacitor charging loop. Section 2 of this article provides a summary of the SC assisted loss circumvention theory. Section 3 provides the conceptual background for developing the proposed PV system and its operational modes. Section 4 discusses the feasibility of the proposed hybrid PV system's impedance matching technique under different operating modes. Finally, Section 5 presents the prototype implementation of the proposed PV system with SC assisted loss circumvention theory and experimental results that verify the theoretical claims.
Supercapacitor Assisted Loss Circumvention Theory
Supercapacitor assisted loss circumvention theory is an extension of the typical resistor-capacitor (RC) circuit theory. This theory explains how to enhance the efficiency of a system using an SC charging loop consisting of a useful resistive load. This technique uses the advantage of a large time constant when a supercapacitor is connected to an RC circuit. Based on this concept, several SC assisted circuit topologies such as SC assisted light-emitting diodes (SCALED) [20], SC assisted low dropout regulator (SCALDO) [25] and SC assisted surge absorber (SCASA) [26] have already been developed. The fundamental background of this theory is summarized as follows.
As mentioned in Section 1, when the same amount of charge is pumped into a battery and an empty capacitor, the charging process of the capacitor wastes half of the total energy compared to the battery, irrespective of the value of the total loop resistance. This is a fundamental observation of a typical RC circuit. The SC assisted loss circumvention theory presented in the literature [23] suggests that, for the case of charging a non-empty SC bank while connecting a useful load into the loop, the overall system's efficiency can be enhanced. This is because the energy consumed by the useful load does advantageous work before being dissipated. Consider the capacitor charging circuit with useful load illustrated in Figure 2. Using Figure 2, the efficiency, η, of the system can be written as

η = (E_SC + E_RL) / (E_SC + E_RL + E_loss),   (1)

where E_SC, E_RL and E_loss are defined as the energy stored in the SC bank, the energy dissipated through the useful load, R_L, and the energy wasted in the loop resistances, r_loop, respectively. Here, r_loop includes the loop parasitic resistance and the equivalent series resistance (ESR) of the SC bank. In Figure 2, k (0 ≤ k ≤ 1) and a (a ≥ 1) are defined as the pre-charge factor of the SC bank and the power-supply over-voltage factor, respectively. When the normalized SC voltage approaches 1, the SC bank reaches its pre-defined final voltage, denoted V_f. By analyzing the circuit for system efficiency, Equation (1) can be modified as follows [23]:

η = ξ + (1 + k)(1 − ξ) / (2a),   (2)

where ξ = R_L / (R_L + r_loop). For the case of k → 0 while ξ → 0 and a → 1, we obtain the fundamental efficiency η → 1/2 of a capacitor charging circuit, as discussed in Section 1. Equation (2) suggests that if the capacitor is not allowed to discharge fully in a cycle (i.e., when k ≠ 0), the system efficiency can be improved. As seen from Equation (2), η can also be enhanced by selecting proper values for a and ξ; when ξ increases, the efficiency advantage increases significantly. This trend is shown in Figure 3a-c. These trends clearly show that the efficiency of an RC circuit can be enhanced by inserting a useful resistive load and a pre-charged SC into the RC circuit, as in Figure 2. Consequently, this concept can be applied to enhance the system efficiency of a PV system consisting of an SC bank series-coupled with a PV array and an MPPT charge controller. The following section demonstrates the development of the proposed PV system as an extension of the aforementioned loss circumvention theory. Figure 4 depicts a simplified block diagram of the proposed PV system. The proposed PV system has six switches, S1, S2, S3, S4, S5 and S6, which are switched "on" or "off" by the control circuitry of the system by assessing the PV array, SC bank and battery bank voltages. A DC load is introduced to the proposed system, which implements a DC microgrid, as a useful way to utilize the energy stored in the SC bank. It is essential to regularly use the stored energy so that the SC bank can be discharged and made ready to store additional energy again. Depending on the switching algorithm that controls each switch's "on" and "off" states, the system has three operating modes, as described below.
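Before turning to the operating modes, the short sketch below makes the efficiency expression concrete. It implements Equation (2) as reconstructed above (the symbol names k, a, R_L and r_loop follow that reconstruction, not necessarily the original notation) and checks the fundamental 50% limiting case.

```python
# Minimal numerical sketch of Equation (2) as reconstructed above:
# eta = xi + (1 + k)(1 - xi) / (2a), with xi = R_L / (R_L + r_loop).

def sc_charging_efficiency(k: float, a: float, R_L: float, r_loop: float) -> float:
    """Efficiency of charging a pre-charged SC through a useful load R_L."""
    xi = R_L / (R_L + r_loop)              # fraction of dissipation that is useful
    return xi + (1.0 + k) * (1.0 - xi) / (2.0 * a)

# Limiting case: empty SC (k = 0), no useful load (R_L = 0), no over-voltage
# (a = 1) recovers the fundamental 50% RC charging efficiency.
assert abs(sc_charging_efficiency(0.0, 1.0, 0.0, 0.05) - 0.5) < 1e-12

# Trend: a higher pre-charge factor and a larger useful load improve efficiency.
for k in (0.0, 0.5, 0.78):                 # 10.5 V / 13.5 V ~ 0.78 pre-charge
    eta = sc_charging_efficiency(k, a=1.0, R_L=1.0, r_loop=0.05)
    print(f"k = {k:.2f} -> eta = {eta:.3f}")
```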
Mode I: Neutral
Under this mode, switches S1, S3 and S5 are switched "on" while the other switches are kept "off". This implements the proposed SC-battery hybrid charging system while connecting the DC load in parallel with the SC bank. If the current output from the PV array is sufficient, the DC load is driven directly by the PV array, while the excess current flows through the SC bank, allowing the SC bank to charge. The current then flows through the charge controller, charging the battery bank. If there is a deficiency of current to drive the DC load, the SC bank automatically buffers the excess current into the DC load, causing the SC bank to discharge slowly. The block diagram of the system operating under this mode is shown in Figure 5a.
Mode II: SC Charge Recovery
If the voltage of the SC bank drops to its pre-defined minimum voltage because of reduced input power while the system is operating in neutral mode, the DC load is connected to the battery bank by switching S2 and S4 into the "on" state and S1 and S3 into the "off" state, while keeping the other switches in the same state. This disconnects the DC load from the SC bank. Consequently, the SC bank regains its charge, and when it is charged up to its pre-defined maximum voltage, the system is switched back into the neutral operating mode. The block diagram of the system operating under this mode is shown in Figure 5b.
Mode III: SC Bypass
If the SC bank reaches its pre-defined maximum voltage while the system is operating under the neutral mode, switch S6 is turned "on" and S5 is turned "off", while the other switches are kept in the same state. This connects the PV array directly in parallel with the charge controller and the battery bank, similar to typical systems. The DC load is now powered entirely by the SC bank, which causes the SC bank to discharge rapidly. When the SC bank is discharged to its pre-defined minimum voltage, S5 is switched back to the "on" state and S6 is switched "off". This switches the system back into neutral mode. The system block diagram of this mode is shown in Figure 5c. Table 1 summarizes the state of the switches under each operating mode; a sketch of the corresponding switching logic is given below. All of the modes assume that the system is operating in the daytime. The SC bank can be fully discharged at night because the PV array is not generating power; therefore, the system is turned off at night, and all the loads can be connected to the battery bank, similar to typical systems. The overall operation of the designed system continues by switching the system between the different operating modes. In neutral mode, since the DC load is driven directly by the PV array, charging, discharging and conduction losses are minimized, enhancing the system's efficiency. Another advantage of the proposed system is that when there is not enough current from the PV array, the SC bank automatically buffers the DC load, which does not require any additional switching or special attention; therefore, switching losses are minimized. On the other hand, similar to typical systems, a DC-AC inverter can be connected to the battery terminals to drive AC loads. With all these improvements, the proposed PV system achieves very high system efficiency.
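The sketch below restates this mode-switching behavior as a small threshold-based state machine. The switch labels S1-S6 follow the reconstruction used in the text, and the 10.5 V / 13.5 V thresholds are the prototype values quoted in Section 5; treat it as an illustration of the logic in Table 1 rather than the actual controller firmware.

```python
# Illustrative state machine for the three operating modes described above.
# Thresholds are the prototype's (Section 5); switch names S1..S6 are the
# reconstructed labels used in the text.

NEUTRAL, SC_CHARGE_RECOVERY, SC_BYPASS = "Mode I", "Mode II", "Mode III"

V_SC_MIN, V_SC_MAX = 10.5, 13.5   # pre-defined SC bank voltage limits (V)

# Switch states per mode (True = "on", False = "off"), per Table 1.
SWITCH_STATES = {
    NEUTRAL:            dict(S1=True,  S2=False, S3=True,  S4=False, S5=True,  S6=False),
    SC_CHARGE_RECOVERY: dict(S1=False, S2=True,  S3=False, S4=True,  S5=True,  S6=False),
    SC_BYPASS:          dict(S1=True,  S2=False, S3=True,  S4=False, S5=False, S6=True),
}

def next_mode(mode: str, v_sc: float) -> str:
    """Return the next operating mode given the SC bank voltage v_sc."""
    if mode == NEUTRAL and v_sc <= V_SC_MIN:
        return SC_CHARGE_RECOVERY      # SC drained: recover its charge
    if mode == NEUTRAL and v_sc >= V_SC_MAX:
        return SC_BYPASS               # SC full: bypass it and drain into DC load
    if mode == SC_CHARGE_RECOVERY and v_sc >= V_SC_MAX:
        return NEUTRAL                 # SC recharged: resume neutral operation
    if mode == SC_BYPASS and v_sc <= V_SC_MIN:
        return NEUTRAL                 # SC drained again: resume neutral operation
    return mode                        # otherwise stay in the current mode
```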
Tracking the MPP of the PV array is also an essential feature of a PV system. Even if the efficiency of the system is high, a PV system must be able to track the MPP; otherwise, the maximum available power generated by the PV array will not be utilized. The following section theoretically investigates the feasibility of adapting the typical impedance matching technique for maximum power transfer to the proposed system. To the best of the authors' knowledge, no such study exists in the literature for the proposed type of PV system.
Theoretical Investigation of MPPT for the Proposed System
In typical PV systems, the MPP of the PV array is achieved by matching the instantaneous internal impedance of the PV array (under varying solar irradiance) to the load impedance by varying the duty ratio of the DC-DC converter's PWM control signal. This section investigates the feasibility of using the impedance matching technique for maximum power transfer of the proposed PV system operating under each mode, starting from the theoretical explanation of impedance matching of a typical PV system. Figure 6 illustrates the current-voltage (I-V) and power-voltage (P-V) characteristics of a typical PV cell or array.
MPPT Using Switch Mode DC-DC Converter
V_OC, I_SC, V_MPP and I_MPP in Figure 6 are defined as the open-circuit voltage, the short-circuit current, the voltage at the MPP and the current at the MPP, respectively. The MPP is the optimal operating point at which the PV array generates maximum electrical power. Under any operating condition, a PV system must operate at this point to extract the maximum available power from the PV array.
As stated by the maximum power transfer theorem in electrical circuits, to transfer maximum power from a source with finite internal resistance to an external load, the resistance of the load must be equal to the internal resistance of the source as viewed from its output terminals. Therefore, to transfer the maximum amount of power generated by a PV array to an external load, the load must be matched to the instantaneous internal resistance of the PV array; hence, the resistance of the external load must be able to vary continuously. In typical MPPT solar charge controllers, this is accomplished by continuously adjusting the duty ratio, D, of the PWM control signal of the built-in DC-DC converter. There exist many types of MPPT algorithms [6-10], and they are all based on the impedance matching technique mentioned above. Switch-mode buck or buck-boost converters are often employed in typical MPPT charge controllers. Consider the case of using a buck converter for the impedance matching operation of a standalone PV system. Figure 7 illustrates the block diagram of a DC-DC converter, including a battery bank and a load connected to its output. I_in, I_out, V_in, V_out and V_B depicted in Figure 7 are defined as the input current, output current, input voltage, output voltage and voltage of the battery bank, respectively. Considering Figure 7, the Thevenin resistance at the converter output, R_Th, can be written as

R_Th = (r_B R_L) / (r_B + R_L),   (3)

where r_B is the internal resistance of the battery bank and R_L is the load resistance; R_L can consist of several loads, including a DC-AC inverter with an AC load. When a DC-DC buck converter is working under continuous conduction mode, it is equivalent to a DC transformer whose turns ratio can be continuously controlled electronically in the range 0-1 by controlling the duty ratio, D, of the control signal [27]. Therefore, the relationships between V_out and V_in, and between I_in and I_out, can be written as [28]

V_out = D V_in,   (4)
I_in = D I_out.   (5)
Under the assumption that the converter is lossless and always operates in continuous conduction mode, the input resistance R_in-buck, as seen from the input terminals of the buck converter, can be deduced as [28]

R_in-buck = R_Th / D².   (6)

Equation (6) clearly shows that the input resistance of the buck converter is a function of the duty ratio of the PWM control signal. Due to this relationship, when D → 1, R_in-buck approaches R_Th, and as D decreases toward 0, R_in-buck grows without bound; the duty ratio can therefore be adjusted so that the resistance seen by the PV array matches its internal resistance at the MPP.
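As a quick numerical sketch of Equation (6), the snippet below solves for the duty ratio that matches the converter input to the PV array's output resistance at the MPP. The 32 Ω MPP resistance is the prototype value quoted in Section 5; the Thevenin resistance is an illustrative assumption.

```python
import math

# Impedance matching via Equation (6): R_in_buck = R_Th / D**2 for a lossless
# buck converter in continuous conduction mode.

def buck_input_resistance(r_th: float, duty: float) -> float:
    """Resistance seen at the buck converter input for duty ratio D."""
    if not 0.0 < duty <= 1.0:
        raise ValueError("duty ratio must be in (0, 1]")
    return r_th / duty ** 2

def duty_for_mpp(r_th: float, r_pv_mpp: float) -> float:
    """Duty ratio that matches the converter input to the PV array at the MPP."""
    return math.sqrt(r_th / r_pv_mpp)

R_TH = 2.0       # assumed Thevenin resistance of battery bank + load (ohm)
R_PV_MPP = 32.0  # PV array output resistance at the MPP (prototype value, ohm)

d = duty_for_mpp(R_TH, R_PV_MPP)
print(f"D = {d:.2f} -> R_in = {buck_input_resistance(R_TH, d):.1f} ohm")
# -> D = 0.25, R_in = 32.0 ohm, i.e. matched to the PV array at the MPP.
```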
MPPT of the Proposed System
As discussed above, a battery bank connected at the output of a DC-DC converter allows simple impedance matching based on the effective resistive load seen by the solar panel. The same technique can be adapted for the SC bypass operating mode of the proposed PV system because, under this mode, the PV array is directly connected to the charge controller, bypassing the SC bank, while the SC bank solely powers the DC load. A case-by-case investigation of impedance matching for the remaining two operating modes of the proposed system is given below.
Neutral Mode of the System
Consider Figure 9a, which illustrates the block diagram of the proposed system operating under neutral mode where the SC bank is connected in series between the DC-DC converter of the charge controller and output of PV array while DC load is connected in parallel with the SC bank.
R_DC-load is the resistance of the DC load. V_SC, R_ESR, C_SC and R_in depicted in Figure 9 are defined as the (time-dependent) voltage of the SC bank, the ESR of the SC bank, the capacitance of the SC bank and the input resistance of the system as seen from its input terminals, respectively. Other symbols have the same meaning as in Section 4.1.
Using Ohm's law, the voltage across the DC load (or SC bank), V_DC-load, can be written down (Equation (7)). Using Equation (7) and Ohm's law, an expression for the input current I_in can be deduced (Equation (8)), from which the final expression for R_in follows (Equation (9)). When considering Equation (9), it is clear that for the neutral mode of the proposed system, R_in varies from the parallel combination of R_ESR and R_DC-load plus R_in-buck, i.e. R_ESR R_DC-load/(R_ESR + R_DC-load) + R_in-buck, to (R_DC-load + R_in-buck) as t goes from 0 → ∞. Figure 10a,b show the behavior of the input resistance of the system when it is operated under neutral mode.
SC Charge Recovery Mode of the System
Consider Figure 9b, which illustrates the block diagram of the proposed system operating under SC charge recovery mode, where the SC bank is connected in series between the DC-DC converter of the charge controller and the output of the PV array. If the initial voltage of the capacitor is zero, the input current, I_in, of the circuit shown in Figure 9b can be written as

I_in(t) = [V_in / (R_ESR + R_in-buck)] exp(−t / ((R_ESR + R_in-buck) C_SC)).   (10)

Using Equation (10), R_in can be deduced as

R_in(t) = (R_ESR + R_in-buck) exp(t / ((R_ESR + R_in-buck) C_SC)).   (11)

As seen from Equation (11), R_in is a function of both t (time) and D, because R_in-buck is a function of D. This implies that, in contrast with typical systems, the total resistance seen from the input terminals of the system depends not only on the duty ratio of the PWM control signal but also on the SOC of the SC bank. When t → 0 while D → 1, R_in → (R_ESR + R_Th), which is the minimum value that R_in can reach. In addition, when t → ∞, R_in → ∞ regardless of the value of D. Compared to the typical case discussed in Section 4.1, the series connection of the SC bank with the PV array and the DC-DC converter of the proposed system makes the input resistance of the system dependent on the SOC of the SC bank; for the neutral mode, the input resistance is a function of the resistance of the DC load as well. Therefore, existing MPPT schemes have to be reconfigured to adapt them to the proposed SC-based PV system. As seen from Figure 10c,d, because the exponential factor of Equation (11) depends on the elapsed time, the input resistance of the overall system increases rapidly with time. Therefore, to match the input resistance of the system to the output resistance of the PV array at the MPP by varying the duty ratio, the input resistance of the system must remain finite and below that of the PV array at the MPP at the maximum value of D, so that D can then be decreased by the MPPT algorithm to match the input resistance of the system to the output resistance of the PV array at the MPP. Consequently, maximum power point tracking using the impedance matching technique will be impossible for the proposed system if the capacitor is allowed to charge fully during a cycle, because the value of the input resistance of the system will then be very large regardless of the value of D. However, when the capacitor is not allowed to charge fully during a cycle, the MPP can be tracked because R_in retains a finite value less than the output resistance of the PV array, with its minimum over D attained, at any given t, when D = 1.
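The snippet below evaluates the reconstructed Equation (11) to show how quickly the apparent input resistance outgrows the matchable range. The ESR, capacitance and Thevenin resistance are illustrative assumptions; only the 32 Ω PV resistance at the MPP is a prototype value quoted in Section 5.

```python
import math

# Input resistance in SC charge recovery mode, per the reconstructed
# Equation (11):
# R_in(t, D) = (R_ESR + R_Th/D**2) * exp(t / ((R_ESR + R_Th/D**2) * C_SC)).

R_ESR = 0.05     # assumed ESR of the SC bank (ohm)
C_SC = 58.0      # assumed capacitance of the SC bank (F)
R_TH = 2.0       # assumed Thevenin resistance at the converter output (ohm)
R_PV_MPP = 32.0  # PV array output resistance at the MPP (prototype value, ohm)

def r_in(t: float, duty: float) -> float:
    """Apparent input resistance of the system at time t for duty ratio D."""
    r_series = R_ESR + R_TH / duty ** 2      # R_ESR + R_in-buck
    return r_series * math.exp(t / (r_series * C_SC))

# The MPP is only matchable while the minimum achievable R_in (at D = 1)
# is still below the PV array's MPP resistance.
for t in (0.0, 150.0, 400.0):
    r_min = r_in(t, duty=1.0)
    print(f"t = {t:5.0f} s: min R_in = {r_min:6.2f} ohm, "
          f"MPP trackable: {r_min <= R_PV_MPP}")
```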
When considering the variation of R_in under neutral mode, as depicted in Figure 10a,b, R_in varies rapidly until the resistance across the SC bank reaches the value of R_DC-load. After that, R_in has the same variation with respect to D as in a typical system, but with a minimum value set by R_DC-load. Therefore, under this mode, typical impedance matching can be implemented to track the MPP of the PV array if the system is carefully designed. By considering the above-mentioned conditions, it is possible to match R_in to the source resistance by varying D, as in a typical system, for each operating mode of the proposed system. Therefore, it can be concluded that typical MPPT can be adapted to the proposed system.
Development and Performance Analysis of a Prototype of the Proposed System
By utilizing the SC assisted loss circumvention theory and the investigation of impedance matching of the proposed system, a prototype system has been developed and analyzed. Table 2 shows the component values used for implementing the prototype system. The following two sections discuss the simulation of impedance matching and performance analysis of the prototype.
Simulation of Impedance Matching of the Prototype
The voltage of the energy storage device of a PV system must be maintained at a nominal value. For example, typical systems use batteries with a nominal voltage of 12 V or higher. When connecting an SC bank into a PV system, the voltage of the SC bank must likewise be maintained at a nominal voltage. The developed prototype is a 12 V system; therefore, for a given charge and discharge cycle of the SC bank, the SC bank is allowed to reach 13.5 V as the upper threshold and 10.5 V as the lower threshold, keeping the SC bank's nominal voltage at 12 V. This helps the system operate smoothly. Since the SC bank is not charged to its maximum capacity, this should also help to track the MPP of the PV array of the proposed system.
Commercial SCs have lower ESR (0.1 to 100 mΩ) than electrolytic capacitors (30 to 1000 mΩ), making SCs less dissipative; therefore, the dissipative voltage across a series-connected SC can be negligibly small. The potential difference, ∆V, across an SC bank can be large and varies only slowly, because ∆V is inversely proportional to the capacitance. For the case of constant-current (I) charging, ∆V can be deduced as follows [22]:

∆V = I ∆t / C,   (12)

where ∆t is the time elapsed to increase the potential difference across the capacitor by ∆V volts and C is the capacitance of the capacitor. Using Equation (12) and the theoretical investigation discussed in Section 4, the variation of R_in with respect to the SOC of the SC bank and the duty ratio was simulated for the prototype system in the neutral and SC charge recovery modes, and the results are depicted in Figure 11. In this simulation, the initial voltage of the SC bank was set to 10.5 V, and the simulation was run until it reached 13.5 V. As seen from Figure 11a,b, it is clear that when the SOC of the SC bank is maintained between the prescribed levels, the minimum value of the input resistance of the system is less than, or very close to, the output resistance of the PV array at the MPP (32 Ω) at different duty ratios. Therefore, MPPT using the impedance matching technique is possible for the prototype system working under SC charge recovery mode, and it can be carried out by varying D, similar to typical systems, throughout the maximum allowable charge and discharge voltage span of the SC bank. In addition, as seen from Figure 11c,d, for the prototype system working under the neutral mode, the minimum value of the input resistance is 20.35 Ω, which is less than 32 Ω; therefore, the MPP of the PV array can be tracked by varying D, similar to typical systems. The above mathematical simulations were conducted by assuming that the PV array always operates at its MPP. However, the output power of the PV array decreases with reduced solar irradiance, and, according to typical PV array characteristics, the output impedance of the PV array at the MPP increases with diminished solar irradiance. Therefore, D can be further varied to match the impedance of the proposed system to that of the PV array to maximize the power transfer, as in typical systems. Figure 12 shows the detailed block diagram and the experimental setup containing a prototype of the proposed system. All the switches in the switching network were implemented using power MOSFETs. To track the MPP of the PV array, the typical incremental conductance MPPT algorithm was implemented on the built-in microcontroller of the control circuitry. The performance of the system was analyzed for all three operating modes, as follows. During the experiments, all observations were taken while the battery was operated under the bulk charging mode.
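As a quick numeric illustration of Equation (12) applied to the 10.5-13.5 V window above, the snippet below estimates how long a constant charging current takes to traverse the allowed voltage swing. Both the capacitance and the current are illustrative assumptions, not values taken from the article.

```python
# Equation (12), dV = I * dt / C, rearranged for the charging time of the
# SC bank across the prototype's allowed voltage window.

C_SC = 58.0        # assumed SC bank capacitance (F)
I_CHG = 5.0        # assumed constant charging current (A)
DV = 13.5 - 10.5   # allowed voltage swing of the SC bank (V)

dt = C_SC * DV / I_CHG
print(f"Time to traverse the {DV:.1f} V window: {dt:.0f} s")
# -> ~35 s under these assumptions; the large capacitance keeps V_SC (and
#    hence the system input resistance) changing only slowly, which is what
#    makes impedance matching tractable between the thresholds.
```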
Neutral Mode
When the system is operating under this mode, DC load is driven directly by the PV array when there is enough output current from it, and if not, the excess current required by the load is provided by the SC bank. The performance of the system under these two conditions is analyzed separately, and presented as follows.
1. DC load driven directly by the PV array. Figure 13 illustrates the variation, over the course of the experiment, of the input power from the PV array; the power delivered to the battery bank, the SC bank and the DC load; the voltage of the SC bank (V_SC); and the input energy, used energy and system efficiency. It can be seen that the DC load was driven directly by the PV array, since there was enough current from the PV array, and the excess current flowed through the SC bank, enabling it to charge slowly. As seen from Figure 13a, the system extracted power from the PV array close to its MPP at reduced solar irradiance incident on the PV array. Figure 13b clearly shows that the system achieved an average efficiency of around 96%.
DC load is driven by the PV array while the SC bank provides excess current
In this case, due to the variation of the solar irradiance, the output current from the PV array is not sufficient to directly drive the DC load. However, since the load is connected in parallel with the SC bank, the excess current required by the load is provided by the SC bank when necessary, causing the SC bank to discharge slowly. As seen from Figure 14a, the SC bank increased its charge until around 175 s into the experiment. After that, the voltage of the SC bank decreased as the SC bank provided the excess current required to drive the load. It is therefore clear that the system was able to work smoothly under varying solar irradiance while tracking the MPP of the PV array. According to Figure 14b, the system achieved an average efficiency of around 92%.
SC Charge Recovery Mode
The system is switched into this mode when the SC bank is discharged to 10.5 V while the system is operating under neutral mode. Since the SC bank is discharged, the DC load is connected in parallel with the battery bank in order to keep it driven. As observed from Figure 15a, this allowed the SC bank to charge rapidly. A portion of the power delivered to the battery bank is utilized to drive the DC load, and if it is not enough, the battery bank acts as a buffer by supplying the necessary power to the load. According to Figure 15b, the system was able to achieve 98% average system efficiency during the experiment. This is around an 8% enhancement in efficiency compared to commercially available typical PV systems [11-13].
SC Bypass Mode
Under this operating mode, the DC load is directly powered by the SC bank, causing it to discharge rapidly, as shown in Figure 16a. The PV array is directly connected to the charge controller and battery bank, implementing a typical system. Therefore, when the system operates under this mode, the maximum available power of the PV array is directly transferred to the battery bank through the charge controller. As seen from Figure 16b, the system achieved around 85% average efficiency. On the other hand, Figure 16c indicates that when the DC load is driven directly by the SC bank, the average efficiency of transferring the energy from the SC bank to the DC load is around 90%. Due to these lower efficiencies, the time that the system spends in this mode during the day must be minimized as far as possible through careful system design; otherwise, the proposed system will not produce the anticipated outcomes.
Conclusions
This article discusses a novel method that can be used to enhance the efficiency of a PV system using a supercapacitor bank as an auxiliary energy storage device. Theoretical and experimental validation of the MPPT and efficiency enhancement techniques were carried out for the proposed system. From the results, it can be concluded that the efficiency of a PV system can be enhanced by the proposed technique while tracking the MPP of the PV array. A prototype of the proposed system was developed. It was shown experimentally that, when operating in neutral mode, the system achieved 96% and 92% efficiency for the high and low power output conditions of the PV array, respectively. When the system operated under the SC charge recovery mode, it achieved 98% average efficiency, an 8% increase compared to commercially available PV systems [11-13]. However, when the system was switched to the SC bypass mode, it achieved only 85% average efficiency; this happens because the system bypasses the SC bank and connects the PV array directly to the charge controller, implementing a typical standalone PV system. Finally, we envision that the results of this study will become available for commercial applications in the PV market very soon.
Author Contributions: K.P. proposed the concept, designed the prototype model of the system, performed experimental work and wrote the manuscript. A.R., S.K. and N.K. supervised the project, participated in organizing the paper content, and editing the manuscript. All authors have read and agreed to the published version of the manuscript. | 2021-10-18T17:06:56.527Z | 2021-10-04T00:00:00.000 | {
"year": 2021,
"sha1": "477c37efec5cc8802da7f783d938d84f9c5b9800",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/10/19/2422/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "e8f73fa7122c00c6016f6887f581ff41d18b7538",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
244647210 | pes2o/s2orc | v3-fos-license | Cutting corners: donning under duress–a VR teaching tool
Cutting corners: donning under duress–a VR teaching tool
Enfiler son EPI en vitesse et brûler les étapes : la solution dans un outil pédagogique de RV
Shikha Bansal,1 Julian Wiegelmann,2 Clyde Matava,3 Catherine Bereznicki,4 Fahad Alam2
1Department of Anesthesia, Northern Ontario School of Medicine and Thunder Bay Regional Health Sciences Centre, Ontario, Canada; 2Department of Anesthesia, Sunnybrook Health Sciences Centre, Ontario, Canada; 3Department of Anesthesia and Pain Medicine, The Hospital for Sick Children, Ontario, Canada; 4Department of Family Medicine, University of Calgary, Alberta, Canada.
Correspondence to: Fahad Alam, Department of Anesthesia, 2075 Bayview Ave Toronto, ON, Canada M4N 3M5; email: Fahad.Alam@sunnybrook.ca
Published ahead of issue: November 23, 2021; CMEJ 2021. Available at http://www.cmej.ca
© 2021 Bansal, Wiegelmann, Matava, Bereznicki, Alam; licensee Synergies Partners. https://doi.org/10.36834/cmej.72143. This is an Open Journal Systems article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by-nc-nd/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is cited.
Introduction
For highly infectious diseases, such as severe acute respiratory syndrome (SARS) or the novel coronavirus (COVID-19), healthcare workers (HCW) are at a high risk of personal exposure. Healthcare workers can reduce this exposure by taking airborne and contact precautions using personal protective equipment (PPE).1 However, the COVID-19 pandemic has resulted in intense psychological stress in HCW,2 and the time required to don PPE (typically 3-7 minutes)3 can lead to an inner conflict and cognitive strain when there is an emergent need for patient treatment (e.g., cardiopulmonary resuscitation). HCW may fail to don PPE properly, resulting in exposure to the virus by 'cutting corners' or making mistakes in an attempt to act quickly.4 Thus, training and practice in donning and doffing PPE as per individual hospital protocol are of paramount importance to protect HCW.
Virtual reality (VR) may offer a potential solution to this problem for several reasons. We have previously used immersive VR-360 videos to reduce patient perioperative anxiety by placing the viewer (in the first person) in a virtual environment, as a form of exposure therapy, where one can emotionally experience their surroundings in a safe manner.5 A study by Gutiérrez et al. has shown that medical students have higher knowledge gain with immersive environments using head-mounted displays (HMDs) than by screen-based learning.6 Haerling et al. have demonstrated that learning transfer is similar in nurses receiving virtual or physical simulation, but the simulation was significantly cheaper in the VR group.7
Objective
Using VR-360 videos as a form of educational exposure therapy for PPE donning in both high- and low-stress environments.
Methods
Given physical distancing, resource, and time constraints in our context, we had limited access to manikin-based simulation. We thus chose to use VR-based 360 videos to demonstrate our institution's PPE donning protocol. We created two immersive VR-360 films of a HCW donning PPE 1) under normal circumstances and 2) while in the delivery suite for a critically ill (simulated) newborn requiring resuscitation. Cognitive stress in the latter video was simulated using loud alarms (via in situ simulation software)8 and emotional team members yelling for help in the background. This allowed viewers to experience the stress of such a scenario without sacrificing personal safety during donning. The participants then completed an adapted post-video Likert-scale-based questionnaire. It was composed of questions related to the subjective 'realism' and usefulness of these videos, as well as the equipment, its side effects, and satisfaction with the overall experience (Appendix A).9,10 As this was created as a tool to educate on the standard use of PPE donning, formal ethics approval was not required. Thus far, ten anesthesiologists have viewed these videos using the Oculus Go headset.
Preliminary feedback
Our preliminary feedback (Figure 1) has been that the videos seemed realistic, enjoyable, and practical, and provoked a self-reported stressful response (when intended). Half of the participants concurred that they gained knowledge which they could extend to clinical practice, whereas the other half were undecided. The majority agreed that the entire system was easy to use without side effects and were satisfied with the experience. This was the first phase of a larger project in which we plan to compare VR videos to manikin-based simulation (the current 'gold standard' for education).
Summary
This initiative was created as a response to the pandemic to ensure that HCW adhered to proper PPE donning procedures in both high- and low-stress environments. Our preliminary evidence suggests that VR videos serving as educational exposure therapy for HCW may be a cost-effective, globally accessible and sustainable resource. We plan to expand the content of these videos to increase safety and decrease the emotional strain on our HCW in a variety of settings during this pandemic, while also conserving valuable resources.
Figure 1. Preliminary participant feedback regarding the VR educational video. No participants replied "Strongly Disagree" to any of the questions. | 2021-11-26T16:40:00.213Z | 2021-11-23T00:00:00.000 | {
"year": 2021,
"sha1": "dd36d48358676f015412db8cf92c09268529dfdb",
"oa_license": "CCBYNCND",
"oa_url": "https://journalhosting.ucalgary.ca/index.php/cmej/article/download/72143/55304",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6b014945e1950c56bf21b35980a3149a7daacda0",
"s2fieldsofstudy": [
"Education",
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119263226 | pes2o/s2orc | v3-fos-license | Search for third generation scalar leptoquarks in pp collisions at √s = 7 TeV with the ATLAS detector
A search for pair-produced third generation scalar leptoquarks is presented, using proton-proton collisions at √s = 7 TeV at the LHC. The data were recorded with the ATLAS detector and correspond to an integrated luminosity of 4.7 fb⁻¹. Each leptoquark is assumed to decay to a tau lepton and a b-quark with a branching fraction equal to 100%. No statistically significant excess above the Standard Model expectation is observed. Third generation leptoquarks are therefore excluded at 95% confidence level for masses less than 534 GeV.
Introduction
Leptoquarks (LQ) are colour-triplet bosons that carry both lepton and baryon numbers and have a fractional electric charge. They are predicted by many extensions of the Standard Model (SM) [1-7] and may provide unification between the quark and lepton sectors. In accordance with experimental results on lepton-number violation, flavour-changing neutral currents and proton decay, it is assumed that individual leptoquarks do not couple to particles from different generations [8, 9], thus leading to three generations of leptoquarks. The most recent limit on pair-produced third generation scalar leptoquarks (LQ3) decaying to τbτb comes from the CMS experiment, in which scalar leptoquarks with masses below 525 GeV are excluded at the 95% confidence level (CL) [10]. Limits have also been set by the D0 [11] and CDF [12] experiments at the Tevatron, which have excluded third generation scalar leptoquarks with masses up to 210 GeV and 153 GeV, respectively. First and second generation scalar leptoquarks have been excluded up to 830 GeV and 840 GeV, respectively [13-15]. The results presented here are based on a total integrated luminosity of 4.7 fb⁻¹ of proton-proton collision data at a centre-of-mass energy of √s = 7 TeV, collected by the ATLAS detector at the LHC during 2011. The final states investigated arise from the decay of both leptoquarks into a tau lepton and a b-quark, leading to a τbτb final state. The branching fraction of LQ3 decays to τb is assumed to be equal to 100%.
Tau leptons can decay either leptonically (to an electron or muon plus two neutrinos) or hadronically (typically to one or three charged hadrons plus one neutrino, and zero to four neutral hadrons). Since the final state includes two taus, this leads to three possible sub-categories of events: di-lepton, lepton-hadron and hadron-hadron. Of these, the lepton-hadron category has the largest branching fraction (45.6%), and the presence of one charged light lepton (ℓ = e, µ) in the event is useful for event triggering and provides better rejection of the multi-jets background. Only the lepton-hadron decay mode is considered in this analysis, resulting in either an eτ_had-vis bb + 3ν or a µτ_had-vis bb + 3ν final state, where τ_had-vis refers to the visible (non-neutrino) components of the hadronic tau decay.
Selected events are therefore required to have one electron or muon with large transverse momentum (p_T), one high-p_T hadronically decaying tau, missing transverse energy from the tau decays, and two high-p_T jets. Searches are performed independently for the electron and muon channels. The results are subsequently combined and interpreted as lower bounds on the LQ3 mass.
The ATLAS detector
The ATLAS detector [16] is a multi-purpose detector with a forward-backward symmetric cylindrical geometry and nearly 4π coverage in solid angle. ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upward. Cylindrical coordinates (r, φ) are used in the transverse (x, y) plane, with φ being the azimuthal angle around the beam pipe. The pseudorapidity η is defined in terms of the polar angle θ by η = −ln tan(θ/2). The three major sub-components of ATLAS are the tracking detector, the calorimeter and the muon spectrometer.
Charged particle tracks and vertices are reconstructed using silicon pixel and microstrip detectors covering the range |η| < 2.5, and by a straw tube tracker that covers |η| < 2.0.Electron identification capability is added by employing Xenon gas to detect transition radiation photons created in a radiator between the straws.The inner tracking system is immersed in a homogeneous 2 T magnetic field provided by a solenoid.
Electron, jet and tau energies are measured in the calorimeter.The ATLAS calorimeter system covers a pseudorapidity range of |η| < 4.9.Within the region |η| < 3.2, electromagnetic calorimetry is provided by barrel and end-cap high-granularity lead/liquid argon (LAr) electromagnetic (EM) calorimeters, with an additional thin LAr presampler covering |η| < 1.8, to correct for energy loss in material upstream of the calorimeters.Hadronic calorimetry is provided by a steel/scintillator-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadronic end-cap calorimeters.The forward region (3.1 < |η| < 4.9) is instrumented by a LAr calorimeter with copper (EM) and tungsten (hadronic) absorbers.
Surrounding the calorimeters is a muon spectrometer with air-core toroids, a system of precision tracking chambers providing coverage over |η| < 2.7, and detectors with triggering capabilities over |η| < 2.4, providing precise muon identification and momentum measurements.
A three-level event-triggering system selects inclusive electron and muon candidates to be recorded for offline analysis.
Monte Carlo simulations
Simulated signal events are produced using the PYTHIA 6.425 [17] event generator with underlying-event Tune D6 [18] and CTEQ6L1 [19] parton distribution functions (PDFs). The coupling λ_LQ→ℓq, which determines the LQ lifetime and width [20], is set to 0.01 × 4πα, where α is the fine-structure constant. This value is widely used in leptoquark searches and gives the leptoquark a full width of less than 1 MeV and a decay length of less than 1 mm. The signal process is normalised using next-to-leading-order (NLO) cross-sections for scalar leptoquark pair production [21]. The signal production cross-section for a leptoquark mass of 500 GeV is 46.2 fb.
Background processes considered in this analysis are the production of W+jets, Z/γ*+jets, tt̄, single top quarks, boson pairs, and multi-jets. The W- and Z-boson processes are simulated using the ALPGEN 2.13 generator [22] with the technique described in ref. [23] to match the hard process (calculated with a leading-order (LO) matrix element for up to five partons) to the parton shower of HERWIG 6.510 [24], with JIMMY 4.31 [25] used to model the underlying event. Wherever available, dedicated ALPGEN 2.13 samples with massive charm and bottom partons were used for the W+jets and Z/γ*+jets processes. All samples listed above are generated using the CTEQ6L1 PDFs. Di-boson processes (WW, WZ and ZZ) are modelled with HERWIG 6.510 using the MRST [26] LO PDFs. Samples of top-quark pair production and associated production of single top quark (Wt) events are produced using the MC@NLO 4.01 [27-30] generator interfaced with HERWIG 6.510 for parton showering, and JIMMY 4.31 to model the underlying event; the CT10 [31] PDFs are used. Single-top s- and t-channel processes are modelled with AcerMC 3.8 [32] using the MRST LO PDFs. The top-quark mass is taken as 172.5 GeV. In all simulated samples, TAUOLA [33] and PHOTOS [34] are used to model τ-lepton decays and additional photon radiation from charged leptons, respectively. The W+jets and Z+jets samples are normalised to the inclusive NNLO cross-sections in the proportions predicted by NLO calculations for exclusive n-parton production. The most precise available calculation (nearly NNLO) is used to normalise tt̄ production [35]. All other sources of background are normalised using cross-sections calculated at NLO. Signal and background events are processed through a detailed detector simulation [36] based on GEANT4 [37]. The data used in this paper are affected by multiple pp collisions occurring in the same or neighbouring bunch crossings (pile-up) and have an average of ten interactions per bunch crossing. The effects of pile-up are taken into account by overlaying simulated minimum-bias events onto the simulated hard-scattering events. The Monte Carlo (MC) samples are then re-weighted such that the average number of pile-up interactions matches that seen in the data.
Physics object identification
Collision events are required to have at least one reconstructed vertex with at least four associated tracks with p_T > 0.4 GeV. In events where more than one vertex is found, the primary vertex is defined as the one with the highest Σ p_T² of the associated tracks. For the final state of interest described below, this choice of primary vertex is correct in 98.9% (98.4%) of the cases in the electron (muon) channel for the luminosity range considered here.
Electron candidates are reconstructed from clusters of cells in the electromagnetic calorimeter and from tracks in the inner detector. They are required to pass a set of electron identification cuts, based on information about the transverse shower shape, the longitudinal leakage into the hadronic calorimeter, transition radiation, and the requirement that a good-quality track with a hit in the innermost pixel layer points to the calorimeter cluster [38]. A tight working point corresponding to a selection efficiency of approximately 80% for true electrons in simulation is chosen. Electrons are required to have p_T > 20 GeV and |η| < 2.47, excluding the transition region between the barrel and the end-cap calorimeters, i.e. 1.37 < |η| < 1.52. Isolation requirements are placed on the electron candidates by demanding that the calorimeter transverse energy in a cone of radius ∆R = 0.2 around the electron (not including the electron cluster) be less than 20% of the electron p_T, where ∆R = √((∆η)² + (∆φ)²). In addition, track isolation is imposed by requiring that the p_T sum of additional tracks (not including the electron track) in a cone of radius ∆R = 0.2 be less than 20% of the electron track p_T.
Muon tracks are reconstructed independently in the inner detector and in the muon spectrometer. Tracks are required to have a minimum number of hits in each, and must be compatible in terms of geometrical and momentum matching. The information from both systems is then used in a combined fit to refine the measurement of the momentum of each muon. Muon candidates are required to have p_T > 20 GeV and |η| < 2.5. The average muon reconstruction efficiency is approximately 90%, except for small regions in pseudorapidity where it drops to 80% [39]. Isolation requirements are imposed by demanding that the transverse energy (E_T) deposited in the calorimeter in a cone of radius ∆R = 0.2 around the muon (not including the cells crossed by the muon) be less than 20% of the muon p_T. Furthermore, track isolation is imposed by requiring that the p_T sum of additional tracks (not including the muon track) in a cone of radius ∆R = 0.2 be less than 20% of the muon p_T.
Jets are reconstructed using the anti-k_t [40] algorithm with a radius parameter R = 0.4. The jet algorithm is run on calibrated topological clusters of calorimeter cells [41]. Additional p_T- and η-dependent corrections are applied to jets to bring them to the final calibrated energy scale [42]. Selected jets must have p_T > 25 GeV and |η| < 2.8.
The identification of jets originating from b-quarks is performed using a neural-network-based tagger [43] that uses the output weights of several likelihood-based algorithms as inputs.The track transverse and longitudinal impact parameters with respect to the primary vertex are examples of variables used by these algorithms.A working point corresponding to an identification efficiency of approximately 70% for true b-jets in simulation is chosen.For jets initiated by gluons or light quarks, the rejection factor (1/fraction that pass the b-tagging ID) is of order 100.
The reconstruction of hadronically decaying tau leptons is seeded by jets which are reconstructed within the acceptance of the inner detector. Only clusters in a cone of radius ∆R = 0.2 are used to define the visible tau energy and direction, because the products of hadronic tau decays are more collimated than those from multi-jet processes. Additional corrections depending on the p_T, η and number of tracks are applied to bring the reconstructed tau candidates to the correct energy scale [44]. The energy deposition in the calorimeter is required to be matched to either one or three tracks in the inner detector. Hadronically decaying taus are required to have visible p_T > 20 GeV, |η| < 2.5 and unit charge, and are identified using a Boosted Decision Tree (BDT) [45] which uses both calorimeter- and tracking-based variables such as shower width and track multiplicity. A working point with an identification efficiency of approximately 50% for true hadronic tau decays in simulation is chosen. The rejection factor for jets ranges from 50 to 100, depending on the number of tracks matching the jet candidate.
The missing transverse momentum is a two-dimensional vector defined as the negative vector sum of the transverse momenta of reconstructed electrons, muons, tau leptons and jets, and also of calorimeter energy deposits not associated with reconstructed objects. The magnitude of the missing transverse momentum vector is referred to as the missing transverse energy (E_T^miss).
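The sketch below restates this definition numerically for a toy event; the object list is invented illustrative input, not detector data or ATLAS software.

```python
import math

# Missing transverse momentum as defined above: the negative vector sum of
# the transverse momenta of all reconstructed objects (plus unassociated
# calorimeter deposits, omitted here for brevity).

def met(objects):
    """Return (E_T^miss, phi_miss) from a list of (pt, phi) pairs."""
    px = -sum(pt * math.cos(phi) for pt, phi in objects)
    py = -sum(pt * math.sin(phi) for pt, phi in objects)
    return math.hypot(px, py), math.atan2(py, px)

# Toy event: one muon, one hadronic tau candidate and two jets
# (pt in GeV, phi in radians).
toy_event = [(45.0, 0.3), (60.0, 2.9), (80.0, -2.1), (30.0, 1.2)]
et_miss, phi_miss = met(toy_event)
print(f"E_T^miss = {et_miss:.1f} GeV at phi = {phi_miss:.2f}")
```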
Event selection
Events are required to be identified by the trigger system as containing at least one electron or one muon. In order to control the data-taking bandwidth, the trigger system imposed a minimum transverse energy threshold on electrons of 20 GeV or 22 GeV (depending on the data-taking period), and a minimum p_T threshold on muons of 18 GeV. For the highest luminosities towards the end of the data-taking period, the muon trigger is required to be accompanied by a jet that passes the first-level trigger p_T threshold of 10 GeV. All data events are required to be recorded during stable LHC running conditions and with all relevant sub-detectors functioning normally. Events are cleaned of instrumental effects, such as sporadic noise bursts [46]. Events are required to have exactly one reconstructed electron (muon) with p_T > 25 (20) GeV. This suppresses background processes such as Z/γ* → ℓℓ and tt̄, which have a higher average lepton multiplicity. Exactly one identified hadronic tau decay candidate with p_T > 30 GeV and charge of opposite sign to that of the lepton is required. The E_T^miss is required to be larger than 20 GeV in order to further reject multi-jet and Z/γ* → ℓℓ processes. In addition, at least two reconstructed jets are required, with the leading jet having p_T > 50 GeV and the sub-leading jet having p_T > 25 GeV.
The signal-to-background ratio is improved by requiring that either the leading or sub-leading jet passes the b-tagging requirements. After requiring that one of these two jets passes the b-tagging requirements, the dominant background process is tt̄. Since both the signal and tt̄ processes contain two b-jets in the final state, no further improvement in sensitivity is obtained by requiring that a second jet in the event also pass the b-tagging requirements.
The visible mass (m_τhad-vis−jet) of the tau candidate and the closest jet in η−φ space (minimum ∆R) is required to be larger than 90 GeV. Only jets with p_T > 40 GeV are considered. This cut is chosen to reject semi-leptonic tt̄ events where the tau candidate is faked by jets from W → qq̄ decays.
In events containing leptoquark decays, large E_T^miss arises from the neutrinos accompanying the tau decays. Taus originating from leptoquarks typically have high momentum, thus the decay products are predominantly collinear and the E_T^miss direction is correlated with the direction of the visible tau decay products. Two variables are defined in order to improve the separation of signal and background: the absolute difference in φ between the charged lepton and the E_T^miss direction, |∆φ(E_T^miss, ℓ)|, and the absolute difference in φ between the tau candidate and the E_T^miss direction, |∆φ(E_T^miss, τ_had-vis)|. The relationship between these two variables for simulated signal (m_LQ = 500 GeV) and the dominant top background, after applying all the requirements described above, is shown in figure 1. Events must satisfy a requirement relating |∆φ(E_T^miss, ℓ)| and |∆φ(E_T^miss, τ_had-vis)|, where ℓ = e, µ; this E_T^miss angular requirement selects events below the solid line in figure 1. A leptonically decaying tau is accompanied by two neutrinos, whereas a hadronically decaying tau is accompanied by only one. In events with two true taus (as in the signal process), the E_T^miss is therefore typically aligned with the leptonic tau decay, and these events are preferentially selected by the E_T^miss angular requirement. The signal efficiency is approximately 85%, independent of the leptoquark mass. For tt̄ events containing a real hadronic tau (produced from the W decay), the additional neutrinos from the tau decay cause the E_T^miss to be preferentially aligned with the hadronic tau decay, and these events are rejected. In the subset of tt̄ events where the tau is faked by a jet from the W decay, events are evenly distributed across the |∆φ(E_T^miss, ℓ)|-|∆φ(E_T^miss, τ_had-vis)| plane, and a large proportion of these are also rejected. The overall efficiency for inclusive tt̄ events is 31%.
Background estimation
Backgrounds considered in this analysis are the production of W+jets and Z/γ*+jets (collectively referred to as V+jets), tt̄, single top quarks, di-boson and multi-jets. Normalisation factors are applied to the MC predictions for the V+jets and top backgrounds in background-enriched control regions, to predict as accurately as possible the background in the signal region, as described in more detail below. These control regions are constructed to be mutually exclusive of the signal region, and the assumption is made that the normalisation factors in the signal region are the same as in the background-enriched control regions. The contribution from multi-jets is estimated using fully data-driven techniques. The contribution from di-boson processes is taken directly from MC. The shapes of the distributions in the signal region are taken from simulation.
Different approaches are used to estimate the backgrounds in the electron and muon channels. Normalisation factors for the electron channel are calculated after applying the electron, tau, E_T^miss and charge-product cuts, and the jet multiplicity and p_T requirements described in section 5 above. This approach minimises bias with respect to the signal region, but leads to limited statistics (for MC and data) in the control regions. Normalisation factors for the muon channel are calculated after applying only the muon, tau and charge-product requirements described in section 5.
Electron channel
The multi-jets background is estimated by defining a region in data with a tau candidate that fails the tau BDT identification used in the nominal selection but passes a looser identification working point, and has the same-sign charge as the electron. In addition, the events are required not to contain any taus that pass the nominal selection criteria. Contributions from the W, Z/γ*+jets, top-quark, and di-boson background processes estimated from MC simulations are subtracted to obtain the shape of the multi-jets distribution. The normalisation is determined by performing a two-component maximum likelihood fit of the sum of multi-jets and all other sources of background to data, with the multi-jets fraction being the fit parameter. The variable used for fitting is chosen to provide good discrimination between multi-jets and other sources of background. The method is used to calculate the multi-jets contribution in the signal region, where the transverse mass between the charged light lepton and the E_T^miss, defined as

m_T = √(2 p_T^ℓ E_T^miss (1 − cos Δφ(ℓ, E_T^miss))),

is used as the fit variable. The multi-jets contribution is found to be 12 +8/−16 % of the total data yield in the signal region. The same method is also used in the background control regions, fitting to the E_T^miss distribution in the W and Z/γ* → ττ control regions, and to the electron E_T in the top control region. The validity of the method used to estimate the multi-jets background contribution is cross-checked by using events with same-sign charge electron-tau pairs as the control region and is found to be compatible within statistical errors. Dependence on the choice of fit variable is evaluated by fitting to other kinematic variables, and is also found to be within statistical errors of the nominal value. Separate control regions are defined for the Z/γ* → ee, Z/γ* → ττ, W, and tt̄ and single top-quark processes. They are defined by applying the electron, tau, charge-product, E_T^miss, and jet multiplicity and p_T requirements (as described in section 5, collectively referred to as the 'baseline' requirements), in addition to the cuts shown in table 1.
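A minimal sketch of such a two-component template fit is given below, assuming a binned Poisson likelihood in which the multi-jets fraction is the only free parameter. The templates, binning, and counts are invented and do not correspond to the distributions used in the analysis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical normalised templates (shapes) in 5 bins of the fit variable.
multijet_shape = np.array([0.40, 0.30, 0.15, 0.10, 0.05])
other_bg_shape = np.array([0.10, 0.20, 0.30, 0.25, 0.15])
data = np.array([52.0, 48.0, 47.0, 38.0, 20.0])
n_total = data.sum()

def nll(f_qcd):
    """Negative log-likelihood for a given multi-jets fraction f_qcd."""
    expected = n_total * (f_qcd * multijet_shape + (1.0 - f_qcd) * other_bg_shape)
    return np.sum(expected - data * np.log(expected))

result = minimize_scalar(nll, bounds=(0.0, 1.0), method="bounded")
print(f"fitted multi-jets fraction: {result.x:.3f}")
```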
The control region for Z/γ* → ee events is constructed by requiring an additional electron with p_T > 20 GeV and opposite-sign charge to the first one. The second electron is required to pass the same identification requirements as the first one.
The Z/γ* → ττ control region is defined by additionally requiring that the visible mass of the electron-tau pair is in the range 40 < m(e, τ_had-vis) < 80 GeV and that the transverse mass between the electron and the E_T^miss is less than 60 GeV. A b-jet veto is also applied, using a looser working point (with a selection efficiency of 75%) than the working point used for the signal selection. In this way contamination from top backgrounds is reduced.
The W+jets control region is constructed by additionally applying a b-jet veto (as described above), demanding that 60 < m_T < 120 GeV, and requiring that the event fail the E_T^miss angular requirement (eq. 5.1).
The control region for top backgrounds is defined by additionally requiring that events pass the b-tagging requirements, have m(τ_had-vis, jet) > 90 GeV, fail the E_T^miss angular requirement, and have S_T < 350 GeV, where S_T is defined as the scalar sum of the p_T of the charged light lepton, the tau, the two highest-p_T jets and the E_T^miss in the event. Normalisation factors for the V+jets and top backgrounds in the electron channel are calculated according to

NF = (N_Data − N^MC_Other BG) / N^MC_BG,

where N_Data is the number of data events in the control region, N^MC_Other BG is the expected number of events from other background processes, and N^MC_BG is the contribution in the control region from the background process of interest.
To account for contamination from other background processes in the control regions, normalisation factors are determined for each region in turn. At each stage the multi-jets contribution is re-estimated, and all previously found normalisation factors are applied when estimating the contribution from other background processes.
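A sketch of this iterative procedure is shown below. The region names and yields are hypothetical, and the multi-jets re-estimation step is omitted for brevity.

```python
# Hypothetical control-region yields: data, MC of the target process,
# and MC of the other backgrounds (keyed by process name).
regions = {
    "Zee": {"data": 900.0, "target": 1000.0, "others": {"W": 40.0, "top": 20.0}},
    "W":   {"data": 500.0, "target": 800.0,  "others": {"Zee": 30.0, "top": 50.0}},
    "top": {"data": 300.0, "target": 280.0,  "others": {"Zee": 10.0, "W": 25.0}},
}

nf = {name: 1.0 for name in regions}  # start from unit normalisation factors

for _ in range(20):  # iterate until the factors stabilise
    for name, cr in regions.items():
        other = sum(nf[p] * y for p, y in cr["others"].items())
        nf[name] = (cr["data"] - other) / cr["target"]

print({k: round(v, 3) for k, v in nf.items()})
```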
The final background normalisation factors obtained are presented in table 3 and discussed together with the muon channel in section 6.3.
Muon channel
The multi-jets contribution to the control and signal regions is estimated in the muon channel from data using the ABCD method. Events are sorted into four regions using two observables assumed to be independent: the muon isolation and the sign of the charge product of the muon-tau pair. The regions are therefore defined as: isolated muon and opposite-sign muon-tau pair (A), isolated muon and same-sign muon-tau pair (B), and two regions with a non-isolated muon and an opposite-sign or same-sign charge muon-tau pair (C and D respectively). Non-isolated muons are defined as those which fail at least one of the isolation requirements described in section 5. Contributions from other physics processes in regions B, C, and D are subtracted using the MC simulation. The shape of the kinematic distributions for the multi-jets background is taken from region B, while the expected number of events in the signal region (A) is determined by taking the product of the number of events in region B with the ratio of the number of events in regions C and D (i.e. N_A = N_B × N_C/N_D). The multi-jets contribution is estimated to be 15 ± 4% of the total data yield in the signal region. The validity of the method is checked by varying both isolation cuts up and down from the nominal value of 0.2 by 0.05. Deviations in the ratio N_C/N_A are included as an additional systematic uncertainty.
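As an illustration, the ABCD extrapolation reduces to a single ratio once the other backgrounds have been subtracted; the counts below are invented.

```python
# Event counts after subtracting other backgrounds (hypothetical numbers):
n_B = 120.0  # isolated muon, same-sign pair
n_C = 300.0  # non-isolated muon, opposite-sign pair
n_D = 240.0  # non-isolated muon, same-sign pair

# ABCD prediction for the multi-jets yield in the signal region A:
n_A_qcd = n_B * n_C / n_D
print(f"predicted multi-jets yield in region A: {n_A_qcd:.1f}")
```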
Control regions for the V+jets and top-quark background processes are defined by applying the muon and tau requirements (including the charge-product requirement) as described in section 5. Additional selection criteria used for each control region are listed in table 2. Normalisation factors are calculated for each process by performing a maximum likelihood fit. The variable used for fitting is chosen in each case to provide good discrimination between the background process of interest and the other contributing physics processes in that control region. The contribution from multi-jets in each control region is estimated using the method described above, and this and the contributions from other background processes are taken into account when performing the fits.
The control region for Z/γ* → µµ events is defined by requiring two oppositely charged muons and one hadronic tau decay. The second muon is required to pass the same requirements as the first. The normalisation factor for Z/γ* → µµ events in the signal region is then determined by fitting to the di-muon invariant mass distribution in the range 60 < m_µµ < 120 GeV.
The normalisation of Z/γ* → ττ events is obtained by selecting events with one muon, one hadronic tau decay and E_T^miss > 20 GeV. Additionally, events are required to fail the b-jet requirement. The fit is performed using the visible mass of the muon-tau pair in the range 45 < m(µ, τ_had-vis) < 80 GeV.
The control region for W+jets events is defined by selecting events with one muon, one hadronic tau decay and a requirement on E_T^miss. The normalisation is determined by fitting to the transverse mass of the charged light lepton and the E_T^miss in the range 70 < m_T < 100 GeV.
The control region for the tt̄ and single-top processes is defined by applying all selection criteria, with the exception of the E_T^miss angular requirement, which is reversed. In addition, the S_T of the event is required to be less than 350 GeV. The normalisation factor is obtained by fitting to the S_T distribution up to 350 GeV.
Background summary
The background normalisation factors in the signal region determined from data for both channels are presented in table 3.
Uncertainties on the normalisation factors are larger in the electron channel than in the muon channel due to the tighter requirements placed on the control region definitions, namely the additional requirements on E_T^miss and jets which are not applied for the muon channel (unless explicitly stated). As a cross-check, the electron channel method (detailed in section 6.1) is applied to the muon channel, and the signal region normalisation factors determined in this way are found to be NF_{Z/γ*→µµ} = 0.59 ± 0.09, NF_{Z/γ*→ττ} = 1.03 ± 0.08, NF_W = 0.50 ± 0.08 and NF_top = 0.93 ± 0.09. These figures agree within uncertainties with the factors determined using the method described in section 6.2 and shown in table 3.
The largest background contribution comes from tt̄ events, with approximately 55% of these coming from events containing a real hadronic tau decay (from the W decay) after all selection cuts are applied. Approximately 40% come from events where the W boson decays hadronically and the tau candidate is faked by one of the jets. In the remaining ∼5% of events, the reconstructed hadronic tau decay is faked in equal proportions by electrons or b-jets. Normalisation factors for background processes in which the hadronic tau decay is faked by a jet are observed to be significantly smaller than unity. This is a known effect, caused by jets being narrower in simulation than in data and therefore being more likely to fake a hadronic tau decay [47].
In both methods used for background estimation, the control regions for the V+jets background processes either make no requirements on b-tagging, or veto events containing one or more b-jets. Simulation tests have validated the assumption that the normalisation factors obtained in regions that require b-jets are the same as those in regions where b-jets are not explicitly required, or are vetoed.
Systematic uncertainties
In simulated samples all sources of uncertainty are varied individually within their errors and the impact on the results of the analysis is determined. Background normalisation factors and multi-jets contributions are recalculated for each source of systematic uncertainty. In this way the nominal simulation (comprising the current best estimates for the physics object reconstruction corrections) and the systematic variations are treated coherently, and the uncertainties are propagated through the analysis. The S_T distribution is used to test for the existence of leptoquarks, since this variable provides the best discrimination between signal and background (discussed further in section 8). The shape of the S_T distribution remains unchanged within the total shape uncertainty (see further discussion in section 8) when applying all the uncertainties detailed below, and systematic variations are therefore treated as nuisance parameters affecting the overall scale. The relative changes in acceptance for signal and background are quoted for each systematic variation.
Object-level uncertainties
There are several sources of systematic uncertainty associated with the reconstruction, identification, and energy scale of physics objects, which can potentially affect the estimated shapes of kinematic distributions and the normalisation of various processes.
Uncertainties associated with the efficiency of the single-lepton triggers are typically less than 1% [48,49], and a ±0.5% (±3.3%) variation in the signal acceptance is observed when varying the electron (muon) trigger efficiency by ±1σ. The effect on background processes is negligible.
Varying the electron energy scale by ±1σ results in a ±0.8% change in background acceptance compared to the nominal selection and has a negligible impact on the signal yields. The electron reconstruction and identification efficiency uncertainties are combined in quadrature and yield an overall change of less than 1.5% for both signal and background.
Varying the muon momentum scale by 1σ results in a 0.2% change in signal yields compared to the nominal selection. The impact of muon resolution uncertainties on signal and background acceptance is found to be negligible. A ±1σ variation of the muon reconstruction efficiency results in a ±0.3% change in signal acceptance.
The uncertainty on the tau energy scale is typically around 3%, depending on the p_T and η of the hadronically decaying tau candidate [50]. Varying the energy scale by ±1σ changes the acceptance for background and low-mass signal (m_LQ = 200 GeV) by approximately 2%, decreasing to 1.2% for m_LQ = 500 GeV. The uncertainty on the tau identification efficiency is 4% for taus with p_T < 100 GeV. This increases linearly with p_T up to a maximum of 8% for three-prong taus with p_T > 350 GeV. Varying the tau identification efficiency by this uncertainty results in an overall acceptance change for signal of approximately 6% (for a leptoquark of mass 500 GeV). The variation of the background yields is found to be approximately 1%, significantly smaller than the change in the signal yield, because the effect is largely absorbed in the normalisation factor defined in the control region.
The jet energy resolution uncertainty is approximately 10% and affects the event yields by approximately 2% [42]. The jet energy scale uncertainty depends on p_T and η, and varies between 2% and 5%. It is modelled by 14 separate nuisance parameters, each of which is varied by ±1σ independently of the others. The use of control regions does not significantly reduce the variations of the different background yields, and changes in acceptance of signal and background of up to ±2% are observed.
The uncertainty on the b-jet identification efficiency for the algorithm and operating point used in this analysis ranges from 5% to 18% depending on jet kinematics. The b-tagging efficiency and the probability that a light jet is identified as a b-jet are anti-correlated and are thus varied accordingly. A ±1σ variation results in a ±9% (±15%) change in signal acceptance for leptoquarks with a mass of 200 (500) GeV. The use of control regions reduces the background yield variation to ±3% in both channels.
All energy scale and resolution corrections for electron, muon, tau and jet candidates are propagated consistently to the E_T^miss calculation. Additional uncertainties related to the energy scale and pile-up dependence of calorimeter clusters not associated with any high-p_T objects (electrons, taus, jets) are also considered in the E_T^miss calculation. These sources are varied independently within their uncertainties and the impact on signal and background yields is found to be negligible.
The uncertainty on the integrated luminosity for data taken during 2011 is ±3.7% as determined in ref. [51].
Theoretical uncertainties
The QCD renormalisation and factorisation scales are varied by a factor of two to estimate the impact on the signal production cross-section. The variation is found to be ±12%. A re-weighting technique is used to assess the sensitivity of the signal acceptance to the choice of parton distribution functions, and the resulting uncertainty is estimated to be ±12%. Varying the multi-parton interactions within experimental bounds has a negligible effect on the signal process.
The effect of the choice of event generator for the top-quark background is estimated by using PowHeg 1.0 [52,53] (instead of MC@NLO 4.01) to model the hard process. The parton shower and hadronisation models, and the underlying event model (respectively HERWIG 6.510 and JIMMY 4.31 in the nominal sample), are replaced with those from PYTHIA 6.425. In addition, the CTEQ6L1 PDF set is used instead of CT10, which is used in the nominal tt̄ sample. The total background yield is found to differ by 1.5% with respect to the nominal samples.
The uncertainty on the signal and the top-quark background due to initial-state radiation (ISR) and final-state radiation (FSR) is evaluated using the AcerMC generator interfaced to the PYTHIA 6.425 shower model, with the parameters controlling ISR and FSR varied in a range consistent with experimental data [54]. The event yields are found to agree with the nominal values within statistical uncertainties.
MC@NLO events with top-quark masses of 170 GeV and 175 GeV are used to assess the top-quark mass dependence, which is added in quadrature to the uncertainty related to the choice of event generator and PDF set. The resulting uncertainty (2.8%) is treated as a nuisance parameter affecting the background yield and is assumed to be fully correlated between the electron and muon channels.
Other background processes taken from simulation (W, Z/γ*, di-boson) account for less than 20% of the total background. The W and Z/γ* samples are simulated with the matching parameter (described in ref. [23]) set to 20 GeV. Event yields are found to agree with the nominal values within statistical uncertainties when this parameter is changed to 30 GeV.
The W and Z control regions are defined with either the application of a b-jet veto, or with no b-tagging requirements. The uncertainties on the production cross-sections of W or Z bosons in association with one or two b-jets are estimated using MCFM [55,56]. The QCD renormalisation and factorisation scales are varied independently by a factor of two, and different PDF sets are considered. The total uncertainty is found to be 30%, and the uncertainties on the normalisation factors for the W and Z background processes are increased by this amount (i.e. to 2%).
Background uncertainties
For each channel, the systematic uncertainties on the normalisation factors in table 3 are evaluated by calculating the normalisation factor for a given background when the normalisation factors for all other sources of background are varied up or down by their statistical error. The systematic uncertainty on the multi-jets background is evaluated by varying the normalisation factors of the other backgrounds by ±1σ. The methods used to estimate the contributions from multi-jets processes are validated by modifying the control regions used, and, in the case of the electron channel, the variable used for fitting (see sections 6.1 and 6.2). Deviations from the nominal value are included as additional sources of systematic uncertainty.
Conservatively, all background normalisation factors are assumed to be fully correlated, and the impact on the total background yield is +16/−19% for the electron channel and ±9% for the muon channel. The background estimation method used in the muon channel allows a more accurate determination of the normalisation factors compared to the event-counting method employed in the electron channel, and the normalisation factor uncertainty for the muon channel is correspondingly smaller than in the electron channel. For both channels, the limited number of data events in the top-quark control region is the main source of uncertainty on the top-quark normalisation factor, which in turn has the largest impact on the total yield uncertainty. Due to the tighter requirements on the control regions for the electron channel background estimation, this method also suffers from a limited number of events in data and simulation when estimating normalisation factors for the V+jets background processes.
Summary of systematic uncertainties
The shape of the S_T distribution remains unchanged within statistical uncertainties when applying all the uncertainties mentioned above. The uncertainties for the electron and muon channels are summarised in tables 4 and 5 respectively. Uncertainties related to the background normalisation factors have the largest impact on the total background yield, while for the signal yield the largest sources of systematic uncertainty are the theoretical uncertainties (comprising uncertainties related to PDFs, multi-parton interactions, and initial- and final-state radiation) and the b-jet identification.
Results
The observed yields in data, as well as the expected yields for the background processes and for the signal at several leptoquark masses, after all selection cuts are applied, are shown in table 6. The S_T distribution is used to test for the existence of leptoquarks. Distributions for both channels are shown in figure 2. At very high S_T, the statistical uncertainties on the various background processes become large due to the limited number of MC and (in the case of the multi-jets) data events in the signal region. The sum of the background processes is fitted in the region 350 GeV < S_T < 2000 GeV to an exponential function using a maximum likelihood fit. In this way the distribution is smoothed and a background expectation is provided throughout this S_T region. The fit parameters are varied within their uncertainties to obtain a shape uncertainty. The shape of the S_T distribution is checked for all systematic variations, and the variation is found to be significantly smaller than the fit uncertainty in almost all cases. The only exception is the variation in the choice of generator used to model the tt̄ background process, where the central value lies outside the nominal range (although it covers the nominal value within its own statistical uncertainty). Conservatively, the difference between the nominal shape and the alternative shape is taken as a systematic uncertainty and added to the shape uncertainty determined from the nominal fit. The total shape uncertainty is treated as an additional nuisance parameter. Comparisons of the fitted distributions to data are shown in figure 3. Below S_T = 350 GeV the background shape is taken from the histogram. The signal component is calculated separately for each leptoquark mass, thus taking the mass dependence of the S_T distribution into account. For each mass hypothesis, a single 'signal strength' parameter (µ) multiplies the expected signal in each bin, where µ = 0 corresponds to the absence of a signal and µ = 1 corresponds to the presence of a signal with nominal strength. The model describes the expected number of signal (s_i) and background (b_i) events in each bin using a Poisson distribution, with all systematic uncertainties described in section 7 included as nuisance parameters. The likelihood takes the form

L(µ, θ) = ∏_i Pois(N_i | µ s_i(θ) + b_i(θ)),

where s_i and b_i are the expected number of signal and background events in bin i respectively, N_i is the observed number of events, and both s_i and b_i depend on the nuisance parameters θ. Pseudo-experiments are generated according to the background-only and signal+background models to obtain distributions of the test statistic, log(L(µ, θ)/L(0, θ)). The CLs method [57] is used to calculate the p-values. The signal strength parameter is varied iteratively to find the 95% confidence level. The resulting cross-section limits as a function of leptoquark mass are calculated. It is assumed that BR(LQ → τb) = 1.0. The 95% CL upper bounds on the NLO cross-section for scalar leptoquark pair production as a function of mass are shown for the individual channels in figures 4(a) and 4(b). Error bands for the expected limits include all sources of uncertainty. By comparing the limits with the theoretical prediction of the cross-section versus m_LQ, third-generation scalar leptoquarks are observed (expected) to be excluded at 95% confidence level for masses below 498 (523) GeV and 473 (514) GeV in the electron and muon channels respectively. The limit is taken using the nominal theoretical calculation for the leptoquark production cross-section at NLO; the uncertainty on the cross-section is also shown. The result when both channels are combined is shown in figure 5. The likelihood for the combined model is defined as the product of the likelihood terms for each channel. The data are found to be consistent with the background-only hypothesis, and third-generation scalar leptoquark production is excluded at 95% confidence level for leptoquark masses up to 534 GeV (the expected limit is 569 GeV).
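The statistical procedure can be illustrated with the simplified toy below, which generates pseudo-experiments under the background-only and signal-plus-background hypotheses and evaluates CLs for a fixed signal strength. Nuisance parameters, the smoothed background shape, and the iterative scan over µ are omitted, and all yields are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

b = np.array([40.0, 18.0, 6.0, 2.0])   # background expectation per S_T bin
s = np.array([1.0, 2.5, 3.0, 1.5])     # signal expectation (mu = 1)
n_obs = np.array([41, 20, 5, 1])       # observed counts

def q(n, mu):
    """Log-likelihood-ratio test statistic log L(mu)/L(0) for Poisson bins."""
    return np.sum(n * (np.log(mu * s + b) - np.log(b)) - mu * s)

q_obs = q(n_obs, 1.0)
toys_b = np.array([q(rng.poisson(b), 1.0) for _ in range(10000)])
toys_sb = np.array([q(rng.poisson(s + b), 1.0) for _ in range(10000)])

p_sb = np.mean(toys_sb <= q_obs)  # CL_{s+b}
p_b = np.mean(toys_b <= q_obs)    # CL_b (sign conventions vary; sketch only)
cls = p_sb / p_b if p_b > 0 else float("inf")
print(f"CLs = {cls:.3f}  (mu = 1 excluded at 95% CL if CLs < 0.05)")
```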
Conclusions
A search for pair production of third-generation scalar leptoquarks has been performed with the ATLAS detector at the LHC, using a data sample corresponding to an integrated luminosity of 4.7 fb⁻¹. No significant excess over the SM background expectation is observed in the data. Assuming BR(LQ → τb) = 1.0, third-generation scalar leptoquarks with masses up to 534 GeV are excluded at 95% CL. The cross-section for leptoquark pair production increases with the centre-of-mass collision energy: at √s = 8 TeV the production rate for leptoquarks with m_LQ = 700 GeV is enhanced by a factor of two, providing scope for setting stronger limits using data from the 2012 LHC run. Meanwhile, this result is the most stringent limit from direct searches for third-generation scalar leptoquarks to date.
Figure 1.
Figure 1. The absolute value of the angular difference Δφ between the reconstructed charged light lepton and E_T^miss as a function of |Δφ| between τ_had-vis and E_T^miss, for simulated (a) signal (m_LQ = 500 GeV) and (b) top-quark background, after applying all selection cuts (see text) and normalising to the integrated luminosity of the data. The function corresponding to the solid line is defined in eq. (5.1).
Figure 2.
Figure 2. Data and MC comparisons of the S_T variable after applying all cuts in the (a) electron and (b) muon channels. The ratio N_Data/N_Background is also shown, where the red line at unity and the hashed band represent the Standard Model expectation and the associated statistical and systematic uncertainties. No events with S_T > 1.2 TeV were observed in data.
Figure 3.
Figure 3. Comparison of the fitted S_T background shape to data in the (a) electron and (b) muon channels. The ±1σ band represents the uncertainty on the shape of the S_T distribution, obtained by varying the fit parameters within their uncertainties and comparing with the shape of the S_T distribution obtained for each systematic variation. No events with S_T > 1.2 TeV were observed in data.
Figure 4.
Figure 4. The expected (dashed) and observed (solid) 95% credibility upper limits on the cross-section as a function of leptoquark mass, in the (a) electron and (b) muon channels. The 1σ (2σ) error bands on the expected limit represent all sources of systematic and statistical uncertainty. The expected NLO production cross-section for third-generation scalar leptoquarks and its corresponding theoretical uncertainty (hashed band) are also included.
Figure 5.
Figure 5. The expected (dashed) and observed (solid) 95% credibility upper limits on the cross-section as a function of leptoquark mass, for the combined result. The 1σ (2σ) error bands on the expected limit represent all sources of systematic and statistical uncertainty. The expected NLO production cross-section for third-generation scalar leptoquarks and its corresponding theoretical uncertainty (hashed band) are also included.
Table 1.
Control region definitions for the electron channel. The electron, tau, charge-product, E_T^miss, and jet multiplicity and p_T requirements are applied as described in section 5.
Table 2.
Control region definitions for the muon channel. The muon, tau and charge-product requirements are also applied as described in section 5.
Table 3 .
Summary of background normalisation factors obtained using the control regions specified in tables 1 and 2. The errors include both the statistical and systematic (discussed in section 7.3) uncertainties.
Table 4.
The sources of systematic uncertainty in the electron channel and the relative change (in %) in the background and signal yields. The theory term includes uncertainties related to initial- and final-state radiation, PDFs, and multi-parton interactions.
Table 5.
The sources of systematic uncertainty in the muon channel and the relative change (in %) in the background and signal yields. The theory term includes uncertainties related to initial- and final-state radiation, PDFs, and multi-parton interactions.
Table 6.
Yields for data, background, and several leptoquark masses in both channels after all cuts are applied. The errors include statistical uncertainties and systematic uncertainties on the background normalisation.
"year": 2013,
"sha1": "e0f15286fda87fd41d8441b6c4fb701b125aee22",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP06(2013)033.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "73ab8d3ada3398899cc676848f0ebbe3062f5e55",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Structural phase transition in TmxFe1-xSe0.85 (Tm = Mn and Cu) and its relation to superconductivity
In this letter, we report the results of detailed studies of Mn and Cu substitution at the Fe site of β-FeSe, namely MnxFe1-xSe0.85 and CuxFe1-xSe0.85. The results show that with only 10 at% Cu doping the compound becomes a Mott insulator. Detailed temperature-dependent structural analyses of these Mn- and Cu-substituted compounds show that the structural transition, which is associated with changes in the building-block FeSe4 tetrahedron, is essential to the occurrence of superconductivity in β-FeSe.
The iron-pnictide [1,2,3,4,5,6] and β-FeSe [7,8] superconductors have become a focus of condensed-matter research in the past year. In the iron-based superconductors, there exists a structural transition at a temperature (Ts) much higher than the superconducting transition point (Tc). At Ts the tetragonal lattice (P4/nmm) distorts into a lower-symmetry monoclinic lattice (P112/n) (or orthorhombic, with the defined a-b plane rotated by about 45° with respect to the original lattice). In both the LaFeAsO (1111) and BaFe2As2 (122) families it was suggested that this phase transition, which is accompanied by an antiferromagnetic state that develops at around the same temperature, has to be suppressed either by chemical doping or by applying external pressure in order to observe superconductivity [5,9,10,11,12,13]. However, this distortion seems to be indispensable to the superconductivity in the FeSe (11) compound [7,14,15]. Preliminary Mössbauer measurements [16,17] suggested that no magnetic ordering develops below Ts and above Tc in FeSe. Yet, the existence of short-range ordering or spin fluctuations could not be totally ruled out.
In order to further investigate the distortion issue, substitutions on the Fe sites were studied earlier in the 11 [15], 1111 [18], and 122 families [13,19,20]. Among all substituent alternatives, transition metals, especially those with unpaired 3d electrons such as Mn, Co, Ni, and Cu [21], are of most interest because of their ionic sizes, comparable to that of Fe, and the potential to investigate in more detail the interplay between magnetism and superconductivity, which may also lead to better insight into the origin of superconductivity in this class of materials.
We reported earlier our preliminary results on a series of 3d-transition-metal substituted FeSe1−x compounds [15]. For all 3d elements (from Ti to Cu) with 10 at% substitution, we found that only Mn, Co, Ni, and Cu could retain the tetragonal structure. We later decided to investigate in detail the CuxFe1−xSe0.85 and MnxFe1−xSe0.85 samples for comparison, as we found that only 3 atomic percent (at%) Cu doping completely suppressed superconductivity, whereas up to 30 at% Mn substitution only slightly decreased the superconducting transition temperature.
Cu- and Mn-substituted samples were prepared with a method similar to that in [15]. TEM analysis was performed on powder samples suspended on gold grids coated with amorphous carbon in a JEOL 2100F transmission electron microscope equipped with STEM and EDX spectral analytical parts. The X-ray absorption near-edge spectra were measured with a calibrated standard iron foil at BL16A NSRRC with an energy resolution of about 0.1 eV. Cell parameters were calculated from experiments performed at synchrotron sources (BL12B2 at SPring-8 and BL13A at NSRRC) with an incident beam of wavelength 0.995 Å. Neutron powder diffraction data were collected using the Echidna and Wombat diffractometers [22,23] at the OPAL reactor, Australia. The samples were loaded in 6 mm cylindrical vanadium cans and data were collected in the temperature range 3-300 K using a wavelength of 1.885 Å. The resistance measurements were carried out using the standard 4-probe method with silver paste for contacts. Figure 1 shows the temperature dependence of the electrical resistivity of CuxFe1−xSe0.85 and MnxFe1−xSe0.85 compounds with various x values. A superconducting transition in CuxFe1−xSe0.85 (Fig. 1a) was observed only in samples with x ≤ 0.02. For x ≥ 0.03, the compound gradually becomes semiconductor-like [21]. Detailed analysis of the temperature dependence of the resistivity shows that for the 10 at% Cu-doped sample the resistivity, as shown in the inset of Fig. 1a, exhibits insulating behavior. It was surprising that only 3 at% Cu doping makes the sample become an insulator. Figure 2a shows a TEM image of a Cu0.04Fe0.96Se0.85 powder sample aligned along the c-axis, such that the Fe-Fe plane or the Se-Se plane is parallel to the plane of the page. The selected-area electron diffraction shows that the reflections at the (hkl), h+k=2n, h=k=odd positions are quite strong, while they are expected to be very weak in FeSe. This strongly suggests the successful substitution of Cu into the Fe site. The STEM/EDX elemental mappings (right panel of Fig. 2a) of the area of interest marked by a red square show no particular clustering of copper in the sample, suggesting homogeneous dispersion of Cu over the whole sample. The 2-Å scanning electron probe, which is smaller than the Fe-Fe or Se-Se distance, is expected to be able to resolve any clustering or non-uniformity in the samples. The Fe and Se concentrations are likewise found to be uniformly distributed in the sample.
The XAS Fe K-edge spectra are shown in Fig. 2b; they are normalized at a photon energy ∼100 eV above the absorption edge at E0 = 7112 eV (pure Fe). The feature marked a1 is mainly due to the transition from the Fe 1s to the 4sp state, as in the FeSex series [24]. A comparison of the spectra of the standards (Fe and FeO) and the CuxFe1−xSe0.85 samples with x = 0-0.04 reveals an energy shift in the a1 region around 7116.8 eV with increasing x. The results indicate that the Fe valence, which is shown in the inset of Fig. 2b, decreases from +1.81 at x = 0 to +1.66 at x = 0.04. The linear shift of the absorption edge gives additional support to the random distribution of Cu over Fe, since inhomogeneity may give rise to deviations in the absorption spectra. In the Fe 4p states [25], a smooth feature from 8 to 15 eV above the edge also shows a tendency towards larger areas, suggesting a systematic increase of unoccupied states above the Fermi level as more Cu is substituted. These results may provide a rational explanation for the observed increase in resistivity, and the eventual insulating behavior, in samples with higher Cu doping.
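Since the reported valence varies linearly with the Cu content, the edge-shift calibration amounts to a simple interpolation. The sketch below anchors the line at the two endpoint values quoted in the text; the intermediate values are illustrative only.

```python
def fe_valence(x, v0=1.81, v1=1.66, x1=0.04):
    """Linear interpolation of the Fe valence versus Cu content x,
    anchored at the values reported for x = 0 and x = 0.04."""
    return v0 + (v1 - v0) * (x / x1)

for x in (0.00, 0.01, 0.02, 0.03, 0.04):
    print(f"x = {x:.2f}: Fe valence ~ +{fe_valence(x):.3f}")
```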
In Fig. 3 we show the schematic crystal structure of β-FeSe and the temperature-dependent neutron powder diffraction (NPD) patterns of the Cu- and Mn-substituted samples. Figure 3b shows the temperature dependence of the neutron scattering for the Cu0.01Fe0.99Se0.85 (left) and Cu0.1Fe0.9Se0.85 (right) bulk samples. Peak splitting was observed in the (220), (221) and (114) reflections at ∼60 K in the Cu0.01Fe0.99Se0.85 sample. However, no splitting could be identified for any peak in the Cu0.1Fe0.9Se0.85 sample from 140 to 10 K, indicating the absence of any structural distortion in the 10 at% Cu-doped samples. On the other hand, in the NPD of the Mn0.1Fe0.9Se0.85 bulk sample, Fig. 3c, peak splitting is observed in the (220), (221) and (114) reflections at ∼85 K, indicating the onset of a structural phase transition. This phase transition can be described by a structural distortion from a tetragonal lattice (P4/nmm) to a monoclinic one (P112/n), much the same as observed in FeSe [7,26] and in FeSe0.5Te0.5 at temperatures below ∼100 K [8]. Moreover, when viewed along the (110) direction, the lattice distortion from tetragonal to monoclinic does not destroy the magnetic symmetry, allowing superconductivity to occur [8].
In the neutron data for Mn0.1Fe0.9Se0.85 from 100 to 10 K, as shown in Fig. 3d, we observed several Bragg peaks in the low-q range, suggesting incommensurate magnetic ordering at q=1. Detailed Rietveld refinements of the diffraction data give insight into the effect of Cu substitution on the crystal structure of CuxFe1−xSe0.85 (x = 0-0.05). The lattice constants a and c were found to be slightly modified by Cu substitution, as shown in Fig. 4a. Viewed in terms of the tetrahedron shown in Fig. 3a, this modification causes a shrinkage of the Fe-Se bond length and a slight expansion of the Fe-Fe bond length (Fig. 4b), accompanied by changes in the Se-Fe-Se bond angles (Fig. 4c). The combined effects lead toward a regular tetrahedron (γ = 109.28°), i.e., a compression of the tetrahedron. This hardened bond strength could inhibit the structural transition at low temperature. Thus, the low-temperature structural transition (Ts) is drastically suppressed and eventually disappears when the concentration of Cu substitution exceeds 3 at%.
Our experimental observations can be summarized in the structural phase diagram shown in Fig. 4d for CuxFe1−xSe0.85 and MnxFe1−xSe0.85. The substitution of Cu or Mn on the Fe site clearly drives down the structural transition temperature Ts, and it also reveals the correlation between Ts and Tc. As Ts decreases with increasing Cu or Mn substitution, the superconducting state is gradually suppressed. This indicates that the driving force behind the formation of the low-temperature phase, the monoclinic P112/n structure, could be the key to the formation of superconductivity in this type of superconductor. It is worth noting that the low-temperature structural distortion is completed by an elongation along the (110) direction of the tetragonal cell, which is indicated by an arrow in Fig. 3a, revealing a one-dimensional-like pyramid chain through the Se sites. It is natural to consider that Fermi-surface nesting along the (110) direction could be mediated by this anisotropic chain. In this regard, Fermi-surface nesting along with phonon softening at the appropriate temperatures could be an important driving force for the structural distortion. Further measurements on single crystals should be conducted before making any definite conclusion.
In summary, we report the strong suppression of superconductivity by Cu substitution in the FeSe system. In comparison with Mn substitution, we found that the inhibited tetragonal-to-monoclinic structural phase transition is responsible for the suppression of the superconducting transition. Samples with Cu substitution over 3 at% show no structural distortion and no superconductivity down to 2 K. Detailed structural analyses suggest that for the FeSe system the modification of the FeSe4 tetrahedron could be essential to the structural phase transition and thus to the origin of superconductivity.
"year": 2009,
"sha1": "1bc2efcaedd65adc158d59f91b4a5562d6cbfa47",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1bc2efcaedd65adc158d59f91b4a5562d6cbfa47",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
Quantification of Health by Scaling Similarity Judgments
Objective A new methodology is introduced to scale health states on an interval scale based on similarity responses. It could be well suited for valuation of health states on specific regions of the health continuum that are problematic when applying conventional valuation techniques. These regions are the top-end, bottom-end, and states around ‘dead’. Methods Three samples of approximately 500 respondents were recruited via an online survey. Each sample received a different judgmental task in which similarity data were elicited for the top seven health states in the dementia quality of life instrument (DQI). These states were ‘111111’ (no problems on any domain) and six others with some problems (level 2) on one domain. The tasks presented two (dyads), three (triads), or four (quads) DQI health states. Similarity data were transformed into interval-level scales with metric and non-metric multidimensional scaling algorithms. The three response tasks were assessed for their feasibility and comprehension. Results In total 532, 469, and 509 respondents participated in the dyads, triads, and quads tasks respectively. After the scaling procedure, in all three response tasks, the best health state ‘111111’ was positioned at one end of the health-state continuum and state ‘111211’ was positioned at the other. The correlation between the metric scales ranged from 0.73 to 0.95, while the non-metric scales ranged from 0.76 to 1.00, indicating strong to near perfect associations. There were no apparent differences in the reported difficulty of the response tasks, but the triads had the highest number of drop-outs. Discussion Multidimensional scaling proved to be a feasible method to scale health-state similarity data. The dyads and especially the quads response tasks warrant further investigation, as these tasks provided the best indications of respondent comprehension.
Introduction
Comprehensive and generic health-related quality of life (HRQoL) measures have been designed to capture an individual's health status in a single value (index or weight). While mostly applied in cost-effectiveness analyses [1,2], such values can also be used in health-outcomes research, disease-modeling studies, and monitoring of public health programs.
The most frequently used valuation techniques to derive health-state values are the standard gamble (SG) [3], the time trade-off (TTO) [4], and the visual analogue scale (VAS) [5,6]. However, there are drawbacks to each of these traditional techniques, both theoretical and empirical. SG values tend to be biased by risk aversion, and the SG task is often considered too cognitively demanding [7]. TTO values incorporate time preferences in addition to health-state preferences [8]. In addition, difficulties arise when valuing states that are worse than 'dead', and some people are unwilling to trade any life years because they consider life worth living under any conditions [9,10]. Even the new protocols (lead-time TTO, lag-time TTO, composite TTO) that were designed to overcome some of the problems of the traditional TTO protocol are subject to these biases [11-14]. Moreover, the TTO tasks have been framed in several ways, multiple iteration procedures have been applied, and the time horizons have differed. The VAS, which was introduced in the field of psychology, has been criticized for its interval properties [15], potential anchoring effects, and context and end-aversion biases [16]. The person trade-off method has been applied in the setting of public health evaluation, where the shortcomings of complex trade-off valuation techniques have been recognized, leading to the adoption of an easier ordinal response task [17].
Over the past decade, (discrete) choice (DC) models have gained considerable attention as an alternative to these conventional techniques [18-24]. Choice models are an extension of Thurstone's law of comparative judgment (LCJ) [25]. Whereas Thurstone's LCJ allows only the estimation of the relative values of health states based on paired comparisons [26], modern choice models extend it by regressing the relative weights of the domain levels that are part of the health-state descriptions [20,27,28]. These models are grounded in random utility theory, an idea that originated in psychology [25] and was subsequently adopted by economists [18]. A benefit is that the response tasks (e.g., paired comparisons) are cognitively less demanding, since they involve one of the most basic human operations, namely discrimination. Nonetheless, even though discrimination is one of the easiest response tasks for individuals, the operation is still limited by cognitive resources [29]. As such, even in paired comparisons the amount of information needs to be constrained.
A serious drawback of DC models is that they produce only relative distances between health states. In many applications, however, absolute values are required. In particular, if health-state values are used in computing conventional quality-adjusted life years (QALYs), an important requirement is that the position of 'dead' (value = 0) is specified. Another related problem exists at the top end of the health-state continuum. For instance, 'perfect health' (or its synonym) is always preferable (dominant) in DC tasks. As a result, the health states positioned close to 'full health' cannot be accurately estimated. A methodology that has received little attention [30] and that may be able to deal with the limitations associated with DC modeling is (non-metric) multidimensional scaling (MDS) [31-33]. MDS is a collection of mathematical (hence, not statistical) techniques that can be used to analyze distances between objects (e.g., health states). These distances may be interval (metric) or rank distances (non-metric). For example, the 'psychological distance' between health states would be the perceived similarity between them, as elicited in specific judgmental tasks. MDS models similarity data as distances among pairs of health states in a geometric space. This is illustrated in Figure 1, which displays four health states with the interval distances between them represented by the lengths of the arrows. If we approximate the distances between the pairs of health states, we can use these rough estimates as input to infer the actual distances, which is done with metric multidimensional scaling. Conversely, when the distances are elicited as or converted into rank distances (the blue numbers in Figure 1), we can use non-metric multidimensional scaling. A benefit of non-metric MDS is that it allows responses that are less precise. As such, easier response tasks can be used to obtain this type of similarity data.
When a unidimensional solution (a basic requirement for measurement) is found with MDS, the health states are represented on an interval scale. The values can then be rescaled to a '0' (dead) - '1' (perfect health) scale. In theory, MDS is an elegant and robust method [34-38]. In practice, however, it can be very demanding at the data-collection stage. For the time being, MDS seems better suited for exploring and deriving distances (quantification) in specific regions [30], or in situations where conventional valuation techniques and choice models are not feasible or even fail. The current study is an explorative study that attempts to investigate the feasibility of eliciting similarity data for health states and quantifying these states with MDS, meanwhile explaining in detail the procedures that underlie the approach.
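To make the scaling step concrete, the sketch below runs a one-dimensional non-metric MDS on a small invented dissimilarity matrix and rescales the coordinates to a 0-1 range. It uses scikit-learn's MDS rather than the SPSS PROXSCAL algorithm used in this study, and the orientation of the recovered scale is arbitrary until anchored to known best and worst states.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical symmetric dissimilarity matrix for four health states.
d = np.array([
    [0.0, 2.0, 5.0, 8.0],
    [2.0, 0.0, 3.0, 6.0],
    [5.0, 3.0, 0.0, 4.0],
    [8.0, 6.0, 4.0, 0.0],
])

mds = MDS(n_components=1, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(d).ravel()

# Rescale to a 0-1 range (orientation is fixed afterwards by anchoring).
values = (coords - coords.min()) / (coords.max() - coords.min())
print("stress:", round(mds.stress_, 4))
print("rescaled values:", np.round(values, 3))
```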
Respondents
A company for marketing research (Survey Sampling International, Rotterdam) recruited the respondents for this study by selecting 1500 individuals aged 18-65 years from its respondent panel. After quota sampling, the sample was deemed roughly representative for the Dutch population with regard to age, gender, and education. An invitation was sent to the members of the sample by e-mail. Upon accepting, they were redirected to an online survey and then randomized to participate in one of three different response tasks (see below).
Ethics Statement
The Dutch medical research involving human subjects act states the following regarding survey research: "No ethical approval is required unless: 1) subjects are under 18 years old, or are (mentally) incompetent; 2) given the condition of the subject, the survey is psychologically burdensome; 3) subjects receive surveys on multiple occasions; 4) subjects must travel or incur additional costs." The current research project was sent to the local ethics committee (http://www.cmoregio-a-n.nl/), which concluded that it did not require ethical approval.
Health States
The dementia quality of life instrument (DQI) [39] describes dementia-specific HRQoL in six domains: 1) physical health; 2) self-care; 3) memory; 4) social functioning; 5) mood; and 6) orientation. Each is measured on just three levels: 1) no problems; 2) some problems; or 3) severe problems. The DQI is intended for use among community-dwelling people. A particular health state is expressed as a six-digit number. The position of the digit denotes the domain, while the digit itself represents the level of problems in that domain. For example, '333332' corresponds to a health state with severe problems in all domains except orientation, where there are some problems. In the present study, only the top end of the health-state continuum was investigated. Therefore, the following seven DQI health states were used: '111111', '211111', '121111', '112111', '111211', '111121', and '111112'. The DQI was chosen as this study was part of a research project for the development of the DQI.
Response Tasks and Designs
Three methods of collecting similarity data were investigated (Figures 2, 3, and 4). The first method (dyads) had a paired comparison design, whereby each health state was paired with every other one. All respondents were thus presented with 21 pairs. The task was to rate the similarity of the presented health states on a scale of 1 to 9, where 1 indicated 'very similar' and 9 'very dissimilar' in severity. Levels 2-8 were unlabelled.
The second method (triads) had a cartwheel design, presenting each health state along with two others [32]. For each triad, the respondents had to indicate in two separate response tasks which two of the three health states shown were the most similar and which the least similar in severity. Responses were coded as a '2' for the most similar pair and '0' for the least similar pair. The middle pair was inferred by transitivity and coded as a '1'. An incomplete block design was used to minimize the burden on the respondents. Each participant was randomly assigned to a single block. For the triads tasks, the number of inconsistencies within one triad was recorded, as inconsistencies lead to inference problems (see analyses). The third method (quads) had a paired comparison design that presented two pairs of health states (pairs of pairs/tetrads) in each question. The respondents were asked, "In which pair are the two health states more equally severe?" Because the number of tetrads
Judgmental Processes
For the dyads response task we assumed the following judgmental processes to occur: each health state is valued independently. We define U_ij as the value that respondent i attaches to health state j, where U is unidimensional and composed of a systematic component and an unobservable component [25]:

U_ij = V_ij + ε_ij. (1)

The systematic component of the value is based on a function of the combination of the attributes (in this application, the DQI domains: a) and levels (the amount of problems on a domain: l). In mathematical terms:

V_ij = f_i(a_j, l_j), (2)

where f_i is unknown. Subsequently, the difference in value between health states j and k presented in set S is evaluated. In mathematical terms:

D_i(j,k) = |U_ij − U_ik|. (3)

Finally, the difference in value between both health states is assigned an integer (R) on the response scale by the respondents, whereby the ordinal relationship between comparisons across health-state sets is maintained. In mathematical terms:

R_iS = g(D_i(j,k)), (4)

where g is monotonically decreasing. For the triads response tasks we define the following judgmental processes to occur: each health state is valued independently (equation 1). Subsequently, the differences in value between health states j, k, and l in set S are evaluated:

D_i(j,k) = |U_ij − U_ik|, D_i(j,l) = |U_ij − U_il|, D_i(k,l) = |U_ik − U_il|. (5)

Next, respondents choose the two health states in the set that have the highest similarity. In the second triad response task the process is reversed: respondents choose the two health states that have the lowest similarity. In theory, respondents re-assign values to each of the health states in this second task, which, because of the error term, could cause reversals in rankings. However, because this would lead to problems in the estimation of the MDS scale, as we would not be able to infer a rank order, we coded responses as if the assignment of values to health states occurred once and was stable over response tasks (see analyses).
For the quads response task we define the following judgmental processes to occur: within each pair of health states, each health state is valued independently (equation 1). Subsequently, within each pair the similarity between the health states is evaluated (equation 3). Finally, respondents choose the pair with the highest similarity.
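A small simulation of this response model is sketched below: hypothetical true values V_j are perturbed by a noise term, the pairwise difference of eq. (3) is computed, and a monotone mapping plays the role of g in eq. (4). The values, noise scale, and mapping are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical true values V_j for seven health states on a 0-10 scale.
V = np.array([10.0, 8.5, 8.0, 7.0, 4.0, 6.5, 6.0])
sigma = 0.5  # scale of the unobservable component eps_ij

def dyad_response(j, k):
    """Simulate one respondent's 1-9 dissimilarity rating for states j, k."""
    U = V + rng.normal(0.0, sigma, size=V.size)  # U_ij = V_ij + eps_ij
    diff = abs(U[j] - U[k])                      # eq. (3)
    # Monotone mapping of the difference onto the 1-9 response scale.
    return int(np.clip(round(1 + 8 * diff / 10.0), 1, 9))

print([dyad_response(0, k) for k in range(1, 7)])
```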
Analyses
For the dyads method, mean dissimilarity scores were calculated from the responses on the 9-point scale and used as input for a metric dissimilarity matrix D. Subsequently, this matrix was transformed into a metric similarity matrix D' by transforming each element x_jk (representing the mean dissimilarity between the column-j and row-k health states) of D into 10 − x_jk (see Figure 2 for the analytical process).
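In code, this construction and transformation are straightforward; in the sketch below the individual ratings are invented.

```python
import numpy as np

n_states = 7
rng = np.random.default_rng(7)

# Invented ratings: 500 respondents x 21 dyads, each on the 1-9 scale.
pairs = [(j, k) for j in range(n_states) for k in range(j + 1, n_states)]
ratings = rng.integers(1, 10, size=(500, len(pairs)))

D = np.zeros((n_states, n_states))
for idx, (j, k) in enumerate(pairs):
    D[j, k] = D[k, j] = ratings[:, idx].mean()  # mean dissimilarity

D_prime = 10.0 - D  # similarity transform; diagonal becomes 10 (self-similarity)
print(np.round(D_prime, 2))
```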
For the triads method, individual responses were entered into a paired similarity dominance matrix. All individual matrices were summed to construct a paired similarity dominance array. By summation over the matrices of this array, the marginals were used as input for a metric similarity matrix T (see Figure 3 for the analytical process). All inconsistent responses per triad were omitted from the analyses. Spearman correlation coefficients between the number of inconsistencies and respondents' characteristics were calculated to assess which factors contributed to inconsistencies.
For the quads method, the percentage of times a pair of health states was chosen over another pair of health states was used as input for the paired similarity dominance matrix. In a fashion resembling the triads method, this matrix was transformed into a metric similarity matrix Q (see Figure 4 for the analytical process).
All of the above similarities were scaled with metric and non-metric MDS by means of the SPSS (version 20) algorithms in PROXSCAL [31] and rescaled to a 0-10 scale. For the goodness of fit of the six solutions, the stress-1 values were compared in combination with Shepard diagrams [36]. We adopted Kruskal's benchmark values of stress-1 for non-metric MDS: .20 = poor; .10 = fair; .05 = good; .025 = excellent [34]. Stress values should be regarded as a badness-of-fit measure. Raw stress is defined as the sum of the squared representation errors (observed distances minus modeled distances). Stress-1 values are the raw stress values normalized by the scale of the MDS space, which is the sum of the squared distances. Stress-1 values are minimized with the following loss function:

σ(X) = Σ_{k=1}^{n} w_k Σ_{i<j}^{m} ( d̂_ijk − d_ij(X_k) )².

This function provides nonnegative, monotonically non-decreasing values for the transformed proximities (d̂_ijk). The distances (d_ij(X_k)) are the Euclidean distances between the health states in the rows of X_k (the coordinate space). Furthermore, in the equation above n represents the number of respondents, m the number of health states, and w the weight given to each individual matrix (in this study always set to 1). The Shepard diagram comprises two juxtaposed plots. The first part consists of a scatter plot with proximities (observed data) on the horizontal axis and distances (model values) on the vertical axis. In metric scaling, we also have the transformed proximities, which are computed by linear regression. In the Shepard diagram, the transformed proximities are added to the vertical axis and used to draw the best-fitting step function through the scatter plot of proximities and transformed proximities. Therefore, the Shepard diagram can be used to inspect both the residuals (misfit) of the MDS solution and the transformation. Outliers can be detected, as well as possible systematic deviations. In non-metric MDS the transformed proximities are computed by monotone regression and are represented by a best-fitting monotone step function in the Shepard diagram.
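The sketch below computes raw stress and a stress-1 value for a candidate one-dimensional configuration against observed proximities, following the definitions above; the configuration and proximities are invented.

```python
import numpy as np
from scipy.spatial.distance import pdist

# Invented proximities (condensed upper triangle) for four health states.
proximities = np.array([2.0, 5.0, 8.0, 3.0, 6.0, 4.0])

# Candidate one-dimensional configuration X (one coordinate per state).
X = np.array([[0.0], [2.2], [4.9], [8.1]])
distances = pdist(X)  # Euclidean distances between all pairs

raw_stress = np.sum((proximities - distances) ** 2)
stress_1 = np.sqrt(raw_stress / np.sum(distances ** 2))
print(f"raw stress = {raw_stress:.3f}, stress-1 = {stress_1:.3f}")
```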
Feasibility for each task was assessed by a 1-5 difficulty question where 1 was labelled 'very easy' and 5 'very difficult'. In addition, the median time to complete per response task and the percentage of respondents who did not finish the survey were compared.
Respondents
In total, 1510 respondents were included in the study: 532 for dyads, 469 for triads, and 509 for quads. All three samples are roughly representative of the Dutch general population in terms of gender, age, and education, although the triads sample has a skewed gender distribution (Table 1).
MDS Solutions
Metric. The three rescaled metric MDS solutions resulted in different rank orders for the seven health states (Figure 5). The stress-1 values for the dyads, triads, and quads solutions were 0.300, 0.331, and 0.378 respectively, indicating a poor fit [31], which is also displayed in the Shepard diagrams (Figure 6). The correlation between dyads and triads was r = 0.95 (p < 0.01), between dyads and quads r = 0.73 (p = 0.063), and between triads and quads r = 0.81 (p < 0.05), indicating strong to near perfect associations.

Non-metric. As with the metric solutions, the three rescaled non-metric MDS solutions resulted in different rank orders for the seven health states (Figure 5). The stress-1 values for the dyads, triads, and quads solutions were 0.129, 0.012, and 0.014 respectively, indicating a poor fit for the dyads but an excellent fit for the triads and quads (Figure 6). The correlation between dyads and triads was r = 0.76 (p < 0.05), between dyads and quads r = 0.76 (p < 0.05), and between triads and quads r = 1.00 (p < 0.001), indicating strong to perfect associations.
Inconsistencies
There were statistically significant (p < 0.05) Spearman rank correlations between the number of inconsistencies and respondents' characteristics. These characteristics were gender (r = −0.201), education (r = −0.180), self-assessed physical health (r = 0.093), self-assessed self-care (r = 0.013), and time to complete (in seconds) (r = −0.214). This suggests that males make more inconsistent responses, as do people with a lower education, people with more problems on physical health or self-care, and people who have a lower time to complete.
Feasibility
There were no apparent differences in the reported difficulty of the response tasks. Of the dyads, triads, and quads respondents, 26%, 29%, and 29% respectively found the task (very) easy, while 31%, 31%, and 30% found it (very) difficult. The median times to complete per choice task were 11.7, 20.5, and 17.1 seconds for the dyads, triads, and quads respectively. The number of drop-outs was 8%, 19%, and 7% for the dyads, triads, and quads respondents respectively. In the triads task, the percentage of respondents who had one inconsistency in at least one triad was 48%.
Discussion
This is the first explorative study attempting to demonstrate the feasibility of eliciting similarity data on health states and scaling these data with metric and non-metric multidimensional scaling (MDS). One of the main motives to investigate MDS was the fact that choice models suffer from dominance problems at the top and the bottom of the health-state continuum. Similarity judgments do not have this limitation. In fact, combining similarity response tasks with conventional choice tasks may be an attractive strategy. Three different response tasks to elicit similarity data were investigated. All three provided data that could be scaled with MDS in such a way that health state '111111' was positioned at the end of the HRQoL continuum. This is a logical requirement of any health-state continuum and serves as a validity check of the derived data. Additionally, all MDS solutions based on the three response tasks had state '111211' positioned at the end of the scale.
Interestingly, there were discrepancies between the three tasks for the health states in between. These are difficult to explain from a theoretical point of view. In regard to the fit statistics, the triad and quad similarity responses were scaled excellently with non-metric MDS. These solutions had a similar rank order and a perfect association. However, all non-optimal states were clustered together. This would indicate that respondents do not perceive a substantial difference in quality between health states with some problems on one domain, which is consistent with previous findings based on preference data [39]. Given the content of the Shepard diagrams, a more likely explanation is that the non-metric solutions were degenerate. A degenerate solution occurs when fit statistics approach zero even though the data are not represented properly. What we observed is a dichotomization of the data: one cluster of distances between 'perfect health' and the other six health states, and a cluster of distances between all pairs of health states in which all states have some problems on a single domain. In such a solution, only one aspect is properly represented: the 'between-block' distances are larger than the 'within-block' distances. As Borg & Groenen [31] state: ''This type of degeneracy can be expected with ordinal MDS when the dimensionality is high compared to the number of objects. It all depends though, on how many within-blocks of zero exist.'' In the current study the number of dimensions was one, the number of objects seven, and the number of within-blocks of zero was two. It is this last aspect that we clearly observe in the Shepard diagrams. To avoid degeneracy, stronger restrictions can be imposed. Examples of such restrictions are linear transformations with an intercept, spline transformations, or any other type of metric representation. Since we investigated metric as well as non-metric MDS, we have already imposed metric restrictions. These did not represent the data very well, as indicated by the Shepard diagrams and fit statistics.
A benefit of the MDS models is that interval-level data as well as ordinal-level data can be used to generate metric scales. An example of a comparison between metric and non-metric MDS in an application of health-state valuation can be found in the study by Krabbe et al. [30]. In this study, distances between health states were derived by summing the squared distances of empirically obtained VAS values and then taking the square root of the sum. Assuming the VAS obtains interval-level data, these distances also have interval-level properties, and can thus be scaled with metric MDS. Almost exactly the same distances were scaled with non-metric MDS by assigning integer ranks to each of the distances. The Spearman rank correlation coefficient between the metric and non-metric MDS solutions was close to 1.0. These results were not surprising, as the number of ordinal constraints (i.e., 171 similarities) was sufficiently high. An earlier study by Shepard [38] investigated this same issue in a different context. Shepard used Monte Carlo simulations to reconstruct random points in a two-dimensional space with non-metric MDS. He found that for 7 points the root-mean-square of the 7 correlations was 0.969 between the true distances and the non-metric MDS solution. When the number of points increased to 45, this correlation was as high as 0.99999994.
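Shepard's simulation can be reproduced in outline with modern tools. The sketch below is a minimal illustration (not the original Monte Carlo design): it discards all but the rank order of the true distances between random two-dimensional points and checks how well non-metric MDS recovers those distances:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import rankdata, spearmanr
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
points = rng.uniform(size=(7, 2))   # 7 random points in a 2D space
true_d = pdist(points)              # the 21 true Euclidean distances

# Keep only ordinal information: integer ranks of the distances,
# arranged as a symmetric dissimilarity matrix.
ranked = squareform(rankdata(true_d))

# Non-metric MDS on the ranked dissimilarities.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(ranked)

# Correlation between the true and the recovered distances.
rho, _ = spearmanr(true_d, pdist(coords))
print(rho)
```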
The triads response tasks resulted in at least one inconsistency for nearly half of the respondents, which casts doubt on the feasibility of this response task. It suggests that internet surveys are not an appropriate medium for this response task. However, such an interpretation seems groundless, since respondent answers on the feasibility question do not suggest such a high level of inconsistent responses. The dyads and quads task had the lowest number of people dropping out. These two tasks appear to be the most promising for eliciting health-state similarity judgments in an online setting.
This study produced different results for each of the three similarity response tasks. Possible explanations for these differences are the following. The solutions are based on a relatively low number of similarities. Since only seven health states were used, the number of similarities was 21, which might be too low. Probably more relevant is the fact that the health states that were chosen in the current study turned out to be quite similar in severity. When the methodology for this study was discussed, no health-state values for the 6 DQI states with 'some problems' on a domain were available. In a later DC experiment [39] performed on DQI health states, the regression coefficients for each of the domain-levels 'some problems with…' showed overlapping confidence intervals. Therefore, it seems likely that the current study focuses on a very narrow space on the health-state continuum. We recommend that future studies use a more diverse set of health states that would cover a broader range than used in this study or even cover the entire health-state continuum. Another consideration is to use health states from well-established value-based classification systems such as the EQ-5D [40]. This would allow for more inferences of validity of MDS by comparing it with TTO and DC models.
One strength of the current study is the large number of respondents. The total sample was representative of the Dutch general population in terms of gender, age, and education. What we did not take into account is that MDS is able to cope with missing data. If the error level is low, excellent representation is possible with as much as 80% missing data, provided the data is scaled in the 'true' dimensionality and that the number of health states is high compared to the number of dimensions [31]. This allows for less-demanding incomplete designs to be used in future studies.
Eliciting similarity data also has some limitations. The number of respondents required to obtain similarity data is higher than for preference-based data (e.g., TTO and DC). At present, there is no indication which combination of health-state pairs will provide optimal similarity matrices. Another limitation is that the process of aggregating individual data into similarity matrices is non-standardized. Furthermore, the MDS approach uses response tasks that are potentially more difficult than a single DC task.
Despite the abovementioned limitations the MDS approach could be advantageous compared to other valuation methods. From a theoretical point of view the MDS approach compares favorably to both TTO and SG as the judgmental task is not influenced by problems such as adaptation, discounting, time preferences, a choice for indifference procedures, nor are there difficulties quantifying states considered worse than 'dead'.
Compared to DC models, which also do not suffer from the abovementioned limitations, the MDS approach offers some additional advantages. Currently there are limitations regarding scaling DC models on the dead-perfect health scale [41], although attempts to overcome this limitation have been put forward [42]. Additionally, in DC models researchers have to make assumptions and choices regarding the functional form of the value function (e.g., only main effects or main effect and interactions, or a multiplicative model instead of a linear model). In the MDS models, the functional form of the value function is undefined, allowing for more realism and full flexibility in respondent heterogeneity.
There are several options to use similarity data to arrive at a full set of values for all possible health states. For the dyads task individual similarity matrices can be obtained. These matrices can be scaled to obtain health-state values for each state present. Similar to valuation studies using TTO or SG, by using regression techniques health-state values can be estimated for the health states not included in the dyads tasks. For the triads and quads methods the possibilities are more restricted. If we want to derive values for all health states of a particular value-based system (e.g., DQI, EQ-5D), then a matrix that contains similarity judgments on all health states is required. This seems an extremely challenging task as it could require millions of similarity responses. Future work should address this particular issue and investigate avenues to overcome this limitation. One possible solution that has been put forward for another novel health-state valuation method [43] is to include similarity response tasks as a standardized part of (inter)national health surveys. In time this could lead to a sufficient amount of data to estimate values for all possible health states of a particular health-state classification system. Another way of applying MDS is by transforming preference data (e.g. TTO, DC) into similarity data. Nevertheless, this methodology would still suffer from dominant choice sets.
Since similarity data have the greatest potential for scaling data to a one-dimensional interval-level scale based on a single response task, the above-mentioned suggestions and limitations point to fruitful directions for future research in the field of health-state valuation.
"year": 2014,
"sha1": "f5ddde3195c4b9e7eb95f26b85ce5996ddde945f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0089091&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "db092e81321a44c2707b0cf9a1475e581b89c0b6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine"
]
} |
Signalling by Transforming Growth Factor Beta Isoforms in Wound Healing and Tissue Regeneration
Transforming growth factor beta (TGFβ) signalling is essential for wound healing, including both non-specific scar formation and tissue-specific regeneration. Specific TGFβ isoforms and downstream mediators of canonical and non-canonical signalling play different roles in each of these processes. Here we review the role of TGFβ signalling during tissue repair, with a particular focus on the prototypic isoforms TGFβ1, TGFβ2, and TGFβ3. We begin by introducing TGFβ signalling and then discuss the role of these growth factors and their key downstream signalling mediators in determining the balance between scar formation and tissue regeneration. Next we discuss examples of the pleiotropic roles of TGFβ ligands during cutaneous wound healing and blastema-mediated regeneration, and how inhibition of the canonical signalling pathway (using small molecule inhibitors) blocks regeneration. Finally, we review various TGFβ-targeting therapeutic strategies that hold promise for enhancing tissue repair.
Transforming Growth Factor Beta (TGFβ) Signalling
The TGFβ superfamily consists of 33 members, most of which are dimeric, secreted polypeptides. In addition to the three prototypic TGFβ isoforms (TGFβ1, TGFβ2 and TGFβ3), the superfamily also includes the activins, inhibins, Bone Morphogenic Proteins (BMPs), Growth and Differentiation Factors (GDFs), myostatin, nodal, leftys and Mullerian Inhibiting Substance (MIS) [1]. The scope of this review will be largely limited to the three isoforms of TGFβ ligand: TGFβ1, TGFβ2 and TGFβ3. The specific roles of other members of this superfamily in tissue repair and regeneration have been thoroughly reviewed elsewhere (see [2] and [3] for activins and BMPs, respectively).
Members of the TGFβ family are widely recognized as key signal transducers among multicellular animals (metazoans), including both invertebrates (e.g., the placozoan Trichoplax adhaerens [4], and acorn worms [5]) and vertebrates. The three prototypic TGFβ isoforms, TGFβ1, TGFβ2 and TGFβ3, are structurally similar cytokines encoded by separate genes that act in autocrine and paracrine manners to regulate early embryonic development, the maintenance and regeneration of adult tissues, as well as various disease processes [6][7][8]. TGFβ ligands are secreted as inactive precursors bound to latency-associated peptides and are either directly activated or embedded in the extracellular matrix (ECM) to be activated at a later time. In most tissues, significant amounts of TGFβ are stored in the ECM [9]. TGFβ ligand activation is accomplished by the lytic action of proteases including elastase and matrix metalloproteases (MMPs), or through conformational changes induced by various integrins [10,11]. In the canonical pathway, active ligand binds the type II receptor (TβRII), which recruits and phosphorylates the type I receptor (TβRI); the activated receptor complex in turn phosphorylates the receptor-regulated SMADs (R-SMADs) SMAD2 and SMAD3, which partner with SMAD4 and translocate to the nucleus. There, transcriptional output is shaped by direct DNA binding as well as numerous protein-protein interactions with transcriptional co-activators and co-repressors mediated through the SMAD MH2 domain [4]. This ability of SMAD proteins to interact with numerous other proteins allows them to act as an integration hub for cell signalling crosstalk and greatly influences signalling outcome [4].
The extent of SMAD signalling activation is modulated by different mechanisms, including competitive receptor binding by R-SMADs and I-SMADs, the specific and timely degradation of signalling mediators, and receptor trafficking. I-SMADs (SMADs 6 and 7) bind directly to the TβR complex and block R-SMAD access to the receptor [26]. I-SMADs also compete with R-SMADs in the nucleus: via its MH2 domain, SMAD7 binds directly to DNA and prevents SMADs 2, 3 and 4 from binding [1]. Ubiquitin-dependent proteasomal degradation of TGFβ-activated R-SMADs (such as SMADs 1, 2 and 3) is mediated by different E3-type ubiquitin ligases, and modulates both their steady-state levels and the duration of their activated state. Among these, the best documented are the SMAD ubiquitination-related factors 1 and 2 (Smurf1 and Smurf2, respectively) [26]. SMAD4 is also targeted for proteasome-dependent degradation by a Smurf-independent pathway that might involve the SCF-Skp2 complex [27]. Finally, I-SMADs also serve as adaptors that recruit the E3 ubiquitin ligases Smurf1 and Smurf2 to the TβR complex, facilitating proteasome-dependent degradation of activated TβRI [26]. SMAD7 can additionally recruit the phosphatase GADD34-PP1c to the activated TβR complex to attenuate signalling [1].
The localization of the TβR complex to specific membrane domains is key for signalling modulation, as it dictates its internalization via different routes and determines whether or not signalling will occur [28]. Internalization of the receptor via clathrin-coated vesicles (CCV) into early endosome antigen 1 (EEA-1)-positive and SARA-containing endosomes promotes signalling [29]. In contrast, internalization via membrane rafts (membrane domains of tightly packed cholesterol-sphingolipid protein complexes) into caveolin-positive vesicles results in receptor degradation and prevents signalling [29]. The latter vesicles specifically carry the inhibitory SMAD7, which by associating with Smurf2 facilitates Smurf-mediated targeting of TβRI for proteasome-dependent degradation [29]. Although what causes the receptors to segregate into these two different routes is not fully understood (ligand binding does not necessarily favor one route over the other [26]), it is known that the extracellular domain of TβRII (possibly via interaction with other cell surface glycoproteins) is required for partitioning into membrane rafts [30]. Further, when TβRIII/betaglycan is present, it recruits both TβRII and TβRI to non-raft membrane fractions, thus promoting SMAD signalling [31]. Taken together, these results indicate that the levels of expression of the TGFβ receptors themselves, and in particular betaglycan, dictate the extent of canonical signalling activation via modulation of receptor trafficking.
Despite sharing 71%-76% sequence identity and signalling through the same canonical SMAD intermediates (SMAD2 and SMAD3), a growing body of evidence suggests that the three TGFβ isoforms have different physiological roles [38]. Each TGFβ isoform is transcribed from a unique promoter and has a distinct pattern of tissue expression [39]. The differences in isoform expression patterns are reinforced by non-overlapping phenotypes seen in TGFβ isoform-specific transgenic and knockout mice [38]. Some of the most well-studied examples of TGFβ isoform-specific biology are cardiac development [40], palate formation [41], and cutaneous wound healing; the latter will be discussed in Section 2.
Overall, the outcome of TGFβ signalling input is highly context-dependent, as it is the net result of numerous contributing factors, including: the specific ligand(s) present in the microenvironment; the bioavailability and concentration of such ligands; the cell type; the levels of signalling mediators within the cell; the extent of activation of canonical versus non-canonical signalling pathways; and the extent to which both of these branches of TGFβ signalling crosstalk with signalling inputs via other receptor systems, both in the cytoplasm and in the nucleus [4]. Importantly, increasing evidence indicates that major modifying signalling inputs are mediated by the cellular cytoskeleton in response to mechanical stimuli, such as loss of integrity of cell-cell contacts [42], cellular tension [43], and ECM stiffness [44,45]. Mechanotransduction of these stimuli in the presence of active TGFβ signalling results in synergistic responses between mechanosensitive transcriptional co-activators and TGFβ-regulated signal transducers, such as the R-SMADs [46][47][48]. As discussed in more detail below, this synergy plays an important role during key steps of wound healing and regeneration, such as fibrogenesis [46,47].
Cutaneous Wound Repair
Among vertebrates, the reparative response to injury follows a stereotypical sequence of events that can be divided into three main overlapping phases: hemostasis and inflammation; proliferation; and maturation and remodeling [49,50]. Throughout these events, TGFβ plays a number of crucial roles that vary in a context and cell type-dependent manner. The pleiotropic effects of TGFβ include regulating cell proliferation, differentiation, migration, invasion and chemotaxis of the epithelial, fibroblastic and immune cell tissue compartments (the latter involved in inflammatory response), as well as endothelial cell proliferation, migration and invasion, and mural cell maturation (to generate functional blood vessels) during angiogenesis [1,51].
Hemostasis and Inflammation Phase
TGFβ isoforms demonstrate a number of dynamic interactions throughout the processes of hemostasis and inflammation. Following tissue injury, blood vessels rupture and the resulting exposure of platelets (thrombocytes) to sub-endothelial collagen causes platelet aggregation, degranulation and activation of the coagulation cascade [49]. Platelet alpha-granules are a particularly rich source of TGFβ1 (upwards of 40 to 100 times more than in other cell types) [52]. Alpha-granules also contain other TGFβ isoforms, although the ratio is heavily skewed (4000 TGFβ1: 1 TGFβ2: 10 TGFβ3) [53,54]. Platelet-induced activation of the coagulation cascade results in the formation of a fibrin clot which achieves hemostasis as well as serves as a scaffolding for the migration of inflammatory cells into the wounded tissue [49].
Following hemostasis, TGFβ next participates as a potent chemoattractant and inflammatory mediator for various types of immune cells, including neutrophils and other polymorphonuclear (PMN) cells (basophils, eosinophils, mast cells; beginning 24 to 48 h after wounding) [55][56][57] and circulating monocytes (48 to 96 h post-wounding) [58][59][60]. Curiously, TGFβ ligands are also known to antagonize other neutrophil chemoattractants, such as interleukin-8, and can suppress the ability of immune cells to transmigrate into injured tissues [56,61]. Hence, TGFβ participates in both stimulating the initial immune response, through the recruitment of PMN, and limiting the extent of the inflammatory response [56]. Whereas platelets are characterized as being rich in TGFβ1, in neutrophils, the ratio of TGFβ isoforms is biased towards TGFβ3 (12 TGFβ1: 1 TGFβ2: 34 TGFβ3), indicating the possibility of isoform-specific differences throughout the wound-healing process [54]. Following their recruitment, many subsequent roles of macrophages-including the initiation of granulation tissue formation and angiogenesis-are also known to be mediated by TGFβ [50,58].
Proliferative Phase
The proliferative phase involves three major TGFβ-mediated events: re-epithelialization; angiogenesis; and extracellular matrix (ECM) synthesis. In response to injury, epithelial cells located at the wound margins become activated and undergo a phenotypic change characterized by an alteration of their cytoskeleton and the dissolution of cell-cell contacts [62,63]. Migration and proliferation of epithelial cells is driven by a variety of autocrine and paracrine signalling pathways (reviewed by [63] and [64]), of which TGFβ is one of the most extensively studied. Prior to injury, TGFβ1 in the epidermis functions as a homeostatic cytokine, blocking cell-cycle progression and suppressing epithelial hyperplasia [65][66][67]. Following injury, all three TGFβ isoforms promote re-epithelialization [67][68][69], and their abolishment (with the use of neutralizing antibodies) impairs wound closure [69][70][71][72]. However, whereas TGFβ1 acts to promote keratinocyte migration in vitro [67], TGFβ3 does not [69].
The key mechanism involved in re-epithelialization is the epithelial to mesenchymal transition (EMT) [73]. Key cellular events during EMT, including the loss of cell-cell contacts and increased motility, are driven by both canonical and non-canonical TGFβ signalling [73]. Changes in the levels of SMAD3 might play an important role in the switch of TGFβ function from a growth-suppressing cytokine in intact epithelium to an EMT-promoting one in wounded epithelium. SMAD3 mediates TGFβ's growth-suppressive effects, and a decline in endogenous SMAD3 occurs in parallel to EMT and leads to loss of growth-inhibitory response to TGFβ during this process [74]. In agreement with these findings, mice that are heterozygous or null for SMAD3 show enhanced re-epithelialization and wound closure [75,76].
Epithelial cell injuries, such as those involving disruption of the Crumbs complex that associates with the tight junction (apical cell-cell contacts), are also known to sensitize cells to TGFβ-mediated EMT by enhancing nuclear translocation of SMAD2/3 via the Hippo pathway mediators TAZ (transcriptional co-activator with PDZ-binding domain) and YAP (Yes-associated protein) [46,77]. Interestingly, TAZ silencing prevents robust expression of alpha smooth muscle actin (αSMA) by TGFβ and subsequent epithelial to myofibroblast conversion in wounded epithelium [46], and skin-specific deletion of both TAZ and YAP in adult mice impairs skin regeneration after wounding [78]. This impairment was in part attributed to the role of TAZ/YAP in maintaining the stem-cell population of the basal layer of the skin [78]. Together, these observations suggest that a TGFβ and Hippo signalling crosstalk mediates TGFβ's wound-healing properties.
Another key event during the proliferation phase is angiogenesis. Angiogenesis involves the invasion of the wound bed by capillary sprouts to create a de novo microvascular network [79][80][81][82]. Although still not fully understood, due to its context-dependency, a role for TGFβ as a modulator of angiogenesis has long been recognized [83]. TGFβ's ability to induce angiogenesis might be linked, at least in part, to its capacity to promote vascular endothelial growth factor (VEGF) expression at the site of injury. VEGF mediates angiogenic activity during the proliferative phase of wound healing [80], and TGFβ is known to recruit VEGF-producing hematopoietic effector cells to promote angiogenesis in vivo [84]. All three TGFβ isoforms can also induce endothelial to mesenchymal transition (EndoMT) [40], which has been widely implicated in pathologic fibrosis of various organs (including the skin [85,86]), as well as the sprouting phase of angiogenesis [87].
Finally, TGFβ is involved in ECM synthesis and the recruitment of fibroblasts from the adjacent dermis [88], as well as from perivascular sources (e.g., pericytes) and bone marrow (i.e., fibrocytes) [89][90][91]. Once they have entered the wound bed, fibroblasts proliferate and begin synthesizing the provisional ECM (mostly collagen and fibronectin) that precedes the formation of granulation tissue proper. Granulation tissue is a transient, heavily vascularized reparative organ characterized by a loose matrix of collagen, fibronectin and hyaluronic acid interspersed with fibroblasts and macrophages [49,50]. TGFβ ligands play a fundamental role in fibroblast regulation and the production of granulation tissue. TGFβ1 mediates fibroblast collagen production (specifically type I and III), as well as the inhibition of MMPs [92]. Related to this, TGFβ1-mediated signalling has been implicated in diseases characterized by excessive collagen deposition, including keloids and scleroderma [92][93][94]. Importantly, while TGFβ1 and TGFβ2 promote collagen deposition and scar formation, TGFβ3 appears to be anti-fibrotic [95,96]. Hence, the combined effect of TGFβ3 and TGFβ1 is interpreted as a fine-tuning of collagen production [92,97]. As the proliferative phase of wound healing progresses, a subset of fibroblasts will differentiate into myofibroblasts and another subset will undergo apoptosis, thereby marking the beginning of the final stage of wound healing, the remodeling phase [49].
Remodeling Phase
The final phase of wound healing is remodeling, involving the apoptosis of resident cells (including fibroblasts and endothelial cells), as well as wound contracture, and the replacement of fibronectin and type III collagen in the wound bed with type I collagen [49,92]. As a result, the once highly cellular and heavily vascularized mass of granulation tissue is transformed into a largely avascular and acellular scar [88,91]. Wound contracture is facilitated by myofibroblasts, a population of fibroblasts that acquire a contractile phenotype, as evidenced by their expression of αSMA [91]. The acquisition of αSMA expression is controlled by TGFβ1, through SMAD-dependent and independent transcriptional activity at the αSMA promoter [44,91,98], as well as by mechanical loading of the wound environment [91]. Curiously, myofibroblasts are absent from the wound bed during the earlier phases of wound healing when levels of TGFβ1 are at their highest [91]. One explanation is that in order to express αSMA, fibroblasts require a combination of a stiff milieu/mechanical stress and TGFβ1 [91,98]. In support of this prediction, in vitro experiments have demonstrated that even in the presence of adequate TGFβ1 levels, fibroblasts fail to transition to myofibroblasts if plated on low stiffness environments [44]. This might be related to the observation that a mechanoresistant/stiff ECM facilitates the activation of latent, ECM-sequestered TGFβ1 by the myofibroblasts themselves [45]. In this study, a stiff ECM was found to be required for integrin-mediated activation of self-produced TGFβ1 by myofibroblast, as a result of their cytoskeletal contraction caused by ECM tension [45]. In agreement with these findings, myofibroblast-populated wounds displayed a higher level of SMAD2/3 activation in stressed as compared to relaxed tissue, despite similar levels of TGFβ1 and TβRII [45]. This suggests that during wound remodeling, TGFβ1 activation (and the consequent maintenance of the myofibroblast phenotype) is restricted to areas with a stiff ECM, equivalent to that encountered in the late-wound granulation tissue [45].
Although the mechanisms through which fibroblasts and myofibroblasts interpret their environment are not completely understood, members of the Hippo signalling pathway, such as TAZ, are likely involved in mechano-sensing the tissue environment and modulating TGFβ1 responsiveness [46,48]. In agreement with this notion, TAZ was shown to confer SMAD3 sensitivity to the αSMA promoter, and to facilitate αSMA expression in response to TGFβ1 in combination with mechanical stretch [47]. In contrast, when there was only mechanical stretch (but no TGFβ1), another major mechanosensitive transcriptional co-activator known as myocardin related transcription factor (MRTF), interacted with TAZ and SMAD3 to suppress SMAD3-TAZ-mediated activation of the αSMA promoter [47]. Together, these findings support a model whereby stretch alone promotes a limited contractile response, possibly promoting healing, while stretch plus TGFβ1 favors the formation of fibrotic tissue [47].
Similar to TGFβ1, TGFβ2 is also a potent inducer of the fibroblasts to myofibroblast transition (both in vitro and in vivo) [99]. In contrast, the role of TGFβ3 is more complex. While TGFβ3 appears to promote the acquisition of a myofibroblast phenotype in vitro, in vivo it inhibits myofibroblast formation [96,99].
Exceptions to Scar Formation in Mammals
Among mammals it is well understood that most injuries to the skin are resolved with the formation of scar tissue. Although scar tissue acts to help restore structural integrity and homeostasis, it is a dysfunctional replacement. Conspicuously, scar tissue fails to re-develop hair follicles and glands, as well as the protein elastin and the original basket-weave collagen architecture of the dermis. As a result, scars lack the tensile strength of uninjured skin [96,100]. However, a number of remarkable exceptions to this mammalian scarring paradigm exist. For example, in some species of African spiny mice (Acomys), large sections of dorsal body skin can be shed (autotomized) and then regenerated scar-free, complete with hair follicles and glands [101]. These species can also regenerate through-and-through ear punch wounds, regenerating skin and cartilage [101]. Curiously, a recent qRT-PCR screen has revealed that TGFβ1, typically considered a pro-inflammatory cytokine, is significantly upregulated during wound healing in Acomys: a seven-fold increase compared to uninjured skin; in mice (which scar) the increase is only three-fold [102].
Another notable example comes from fetal mammals. Many mammals (including humans, rats, mice, pigs and monkeys) are capable of scar-free cutaneous healing in the early-to mid-gestation stages of fetal development [88,103,104]. Although details of the mechanisms permitting scar-free fetal wound healing remain to be fully elucidated, a role for TGFβ has been established [88]. One of the key observations is that the expression of TGFβ isoforms differs between the fetal and adult responses to injury. More specifically, whereas adult cutaneous wounds demonstrate high levels of TGFβ1 and TGFβ2, but low levels of TGFβ3, the expression pattern in the fetal wound is the reverse (high expression of TGFβ3, low expression of TGFβ1 and TGFβ2) [105,106]. If fetal wounds are treated with exogenous TGFβ1, the result is scarification [107]. Alternatively, if adult wounds are treated with exogenous TGFβ3, or if endogenous TGFβ1 and TGFβ2 are blocked (e.g., with neutralizing antibodies), the severity of scarring is reduced [96]. These observations combined with numerous other examples from adult wound healing place TGFβ isoforms, and in particular their relative ratios, as a driving force in determining the balance between tissue repair and tissue regeneration. To better understand this phenomenon, the next section examines the role of TGFβ isoforms in species that possess the unique ability, like fetal wounds, to heal without scarification.
The involvement of specific TGFβ isoforms in the three phases of cutaneous wound healing is summarized in Figure 1.

Figure 1. The involvement of specific TGFβ isoforms in the three phases of cutaneous wound healing. Generally, TGFβ1 and TGFβ2 are stimulatory, while TGFβ3 is inhibitory. However, TGFβ3 can also stimulate specific processes (e.g., re-epithelialization). Green arrow: stimulatory; continuous red line: inhibitory; dashed red line: potentially inhibitory, inferred from relative levels at the beginning (low) and end (high) of the hemostasis and inflammation phase.
Blastema Formation
Amongst vertebrates, many of the most striking examples of multi-tissue regeneration begin with the formation of a mass of mesenchymal-like cells at the wound site-the blastema [108]. Although the blastema appears to be composed of a homogeneous population of undifferentiated cells, various recent studies have demonstrated that blastema cells are actually a heterogeneous pool of lineage-restricted progenitor cells [109][110][111]. Consequently, blastema cells are not a pluripotent (or perhaps even multipotent) population, but instead retain a memory of their germ-layer origin (axolotls: [109], mouse digits: [111]). Details of blastema formation remain poorly understood, but it is predicted to be the result of either reprogramming events occurring amongst the different lineage restricted cell populations, or rapid expansion of tissue-specific stem-cell populations, or a combination of both [109][110][111].
One of the earliest signs of blastema-mediated (i.e., epimorphic) regeneration is the formation of a wound epithelium. The wound epithelium first forms as original epidermal cells surrounding the wound migrate across the site of injury [112]. Once re-epithelialization is complete, the wound epithelium begins to thicken, resulting in a capping structure that closely resembles the apical ectodermal ridge (AER) observed during limb development [113,114]. In addition to thickness, the wound epithelium also differs from the pre-wounding epidermis in that it lacks the distinctive stratified appearance, basal keratinocyte polarity and a mature basal lamina [115,116]. Furthermore, the wound epithelium demonstrates unique protein and gene expression profiles compared to normal epithelium [117][118][119]. Independent reports have established that the wound epithelium is key for blastema induction and proliferation [114,120].
TGFβ in Multi-Tissue Regeneration
One of the best-documented investigative approaches to demonstrate the requirement for TGFβ signalling during in vivo regeneration involves the use of the potent small molecule inhibitor SB-431542. This is a selective inhibitor of the type I receptors ALK4, ALK5 and ALK7, and acts to inhibit phosphorylation of SMAD2 and SMAD3 [121]. In axolotls, TGFβ1 mRNA is normally upregulated by blastema cells during the early (preparatory) phase of limb regeneration [122]. Moreover, if amputated animals are treated with SB-431542, cell proliferation is halted, the blastema fails to form, and regeneration is prevented [122]. Similarly, spontaneous tail regeneration by Xenopus tadpoles involves an increase in phosphorylated SMAD2 (pSMAD2) expression, as well as an upregulation of TGFβ family members xTGFβ2 (similar to TGFβ2), xTGFβ5 (similar to TGFβ1), as well as xGDF11 and xActivin-βA [123]. When amputated tadpoles are treated with SB-431542, wound healing is blocked, cell proliferation is reduced, and the blastema fails to form [123].
Other evidence supporting the involvement of TGFβ in regeneration comes from experiments with zebrafish. Following tail fin amputation, spontaneous regeneration of the appendage involves a significant upregulation of activin-βA, one of the subunits of the activin complexes AB and B [124]. Treatment with SB-431542 results in an abnormal wound epidermis, reduced cell proliferation, and the failure of the blastema to properly form. To expand these findings, the authors then used knockdown morpholinos to silence activin-βA expression. The result was a 50% reduction in regenerated tail size [124]. Combined, these experimental observations support the role of TGFβ signalling in cell proliferation, in addition to blastema formation and maintenance.
TGFβ signalling is also involved in zebrafish cardiac regeneration following cryoinjury. The cryoinjury method results in localized cell death along the ventricular wall, and has the advantage of histologically mimicking a myocardial infarction otherwise characteristic of mammals, including humans [125]. Myocardial repair is a two-step process, beginning with scar formation, which is then gradually replaced with new cardiac muscle [125]. During myocardial repair, all three TGFβ isoforms (TGFβ1, TGFβ2, TGFβ3), as well as activin βB (but not activin βA) were upregulated [126]. This increase in TGFβ ligands corresponds to a robust induction of pSMAD3 in both the injured myocardium and the uninjured myocardium directly adjacent to the wound, confirming activation of the TGFβ signalling pathway [126]. When cryoinjured fish were treated with SB-431542, myocardial regeneration failed. This regenerative failure is the result of both a suppression of initial collagen synthesis, thus limiting the early formation of a scar, combined with the inhibition of cardiomyocyte proliferation [126].
A possible role for activin-βA during regeneration has also been proposed for the leopard gecko following tail loss. Similar to Xenopus tadpoles, cells of the leopard gecko's wound epithelium and blastema demonstrate widespread expression of pSMAD2 [127]. In order to identify the ligand(s) responsible for SMAD activation, a qRT-PCR screen was performed (including TGFβ1, TGFβ2, TGFβ3, and activin-βA), but only activin-βA was significantly upregulated [127]. Combined, these experiments underscore the necessary and highly conserved role of TGFβ signalling in spontaneous regeneration, and point towards the activins as potential key players.
Murphy Roths Large (MRL) Mice
Murphy Roths Large (MRL/Mpj) mice were originally developed by selective inbreeding for studies of systemic lupus erythematosus, an autoimmune condition with debilitating clinical effects. Surprisingly, however, this mouse strain possesses an exaggerated healing response characterized by the ability to close ear hole wounds and to heal injuries to the myocardium [128,129]. The mechanism behind this increased regenerative ability remains poorly understood, but various lines of evidence point to a role for TGFβ signalling. First, MRL mice demonstrate enhanced levels of the three TGFβ isoforms in various tissues [130], and an increased TGFβ response to bacterial infection or lipopolysaccharide (LPS) challenge, compared to wild-type mice [131]. Second, two loci strongly correlating to autoimmunity on chromosomes 7 and 12 (and possibly responsible for the lupus phenotype in the MRL mice) co-localize with the genes for TGFβ1 and TGFβ3 (respectively), suggesting a possible, albeit speculative, mechanistic link [131,132]. Supporting this possibility, skin graft experiments placing MRL mouse skin or the skin of a haplotypically identical mouse (B10.BR) on B10.BR recipients suggest that the improved tissue repair in MRL mice is mediated by a reduced pro-inflammatory response, possibly involving TGFβ signalling [133].
TGFβ1 Receptor Mutant Mice
In an attempt to identify candidate genes involved in tissue regeneration, a forward genetics screen using N-ethyl-N-nitrosourea was used to generate a mouse strain with a fast-healing phenotype identified by ear hole wounding [134]. This phenotype was mapped back to a G to A transition in the gene that encodes TβRI, resulting in a substitution of a conserved arginine residue in the regulatory domain of TβRI. This mutation leads to a modest increase in TGFβ1 responsiveness (a two-fold increase, as measured by a PAI luciferase vector), as well as a slight increase in SMAD2 phosphorylation [134]. Unfortunately, the responsiveness to other isoforms of TGFβ was not evaluated; however, nearly three-quarters of known TGFβ-responsive genes were not affected by this mutation, thus suggesting a tailored modification to the TGFβ signalling pathway. This result demonstrates that receptor-level modifications can lead to phenotypically relevant changes producing an enhanced regenerative ability, and this situation could be analogous to isoform-specific differences in receptor activation.
TGFβ Signalling Targeting in Wound Healing and Tissue Regeneration
As TGFβ signalling drives a number of pathologic conditions, TGFβ-targeting agents have been developed for medical applications in oncology, fibroproliferative disorders, vascular diseases, and wound healing (reviewed in [135]). However, the clinical development of these agents has been challenging, in part due to the fact that TGFβ ligands are highly cell-type and context-dependent. Despite this limitation, the strategies discussed below hold therapeutic promise as potential enhancers of regenerative capacity.
Small Molecule Inhibitors
There are a number of small molecule inhibitors (SMI) of type II and type I TGFβ receptor kinases, but only the latter have progressed to phase I/II clinical trials (reviewed in [136]). SB-431542, a TβRI SMI discussed above, was extensively used in in vivo studies that demonstrated the role of canonical TGFβ signalling in tissue regeneration. However, more specific inhibitors have been developed since. One of these is LY2157299 (Eli Lilly and Company, Indianapolis, IN; Clinicaltrials.gov: NCT01373164), which has progressed to phase II in the oncology setting (reviewed in [137]). Although the application of this and similar SMIs to the improvement of healing and/or regeneration might be limited by their broad inhibition of signalling by different TGFβ family ligands (some of which may be crucial to these processes [123,124]), preclinical studies indicate potential in specific settings. For instance, a study evaluating the role of TGFβ in muscle regeneration found that TGFβ1 serum levels were elevated in older mice and humans, and that this effect was associated with a reduced capacity of satellite cells to regenerate muscle in aged individuals [138]. In this study, systemic treatment of older mice with an SMI of the type I receptors ALK4, 5 and 7 (A83-01), but not a neutralizing antibody or decoy receptor, restored the reparative capacity of old muscle [138]. A TβRI SMI (CAS 446859-33-2) was also observed to improve cardiomyoblast-mediated regeneration in mice [139]. Although little is known of the applicability of TβRI SMIs to improve wound healing, subconjunctival administration of SB-431542 was shown to reduce scar formation after glaucoma surgery in rabbits [140]. As these inhibitors progress through oncological clinical trials, it will be interesting to see how patients fare in the context of post-surgical wound healing following neoadjuvant therapy, as well as overall wound-healing capacity during and after adjuvant treatment.
Integrins are another target of SMIs. Previous studies have determined that various integrins (e.g., αvβ1) mediate non-proteolytic activation of TGFβ1 [141,142]. An SMI of the αvβ1 integrin (c8) has recently been developed and used to treat two different mouse models of pathologic fibrosis: induced pulmonary fibrosis and induced hepatic fibrosis [142]. Subcutaneous treatment with c8 resulted in a reduction of collagen deposition in both models. The authors concluded that inhibition of the αvβ1 integrin by c8 protects against TGFβ1-mediated fibrosis, although other potential integrin-dependent but TGFβ-independent anti-fibrotic mechanisms may also participate [142].
Monoclonal Antibodies
Compared to SMIs, monoclonal antibodies have several distinct advantages, including target ligand specificity and an extracellular mode of action. This is particularly relevant to tissue regeneration, as isoform-specific antibodies have the capacity to neutralize specific "inhibitory" ligands in the extracellular space. A number of antibodies directed against TGFβ ligands have progressed through various phases of clinical development [136]. One particularly promising example is fresolimumab (GC-1008, Genzyme/Sanofi, Cambridge, MA, USA), a humanized antibody that targets the TGFβ1, TGFβ2 and TGFβ3 ligands. To date, fresolimumab has progressed through phase I clinical trials in focal segmental glomerular sclerosis (NCT00464321), systemic sclerosis (NCT01284322) and idiopathic pulmonary fibrosis (IPF) (NCT0125385) [143].
Isoform-specific monoclonal antibodies against both TGFβ1 (metelimumab, CAT-192) and TGFβ2 (lerdelimumab, CAT-152) have also been developed (Cambridge Antibody Technology, Cambridge, UK; now part of AstraZeneca). Lerdelimumab (targeting TGFβ2) showed promise in glaucoma surgery by reducing scarring during subconjunctival wound healing in a randomized study in rabbits [144]. It also showed promise in a similar phase I/II study, in which the antibody was locally administered (subconjunctival injections) pre- and post-operatively to humans [145]. Although a phase III study that investigated its use in preventing scarring after first-time trabeculectomy for primary open-angle glaucoma (POAG) or chronic angle-closure glaucoma (CACG) did not find it beneficial [146], lerdelimumab was found to be safe in this and the previously mentioned human trials. Despite the discontinued clinical development of both lerdelimumab and metelimumab [147], pre-clinical and clinical studies with these or similar antibodies in different scenarios of healing/regeneration are necessary, as they may still provide the ability for TGFβ isoform-specific modulation of the wound environment in favour of scar-free healing, with potentially minor side effects.
Ligand Traps/Decoy Receptors
Several TGFβ ligand traps have been developed based on the peptide sequence of the TGFβ co-receptor betaglycan (a TβRIII). One such ligand trap, referred to as P144 or disitertide, is a peptide encompassing amino acids 730–743 from the membrane-proximal ligand-binding domain of betaglycan. P144 acts by interfering with binding and activity of TGFβ1 [148]. Systemic (intraperitoneal) treatment with P144 prevents fibrosis following a chemically induced liver injury in rats [148], while its topical administration ameliorates both bleomycin-induced skin fibrosis in mice [149] and human scar hypertrophy in a xenotransplant model in mice [150]. P144 (disitertide topical cream) is ready to enter phase II clinical trials for potential application in the treatment of localized scleroderma, and phase IIb for systemic sclerosis (http://dignabiotech.com).
Antisense Oligonucleotides
Another approach to target TGFβ signalling consists of blocking TGFβ ligand gene expression, or the expression of specific SMADs, through the use of anti-sense oligonucleotides (ASON). These short polymers inhibit target gene expression by binding to target mRNA sequences and blocking mRNA translation. Trabedersen, developed by Antisense Pharma (now Isarna Therapeutics, Munich, Germany), is a TGFβ2-specific ASON with demonstrated efficacy in phase II and III trials in oncology applications, specifically glioblastoma (reviewed by [136]). The evaluation of TGFβ- and SMAD-specific oligonucleotides in wound healing and regeneration is still at the preclinical stage, but the results so far are encouraging. Both TGFβ2-targeting and TGFβ1-specific ASONs showed a reduction in post-operative scarring after a single administration at the time of surgery in two different animal models of human glaucoma filtration surgery [151]. In this study, the TGFβ2-targeting ASON was determined to be the most effective treatment. A more recent study demonstrated that a SMAD3-specific ASON prevents scarring following flexor tendon repair surgery [152]. One advantage of anti-sense oligonucleotide therapy seems to be a long-lasting effect [151], which might reduce the number of necessary post-surgical administrations.
Indirect TGFβ-targeting Agents
Anti-TGFβ signalling effects and associated regenerative properties have also been observed for biologically active molecules produced by plants and animals, or chemically synthesized; some have already been approved for human and veterinary medicine. These include curcumin [153], decorin [154], halofuginone [155], quercetin, asiaticoside, and tetrandrine [156].
An emerging example is the angiotensin receptor blocker Losartan. In addition to its widespread use in treating hypertension, Losartan also inhibits TGFβ-induced activation of canonical and non-canonical signalling mediators [157]. Related to this, it shows some promise for patients suffering from Marfan syndrome and possibly other inherited connective tissue disorders in which excessive TGFβ signalling predisposes to aortic root aneurysm and/or skeletal muscle myopathy [157,158]. Losartan treatment at specific doses and schedules also improves muscle healing in a mouse model of contusion-induced muscle injury [159], and facilitates epidermal wound regeneration in a model of streptozotocin-induced diabetes in mice [160].
Another promising compound is pirfenidone (PFD), an anti-fibrotic, anti-inflammatory, and antioxidant agent with demonstrated ability to down-regulate a number of profibrotic cytokines, including TGFβ1 [161]. PFD has been licensed in many countries (except for the United States) for the treatment of idiopathic pulmonary fibrosis, a chronic lung disease resulting from an aberrant wound-healing circuitry in pulmonary epithelium [162]. PFD nanoparticles, administered 1 h post-injury and daily for up to 7 days, promoted re-epithelialization and decreased collagen type I synthesis and cornea opacity in a mouse model of alkali-induced corneal burn [163]. A more recent study on excisional wound healing in mice tested the effect of PFD delivered using several different topical modalities. Regardless of the mode of delivery, PFD was found to accelerate wound contraction and significantly reduce TGFβ expression as well as scarring [164].
Recombinant TGFβ
An alternative strategy to the pharmacological approaches described above involves the application of exogenous TGFβ ligands, most notably the recombinant TGFβ3 (Avotermin/Juvista) produced by Renovo (Manchester, UK) [165]. As demonstrated by three randomized, double-blind, placebo-controlled phase II clinical trials (NCT00594581, NCT00432211 and NCT00430326), avotermin treatment is safe, well tolerated, and offers a significant improvement in scar appearance [166][167][168]. Data from in vitro and pre-clinical studies (reviewed in [169]) also indicate that avotermin enhances chondrogenesis. Of note is the proposed use of cartilage-ECM-derived scaffolds that might allow for controlled release of TGFβ3 to promote chondrogenesis of infrapatellar fat pad-derived stem cells for use in articular cartilage regeneration [170].
The use of recombinant ligand to promote tissue regeneration might not be limited to TGFβ3. A recent study comparing the effect of TGFβ1 and BMP2 on calvarial defect healing and suture regeneration in rabbits, suggests TGFβ1 to be a superior factor in this particular setting, by promoting bone healing via the native intramembranous ossification pathway [171].
Conclusions
Both canonical and non-canonical signalling activated by TGFβ isoforms 1, 2 and 3, as well as activins, play crucial roles in wound healing and multi-tissue regeneration across vertebrates. The ultimate outcome of this signalling depends on an exquisite balance of ligand levels, the cell type, and the micro-environmental context in which the ligand is presented, including the stiffness of the ECM. In adult mammals, high levels of TGFβ1 and TGFβ2, and low levels of TGFβ3, facilitate scar-forming healing, while in fetal mammals, high levels of TGFβ3 and low levels of TGFβ1 and TGFβ2 favour scar-free healing. ALK-mediated signalling by TGFβ1, TGFβ2 and activin βA drives the early stages of blastema-mediated, multi-tissue regeneration in axolotls, Xenopus, zebrafish and possibly leopard geckos, with one or more of these ligands playing a prominent role, depending on the species. Canonical signalling by distinct TGFβ isoforms also modulates the repair of cardiac and skeletal muscle, bone, and cartilage. Based on the knowledge accumulated over the last three decades, a number of different strategies to modulate TGFβ signalling are either under investigation or have been approved (e.g., recombinant human TGFβ3) to promote scar-free wound healing and/or the regeneration of specific tissues in humans. Further research on regeneration-competent vertebrates is encouraged, as this will lead to the identification of the elements lacking in regeneration-incompetent vertebrates, thus informing pharmacological strategies of broad applicability to both human and veterinary regenerative medicine.
Acknowledgments:
The authors thank the Natural Sciences and Engineering Research Council (NSERC) of Canada for providing funds to M.K.V. (Grant # 400358) and A.V.P. (Grant # 400419) to develop the initial studies that led to this review, and gratefully acknowledge the insightful comments of two anonymous reviewers.
Author Contributions: All three authors (R.W.D.G., M.K.V., and A.V.P.) conceived the specific topics of this review and wrote the paper. A.V.P. designed Figure 1.
Conflicts of Interest:
The authors declare no conflict of interest. The funding sponsors had no role in the writing of the manuscript or in the decision to publish.
Abbreviations
The following abbreviations are used in this manuscript: | 2016-07-25T08:52:20.182Z | 2016-06-01T00:00:00.000 | {
"year": 2016,
"sha1": "d9e4e305f14d13eaa8e95c8b1b3f09de302d0531",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/jdb4020021",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d9e4e305f14d13eaa8e95c8b1b3f09de302d0531",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
253734382 | pes2o/s2orc | v3-fos-license | Sums of powers of integers and generalized Stirling numbers of the second kind
By applying the Newton-Gregory expansion to the polynomial associated with the sum of powers of integers $S_k(n) = 1^k + 2^k + \cdots + n^k$, we derive a couple of infinite families of explicit formulas for $S_k(n)$. One of the families involves the $r$-Stirling numbers of the second kind $\genfrac{\{}{\}}{0pt}{}{k}{j}_r$, $j=0,1,\ldots,k$, while the other involves their duals $\genfrac{\{}{\}}{0pt}{}{k}{j}_{-r}$, with both families of formulas being indexed by the non-negative integer $r$. As a by-product, we obtain three additional formulas for $S_k(n)$ involving the numbers $\genfrac{\{}{\}}{0pt}{}{k}{j}_{n+m}$, $\genfrac{\{}{\}}{0pt}{}{k}{j}_{n-m}$ (where $m$ is any given non-negative integer), and $\genfrac{\{}{\}}{0pt}{}{k}{j}_{k-j}$, respectively. Moreover, we provide a formula for the Bernoulli polynomials $B_k(x-1)$ in terms of $\genfrac{\{}{\}}{0pt}{}{k}{j}_{x}$ and the harmonic numbers.
Introduction
Following Broder [5, Equation 57] (see also Carlitz [6, Equation (3.2)]), we define the generalized (or weighted) Stirling numbers of the second kind by
$$R_{k,j}(x) = \sum_{i=j}^{k} \binom{k}{i} \genfrac{\{}{\}}{0pt}{}{i}{j}\, x^{k-i},$$
where $x$ stands for any arbitrary real or complex value, and where the $\genfrac{\{}{\}}{0pt}{}{k}{j}$'s are the ordinary Stirling numbers of the second kind [3]. Note that $R_{k,j}(x)$ is a polynomial in $x$ of degree $k-j$ with leading coefficient $\binom{k}{j}$ and constant term $R_{k,j}(0) = \genfrac{\{}{\}}{0pt}{}{k}{j}$. Furthermore, we have that $R_{k,j}(1) = \genfrac{\{}{\}}{0pt}{}{k+1}{j+1}$. In general, when $x$ is the non-negative integer $r$, $R_{k,j}(r)$ becomes the $r$-Stirling number of the second kind $\genfrac{\{}{\}}{0pt}{}{k+r}{j+r}_{r}$ [5]. A combinatorial interpretation of the polynomial $R_{k,j}(x)$ is given in [5, Theorem 27] (see also the definition provided by Bényi and Matsusaka in [1, Definition 2.13]).
For convenience and notational simplicity, in this paper we employ the notation $\genfrac{\{}{\}}{0pt}{}{k}{j}_{r}$ to refer to Broder's $r$-Stirling numbers of the second kind $\genfrac{\{}{\}}{0pt}{}{k+r}{j+r}_{r}$. The former notation has been used recently by Ma and Wang in [14] (see also [1] and [15]). The numbers $\genfrac{\{}{\}}{0pt}{}{k}{j}_{r}$ are then given by
$$\genfrac{\{}{\}}{0pt}{}{k}{j}_{r} = \sum_{i=j}^{k} \binom{k}{i} \genfrac{\{}{\}}{0pt}{}{i}{j}\, r^{k-i}. \tag{1}$$
Likewise, adopting the notation in [14], we define the counterpart or dual of $\genfrac{\{}{\}}{0pt}{}{k}{j}_{r}$ for negative integer $-r$ as
$$\genfrac{\{}{\}}{0pt}{}{k}{j}_{-r} = \sum_{i=j}^{k} \binom{k}{i} \genfrac{\{}{\}}{0pt}{}{i}{j}\, (-r)^{k-i}. \tag{2}$$
Alternatively, $\genfrac{\{}{\}}{0pt}{}{k}{j}_{r}$ and $\genfrac{\{}{\}}{0pt}{}{k}{j}_{-r}$ can be equivalently expressed in the form
$$\genfrac{\{}{\}}{0pt}{}{k}{j}_{r} = \frac{1}{j!} \sum_{i=0}^{j} (-1)^{j-i} \binom{j}{i} (i+r)^{k} \quad\text{and}\quad \genfrac{\{}{\}}{0pt}{}{k}{j}_{-r} = \frac{1}{j!} \sum_{i=0}^{j} (-1)^{j-i} \binom{j}{i} (i-r)^{k},$$
respectively. Clearly, both $\genfrac{\{}{\}}{0pt}{}{k}{j}_{r}$ and $\genfrac{\{}{\}}{0pt}{}{k}{j}_{-r}$ reduce to $\genfrac{\{}{\}}{0pt}{}{k}{j}$ when $r = 0$. It is to be noted that the numbers $\genfrac{\{}{\}}{0pt}{}{k}{j}_{-r}$ were introduced and studied by Koutras under the name of non-central Stirling numbers of the second kind and denoted by $S_{r}(k, j)$ (see [13, Equations (2.5) and (2.6)]).
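As a quick computational aid (ours, not part of the original paper), the equivalence of the two expressions above, together with the properties of $R_{k,j}(x)$ quoted earlier, can be checked numerically. All helper names below are our own:

```python
from math import comb, factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(k, j):
    # ordinary Stirling numbers of the second kind {k brace j}
    if k == j:
        return 1
    if j == 0 or j > k:
        return 0
    return j * stirling2(k - 1, j) + stirling2(k - 1, j - 1)

def r_stirling(k, j, r):
    # {k brace j}_r via the weighted sum (1)/(2); r may be any integer (negative r gives the dual)
    return sum(comb(k, i) * stirling2(i, j) * r**(k - i) for i in range(j, k + 1))

def r_stirling_alt(k, j, r):
    # the alternative finite-sum form (1/j!) sum_i (-1)^(j-i) C(j,i) (i+r)^k,
    # i.e. Delta^j x^k / j! evaluated at x = r (cf. equation (13) in the proof section)
    return sum((-1)**(j - i) * comb(j, i) * (i + r)**k for i in range(j + 1)) // factorial(j)

for k in range(8):
    for j in range(k + 1):
        assert r_stirling(k, j, 0) == stirling2(k, j)          # reduces to {k brace j} at r = 0
        assert r_stirling(k, j, 1) == stirling2(k + 1, j + 1)  # R_{k,j}(1) = {k+1 brace j+1}
        for r in range(-4, 5):
            assert r_stirling(k, j, r) == r_stirling_alt(k, j, r)
```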
On the other hand, for non-negative integer $k$, let $S_k(n)$ denote the sum of $k$-th powers of the first $n$ positive integers, $S_k(n) = 1^k + 2^k + \cdots + n^k$, with $S_k(0) = 0$ for all $k$. As is well known, $S_k(n)$ can be expressed in terms of the Stirling numbers of the second kind as (see, e.g., [18])
$$S_k(n) = -\delta_{k,0} + \sum_{j=0}^{k} j!\, \binom{n+1}{j+1} \genfrac{\{}{\}}{0pt}{}{k}{j}, \tag{3}$$
where $\delta_{k,0}$ is the Kronecker delta, which ensures that $S_0(n) = n$. Additionally, $S_k(n)$ admits the following variant of (3) (see, e.g., [20, Equation (9)], [7], [11, Corollary 2], and [8, Theorem 5]):
$$S_k(n) = \sum_{j=0}^{k} j!\, \binom{n}{j+1} \genfrac{\{}{\}}{0pt}{}{k+1}{j+1}. \tag{4}$$
The expression in (4) can be readily obtained from the exponential generating function [4, Equation (11)]
$$\sum_{k=0}^{\infty} S_k(n)\, \frac{t^k}{k!} = \frac{e^{(n+1)t} - e^{t}}{e^{t} - 1}.$$
Of course, (3) and (4) are equivalent formulas. Indeed, it is a simple exercise to convert (3) into (4), and vice versa, by means of the recursion $\genfrac{\{}{\}}{0pt}{}{k}{j} = j \genfrac{\{}{\}}{0pt}{}{k-1}{j} + \genfrac{\{}{\}}{0pt}{}{k-1}{j-1}$ and the well-known combinatorial identity $\binom{n}{j+1} + \binom{n}{j} = \binom{n+1}{j+1}$. Incidentally, it is worthwhile to mention that, in his 1928 Monthly article [10], Ginsburg wrote down explicitly the first few instances of (4) for $k = 2, 3, 4, 5$ in terms of the binomial coefficients $\binom{n}{j+1}$, $j = 0, 1, \ldots, k$, namely
$$S_2(n) = \binom{n}{1} + 3\binom{n}{2} + 2\binom{n}{3},$$
$$S_3(n) = \binom{n}{1} + 7\binom{n}{2} + 12\binom{n}{3} + 6\binom{n}{4},$$
$$S_4(n) = \binom{n}{1} + 15\binom{n}{2} + 50\binom{n}{3} + 60\binom{n}{4} + 24\binom{n}{5},$$
$$S_5(n) = \binom{n}{1} + 31\binom{n}{2} + 180\binom{n}{3} + 390\binom{n}{4} + 360\binom{n}{5} + 120\binom{n}{6}.$$
As noted by Ginsburg, the above formulas appeared on page 88 of the book by Schwatt, Introduction to Operations with Series (Philadelphia, The Press of the University of Pennsylvania, 1924).
In this paper, we obtain a unifying formula for $S_k(n)$ giving (3) and (4) as particular cases. Indeed, we derive a couple of infinite families of explicit formulas for $S_k(n)$, one of them involving the numbers $\genfrac{\{}{\}}{0pt}{}{k}{j}_{r}$ and the other the numbers $\genfrac{\{}{\}}{0pt}{}{k}{j}_{-r}$, with $j = 0, 1, \ldots, k$. Specifically, we establish the following theorem, which constitutes the main result of this paper.
Theorem 1. Let $\genfrac{\{}{\}}{0pt}{}{k}{j}_{r}$ and $\genfrac{\{}{\}}{0pt}{}{k}{j}_{-r}$ be defined by (1) and (2), respectively, where $r$ stands for any arbitrary but fixed non-negative integer. Then
$$S_k(n) = S_k(r-1) + \sum_{j=0}^{k} j!\, \binom{n-r+1}{j+1} \genfrac{\{}{\}}{0pt}{}{k}{j}_{r} \tag{5}$$
and
$$S_k(n) = -\delta_{k,0} + (-1)^{k+1} S_k(r) + \sum_{j=0}^{k} j!\, \binom{n+r+1}{j+1} \genfrac{\{}{\}}{0pt}{}{k}{j}_{-r}. \tag{6}$$
Before we prove Theorem 1 in the next section, a few observations are in order.
Remark 2.
It should be stressed that both (5) and (6) hold irrespective of the value taken by the non-negative integer parameter $r$. This means that, actually, the right-hand sides of (5) and (6) provide us with an infinite supply of explicit formulas for $S_k(n)$, one for each value of $r$. For example, for $r = 2$, and noting that $S_k(1) = 1$ for all $k$, we have from (5)
$$S_k(n) = 1 + \sum_{j=0}^{k} j!\, \binom{n-1}{j+1} \genfrac{\{}{\}}{0pt}{}{k}{j}_{2}.$$
Analogously, for $r = 2$, and noting that $S_k(2) = 1 + 2^{k}$, we have from (6)
$$S_k(n) = -\delta_{k,0} + (-1)^{k+1}\left(1 + 2^{k}\right) + \sum_{j=0}^{k} j!\, \binom{n+3}{j+1} \genfrac{\{}{\}}{0pt}{}{k}{j}_{-2}.$$
Remark 3.
In the last section, we obtain a more general formula for $S_k(n)$ involving the numbers $\genfrac{\{}{\}}{0pt}{}{k}{j}_{n+m}$ and $\genfrac{\{}{\}}{0pt}{}{k}{j}_{n-m}$, where $m$ is any given non-negative integer (see equations (19) and (20)).
Furthermore, we provide another formula for $S_k(n)$ involving the numbers $\genfrac{\{}{\}}{0pt}{}{k}{j}_{k-j}$ (see equation (21)).
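As a plausibility check on the displayed forms of (5) and (6), the following sketch (reusing the helpers from the previous snippet; the function names are ours) verifies both families against the brute-force power sum for small $k$, $n$, and $r$. For $r = 0$, the term $S_k(r-1)$ in (5) must be read as the interpolating-polynomial value $S_k(-1) = -\delta_{k,0}$, which the naive power sum below cannot produce, so the loop starts at $r = 1$:

```python
def S(k, n):
    # power sum S_k(n) = 1^k + 2^k + ... + n^k, with S_k(0) = 0
    return sum(i**k for i in range(1, n + 1))

def S_via_5(k, n, r):
    # family (5)
    return S(k, r - 1) + sum(factorial(j) * comb(n - r + 1, j + 1) * r_stirling(k, j, r)
                             for j in range(k + 1))

def S_via_6(k, n, r):
    # family (6)
    delta = 1 if k == 0 else 0
    return (-delta + (-1)**(k + 1) * S(k, r)
            + sum(factorial(j) * comb(n + r + 1, j + 1) * r_stirling(k, j, -r)
                  for j in range(k + 1)))

for k in range(6):
    for n in range(8):
        for r in range(1, 5):
            if n - r + 1 >= 0:   # keep the binomial coefficient in its integer range
                assert S_via_5(k, n, r) == S(k, n)
            assert S_via_6(k, n, r) == S(k, n)
```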
Proof of Theorem 1
The proof of Theorem 1 is based on the following lemma.
Lemma 1. For $x$ a real or complex variable, let $S_k(x)$ denote the unique interpolating polynomial in $x$ of degree $k+1$ such that $S_k(x) = 1^k + 2^k + \cdots + x^k$ whenever $x$ is a positive integer (with $S_k(0) = 0$). Then
$$S_k(x) = S_k(a-1) + \sum_{j=0}^{k} j!\, \binom{x-a+1}{j+1}\, R_{k,j}(a), \tag{9}$$
where $a$ is a parameter taking any arbitrary but fixed real or complex value.
Proof. As is well known (see, e.g., [9, Equation (15)]), $S_k(x)$ can be expressed in terms of the Bernoulli polynomials $B_k(x)$ as follows:
$$S_k(x) = \frac{B_{k+1}(x+1) - B_{k+1}(1)}{k+1}. \tag{10}$$
Recall further that the forward difference operator $\Delta$ acting on the function $f(x)$ is defined by $\Delta f(x) = f(x+1) - f(x)$. Then, the following elementary result follows immediately from (10) and the difference equation $\Delta B_{k+1}(x) = (k+1)x^k$ [9, Equation (12)]:
$$\Delta S_k(x) = (x+1)^{k}. \tag{11}$$
On the other hand, the Newton-Gregory expansion of the function $f(x)$ is given by (see, e.g., [17, Equation (A.9), p. 230])
$$f(x) = \sum_{j=0}^{\infty} \binom{x-a}{j}\, \Delta^{j} f(a),$$
where, for any integer $j \geq 1$, the $j$-th order difference operator $\Delta^{j}$ is defined by
$$\Delta^{j} f(x) = \sum_{i=0}^{j} (-1)^{j-i} \binom{j}{i} f(x+i).$$
Hence, applying the Newton-Gregory expansion to the power sum polynomial $S_k(x)$ and using (11) yields
$$S_k(x) = S_k(a) + \sum_{j=0}^{k} \binom{x-a}{j+1}\, \Delta^{j}(a+1)^{k}, \tag{12}$$
where the terms in the summation with index $j$ greater than $k$ have been omitted because $\Delta^{j}(x+1)^{k} = 0$ for all $j \geq k+1$ [17, Equation (6.16), p. 68]. The connection between (12) and the generalized Stirling numbers $R_{k,j}(x)$ stems from the fact that (see, e.g., [5, Theorem 29] and [6, Equation (3.8)])
$$\Delta^{j} x^{k} = j!\, R_{k,j}(x). \tag{13}$$
Therefore, combining (12) and (13), and making $a \to a-1$, we get (9).
Remark 4.
By renaming $r$ as $n$ in (18), we obtain a formula that may be compared with (3).
Remark 5.
It is to be noted that equation (14) above is equivalent to the equation appearing in [2, Corollary 2.2] with $d = 1$ and $a = r$.
Concluding remarks
Let us observe that, by letting $r = n + m$ in (5), where $m$ is any given non-negative integer, and using (15), we obtain a formula, (19), for $S_k(n)$ involving the numbers $\genfrac{\{}{\}}{0pt}{}{k}{j}_{n+m}$. Of course, (19) reduces to (7) and (8) when $m = 0$ and $m = 1$, respectively. Similarly, putting $r = n - m$ in (5), where $m$ is any given non-negative integer, we obtain a companion formula, (20), involving the numbers $\genfrac{\{}{\}}{0pt}{}{k}{j}_{n-m}$, which reduces to (3) when $m = n$. On the other hand, taking $r = k$ in (6) yields a formula, (21), for $S_k(n)$ involving the numbers $\genfrac{\{}{\}}{0pt}{}{k}{j}_{k-j}$. Incidentally, for $n = 1$, (21) gives us the identity
$$\sum_{j=0}^{k} (-1)^{k-j} j!\, \binom{k+1}{j}\, \genfrac{\{}{\}}{0pt}{}{k}{j}_{k-j} = 1.$$
Moreover, from (10) and (16), we obtain a formula for the Bernoulli polynomials evaluated at the non-negative integer $r$; likewise, making $r \to -r$ in that formula and using (15) gives the corresponding formula for the Bernoulli polynomials evaluated at the negative integer $-r$. One can naturally extend the formula for $B_{k+1}(r)$ to apply to any real or complex variable $x$, where, using the notation in [1], $\genfrac{\{}{\}}{0pt}{}{k}{j}_{x}$ refers to the Stirling polynomial of the second kind $R_{k,j}(x)$, namely
$$\genfrac{\{}{\}}{0pt}{}{k}{j}_{x} = R_{k,j}(x) = \sum_{i=j}^{k} \binom{k}{i} \genfrac{\{}{\}}{0pt}{}{i}{j}\, x^{k-i}.$$
Finally, we point out that $B_k(x-1)$ can be expressed in the form
$$B_k(x-1) = \sum_{j=0}^{k} (-1)^{j} j!\, H_{j+1}\, \genfrac{\{}{\}}{0pt}{}{k}{j}_{x}, \tag{22}$$
where $H_j = 1 + \frac{1}{2} + \cdots + \frac{1}{j}$ is the $j$-th harmonic number. In particular, setting $x = 1$ in (22) yields the following known formula for the Bernoulli numbers (see, e.g., [19, Equation (5.9)]):
$$B_k = \sum_{j=0}^{k} (-1)^{j} j!\, H_{j+1}\, \genfrac{\{}{\}}{0pt}{}{k+1}{j+1}.$$
Furthermore, from (10) and (22), we arrive at the following formula for the sum of powers of integers:
$$S_{k-1}(n) = \frac{1}{k} \sum_{j=0}^{k} (-1)^{j} j!\, H_{j+1} \left( \genfrac{\{}{\}}{0pt}{}{k}{j}_{n+2} - \genfrac{\{}{\}}{0pt}{}{k}{j}_{2} \right),$$
which holds for any integers $n \geq 0$ and $k \geq 1$, with $S_{k-1}(0) = 0$ for all $k \geq 1$. | 2022-11-22T06:41:24.870Z | 2022-11-21T00:00:00.000 | {
Furthermore, from (10) and (22), we arrive at the following formula for the sum of powers of integers which holds for any integers n ≥ 0 and k ≥ 1, with S k−1 (0) = 0 for all k ≥ 1. | 2022-11-22T06:41:24.870Z | 2022-11-21T00:00:00.000 | {
"year": 2022,
"sha1": "00ff29ce3099f2d1b54103114fee1235d49462b8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f3a55b87ad74d422943f4dd45e307cfb7e7ccff5",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
235614155 | pes2o/s2orc | v3-fos-license | The Three-Dimensional Morphology of Femoral Medullary Cavity in the Developmental Dysplasia of the Hip
Objective This study aimed to assess the morphology of the femoral medullary canal in subjects with developmental dysplasia of the hip (DDH) with the intent of improving the design of femoral stems in total hip arthroplasty. Methods Computed tomography images of 56 DDH hips, which were classified into Crowe I to Crowe IV, and 30 normal hips were collected and used to reconstruct three-dimensional morphology of the femoral medullary cavity. Images of twenty-one cross sections were taken from 20 mm above the apex of the lesser trochanter to the isthmus. The morphology of femoral cavity was evaluated on each cross section for the longest canal diameter, the femoral medullary torsion angle (FMTA), and the femoral medullary roundness index (FMRI). Results The Crowe IV group displayed the narrowest medullary canal in the region superior to the end of the lesser trochanter, but then gradually aligned with the medullary diameter of the other groups down to the isthmus. The FMTA along the femoral cavity increased with the severity of DDH, but the rate of variation of FMTA along the femoral canal was consistent in the DDH groups. The DDH hips generally showed a larger FMRI than the normal hips, indicating more elliptical shapes. Conclusion A femoral stem with a cone shape in the proximal femur and a cylindrical shape for the remainder down to the isthmus may benefit the subjects with severe DDH. This design could protect bone, recover excessive femoral anteversion and facilitate the implantation in the narrow medullary canal.
INTRODUCTION
Developmental dysplasia of the hip (DDH), which is characterized by a shallow acetabulum, shortened femoral neck, excessive femoral anteversion and narrow femoral medullary cavity (Nakahara et al., 2014), may result in a range of complications to the hip joint such as dislocation, focal necrosis, and discrepancies in leg length (Naci and Kadri, 2011).
A modular hip system and a straight cone femoral stem are usually recommended for the treatment of DDH due to the free setting of anteversion (Zhen et al., 2017; Wang, 2019; Gholson et al., 2020), and they have shown good postoperative results (Gholson et al., 2020). However, difficult operative implantation of the femoral stem in patients with highly dislocated DDH hips (Liu et al., 2016) and intraoperative femoral fracture (Zang et al., 2019) have been reported in previous clinical studies. Clinical experience suggests that the anatomic abnormalities inherent in the dysplastic femur increase with the degree of subluxation of the hip (Crowe et al., 1979; Sugano et al., 1998); thus, the abnormal morphology of the femoral medullary cavity in DDH hips possibly limits effective femoral stem implantation. Changes in the anatomy of the proximal femur associated with DDH can make implantation of a femoral stem challenging in cases where total hip arthroplasty is required (Yang and Cui, 2012). Good conformity between the femoral stem and the proximal femoral medullary cavity is required for stimulating bone ingrowth and improving initial stability and long-term fixation (Hayashi et al., 2015). However, few publications have reported on the detailed morphology of the femoral canal in subjects with DDH. In a previous study (Noble et al., 2003), CT scans were used to measure the medial-lateral and anterior-posterior widths of the proximal medullary canal in DDH patients. However, the excessive femoral anteversion in DDH hips (Nakahara et al., 2014) is known to cause a twist in the femoral canal, and the medial-lateral and anterior-posterior widths may vary along the femoral medullary cavity. Liu et al. (2016) measured three cross sections in different regions of the femoral canal in DDH subjects, but the morphological variation in the whole proximal medullary cavity, where the femoral stem is fit, was still unknown, which may limit the improvement of femoral stem designs.
The purpose of this study was to assess the morphology of the femoral medullary canal in the subjects with DDH with the intent of offering guidance for suitable femoral stem designs for total hip arthroplasty. The three-dimensional (3D) morphology of the femoral medullary cavity was reconstructed from computed tomography (CT) images of the femur. The femoral medullary cavity in DDH hips was classified according to the Crowe classification method (Crowe et al., 1979). The morphological characteristics of the femoral medullary canal in subjects with DDH were assessed regarding the size, torsion, and shape.
Subjects
The subject pool consisted of 56 adult patients (56 hips) who were diagnosed with DDH according to a comprehensive assessment of the lateral center edge angle [lateral center edge angle < 20° (Zhang et al., 2020)], the acetabular angle [acetabular angle > 47° (Fujii et al., 2012)], and qualitative subluxation estimation, and who were scheduled for total hip arthroplasty in our clinic from January 2019 to December 2019. The pool also included 30 healthy subjects with no signs of hip disease who acted as the control group (Table 1). Exclusion criteria were the presence of DDH with femoral fracture, bone defects, or bone tumors. All study protocols were approved by our institutional review board. Informed consent was obtained from all participants. Moreover, all participants consented to the publication of their data. Using the classification method detailed by Crowe et al. (1979), the final subject pool consisted of 26 subjects classified as Crowe I (less than 50% subluxation), nine as Crowe II (50 to 75% subluxation), seven as Crowe III (75 to 100% subluxation), and 14 as Crowe IV (more than 100% subluxation) (Table 1).
Three-Dimensional Reconstruction of CT Images
The hips of all subjects (DDH and control groups) were scanned in a supine position with the lower limbs in neutral rotation using a multislice CT scanner (Discovery CT750 HD, GE Healthcare, United States) operating at 120 kV and 100 mA. All subjects were scanned from the superior margin of the ilium to one mm below the femoral condyle with a slice thickness of one mm. This region was chosen because of its importance in securing the femoral stem after a total hip replacement (Hayashi et al., 2015).
All CT images were recorded in DICOM format (512 × 512 pixels) and then imported into Mimics for 3D reconstruction (Mimics 17.0, Materialize, Leuven, Belgium). In Mimics, the femur was isolated from the surrounding bone and soft tissues, and a 3D model of the femur was then automatically reconstructed based on the default optimal settings. A femoral model was created for each of the DDH and control subjects. The region from 20 mm above the apex of the lesser trochanter (T+20) to the isthmus was chosen for measurement because the medullary cavity in this span is critical for securing the femoral stem in a hip prosthesis (Hayashi et al., 2015). Due to individual variations in height, the region between T+20 and the end of the lesser trochanter (T-end) was evenly divided into ten intervals using nine cross sections (CS2 to CS10 in Figure 1), with the spacing ranging between 3.6 and 4.8 mm. The remaining region of each femur (T-end to the isthmus) was evenly divided into ten spaces (ranging from 7.2 to 8.7 mm) by the cross sections CS12 to CS20 in Figure 1. The cross sections at the levels of T+20, T-end, and the isthmus were, respectively, marked as CS1, CS11, and CS21 (Figure 1).
Measuring the Morphology of the Femoral Medullary Cavity
The 3D morphology of the femoral medullary canal was represented by three parameters: the longest canal diameter, the femoral medullary torsion angle (FMTA), and the femoral medullary roundness index (FMRI).
To reduce the effect of the femoral torsion on the measurements, the longest canal diameter (red line in Figure 1) in each cross section was chosen to assess the size of the femoral cavity. The built-in Measure Distance function in Mimics 17.0 (Materialize, Leuven, Belgium) (accuracy: 0.01 mm) was used to record the canal diameter and each measurement was repeated three times and then averaged. The resulting diameter of each cross section (CS1-CS21) was averaged for all femurs in the same group (DDH and control groups). This averaged value was used to represent the longest medullary canal diameter in that group. The variation in the longest canal diameter from T+20 to T-end and from T-end to isthmus were, respectively, calculated for each femur to investigate the change in the size of the femoral cavity. The femoral medullary torsion angle was used to assess the degree of torsion within the canal. It was determined by the angle between the longest canal diameter (red line) and the posterior condyle (PC) line (Figure 1; Gaffney et al., 2019), measured using the MB-ruler software (Markus Bader, Germany) with the accuracy of 0.01 degrees. The PC line was determined by the two most prominent points on the posterior condyle (P1 and P2 in Figure 1; Gaffney et al., 2019). The femoral medullary torsion angle in each cross section from CS1 to CS21 was measured three times and the average was used as the value for that cross section. The femoral medullary torsion angle on each cross section was averaged for all femurs in the same group (DDH and control groups). This averaged value was used to represent the femoral medullary torsion angle for the cross section in that group.
The average rate of variation of femoral medullary torsion angle along the femoral cavity from T+20 to T-end and from T-end to the isthmus was also calculated. These values were used to investigate how the femoral medullary torsion angle varies along the canal. Due to differences in individual heights, the rate of variation of femoral medullary torsion angle (V_{a-b}) along the femoral medullary canal was defined by Eq. 1:

V_{a-b} = (FMTA_a − FMTA_b) / distance_{a-b}    (1)

In this equation, a and b were the cross section numbers, FMTA_a and FMTA_b were, respectively, the femoral medullary torsion angle values for cross sections a and b, and distance_{a-b} was the distance between cross section a and cross section b, measured with the built-in Measure Distance function (accuracy: 0.01 mm) in Mimics 17.0. As with the femoral medullary torsion angle values above, the rate of variation of femoral medullary torsion angle along the canal from T+20 to T-end and from T-end to the isthmus was, respectively, calculated as V_{1-11} and V_{11-21} using Eq. 1. The results of V_{a-b} for the same region were averaged for all femurs in the same group, and the average was used to represent the value for that group. A statistical investigation of the rate of variation of femoral medullary torsion angle along the femoral canal for each research group was performed.
The femoral medullary roundness index (FMRI) was used to evaluate the shape of the femoral medullary canal. The femoral medullary roundness index was defined as the ratio of the longest diameter of the medullary cavity to the width of its perpendicular diameter (green line in Figure 1) passing through the midpoint of the longest diameter. Measurement of the femoral medullary roundness index was repeated three times for each cross section and averaged. The value of the femoral medullary roundness index for each cross-section was then averaged in each group and used to represent the shape of the medullary cavity in that group.
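For illustration only (the study used Mimics and MB-Ruler, so the following is not the authors' pipeline), the two length-based indices can be reproduced from a digitized cross-section contour as sketched below; the function names, the tolerance, and the point-cloud input format are our assumptions, and computing the FMTA would additionally require the posterior condyle line:

```python
import numpy as np

def longest_diameter(points):
    # brute-force longest chord between contour points (points: N x 2 array)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    return points[i], points[j], float(np.sqrt(d2[i, j]))

def fmri(points):
    # FMRI = longest canal diameter / width of the perpendicular chord
    # passing through the midpoint of the longest diameter
    p, q, longest = longest_diameter(points)
    axis = (q - p) / np.linalg.norm(q - p)
    perp = np.array([-axis[1], axis[0]])
    mid = (p + q) / 2.0
    t = (points - mid) @ axis          # position along the longest diameter
    s = (points - mid) @ perp          # position along the perpendicular direction
    near = np.abs(t) < 0.05 * longest  # contour points close to the perpendicular line
    width = s[near].max() - s[near].min()
    return longest / width
```

The 5% tolerance assumes a densely sampled contour; for sparse point sets it would need to be widened or replaced by an interpolated intersection with the contour.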
Accuracy of Measurement
The whole measurement process was repeated independently by three operators. The intraclass correlation coefficient (ICC) for the three operators was 0.912 for the longest canal diameter, 0.803 for the femoral medullary torsion angle (FMTA), and 0.813 for the femoral medullary roundness index (FMRI), suggesting good reliability across all measures (Weir, 2005).
Statistical Analysis
A univariate analysis was used to assess the inter-group differences for each cross section among the Crowe I, Crowe II, Crowe III, and Crowe IV DDH groups and the control group with regard to the three parameters representing the 3D morphology of the femoral medullary canal (the longest canal diameter, the femoral medullary torsion angle, and the femoral medullary roundness index). Continuous variables were expressed as means and ranges. The Shapiro-Wilk test was used to test for normality. The inter-group differences among the Crowe I to Crowe IV DDH groups and the control group were assessed using a one-way ANOVA test for parametric variables and a Kruskal-Wallis test for non-parametric variables. An a priori power analysis with a significance level of 0.05 (type-I error), a desired power of 80%, and an effect size of 0.5, indicating a medium difference (Sullivan and Feinn, 2012), was performed for the ANOVA and Kruskal-Wallis tests to evaluate the sample size. IBM SPSS 22.0 (IBM Corp., New York, NY, United States) was used for all statistical analyses. A p value less than 0.05 was considered significant.
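The normality-gated choice between ANOVA and Kruskal-Wallis described above can be expressed compactly; the study itself used SPSS, so the SciPy sketch below is only an illustration, and the group data shown are placeholders:

```python
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    # Shapiro-Wilk normality test per group; one-way ANOVA if every group looks
    # normal, Kruskal-Wallis otherwise (mirroring the pipeline described above)
    if all(stats.shapiro(g).pvalue > alpha for g in groups):
        name, res = "one-way ANOVA", stats.f_oneway(*groups)
    else:
        name, res = "Kruskal-Wallis", stats.kruskal(*groups)
    return name, res.statistic, res.pvalue

# e.g., FMRI values at one cross section for three groups (illustrative numbers only)
name, stat, p = compare_groups([1.21, 1.25, 1.19], [1.30, 1.28, 1.33], [1.27, 1.31, 1.29])
```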
The Longest Medullary Canal Diameter
The size of the medullary canal was determined by the longest canal diameter on each cross section from CS1 to CS21 (Figure 2). The results showed a general decrease in canal diameter from the proximal to the distal femur (Figure 2). The variation in canal diameter from T+20 to T-end [Crowe I: 25.13 mm (16.66 to 34.45 mm); Crowe II: 24.14 mm (19.02 to 30.74 mm); Crowe III: 22.88 mm (10.56 to 28.19 mm); Crowe IV: 17.80 mm (7.61 to 26.43 mm)] was much larger than the variation from the T-end level to the isthmus level, with the values for the four Crowe types ranging from 4.67 to 5.17 mm (Table 2). Moreover, the Crowe IV group displayed an apparently shorter medullary canal diameter than the control, Crowe I, and Crowe II groups from T+20 to T-end (p < 0.001 for the control and Crowe I groups, p = 0.021 for the Crowe II group) and from T-end to the isthmus (p < 0.001 for the control group, p = 0.003 for the Crowe I group, and p = 0.004 for the Crowe II group), indicating a narrower canal. In contrast, there were no significant differences between Crowe III and Crowe IV in the variations of the longest diameter along the femoral canal (p = 0.157 for the variation between the T+20 and T-end levels, and p = 0.097 for the variation between the T-end and isthmus levels). Additionally, the medullary diameter in the subjects with severe DDH (Crowe III and Crowe IV groups) was not significantly different from that in the other groups at the isthmus level (p = 0.319 in Table 2).
Femoral Medullary Torsion Angle (FMTA)
All groups had a similar trend in femoral medullary torsion angle, with the angle increasing from proximal to distal (Figure 3). The average femoral medullary torsion angles at the T+20, T-end, and isthmus levels are shown in Table 3 and indicate significant differences among all the groups (p < 0.001). Pairwise comparisons between groups revealed that the FMTA in the healthy group was significantly lower than in the DDH groups (p < 0.001 for all the Crowe groups) at the T+20 level; at the isthmus level, it was not significantly different from Crowe I (p = 0.711) or Crowe II (p = 0.387), but was still lower than Crowe III (p = 0.011) and Crowe IV (p < 0.001).
The rate of variation of femoral medullary torsion angle along the femoral cavity differed significantly among the research groups (DDH and control groups) from T+20 to T-end (p = 0.002) and remained consistent from T-end to the isthmus (p = 0.796) for all the subjects (Table 4). A further analysis of the DDH groups alone indicated that the rate of variation of femoral medullary torsion angle along the canal did not vary significantly with the severity of DDH, with p values from T+20 to T-end and from T-end to the isthmus of 0.273 and 0.655, respectively.
Femoral Medullary Roundness Index (FMRI)
The graph in Figure 4 showed that all groups had a similar trend in the variation of femoral medullary roundness index over the length of the medullary cavity, with the femoral medullary roundness index decreasing from the T+20 level to the T-end level and then increasing toward the isthmus. Also apparent was that the femoral medullary roundness index for the DDH groups was generally larger than for the control group from T+20 to T-end. These results signified that the shape of the femoral canal varied from an elliptical shape at T+20 to a more circular shape around T-end, and then reverted to a more elliptical shape down to the isthmus. Moreover, the shape of the femoral canal in DDH subjects was generally more elliptical than in the control subjects in the region of the lesser trochanter (Figure 4). The statistical analysis in Table 5 showed that the DDH groups had a generally larger femoral medullary roundness index than the control group, except in the region around the end of the lesser trochanter (p = 0.689), and that the femoral medullary roundness index for Crowe IV was significantly larger than for the other groups at the isthmus level (p = 0.011).
DISCUSSION
This study aimed to evaluate the three-dimensional morphology of the femoral medullary canal in subjects with DDH with the intent to offer guidance for improved femoral stem design in total hip replacements. It was found that: (1) The Crowe IV group displayed the narrowest medullary canal in the region superior to the end of the lesser trochanter, but then gradually aligned with the medullary diameter of the other groups down to the isthmus.
(2) The femoral medullary torsion angle along the femoral cavity increased with the severity of DDH, but the rate of variation of femoral medullary torsion angle along the femoral cavity was consistent among the DDH groups (Crowe I to Crowe IV). (3) The femoral medullary roundness index in the DDH groups was generally larger than the control group, except for around the end of the lesser trochanter. In this study, the longest canal diameter was used to assess the size of the medullary canal. It was found that the Crowe IV hips showed the most severe narrowing of all research groups (DDH and control groups) (Figure 2). This result was in agreement with the studies by Sugano et al. (1998) and Noble et al. (2003), where the medullary cavity of DDH subjects was reported to be narrower than that of the control group, and the size of medullary canal reduced as the severity of the femoral deformity increased.
The results of this current study also show that the medullary canal diameter in the DDH subjects reduced around the lesser trochanter (T+20 to T-end), but then varied only slightly from the T-end level to the isthmus ( Table 2). The measurements of femoral medullary cavity suggest that a small stem with a large taper angle around the lesser trochanter (T+20 to T-end) and a small taper angle in the remaining region down as far as the isthmus (T-end to isthmus) may be suitable for Crowe IV hips. This design may reduce the high incidence of intraoperative femoral fracture caused by femoral stems with a consistent taper from the proximal femur to the isthmus (for example, the Wagner SL implant, Zimmer Inc., Warsaw, IN, United States) (Zang et al., 2019).
The results also showed that the femoral medullary torsion angle was larger in the DDH groups than in the control group and that the value increased with the severity of DDH, which is in agreement with Noble et al. (2003). The femoral medullary torsion angle values at the T+20, T-end, and isthmus levels in our study are given in Table 3; the reported angles at these three levels in Noble's study (Noble et al., 2003) were 30°, 59°, and 86° in the control group and 58°, 77°, and 98° in the Crowe IV group. Additionally, it was found that the rotation angle from T+20 to the isthmus was the largest in the control group (66.60°), followed by Crowe I (51.94°), Crowe II (53.34°), Crowe III (54.12°), and Crowe IV (50.63°) (Table 4). The results are largely in agreement with those reported in the literature (Sugano et al., 1998), in which the femoral canal was reported to be twisted by 48° in normal hips, with smaller angles in the DDH groups (Crowe I: 36°; Crowe II and Crowe III: 42°; Crowe IV: 37°). The slight difference may be explained by the fact that different sections of the femur were chosen for analysis. In our study, the targeted range of the rotation was from 20 mm proximal to the apex of the lesser trochanter to the isthmus, while Noble et al. (2003) considered the region from 20 mm distal to the center of the lesser trochanter to the canal isthmus. Other possible causes for the variation may be differences in the subject populations and sex, as well as in the methods used for measuring the medullary canal. However, the discrepancies between the results of this study and those reported by Sugano et al. (1998) and Noble et al. (2003) were minor, and the changes in femoral medullary torsion angle in the DDH and control groups showed a similar trend between these studies. Considering the rate of variation of femoral medullary torsion angle along the femoral cavity, the results of this study indicated that the femoral cavity in the DDH groups twisted at a consistent rate of variation, and that the value in the DDH groups was smaller than that in the control group in the proximal femoral region. The results indicated that the difference in femoral medullary torsion angle between the DDH and control medullary canals was possibly caused by the femoral anteversion, which refers to the rotation of the femoral neck around the diaphysis. A normal femoral anteversion is beneficial for restoring the normal biomechanics of the dysplastic joint (Noble et al., 2003; Wang, 2019), as well as for improving postoperative durable implant fixation and joint mobility (Dorr et al., 2009); thus, a suitable femoral replacement design for patients with severe DDH may need to consider correcting the excessive femoral anteversion. This could be achieved using a modular stem with a rotatable sleeve (Liu et al., 2016) or a cone stem (Zhen et al., 2017). Alternatively, additive manufacturing is a developing technology that has shown promising results for achieving a close match to the individual anatomy of the femoral medullary cavity (Hua et al., 2010). Similar techniques could be used to create a customized uncemented femoral stem for patients with severe DDH to recover excessive femoral neck anteversion. The shape of the femoral canal was assessed through the femoral medullary roundness index. It was found that the femoral medullary roundness index in all subjects decreased from the level of T+20 down to T-end, and then increased down to the isthmus.
In the region around T-end, there was no significant difference in femoral medullary roundness index among the groups (control and DDH groups). The results also showed that the shape of the femoral canal varied from an elliptical shape at T+20 to a more circular shape around T-end, and then reverted to an ellipse down to the isthmus. In addition, the femoral medullary roundness index for the DDH groups was generally larger than for the healthy subjects, indicating a more elliptical shape in the DDH hips. Considering the twisted femoral medullary canal, the elliptical cross-sectional shape of the femoral cavity indicates that a femoral stem with a circular cross section may be beneficial for protecting bone. Clinical studies on cone-shaped hip stems (Wagner cone prosthesis hip stem, Zimmer®, United States) have shown good recovery of excessive femoral anteversion, stable proximal fixation, and good survivorship in patients with DDH (Zhen et al., 2017). Considering that the size of the femoral canal decreased steeply only in the region superior to the end of the lesser trochanter and changed only slightly in the remaining distal region down to the isthmus (Figure 2), a femoral stem with a cone shape around the lesser trochanter and a cylindrical shape down to the isthmus is recommended for patients with severe DDH (i.e., Crowe IV), which is characterized by a large femoral medullary rotational angle and a narrow metaphyseal canal. This design may reduce intraoperative bone fracture, offer good conformity between the femoral stem and the proximal femoral medullary cavity, and protect bone. A suitable femoral stem design for DDH is not only beneficial for improving gait performance but also helpful for trunk activities, due to the correlation between gait and paraspinal muscle activation (Miscusi et al., 2019).
A limitation of this study is that the number of research subjects was limited; in addition, the number and sex of the subjects assigned to each of the DDH groups and the control group were not consistent. Nevertheless, the femoral medullary torsion angle results in this study were consistent with those reported by Sugano et al. (1998) and Noble et al. (2003), confirming the reliability of the model used in this study. A second limitation is that the femoral stem design with a cone shape around the lesser trochanter and a cylindrical shape for the remaining region down to the isthmus was recommended based only on the study of the morphology of the femur in subjects with DDH; a further comprehensive assessment of the proposed stem design could be performed in the future.
In conclusion, the subjects with severe DDH deformities displayed a narrow medullary canal only in the region superior to the end of the lesser trochanter, with a slight variation in canal size relative to the other groups for the remainder of the canal down to the isthmus. In addition, the severe DDH group had a larger twist and a more elliptically shaped canal in comparison with the healthy subjects. These morphological characteristics of the femur with severe DDH deformities indicate that a femoral stem with a cone shape in the region superior to the end of the lesser trochanter and a cylindrical shape for the remaining region down to the isthmus may benefit subjects with severe DDH, protecting bone, recovering excessive femoral anteversion, and facilitating implantation in the narrow medullary canal.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
AUTHOR CONTRIBUTIONS
MZ, G-QZ, and C-KC: conceptualization. MZ, Q-QY, and C-KC: methodology. MZ, B-LL, X-ZQ, J-YS, and Q-YZ: formal analysis and investigation. MZ: writing -original draft preparation. B-LL, X-ZQ, Q-QY, J-YS, Q-YZ, G-QZ, and C-KC: writing, review, and editing. G-QZ and C-KC: supervision. All authors have agreed to be accountable for the content of the work. | 2021-06-24T13:12:45.188Z | 2021-06-24T00:00:00.000 | {
"year": 2021,
"sha1": "7a24939b31e9efea1c15c704b0a15fec8f82c0b4",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2021.684832/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7a24939b31e9efea1c15c704b0a15fec8f82c0b4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
115153322 | pes2o/s2orc | v3-fos-license | Dysregulation of Neuronal Gαo Signaling by Graphene Oxide in Nematode Caenorhabditis elegans
Exposure to graphene oxide (GO) induced some dysregulated microRNAs (miRNAs), such as the increase in mir-247, in nematode Caenorhabditis elegans. We here further identified goa-1 encoding a Gαo and pkc-1 encoding a serine/threonine protein kinase as the targets of neuronal mir-247 in the regulation of GO toxicity. GO exposure increased the expressions of both GOA-1 and PKC-1. Mutation of goa-1 or pkc-1 induced a susceptibility to GO toxicity, and suppressed the resistance of mir-247 mutant to GO toxicity. GOA-1 and PKC-1 could also act in the neurons to regulate the GO toxicity, and neuronal overexpression of mir-247 could not affect the resistance of nematodes overexpressing neuronal goa-1 or pkc-1 lacking 3′-UTR to GO toxicity. In the neurons, GOA-1 acted upstream of diacylglycerol kinase/DGK-1 and PKC-1 to regulate the GO toxicity. Moreover, DGK-1 and GOA-1 functioned synergistically in the regulation of GO toxicity. Our results highlight the crucial role of neuronal Gαo signaling in response to GO in nematodes.
Figure 1.
Genetic interaction between mir-247 and goa-1 or pkc-1 in the regulation of GO toxicity. (a) Genetic interaction between mir-247 and goa-1 or pkc-1 in the regulation of GO toxicity in inducing intestinal ROS production. (b) Genetic interaction between mir-247 and goa-1 or pkc-1 in the regulation of GO toxicity in decreasing locomotion behavior. GO exposure concentration was 10 mg/L. Prolonged exposure was performed from L1-larvae to adult day-1. Bars represent means ± SD. ** P < 0.01 vs wild-type (if not specially indicated).
Neuronal overexpression of mir-247 could not affect the resistance of nematodes overexpressing neuronal goa-1 or pkc-1 lacking 3′-UTR to GO toxicity. To further confirm the roles of GOA-1 and PKC-1 as targets of neuronal mir-247, we next investigated the genetic interaction between mir-247 and goa-1 or pkc-1 in the neurons in the regulation of GO toxicity. We introduced goa-1 or pkc-1 lacking the 3′-UTR, driven by the unc-14 promoter, into the nematodes overexpressing neuronal mir-247. After GO exposure, the transgenic strain Is(Punc-14-goa-1-3′-UTR);Ex(Punc-14-mir-247) exhibited a resistance to GO toxicity similar to that of the transgenic strain Is(Punc-14-goa-1-3′-UTR) (Fig. 2). Additionally, the transgenic strain Is(Punc-14-pkc-1-3′-UTR);Ex(Punc-14-mir-247) showed a resistance to GO toxicity similar to that of the transgenic strain Is(Punc-14-pkc-1-3′-UTR) (Fig. 2).
Tissue-specific activity of goa-1 in the regulation of GO toxicity. The goa-1 gene is expressed in the pharynx, the neurons, and the muscle 42,43 . pkc-1 is only expressed in the neurons 44 . Using tissue-specific promoters, we investigated the tissue-specific activity of goa-1 in the regulation of GO toxicity. Rescue by expression of goa-1 in the pharynx or the muscle did not significantly influence the susceptibility of the goa-1(sa734) mutant to GO toxicity (Fig. 3). In contrast, expression of goa-1 in the neurons significantly suppressed the susceptibility of the goa-1(sa734) mutant to GO toxicity (Fig. 3). These results suggest that both GOA-1 and PKC-1 may act in the neurons to regulate GO toxicity.
Identification of downstream targets for GOA-1 in the Gαo signaling pathway in the regulation of GO toxicity.
In the Gαo signaling pathway, DGK-1 is a downstream target of GOA-1, and dgk-1 encodes an ortholog of mammalian diacylglycerol kinase theta (DGKQ) 45 . In the GO (10 mg/L)-exposed goa-1 mutant, we detected a significant decrease in the expression of both pkc-1 and dgk-1 compared with GO (10 mg/L)-exposed wild-type nematodes (Fig. 4a), which implies that both PKC-1 and DGK-1 may act as important downstream targets of GOA-1 in the control of GO toxicity.
Using the induction of intestinal ROS production and locomotion behavior as toxicity assessment endpoints, we found that the dgk-1(sy428) mutant was susceptible to GO toxicity (Fig. 4b,c), suggesting that GOA-1 may regulate GO toxicity by affecting the functions of PKC-1 and DGK-1.
Genetic interaction between GOA-1 and PKC-1 or DGK-1 in the regulation of GO toxicity.
To determine the genetic interaction between goa-1 and dgk-1 or pkc-1 in the regulation of GO toxicity, we examined the effects of dgk-1 or pkc-1 mutation on GO toxicity in the transgenic strain overexpressing neuronal goa-1. Nematodes overexpressing neuronal goa-1 exhibited resistance to GO toxicity (Fig. 5). In contrast, after GO exposure, dgk-1 or pkc-1 mutation suppressed the resistance of nematodes overexpressing neuronal goa-1 to GO toxicity (Fig. 5). Therefore, neuronal GOA-1 may act upstream of both DGK-1 and PKC-1 to regulate GO toxicity.
Genetic interaction between PKC-1 and DGK-1 in the regulation of GO toxicity. We further investigated the genetic interaction between PKC-1 and DGK-1. After GO exposure, we observed more severe GO toxicity in the double mutant pkc-1(ok563);dgk-1(sy428) than in the single mutants pkc-1(ok563) and dgk-1(sy428) (Fig. 6a,b).
Discussion
In this study, we first provide several lines of evidence indicating the potential role of GOA-1 and PKC-1 as targets for neuronal mir-247 in the regulation of GO toxicity. First, the expressions of both GOA-1 and PKC-1 could be suppressed by GO exposure, and their expressions could be further decreased by overexpression of neuronal mir-247 in GO-exposed nematodes (Fig. S1). Second, the phenotypes of the GO-exposed goa-1(sa734) or pkc-1(ok563) mutants were opposite to those of the GO-exposed mir-247/797(n4505) mutant (Fig. S2). Third, we found that mutation of goa-1 or pkc-1 could effectively inhibit the resistance of the mir-247/797(n4505) mutant to GO toxicity (Fig. 1). More importantly, we observed that neuronal overexpression of mir-247 did not influence the resistance of the transgenic strains overexpressing neuronal goa-1 lacking the 3′-UTR or pkc-1 lacking the 3′-UTR to GO toxicity (Fig. 2), implying the binding of mir-247 to the 3′-UTR of goa-1 or pkc-1. A previous study identified EGL-5 as the target for mir-247 in the control of male tail development 46 . In this study, we identified GOA-1 and PKC-1 as potential targets for mir-247 in the control of GO toxicity in hermaphrodite nematodes.
GOA-1 activity is required for the regulation of asymmetric cell division in the early embryo, innate immunity, olfactory-mediated behaviors, and decision-making 42,43,47,48 . In this study, we further identified a novel function of Gαo signaling in the control of nanotoxicity. In C. elegans, goa-1 mutation induced a susceptibility of nematodes to GO toxicity (Fig. S2), implying that goa-1-encoded Gαo signaling negatively regulates GO toxicity.
The tissue-specific activity assays indicated that neuronal GOA-1 regulates GO toxicity (Fig. 3). In organisms, G protein-coupled receptors (GPCRs), seven-transmembrane receptors, can sense environmental signals or molecules outside the cell and, by coupling with G proteins, activate intracellular signal transduction pathways and, ultimately, cellular responses 49 . The function of goa-1-encoded Gαo signaling in the neurons implies that certain neuronal GPCRs may be activated or suppressed by GO exposure, and the affected neuronal GPCRs may further function through the goa-1/Gαo-mediated signaling cascade to regulate GO toxicity.
In this study, GOA-1 further acted upstream of diacylglycerol kinase/DGK-1 and PKC-1 to regulate GO toxicity. Under the condition of GO exposure, goa-1 mutation decreased dgk-1 and pkc-1 expressions (Fig. 4a). Additionally, dgk-1 or pkc-1 mutation inhibited the resistance of the transgenic strain overexpressing neuronal goa-1 to GO toxicity (Fig. 5). The dgk-1 gene is expressed in most of the neurons. Therefore, a signaling cascade of GOA-1-DGK-1/PKC-1 can be proposed to explain the molecular basis for neuronal mir-247 in response to GO exposure (Fig. 6c).
Prolonged exposure to GO (≥10 μg/L) increased mir-247 expression 18 . Meanwhile, neuronal overexpression of mir-247 induced a susceptibility to GO toxicity 18 . Therefore, the proposed neuronal signaling cascade of mir-247-GOA-1-DGK-1/PKC-1 provides an important molecular mechanism for GO toxicity induction in nematodes.
In this study, we further found that DGK-1 and PKC-1 functioned synergistically to regulate GO toxicity (Fig. 6a,b). PKC-1 plays a role in regulating function of nervous system, such as the neurotransmission 50 . This observation implies the possibility that, besides the normally considered downstream diacylglycerol kinase/ DGK-1 signaling, the neuronal GOA-1/Gαo signaling may also regulate the GO toxicity by influencing the neurotransmission process. Our previous study has identified the NLG-1-PKC-1 signaling cascade in the regulation of GO toxicity 39 . Our results suggest that PKC-1 may act as an important link between the Gαo/GOA-1 signaling and the NLG-1 signaling in the regulation of GO toxicity. Additionally, PKC-1 may further act as the direct target for mir-247 in the regulation of GO toxicity (Fig. 2). These results imply the potential crucial role of neurotransmission process in the toxicity induction in GO exposed nematodes.
Head thrash and body bend were used to assess locomotion behavior. The assay was performed under a dissecting microscope by eye, as described previously 6,53 . Fifty nematodes were examined per treatment.
Reverse-transcription and quantitative real-time polymerase chain reaction (PCR). Total RNA was isolated from the nematodes using Trizol reagent (Invitrogen, UK) according to the manufacturer's protocol. The purity and concentration of RNA were evaluated by the OD260/280 ratio using a spectrophotometer. The extracted RNA was used for cDNA synthesis. After cDNA synthesis, the relative expression levels of the targeted genes were determined by real-time PCR in an ABI 7500 real-time PCR system with Evagreen (Biotium, USA). All reactions were performed in triplicate. Relative quantification of a targeted gene was expressed as the ratio of the targeted gene to the reference gene tba-1 (encoding a tubulin). The related primer information is shown in Table S1.
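The ratio-based relative quantification described above is commonly computed from threshold cycles under the assumption of equal amplification efficiency for target and reference (the 2^-ΔCt convention); the sketch and the Ct values below are illustrative, not data from the study:

```python
def relative_expression(ct_target, ct_reference, efficiency=2.0):
    # ratio of target-gene to tba-1 expression, assuming the same amplification
    # efficiency for both genes (2.0 corresponds to perfect doubling per cycle)
    return efficiency ** (ct_reference - ct_target)

ct_target = [22.1, 22.3, 22.0]   # triplicate Ct values for a target gene (placeholder)
ct_tba1 = [18.9, 19.0, 19.1]     # triplicate Ct values for the tba-1 reference (placeholder)
ratio = relative_expression(sum(ct_target) / 3, sum(ct_tba1) / 3)
```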
DNA constructs and germline transformation.
To generate entry vectors carrying promoter sequences, the promoter region of the myo-2 gene (specifically expressed in the pharynx), the myo-3 gene (specifically expressed in the muscle), or the unc-14 gene (specifically expressed in the neurons) was amplified by PCR from wild-type C. elegans genomic DNA. The promoter fragment was inserted into the pPD95_77 vector in the sense orientation. goa-1/C26C6.2.1 cDNA containing or lacking the 3′-UTR was amplified by PCR and inserted into the corresponding entry vector carrying the myo-2, myo-3, or unc-14 promoter sequence. Transformation was performed by coinjecting the testing DNA (10-40 μg/mL) and the marker DNA Pdop-1::rfp (60 μg/mL) into the gonad of nematodes as described 55 . The related primer information for the DNA constructs is shown in Table S2.
RNA interference (RNAi).
RNAi assay was performed basically as described 54 . The nematodes were fed with the E. coli strain HT115 (DE3) expressing double-stranded RNA for the examined gene. After growth in LB broth containing ampicillin (100 μg/mL), E. coli HT115 (DE3) expressing double-stranded RNA for the examined gene was plated onto NGM containing ampicillin (100 μg/mL) and isopropyl 1-thio-β-D-galactopyranoside (IPTG, 5 mM). L1 larvae were transferred onto the RNAi plates until the nematodes became gravid. The gravid adults were transferred to fresh RNAi-expressing bacterial lawns and allowed to lay eggs to obtain the second generation of the RNAi population. The eggs were allowed to develop into L1-larvae for the toxicity assessment.
Statistical analysis. Data in this article were expressed as means ± standard deviation (SD). Statistical analysis was performed using SPSS 12.0 software (SPSS Inc., Chicago, USA). Differences between groups were determined using analysis of variance (ANOVA), and probability levels of 0.05 and 0.01 were considered statistically significant. | 2019-04-16T14:29:01.707Z | 2019-04-15T00:00:00.000 | {
"year": 2019,
"sha1": "c53d4ad49147becda246342ad24cea4d692d367e",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-42603-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c53d4ad49147becda246342ad24cea4d692d367e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
119211246 | pes2o/s2orc | v3-fos-license | Three-dimensional Hydrodynamic Core-Collapse Supernova Simulations for an $11.2 M_{\odot}$ Star with Spectral Neutrino Transport
We present numerical results of three-dimensional (3D) hydrodynamic core-collapse simulations of an $11.2 M_{\odot}$ star. By comparing one- (1D) and two-dimensional (2D) results with those of 3D, we study how the increasing spatial multi-dimensionality affects the postbounce supernova dynamics. The calculations were performed with an energy-dependent treatment of the neutrino transport, solved by the isotropic diffusion source approximation scheme. By performing a tracer-particle analysis, we show that the maximum residency time of material in the gain region is longer in 3D than in 2D due to non-axisymmetric flow motions, which is one of the advantageous aspects of 3D models for obtaining neutrino-driven explosions. Our results show that convective matter motions below the gain radius become much more violent in 3D than in 2D, making the neutrino luminosity larger in 3D. Nevertheless, the emitted neutrino energies become smaller due to the enhanced cooling. Our results indicate that whether these advantages for driving 3D explosions can overwhelm the disadvantages is sensitive to the employed numerical resolutions. An encouraging finding is that the shock expansion tends to become more energetic for models with finer resolutions. To draw a robust conclusion, 3D simulations with much higher numerical resolution and with more advanced treatments of neutrino transport as well as of gravity are needed, which will hopefully be practicable using forthcoming Petaflops-class supercomputers.
INTRODUCTION
Core-collapse supernovae have long drawn the attention of astrophysicists because they have many aspects that play important roles in astrophysics. They are the mother of neutron stars and black holes; they play an important role in the acceleration of cosmic rays; they influence galactic dynamics, triggering further star formation; and they are gigantic emitters of neutrinos and gravitational waves. They are also a major site for nucleosynthesis, so, naturally, any attempt to address human origins may need to begin with an understanding of core-collapse supernovae.
In fact, neutrino-driven explosions have been obtained in the following state-of-the-art two-dimensional (2D) simulations (e.g., table 1 in Kotake (2011)). Using the MuD-BaTH code, which includes one of the best available neutrino transfer approximations, Buras et al. (2006) first reported explosions for a non-rotating low-mass (11.2M ⊙) progenitor of Woosley et al. (2002), and then for a 15M ⊙ progenitor of Woosley & Weaver (1995) with a moderately rapid rotation imposed (Marek & Janka 2009). By implementing a multi-group flux-limited diffusion algorithm in the CHIMERA code (e.g., Bruenn et al. 2010), Yakunin et al. (2010) obtained explosions for non-rotating progenitors of Woosley et al. (2002) in the mass range of 12M ⊙ to 25M ⊙. More recently, Suwa et al. (2010) pointed out that a stronger explosion is obtained for a rapidly rotating 13M ⊙ progenitor of Nomoto & Hashimoto (1988) compared with the corresponding non-rotating model, in which the isotropic diffusion source approximation (IDSA) for the spectral neutrino transport (Liebendörfer et al. 2009) is implemented in the ZEUS code.
This success, however, opens further new questions. First of all, the explosion energies obtained in these 2D simulations are typically underpowered by one or two orders of magnitude relative to the canonical supernova kinetic energy (∼ 10^51 erg). Moreover, a softer nuclear equation of state (EOS), such as the Lattimer & Swesty (1991) (LS) EOS with an incompressibility at nuclear densities, K, of 180 MeV, is employed in those simulations. On top of striking evidence favoring a stiffer EOS based on nuclear experimental data (K = 240 ± 20 MeV, Shlomo et al. (2006)), the soft EOS may not account for the recently observed massive neutron star of ∼ 2M ⊙ (Demorest et al. 2010) 1 . Using a stiffer EOS, the explosion energy may be even lower, as inferred from Marek & Janka (2009), who did not obtain a neutrino-driven explosion for their model with K = 263 MeV 2 . What, then, is still missing? The neutrino-driven mechanism could be assisted by other candidate mechanisms, such as the acoustic mechanism (e.g., Burrows et al. (2006)) or the magnetohydrodynamic mechanism (e.g., Kotake et al. (2004); Takiwaki et al. (2004, 2009); Burrows et al. (2007a); Guilet et al. (2010); Obergaulinger & Janka (2011); see also Kotake et al. (2006) for collective references therein). We may get the answer by taking into account new ingredients, such as exotic physics in the core of the protoneutron star (e.g., Takahara & Sato (1988); Sagert et al. (2009)), viscous heating by the magnetorotational instability (Thompson et al. 2005; Masada et al. 2011), or energy dissipation via Alfvén waves (Suzuki et al. 2008).
But before seeking alternative scenarios, it may be of primary importance to investigate how the explosion criteria extensively studied so far in 2D simulations could or could not change in 3D simulations. Nordhaus et al. (2010) were the first to argue that the critical neutrino luminosity for producing neutrino-driven explosions becomes smaller in 3D than in 2D. They employed the CASTRO code with an adaptive mesh refinement technique, by which unprecedentedly high-resolution 3D calculations were made possible. Since it is generally computationally expensive to solve the neutrino transport in 3D, they employed a light-bulb scheme (e.g., Janka & Müller (1996)) to trigger explosions, in which the heating and cooling by neutrinos are treated in a parametric manner. Since the light-bulb scheme can capture fundamental properties of neutrino-driven explosions (albeit on qualitative grounds), it is one of the most prevailing approximations adopted in recent 3D models (e.g., Iwakami et al. (2008, 2009); Wongwathanarat et al. (2010)). A number of important findings have been reported recently in these simulations, including a potential role of non-axisymmetric SASI flows in generating spins (Wongwathanarat et al. (2010); Rantsiou et al. (2010); see also Blondin & Mezzacappa (2007); Fernández (2010)) and magnetic fields (Endeve et al. 2010) of pulsars, and the stochastic nature of gravitational-wave (e.g., Kotake et al. (2009b, 2011); Müller et al. (2011)) and neutrino emission (e.g., Duan & Kneller (2009)).
To go beyond the light-bulb scheme, we explore in this study possible 3D effects on the supernova mechanism by performing 3D, multigroup, radiation-hydrodynamic core-collapse simulations. For the multigroup transport, the IDSA scheme is implemented, which can be done rather straightforwardly by extending our 2D modules (Suwa et al. 2010, 2011) to 3D. This is made possible because we apply the so-called ray-by-ray approach (e.g., Buras et al. (2006)), in which the neutrino transport is solved along a given radial direction assuming that the hydrodynamic medium along that direction is spherically symmetric. From a technical point of view, it is worth mentioning that the ray-by-ray treatment is highly efficient in parallelization 3 on present supercomputers, most of which employ message-passing-interface (MPI) routines. We focus here on the evolution of an 11.2M ⊙ star of Woosley et al. (2002). We choose such a lighter progenitor star not only because we follow a tradition in the 2D literature (e.g., Buras et al. (2006); Burrows et al. (2006)), but also because the neutrino-driven shock revival for this progenitor was reported to occur rather early after bounce in the 2D models of Buras et al. (2006). We therefore anticipate that the cost of 3D simulations would not be too expensive for this progenitor. By comparing with our 1D and 2D results, we study how the increasing multi-dimensionality could affect the postbounce supernova dynamics.
The paper opens with descriptions of the initial models and the numerical methods (Section 2). The main results are shown in Section 3. We summarize our results and discuss their implications in Section 4.
NUMERICAL METHODS AND INITIAL MODELS
The basic evolution equations for our 3D simulations are written as

\frac{d\rho}{dt} + \rho \nabla \cdot \mathbf{v} = 0, \qquad (1)

\rho \frac{d\mathbf{v}}{dt} = -\nabla P - \rho \nabla \Phi, \qquad (2)

\frac{\partial e^{*}}{\partial t} + \nabla \cdot \left[ (e^{*} + P)\, \mathbf{v} \right] = -\rho\, \mathbf{v} \cdot \nabla \Phi + Q_{E}, \qquad (3)

\frac{d Y_{e}}{dt} = \Gamma_{N}, \qquad (4)

\nabla^{2} \Phi = 4\pi G \rho, \qquad (5)

where ρ, v, P, e*, and Φ are the density, fluid velocity, gas pressure including the radiation pressure of neutrinos, total energy density, and gravitational potential, respectively, and d/dt denotes the Lagrangian derivative. As for the hydro solver, we employ the ZEUS-MP code (Hayes et al. 2006), which has been modified for core-collapse simulations (e.g., Iwakami et al. 2008, 2009). Q_E and Γ_N (in Equations (3) and (4)) represent the change of energy and electron fraction (Y_e) due to the interactions with neutrinos. To estimate these quantities, we employ the IDSA scheme (Liebendörfer et al. 2009). The IDSA scheme splits the neutrino distribution into two components, both of which are solved using separate numerical techniques. Although the current IDSA scheme does not yet include heavy-lepton neutrinos (ν_x) or inelastic neutrino scattering with electrons, these simplifications save a significant amount of computational time compared to canonical Boltzmann solvers (see Liebendörfer et al. (2009) for more details). As already mentioned, we employ the ray-by-ray approximation, by which the 3D radiation transport reduces essentially to a 1D problem. Following the prescription in Müller et al. (2010), we improve the accuracy of the total energy conservation by using a conservation form in Equation (3), instead of solving the evolution of internal energy as originally designed in the ZEUS code. A Poisson equation (Equation (5)) can be solved either by the ICCG 4 method in the original ZEUS-MP code or by the multi-domain spectral method developed in the Lorene code (Grandclément & Novak 2009). For the calculations presented here, the monopole approximation is employed.

FIG. 1.— Three-dimensional plots of entropy per baryon (left panels) and logarithmic density (right panels, in units of g/cm^3) for three snapshots (top: t = 15 ms, middle: t = 65 ms, and bottom: t = 125 ms, measured after bounce (t ≡ 0)) of our 3D model. In the right panels, velocities are indicated by arrows. The contours on the cross sections in the x = 0 (back right), y = 0 (back bottom), and z = 0 (back left) planes are, respectively, projected on the sidewalls of the graphs. For each snapshot, the linear scale is indicated along the axis in units of km.

FIG. 2.— Same as Figure 1 but for the net neutrino heating rate (left panels, logarithmic, in units of erg/cm^3/s) and τ_adv/τ_heat (right panels, see text for details), the ratio of the advection to the neutrino-heating timescale. The gain region (colored red in the top left panel) is shown to form at around t = 65 ms after bounce, which coincides approximately with the epoch when the neutrino-driven shock revival initiates in our 3D model. The condition τ_adv/τ_heat ≳ 1 is satisfied behind the aspherical shock, whose low-mode deformation is characterized by the SASI (bottom right panel).
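To illustrate why the ray-by-ray treatment parallelizes so well, the following minimal Python sketch (ours) shows the loop structure; all names here, including idsa_1d_transport, are hypothetical stand-ins rather than the actual code.

import numpy as np

n_r, n_theta, n_phi = 300, 64, 32

def idsa_1d_transport(ray_state, energy_bins):
    # Hypothetical stand-in for a 1D spectral IDSA solve along one radial ray;
    # it would return the net heating rate Q_E and the Y_e source Gamma_N.
    return np.zeros(n_r), np.zeros(n_r)

hydro_state = np.zeros((n_r, n_theta, n_phi))
energy_bins = np.logspace(np.log10(3.0), np.log10(300.0), 20)
Q_E = np.zeros((n_r, n_theta, n_phi))
Gamma_N = np.zeros_like(Q_E)

# Each (theta, phi) ray is treated as an independent, spherically symmetric
# 1D transport problem; the loop carries no cross-ray coupling, so the rays
# can be distributed over MPI ranks and solved concurrently.
for j in range(n_theta):
    for k in range(n_phi):
        Q_E[:, j, k], Gamma_N[:, j, k] = idsa_1d_transport(
            hydro_state[:, j, k], energy_bins)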
The computational grid comprises 300 logarithmically spaced radial zones covering from the center up to 5000 km, and 64 polar (θ) and 32 azimuthal (φ) uniform mesh points for our 3D model, which cover the whole solid angle. To vary the numerical resolution, we run one more 3D model with one-half the number of mesh points in the φ direction (n_φ = 16), while fixing the mesh numbers in the other directions. In both the 2D and 3D models, we take the same number of mesh points in the polar direction (n_θ = 64), so that we can see how the dynamics changes due to the additional degree of freedom in the φ direction. For the spectral transport, we use 20 logarithmically spaced energy bins reaching from 3 to 300 MeV. For our non-rotating progenitor, the dynamics of the collapsing iron core proceeds spherically until the stall of the bounce shock. To save computational time, we start our 2D and 3D simulations by remapping the 1D data after the stall of the bounce shock onto the multi-D grids. To induce non-spherical instability, we add random velocity perturbations at less than 1% of the unperturbed radial velocity.
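For concreteness, the grid described above can be reproduced with a few lines of Python; this is a minimal sketch (ours), in which the innermost radius, here taken as 1 km, is an assumed value not quoted in the text.

import numpy as np

# Radial grid: 300 logarithmically spaced zones out to 5000 km
# (the innermost radius of 1 km is an assumption for illustration).
r = np.logspace(np.log10(1.0e5), np.log10(5.0e8), 300)        # cm

# Uniform angular zone centers covering the whole solid angle (64 x 32).
theta = np.linspace(0.0, np.pi, 64, endpoint=False) + np.pi / 128
phi = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False) + np.pi / 32

# Spectral grid: 20 logarithmically spaced energy bins from 3 to 300 MeV.
e_nu = np.logspace(np.log10(3.0), np.log10(300.0), 20)        # MeV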
RESULTS
In the following (section 3.1), we first outline the hydrodynamic features of our 3D model. Then, in sections 3.2 and 3.3, we move on to discuss how 3D effects impact the explosion dynamics by comparing with the 1D and 2D results.
3.1. 3D dynamics from core collapse through postbounce turbulence to explosion

Figure 1 shows three snapshots, which are helpful for characterizing the hydrodynamic features of the 3D model. The top panels are for t = 15 ms after bounce, showing that the bounce shock stalls (indicated by inward arrows in the top right panel) at a radius of 150 km. Note that the colors of the velocity arrows change from yellow to red as their absolute values become larger. Looking carefully at the top right panel, matter flows in supersonically (indicated by reddish arrows) onto the standing shock (the central transparent sphere), and then advects subsonically (indicated by yellowish arrows) onto the protoneutron star (PNS, or the unshocked core; the central bluish region in the top left panel). As seen, the entropy (left panel) and density (right panel) configurations are essentially spherical at this epoch. 5 The middle panels show an epoch (t = 65 ms) when the neutrino-driven convection is already active. In the right panel, turbulent motions can be seen (arrows in random directions) inside the standing shock, which is indicated by the boundary between red and yellow arrows. The entropy behind the standing shock becomes high due to the neutrino heating (reddish regions in the left panel). The size of the neutrino-heated hot bubbles becomes larger in a non-axisymmetric way later on, as indicated by the smaller structures encompassed by the stalled shock (i.e., inside the central greenish sphere in the left panel).
The bottom panels (t = 125 ms) show the epoch when the revived shock is expanding aspherically, as indicated by the outgoing yellowish arrows in the right panel. The asphericity of the expanding shock is more clearly visible in the sidewall panels. From the entropy distribution (left panel), the expanding shock is shown to touch a radius of ∼ 500 km (the projected back bottom panel). Inside the expanding shock (enclosed by the greenish membrane in the left panel), the bumpy structures of the hot bubbles are seen. In contrast to these smaller asphericities, the deformation of the shock surface is mild, which is a consequence of the SASI, as will be discussed in section 3.3.1. Figure 2 shows the net neutrino heating rate (left panels) and the ratio of the residency timescale to the neutrino-heating timescale (right panels) for two of the snapshots in Figure 1 (at t = 65 ms (top panels) and t = 125 ms (bottom panels)). Here the residency timescale (τ_adv) and the neutrino-heating timescale (τ_heat) 6 are locally defined as

\tau_{\rm adv} = \int_{r_{\rm gain}(\theta,\phi)}^{r_{\rm shock}(\theta,\phi)} \frac{dr}{v_{r}}, \qquad (6)

\tau_{\rm heat} = \frac{|e_{\rm bind}|}{Q^{+}_{\nu,{\rm total}}}, \qquad (7)

where r_gain is the gain radius, which depends on θ and φ, r_shock is the shock radius, and v_r is the radial velocity. We take the above criterion in order to estimate the residency timescale for material with positive radial velocities (v_r > 0) behind the shock. 7 The heating timescale is thus rather straightforwardly defined by dividing the local binding energy (e_bind = ½ ρv^2 + e − ρΦ [erg/cm^3]) in the gain layer by the net neutrino-heating rate (Q^+_{ν,total} [erg/cm^3/s]). At t = 65 ms after bounce (top panels), the ratio of the two timescales exceeds unity (yellowish region in the right panel). At t = 125 ms after bounce (bottom panels), the ratio reaches about 2 (reddish region in the bottom right panel) behind the shock (compare the bottom right panel in Figure 1), which presents evidence that the shock revival is driven by the neutrino-heating mechanism. Recently, Pejcha & Thompson (2011) proposed an alternative definition of the onset time of the explosion, the so-called antesonic condition. From Figure 3, it can be seen that this criterion, namely that the ratio of the squared sound speed to the squared escape velocity reaches ∼ 0.2, is also satisfied in our 3D model when the neutrino-driven explosion sets in.
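As a minimal numerical sketch of these two definitions (ours, for illustration; the inputs are per-zone radial profiles along one ray between the gain radius and the shock):

import numpy as np

def residency_timescale(r, v_r):
    # tau_adv = integral of dr / v_r over zones with outgoing flow (v_r > 0),
    # taken between the gain radius and the shock radius (Equation (6)).
    mask = v_r > 0.0
    return np.trapz(1.0 / v_r[mask], r[mask])

def heating_timescale(rho, v, e, phi_grav, q_nu):
    # tau_heat = |e_bind| / Q+_nu with e_bind = rho v^2 / 2 + e - rho Phi
    # (Equation (7)); all inputs are local (per-zone) quantities.
    e_bind = 0.5 * rho * v**2 + e - rho * phi_grav
    return np.abs(e_bind) / q_nu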
Figure 4 shows distributions of the pressure perturbation (top) and vorticity (bottom) at t = 60 ms after bounce. Here the pressure perturbation is estimated by ∆p/⟨p⟩, with ∆p representing the deviation from the angle-averaged pressure ⟨p⟩ at a given position. Here we define the angle average of a variable A as

\langle A \rangle = \frac{1}{4\pi} \oint A \, d\Omega. \qquad (8)

The positive and negative deviations are colored red and blue, respectively (e.g., +|∆p/⟨p⟩| (red) or −|∆p/⟨p⟩| (blue)). The left and right panels are for an equatorial (θ = π/2, φ = 0) and a polar observer (θ = 0), respectively. In each plot, the circle that screens between the colored region and the whitish region outside corresponds to the surface of the stalled shock. The top left panel shows that the pressure waves (colored red or blue) propagate outwards up to behind the stalled shock in a concentric fashion. Seen from the polar direction (top right), the color pattern at this snapshot indicates the dominance of the ℓ = 2 and m = 2 modes in the pressure perturbation, which is related to the growth of the SASI, as we will discuss in section 3.3.1. The vorticity distributions seen from the equator (bottom left panel) show that red and blue stripes appear alternately behind the stalled shock. Seen from the pole (bottom right), the vorticity waves are shown to be spinning around the polar axis (the origin of the figure), which would be related to the growth of spiral SASI modes. These fundamental features of the acoustic-vorticity feedback are akin to the ones obtained by Sato et al. (2009), who studied the properties of the SASI extensively with their idealized numerical simulations. Our results might provide supporting evidence that the advective-acoustic cycle (e.g., Foglizzo & Tagger (2000); Foglizzo (2001, 2009)) also operates in 3D simulations.

FIG. 4.— Distributions of the pressure perturbation (top) and vorticity (bottom) of the 3D model at t = 60 ms after bounce. In each plot, the circle that screens between the regions colored blue and red and the whitish region outside corresponds to the surface of the stalled shock. Positive and negative values are colored red and blue, respectively (e.g., +|∆p/⟨p⟩| (red) or −|∆p/⟨p⟩| (blue) for the pressure perturbation). The linear scale and the time of the snapshots are indicated at the top right and bottom right edge of each plot.

Figure 5 shows the spacetime diagrams of the entropy dispersion (σ_s) for the 3D model. Note that the dispersion of a quantity A with respect to its angular variation is defined by

\sigma_{A} = \sqrt{\langle A^{2} \rangle - \langle A \rangle^{2}}, \qquad (9)

where ⟨A⟩ represents the angle average (Equation (8)). It is rather uncertain where the entropy production actually takes place in the supernova core in the context of the advective-acoustic cycle (e.g., Sato et al. (2009)). The primary candidate is the surface of the PNS, where the advecting material receives faster deceleration at the walls of the PNS due to the localized gravitational potential (e.g., Blondin et al. (2003)).
In addition, the infalling material could also receive faster deceleration just outside the gain radius, where the neutrino heating becomes maximum. Our 3D results indicate that both of the two candidates are indeed relevant. As seen from Figure 5, the position where the entropy production takes place roughly coincides with the gain radius (the dotted grey line) before 125 ms after bounce (indicated by the upward arrow). Later on, the position is shown to transit to the surface of the PNS (the dotted black line). Until now, we have focused on the postbounce dynamics only for our 3D model. In the next sections, we move on to look in more detail at how they differ from the 1D and 2D results.

3.2. Blast morphology and explosion dynamics

Figure 6 shows the blast morphology for our 3D (left panel), 2D (middle), and 1D (right) model, respectively. In the 2D model (middle panel), the morphology is symmetric around the coordinate symmetry axis. 8 In contrast, non-axisymmetric structures are clearly seen in the 3D model (left panel). The direction of the explosion is rather closely aligned with the polar axis in the 3D model. Owing to the use of spherical coordinates, we cannot exclude the possibility that the polar axis still defines a special direction in our 3D simulations. However, we suspect that the alignment might be just an accident, because axis-free 3D explosions were obtained in a number of parametric 3D explosion models using the same hydro code (e.g., Iwakami et al. (2008, 2009); Kotake et al. (2009b, 2011)). To clearly witness the stochastic nature of the explosion direction, we may need to investigate a number of 3D models, changing the initial perturbations and numerical resolutions systematically, which we regard as an important extension of this study (Takiwaki et al., in preparation).

8 Note that the polar axis is tilted (by about π/4) in both the left and middle panels.
The left panel of Figure 7 shows mass-shell trajectories for the 3D (red lines) and 1D (green lines) models, respectively. At around 300 ms after bounce, the average shock radius for the 3D model exceeds 1000 km. On the other hand, an explosion is not obtained for the 1D model, in agreement with Buras et al. (2006). The right panel of Figure 7 shows a comparison of the average shock radius vs. postbounce time. In the 2D model, the shock expands rather continuously after bounce. This trend is qualitatively consistent with the 2D result of Buras et al. (2006) (see their Figure 15 for model s112_128_f); however, the average shock of our 2D model expands much faster than theirs. We suspect that the effects neglected in this work, including general relativistic effects, inelastic neutrino-electron scattering, and cooling by heavy-lepton neutrinos, could together give an overly optimistic condition for producing explosions. Evidently, these ingredients should be implemented appropriately, which we hope will be practicable in next-generation 3D simulations.
Comparing the shock evolution between our 2D (green line in the right panel of Figure 7) and 3D (red line) models, the shock is shown to expand much faster in 2D. The pink line labeled '3D low' is for the low-resolution 3D model, in which the mesh numbers are taken to be one-half of those of the standard model (see Section 2). Compared with our standard 3D model (red line), the shock expansion becomes less energetic for the low-resolution model (later than ∼ 150 ms).
The above results indicate that explosions are easiest to obtain in 2D, followed in order by 3D and 3D (low). At first sight, this may appear to contradict the finding of Nordhaus et al. (2010), who pointed out that explosions could be obtained more easily in 3D than in 2D. In the following section, we proceed to discuss the reason for this discrepancy in more detail.
3.3. Comparison between 2D and 3D
In this section, we move on to illuminate the key differences between our 2D and 3D models. For this purpose, we highlight the SASI (section 3.3.1) and convective activities (section 3.3.2), and the residency (section 3.3.3) and neutrino-heating timescales (section 3.3.4), respectively.
3.3.1. SASI activities in 2D and 3D
To compare the SASI activities in 2D and 3D, we first perform a mode analysis of the shock wave. The deformation of the shock surface can be expanded as a linear combination of the spherical harmonics components Y_{ℓm}(θ, φ):

R_{\rm shock}(\theta, \phi) = \sum_{\ell, m} a_{\ell m} Y_{\ell m}(\theta, \phi),

where Y_{ℓm} is expressed by the associated Legendre polynomial P_{ℓm} and a constant K_{ℓm} as

Y_{\ell m} = K_{\ell m} P_{\ell m}(\cos\theta)\, e^{i m \phi}.

Here the expansion coefficients read

a_{\ell m} = \oint R_{\rm shock}(\theta, \phi)\, Y^{*}_{\ell m}(\theta, \phi)\, d\Omega, \qquad (10)

where the superscript * denotes complex conjugation. Figure 8 shows the time evolution of the expansion coefficients (Equation (10)) for the 3D (left panel) and 2D (right panel) models, respectively. As can be seen, the amplitude of each mode grows exponentially until ∼ 100-150 ms after bounce, which corresponds to the linear SASI phase.
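A minimal numerical sketch of Equation (10) (ours; it assumes the uniform zone-center angular grid of Section 2 and uses SciPy's spherical-harmonic routine):

import numpy as np
from scipy.special import sph_harm

def shock_coefficient(l, m, theta, phi, r_shock):
    # a_lm = integral of R_shock(theta, phi) Y*_lm dOmega (Equation (10)),
    # evaluated by direct quadrature on the uniform (64 x 32) angular grid.
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    ylm = sph_harm(m, l, ph, th)  # SciPy argument order: (m, l, azimuth, polar)
    d_omega = np.sin(th) * (np.pi / theta.size) * (2.0 * np.pi / phi.size)
    return np.sum(r_shock * np.conj(ylm) * d_omega)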
The top panels show that the (ℓ, m) = (2, 0) mode (green line) is dominant when the SASI enters the saturation phase, which is common to both our 2D and 3D models. The epoch when the SASI shifts from the linear to the non-linear phase is much more delayed in 3D (t ∼ 150 ms) than in 2D (t ∼ 80 ms), which was also seen in the parametric 3D models of Iwakami et al. (2008) (e.g., their Figure 9). These transition timescales are also consistent with Buras et al. (2006), who employed the same progenitor model as ours in their 2D simulations, in which detailed neutrino transport was solved (see their Figure 22). It is also worth mentioning that the timescale seems rather insensitive to the employed progenitor. In fact, Figure 5 in Marek & Janka (2009) shows the transition timescale to be around 150 ms for a 15 M⊙ progenitor model. In the bottom panels of Figure 8, the saturation levels of the even modes (ℓ, |m|) = (4,0), (4,2), (4,4) in 3D are shown to become much larger than those in 2D (pink line), while the odd mode (ℓ, m) = (3,0) is much the same.
Based on the pioneering work of Houck & Chevalier (1992), the linear growth rate of the SASI in the core-collapse case was presented by Scheck et al. (2008). They pointed out that the cycle efficiency Q, which represents how many times the average radius expands compared to its original position per oscillation period (with frequency ω_osc) of the SASI, is an important quantity characterizing the linear growth rate. From Figure 8, Q and ω_osc^{-1} in our simulation are approximately estimated to be 2 and 25 ms, respectively. Note that these values are in agreement with the ones obtained in the 2D simulations of Scheck et al. (2008) (e.g., their Figure 17). From the two quantities, the linear growth can be straightforwardly estimated as exp(ln(Q) ω_osc t), which is shown in the top panels of Figure 8 as black dotted lines. As can be seen, the growth rates observed in both our 3D (top left panel of Figure 8) and 2D simulations (top right) are close to the linear growth rate, which seems a rather generic trend for the low modes (ℓ = 1, 2) of the SASI. Note here that the normalized amplitude of the shock (the value on the vertical axis of Figure 8) shortly after bounce is as small as 10^{-6} to 10^{-5}. Deduced from the linear growth rate above, the amplification due to the SASI in the linear phase is at most ∼ 20 within 100 ms after bounce. On the other hand, the amplitudes increase by more than about 10^{-1}/10^{-5} ∼ 10^{4} over that epoch, so the SASI is not the only agent of the shock deformation. In fact, the amplitudes are observed to increase sharply from 10^{-5} to ∼ 5 × 10^{-3} within 10 ms after bounce, which is predominantly driven by the Rayleigh-Taylor instability behind the shock. To summarize, our results suggest that the rapid deformation after bounce is triggered by the Rayleigh-Taylor instability, and the subsequent deformation, with much milder growth rates, is predominantly determined by the SASI.
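As a quick consistency check on these numbers: with Q ≈ 2 and ω_osc^{-1} ≈ 25 ms, the linear amplification over t = 100 ms is exp(ln(Q) ω_osc t) = Q^{ω_osc t} = 2^{100/25} = 16, in line with the factor of ∼ 20 quoted above.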
Concerning the saturation level of the dominant (ℓ, m) = (2, 0) mode, it is slightly larger in 2D than in 3D (top panels, green line). The dominance of this bipolar mode can also be seen in the blast morphology (Figure 6). The subdominant (ℓ, m) = (1, 0) mode is shown to be much smaller in 3D than in 2D (red lines in the top panels of Figure 8). This is qualitatively consistent with Nordhaus et al. (2010), who did not observe dominance of the (ℓ, m) = (1, 0) mode in their 3D models. Note that this agreement might be just by chance: Iwakami et al. (2008) observed dominance of the (ℓ, m) = (1, 0) mode in the saturation phase (see their Figure 12). From the 3D results reported so far (Iwakami et al. 2008, 2009; Wongwathanarat et al. 2010; Fernández 2010), it seems almost certain that the low modes (ℓ = 1, 2) are dominant in 3D SASI-aided neutrino-driven explosions. However, it remains rather uncertain which of the two modes (ℓ = 1 or 2) becomes dominant. This may reflect the fact that the explosion dynamics in 3D proceeds stochastically.
3.3.2. Convective activities in 2D and 3D
To discuss convective activities, we compute the Brunt-Väisälä (B-V) frequency, which is defined as (e.g., Buras et al. (2006))

\omega_{\rm BV} = {\rm sign}(C_{L}) \sqrt{\left| \frac{C_{L}}{\rho} \frac{d\Phi}{dr} \right|}, \qquad (11)

with dΦ/dr being the local gravitational acceleration. C_L is the Ledoux criterion, which is given by

C_{L} = \left( \frac{\partial \rho}{\partial s} \right)_{P, Y_{l}} \frac{ds}{dr} + \left( \frac{\partial \rho}{\partial Y_{l}} \right)_{P, s} \frac{dY_{l}}{dr}, \qquad (12)

with Y_l being the lepton fraction. It predicts instability in static layers if C_L > 0. The B-V frequency denotes the linear growth rate of fluctuations if it is positive (instability). If it is negative (stable), its absolute value denotes the oscillation frequency of stable modes.
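A minimal sketch of Equations (11)-(12) in code (ours; the thermodynamic derivatives at fixed pressure would come from the EOS, and all inputs are radial profiles):

import numpy as np

def brunt_vaisala(rho, drho_ds, drho_dyl, ds_dr, dyl_dr, dphi_dr):
    # C_L > 0 marks Ledoux instability (Equation (12)); the returned
    # frequency follows Equation (11), signed by the criterion.
    c_ledoux = drho_ds * ds_dr + drho_dyl * dyl_dr
    return np.sign(c_ledoux) * np.sqrt(np.abs(c_ledoux / rho * dphi_dr))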
The left panel of Figure 9 shows the profile of the B-V frequency for our 3D model at 10 ms after bounce. The negative entropy gradient (bottom left panel) between the gain radius (∼ 100 km in radius) and the stalled shock (∼ 160 km) makes this region convectively unstable. The region in the vicinity of the PNS (10 − 20 km in radius, bottom right panel) has a negative lepton gradient, which could make it convectively unstable (top right panel). However, this region turns out to be convectively stable due to the positive entropy gradient (compare the bottom left panel). In both our 2D and 3D models, convectively unstable regions persist only behind the stalled shock, triggered by the negative entropy gradient. Figure 10 shows the evolution of convective activities for the 3D (left) and 2D (right) models, respectively. To measure the strength of convective activities, we define the anisotropic velocity as

v_{\rm aniso} = \sqrt{ \frac{ \left\langle \rho \left[ (v_{r} - \langle v_{r} \rangle)^{2} + v_{\theta}^{2} + v_{\phi}^{2} \right] \right\rangle }{ \langle \rho \rangle } }. \qquad (13)

By this definition, higher anisotropy comes from greater deviation in the radial motions (v_r − ⟨v_r⟩) or larger non-radial (v_θ, v_φ) motions.
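The angle averages entering this definition (and the dispersion of Equation (9)) reduce to simple weighted sums on the uniform angular grid; a minimal sketch (ours), evaluated at a fixed radius with inputs of shape (n_theta, n_phi):

import numpy as np

def angle_average(a, theta):
    # <A> over the sphere (Equation (8)); with uniform dtheta and dphi,
    # only the sin(theta) solid-angle weight survives.
    w = np.sin(theta)[:, None] * np.ones_like(a)
    return np.sum(a * w) / np.sum(w)

def dispersion(a, theta):
    # sigma_A = sqrt(<A^2> - <A>^2) (Equation (9)).
    return np.sqrt(angle_average(a**2, theta) - angle_average(a, theta)**2)

def v_aniso(rho, v_r, v_th, v_ph, theta):
    # Anisotropic velocity of Equation (13).
    dv2 = (v_r - angle_average(v_r, theta))**2 + v_th**2 + v_ph**2
    return np.sqrt(angle_average(rho * dv2, theta) / angle_average(rho, theta))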
The top left panel of Figure 10 shows that convectively unstable regions first form at around 15 ms after bounce (seen as a sudden appearance of non-zero v_aniso). Subsequently, the convectively unstable regions advect toward the center. At around 20 − 30 km in radius, the anisotropic velocities are strongly suppressed (seen as a change from yellowish to bluish regions at ∼ 30 ms after bounce) due to the stabilizing positive entropy gradient (see the left panel of Figure 9). As a result, the convective overturns persistently stay in the region above the PNS (∼ 20 − 30 km in radius) and below ∼ 50 km in radius. This is seen as a (horizontal) yellow stripe in the bottom left panel. The infalling velocities below the gain region (dotted gray line) are so high that convectively unstable material cannot stay there for long. This may be the reason why the anisotropic velocity becomes relatively low (seen as greenish in the bottom left panel) between the gain radius (∼ 100 km) and the upper edge of the yellow stripe (∼ 50 km). These overall trends obtained in the 3D model are common to 2D (right panels). In 2D, a more drastic overshooting of convectively unstable material into the convectively stable region is seen (compare the bottom panels at 20 − 40 ms after bounce). The area of the carried-in convectively unstable region (equivalently, the yellowish stripe) is larger in 3D than in 2D. Such vigorous convective overturn in our 3D model becomes essential in analyzing the neutrino-heating timescales later in section 3.3.4.
Having referred to the SASI and convective activities in 2D and 3D, we are now ready to perform an analysis of the residency and neutrino-heating timescales. First of all, we discuss the residency timescale in the next section.

3.3.3. Residency timescales in 2D and 3D

Figure 11 depicts the streamlines of tracer particles advecting from the outer boundary of the computational domain, through the shock wave, down to the PNS. The number of tracer particles that we actually injected is ∼ 10^6; however, only the trajectories of selected particles are shown in Figure 11 (so as not to fill the figure with particles). As seen from the left panel, the tracer particles first fall toward the shock wave, as shown by the radial straight lines. Later on, as indicated by the tangled streamlines, they experience turbulence before falling onto the PNS. The low-mode (here, ℓ = 2) oscillations of the accretion shock due to SASI activities (as discussed in section 3.3.1; e.g., the right panel of Figure 11) make the residency timescales much longer for multi-D models than for 1D. If the right panel were for a 2D model, the streamlines would appear as a superposition of circles with different diameters. In contrast, non-axisymmetric matter motions can clearly be seen, which is a genuine 3D feature.
The left panel of Figure 12 compares the number of tracer particles vs. their individual residency timescales between the 2D and 3D models (for the same snapshot as in Figure 11). As seen, the maximum residency time is longer for 3D (t_res ∼ 92 ms) than for 2D (∼ 80 ms), which is most likely the outcome of the non-axisymmetric matter motions in the gain region. As is well known, a longer residency time is favorable for producing neutrino-driven explosions because of the longer exposure to neutrino heating in the gain region.
The right panel of Figure 12 shows a comparison of the advection timescale conventionally employed in the literature (e.g., Equation (4) in Marek & Janka (2009)), that is, (⟨R_s⟩ − ⟨R_g⟩)/⟨|v_r|⟩, with R_s, R_g, and v_r representing the angle-averaged shock radius, gain radius, and postshock radial velocity, respectively. Contrary to our anticipation, the averaged advection timescale is not always longer for 3D. Before ∼ 70 ms after bounce, our 3D model (red line) generally has longer advection timescales; however, the advection timescale later on can be longer for 2D.
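For comparison with the tracer-particle measure above, the conventional timescale is a one-liner on angle-averaged quantities (a sketch; the variable names are ours):

import numpy as np

def advection_timescale(r_shock_avg, r_gain_avg, v_r_postshock_avg):
    # (<R_s> - <R_g>) / <|v_r|>, cf. Equation (4) of Marek & Janka (2009).
    return (r_shock_avg - r_gain_avg) / np.abs(v_r_postshock_avg)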
At around ∼ 70 ms after bounce, the revived shock wave has already reached a radius of ∼ 400 km in 2D and ∼ 320 km in 3D, respectively (see the right panel of Figure 7). In such a shock-expansion phase, the definition of the "advection timescale" becomes rather vague. For example, the advection timescale is longer for 2D than for 3D at t = 100 ms (right panel of Figure 12), but this simply reflects the larger shock radii for 2D than for 3D (e.g., right panel of Figure 7). 9 Also in the above residency-time analysis, the longer residency time for 2D can be seen around t_res = 70 − 80 ms in the left panel of Figure 12 (seen as a dominance of the green line over the red line). If the onset of explosion were much more delayed after bounce (e.g., ∼ 600 ms, as in Marek & Janka (2009)), the advection (or residency) timescale analysis between 2D and 3D could be made clearer in the long-lasting bubbling phase. In order to see the 3D effects more clearly, we plan to employ a more massive progenitor (such as 15 M⊙) as a follow-up to this study.
3.3.4. Neutrino-heating timescales in 2D and 3D
Now we compare the neutrino-heating timescales between our 2D and 3D models. The left panel of Figure 13 shows that the heating timescale is longer for 3D (red line). As seen from the right panel, this is because the total net heating rate is generally smaller for our 3D model. To understand this feature, we analyze the neutrino luminosities (L_ν) and mean energies (⟨ε_ν⟩), since the neutrino heating rate can be symbolically expressed as Q^+_ν ∝ L_ν ⟨ε_ν⟩^2 (e.g., Equation (23) in Janka (2001)).
From the left panel of Figure 14, the neutrino luminosities, of both electron and anti-electron type, are shown to be generally larger for 3D than for 2D. On the other hand, the mean neutrino energies are lower for 3D (right panel). Although the higher neutrino luminosity is advantageous for producing neutrino-driven explosions, the lower neutrino energies predominantly make the heating rate smaller, thus leading to the longer heating timescale in our 3D model.
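To see how the two effects compete in Q^+_ν ∝ L_ν ⟨ε_ν⟩^2, suppose for illustration (these numbers are ours, not measured from the models) that the 3D luminosity is 5% higher and the mean energy 5% lower than in 2D; the heating rate then changes by a factor of 1.05 × 0.95^2 ≈ 0.95, i.e., it is smaller on balance despite the higher luminosity, because the mean energy enters quadratically.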
The higher neutrino luminosity in 3D is due to the stronger convective activities discussed in section 3.3.2. The left panel of Figure 15 compares the velocity dispersion (top panel) and the average radial velocity (bottom panel) between 2D and 3D. Figure 15 is taken at 50 ms after bounce, when the convection and the SASI are both actively in operation.
The top left panel of Figure 15 shows that convective motions are much more vigorous for 3D at radii of 30 − 50 km (see also the yellowish stripe in Figure 10). The right panels show that the neutrino cooling rate (top) as well as its dispersion there (σ_Q, bottom panel) is larger for 3D than for 2D. For the 3D models computed in this work, these features are generally maintained until the revived shock expands further out (typically ∼ 100 ms after bounce). The bottom panel of Figure 15 shows that the entropy above 30 km is generally smaller for 3D (red line) than for 2D. This is a consequence of the weaker neutrino heating in 3D than in 2D. For the snapshot in Figure 15 (at 50 ms after bounce), the position of the (energy-averaged) electron neutrinosphere is about 75 km. So the convection deep below the neutrinosphere (30 − 50 km in radius) is the agent affecting the neutrino luminosity and the mean neutrino energy. This trend is akin to the one observed in the 2D simulations of Buras et al. (2006).
In Figure 16, we proceed to perform a more detailed analysis of matter mixing behind the shock and its impact on the emergent neutrino luminosity. The top panels show radial velocities for our 3D (left) and 2D (right) models within a radius of 100 km, in which downflows and upflows are colored blue and red, respectively. The central whitish regions correspond to the PNS, which is convectively stable (hence with small radial velocities) due to the positive entropy gradient (e.g., section 3.3.2 and Figure 10). In the vicinity of the PNS, downflows and upflows are visible near the pole and the equator, respectively, in the 3D model (e.g., back left and back right panels in Figure 16 (top left)). The bottom left panel is the same as the top left panel but for the normalized angle variation of Y_ν̄e, δY_ν̄e. Here we define the normalized angle variation of a quantity A as

\delta A = \frac{A - \langle A \rangle}{\langle A \rangle}. \qquad (14)

We focus on the anti-electron neutrino (ν̄_e) because the luminosity of ν̄_e dominates over that of ν_e during the simulation time (e.g., left panel of Figure 14). Comparing the top left with the bottom left panel of Figure 16, it can be seen that a positive sign of δY_ν̄e (reddish regions in the bottom left panel) tends to correlate with the downflows (bluish regions in the top left panel), and vice versa for the upflows. This is because material with larger Y_ν̄e in the outer layers is mixed down, by convective overturns, to the vicinity of the PNS, which possesses smaller Y_ν̄e. This can be a possible explanation of the correlation between the gain (loss) in Y_ν̄e and the downflows (upflows) to the PNS. Note that this relation is also visible in our 2D model (right panels). Figure 17 depicts angular variations of the flow patterns in Figure 16. The Mollweide projection (or 4π map) of various quantities is taken at a radius of 50 km. In the top left panel (δv_r), downflows are shown to descend from the poles (colored blue), while upflows are rather uniformly distributed near the equator (seen as a horizontal red belt). In the bottom left panel, the blue and red color pattern is reversed with respect to that of the top left panel. As already mentioned, this reflects the correlation between downflows (upflows) and gain (loss) in Y_ν̄e. Reflecting the gain or loss, the neutrino cooling rate (δQ_ν̄e) has a positive correlation with Y_ν̄e (compare the top right and the bottom left panels). The variation in the neutrino luminosity measured at the outermost boundary of the computational domain (bottom right panel) has a rough positive correlation with the neutrino cooling rate (top right panel), which may agree with one's intuition.

FIG. 16.— Analysis of flow patterns and matter mixing for our 3D (right) and 2D (left) model, respectively. Top panels show radial velocities, in which bluish and reddish regions correspond to downflows and upflows, distinguished by their local radial velocities (negative or positive). Similar to Figure 1, the contours on the cross sections in the x = 0 (back right), y = 0 (back left), and z = 0 (back bottom) planes are, respectively, projected on the sidewalls of the graphs to visualize 3D structures. Bottom panels show the relative angle variation of Y_ν̄e (δY_ν̄e, see text for definition). In the regions with downflows (bluish in the top panels), the sign of δY_ν̄e tends to be positive (colored red in the bottom panels). All panels are at 50 ms after bounce (same as Figure 15).
Finally, we show the cross correlation between the time evolution of the mass accretion rate onto the PNS and the neutrino luminosity (Figure 18). As seen, a positive correlation is commonly seen during the simulation time. This plot may carry the message that it is important to go beyond the light-bulb scheme, in which the input neutrino luminosity is usually kept constant in time (e.g., Iwakami et al. (2008, 2009); Nordhaus et al. (2010)). To take into account the feedback between the mass accretion and the neutrino luminosity, the spectral IDSA scheme, which goes beyond grey transport schemes (e.g., Fryer et al. (2002); Fryer (2004)), appears quite efficient for first-generation 3D simulations.
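A minimal sketch of such a cross-correlation analysis (ours; the two time series are assumed to be uniformly sampled):

import numpy as np

def cross_correlation(mdot, l_nu):
    # Normalized cross-correlation of the accretion-rate and luminosity
    # histories; a peak near zero lag signals the positive correlation
    # seen in Figure 18.
    a = (mdot - mdot.mean()) / (mdot.std() * len(mdot))
    b = (l_nu - l_nu.mean()) / l_nu.std()
    return np.correlate(a, b, mode="full")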
As suggested in the right panel of Figure 7, 3D explosions are more easily obtained for models with finer numerical resolutions 10 . Our results indicate that whether the advantages for driving explosions mentioned above can overwhelm the disadvantages should be tested by next-generation 3D simulations with much higher numerical resolutions. Needless to say, the 3D results (not to mention the 2D results) should depend on the sophistication of the employed neutrino transport scheme. Regarding gravity, we should first go beyond the monopole approximation. This may not be an easy task from a technical point of view, because we need to implement a multigrid approach to obtain high scalability in MPI computing. Going beyond Newtonian gravity is also a challenging task (Müller et al. 2010). Our 3D results are only the very first step towards more realistic 3D supernova modeling.
SUMMARY AND DISCUSSION
We have presented numerical results of 3D hydrodynamic core-collapse simulations of an 11.2 M⊙ star. By comparing with our 1D and 2D results, we have studied how the increasing spatial multi-dimensionality affects the postbounce supernova dynamics. The calculations were performed with an energy-dependent treatment of the neutrino transport based on the isotropic diffusion source approximation scheme. In agreement with previous studies, our 1D model does not produce an explosion for the 11.2 M⊙ star, while the neutrino-driven revival of the stalled bounce shock is obtained in both the 2D and 3D models. We showed that the SASI does develop in the 3D models; however, its saturation amplitudes are generally smaller than in 2D. By performing a tracer-particle analysis, we showed that the maximum residency time of material in the gain region is longer for 3D than for 2D due to non-axisymmetric flow motions, which is one of the advantageous aspects of 3D for obtaining neutrino-driven explosions.
Our results showed that convective matter motions below the gain radius become much more violent in 3D than in 2D, making the neutrino luminosity larger for 3D. Nevertheless, the emitted neutrino energies become smaller due to the enhanced cooling. Our results indicated that whether these advantages for driving 3D explosions can overwhelm the disadvantages is sensitive to the employed numerical resolution. An encouraging finding was that the shock expansion tends to become more energetic for models with finer resolutions. To draw a robust conclusion, 3D simulations with much higher numerical resolutions, and also with more advanced treatments of neutrino transport as well as of gravity, are needed. Finally, we comment on the approximations adopted in this paper. As already mentioned, the omission of heavy-lepton neutrinos, the omission of inelastic neutrino scattering, and the ray-by-ray approach should be improved upon. The former two should act to suppress the explosion. The ray-by-ray approach may lead to an overestimation of the directional dependence of the neutrino anisotropies (see discussions in Marek & Janka (2009)). Although it would be highly computationally expensive, full-angle transport will give us the correct answer (e.g., Ott et al. (2008); Brandt et al. (2011)). Our numerical grid in the azimuthal direction has only 32 zones to cover 360 degrees. Such a low resolution could lead to a large numerical viscosity. The numerical viscosity is expected to be large especially in the vicinity of the standing accretion shock, which may affect the growth of the SASI. It could also affect the growth of turbulence in the convectively active postshock regions, which is very important in determining the success or failure of the neutrino-driven mechanism. To clearly see these effects of numerical viscosity, we need to conduct a convergence test in which the numerical gridding is changed in a parametric way (e.g., Hanke et al. (2011)).
A number of exciting issues remain to be studied with our 3D models, such as gravitational-wave signatures (e.g., Kotake et al. (2009a, 2011); Müller et al. (2011)), neutrino emission and its detectability (e.g., Kistler et al. (2011)), and the possibility of 3D SASI flows generating pulsar kicks and spins (Wongwathanarat et al. 2010). The dependence on progenitors (e.g., Buras et al. (2006); Burrows et al. (2007b)) and on equations of state (e.g., Marek & Janka (2009)) also remains to be clarified in 3D computations. We are going to study these items one by one in the near future.
As of July 2011, the K supercomputer in Kobe, Japan, is ranked at the top of the "TOP500 list of the World's Supercomputers" 11 . From early next year, we are fortunately allowed to start using the facility for our 3D supernova simulations. Keeping up our efforts to overcome the caveats mentioned above, we plan to improve the numerical resolutions as much as possible in the forthcoming runs, by which we hope to gain new insight into the long-veiled explosion mechanism. | 2012-01-25T23:55:29.000Z | 2011-08-19T00:00:00.000 | {
"year": 2012,
"sha1": "8c2a4e1d12c3713fada136564dc054be23872580",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1108.3989",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8c2a4e1d12c3713fada136564dc054be23872580",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
149518275 | pes2o/s2orc | v3-fos-license | Latin America’s Decentred Economic Regionalism: From the FTAA to the Pacific Alliance
In this article, I examine Latin American regionalism from the collapse of the Free Trade Area of the Americas (FTAA) to the emergence and development of the Pacific Alliance (PA) in the period 2005 to 2015. For most of the research, I use the main economic blocs in the region, Mercosur as well as the PA, as the units of analysis. The main findings are that since the FTAA's collapse, integration processes have become more heterogeneous; that Mercosur and the PA contrast with one another in political-economic terms; that the Brazilian project of establishing a post-liberal/post-hegemonic regionalism in South America has not succeeded; and that regional demand for Brazilian products is at risk of shifting to other markets in the medium to long term, thus further undermining its aspirations towards regional leadership. All of this is evidence of a decentred economic regionalism – that is, a form of regionalism in which no single state is in central command, or has enough followers to assume leadership and establish a dominant conception of integration and regional cooperation. Other factors contributing to this decentralisation are the poor economic performance of Brazil and Mexico, and the US government's changed attitude towards trade relations with Latin America. Despite this, I argue that Latin American countries do need to strengthen cooperation within and among these regional blocs, aimed at promoting their joint global competitiveness. This will require cooperation rather than coercion, and networks and connectivity rather than hierarchies.
Introduction
Since the collapse of the Free Trade Area of the Americas (FTAA), Latin America 1 has undergone profound changes that have been reflected in the character of regionalism and regional integration. Relations with the USA have naturally played a central role in the politics and economics of the subcontinent, and the shift from an open regionalism in the 1990s to post-liberal/post-hegemonic regionalism in the 2000s provided Brazil with an attractive opportunity to assume a leadership role in South America.
Overall, 'regionalism fuels regionalism,' since, for the nations that fall outside the main processes, the costs of exclusion seem to increase. The new era on the continent provides us with a compelling opportunity to describe and analyse the political economy of contemporary Latin American regionalism, from the proposed hemispheric integration to the current regional heterogeneity.
I examine the heterogeneous Latin American regionalism in a mainly economic but also political sense, from the collapse of the FTAA to the emergence and initial development of the Pacific Alliance (Alianza del Pacífico), largely in the period from 2005 to 2015. For most of the research, I refer to the main economic blocs of the region, the Common Market of the South (Mercosur) and the contrasting PA, as the units of analysis. Although there are a number of other experiences of regional integration in Latin America, Mercosur and the PA together cover most of the region's economy, and have economic and trade interests as the main pillars of integration, albeit with different ambitions. Other organisations such as the Community of Latin American and Caribbean States (CELAC) and the Union of South American Nations (UNASUR) 2 have more diverse interests, seeking to serve as mechanisms for regional cooperation and political dialogue, including the resolution of conflicts among countries.
I argue that the heterogeneity of integration processes, the contrast in the political economies of various regional blocs, the unaccomplished Brazilian project of establishing a post-liberal/post-hegemonic regionalism, and the risk of a longer-term shift of regional demand for Brazilian goods and services to other markets are all evidence of a decentred economic regionalism – i.e., a form of regionalism without a single state assuming central control, or with enough followers to assume leadership and establish a predominant conception of integration and regional cooperation. There is a noticeable historical heterogeneity within the region in terms of the different models of external engagement, development strategies, and relations between states and markets. Should they be poorly managed, these differences might further hamper efforts to address the historical lack of economic regional integration. I argue that, despite this heterogeneity, the adverse international outlook for Latin American economies implies a greater need for cooperation and integration on the subcontinent and among the various regional blocs, aimed at enhancing the global competitiveness of Latin America as a whole.
I do not argue that the PA and the economic profiles of its member countries constitute models that other Latin American countries should ideally pursue, or that this kind of integration will make its members more successful. However, it is clear that the Mercosur model should allow its members to benefit individually from their national endowments as well as from pooled regional capacities. All Latin American countries should improve their internal productive capacities through long-term policies, utilising global integration as a means of expanding their competencies, know-how, and markets. This is a way of combining regionalism with the dynamics of globalisation, which has increasingly preoccupied scholars and decision-makers alike.
The article engages with the international political economy of regionalism as well as (largely economic) cooperation and integration in Latin America. 3 It is also relevant to the field of 'comparative regionalism,' an avenue of enquiry that has been consolidated in studies of International Political Economy (IPE), aimed at understanding the diverse range of regionalisms that have followed the 'new regionalism' of the 1990s and 2000s (Acharya 2012).
According to Söderbaum (2015: 23), comparative regionalism recognises regions as porous, overlapping and plural; identifies emerging dialogues among them; compares them to others; and acknowledges the emergence of non-Eurocentric concerns, which, in his view, is contextually more sensitive and conceptually less rigid. Besides comparing regions, comparative regionalism lends itself to studying regionalisms 'within' regions, thereby aiding an understanding of the complex and volatile Latin American political and economic spectrum.
This article is structured as follows. First, following some conceptual remarks, it examines the collapse of the FTAA (ALCA) as well as the - mainly Brazilian - attempt to establish a post-liberal/post-hegemonic regionalism in Latin America to supersede the project of hemispheric integration. Second, it examines the emergence of the PA, paying particular attention to the challenges surrounding the integration of the bloc and the model of open regionalism. Given that open regionalism is to some extent opposed to 'post-liberal' or 'post-hegemonic' regionalism, the comparison between Mercosur and the PA allows inferences about decentred economic regionalism in Latin America.
Third, the article examines the extent to which the political-economic models underpinning the PA and the economies of its member states contrast with those of Mercosur and its member states, defying Brazilian regional objectives in the process.
Fourth, it examines shifts in Brazilian export preferences by recording Brazilian exports to PA and Mercosur member states in three selected years (2005, 2010, 2015), compared to Chinese and US exports. The expansion of a country's trade relations with its neighbours plays a key role in establishing regional leadership and interdependence. This analysis shows that Brazil has failed to consolidate its role as primary exporter to other countries in the region, which has undermined its attempt to assume a leading regional role.
Fifth, it analyses important international trends, including the end of the commodity boom and the revival of protectionism, which generate a greater need for integration, even in a decentred form. Some final considerations follow.
The collapse of the FTAA and the emergence of post-liberal/post-hegemonic regionalism
I need to start with some conceptual remarks. I understand 'regional integration' as the process of lowering or eliminating barriers to the flow of capital, goods and services, people, and all other factors of production among countries. I understand 'bloc' as a group of states with common objectives, including regional integration. I understand 'regionalism' as a broader movement or phenomenon, involving, in the words of Soares de Lima (2013: 178), 'processes of cooperation in diverse areas, including the military, political, economic, energy, and technical fields, which reflect foreign policy priorities, including the geostrategic dimension.' It is also common to recognise 'waves' of regionalism, which helps to situate the formation of integration blocs in time. The first wave started in the wake of World War Two, continued through the 1970s, and spanned treaties and organisations until the early 1980s. In Latin America, this wave was characterised by the promotion of closed regionalism, notably the adoption of protectionist practices as a strategy for regional economic development (Devlin and Giordano 2004).
The second wave started in the late 1980s, when the Cold War was about to end, and globalisation accelerated. Regional integration gained further momentum, spurred by the search for improved international economic insertion. This wave was marked by the promotion of open regionalism, with regional liberalisation seen as a step towards multilateral liberalisation as well as inter-regional negotiations, even though this model was again broadly questioned in the early 2000s (Herz and Hoffman 2004).
Key moments in the second wave were the establishment of Mercosur in 1991, the adoption of the Treaty on European Union (Maastricht) in 1992, the establishment of a free trade area under the Association of Southeast Asian Nations (ASEAN), also in 1992, and the adoption of the North American Free Trade Agreement (NAFTA) in 1994.
In this context, the USA began to push for renewed regional integration. The government of George H W Bush (1989-1993) launched an 'Initiative for the Americas' that sought to promote integration, market economies, and political democracy. Commercial integration was instrumentalised through the Free Trade Area of the Americas (FTAA), as it was labelled by the Clinton government, which was aimed at gradually eliminating tariff barriers on the American continent as a whole, and possibly establishing the world's largest free trade zone, encompassing 34 countries.
With this goal in mind, the first meeting between American heads of state, the First Summit of the Americas, was held in Miami in 1994. The first major international meeting of heads of state in the post-Cold War period, its main objective was to promote negotiations leading to hemispheric integration. It was a good moment for advancing regional negotiations, as many Latin American leaders were now more economically orthodox and liberal than earlier ones. In the course of the negotiations, the USA proposed moving beyond liberalisation of the exchange of goods towards 'second-generation' issues such as services, intellectual property rights, and public purchases. Ambitious ideas were also raised about transnational regulation, including systematising and harmonising rules of origin.
In 2005 - the year in which the FTAA was meant to be established - the project failed at the Fourth Summit of the Americas in Mar del Plata, led jointly by the USA and Brazil. Differences in negotiating positions had already become apparent at the XVII meeting of the Committee for Commercial Negotiations (CNC) in Puebla in Mexico in 2004, exemplified by the Brazilian proposal for an 'FTAA lite.' Various reasons for terminating the negotiations emerged. Following September 11 and the declaration of the US-led 'War against Terror,' the Bush government was less interested in negotiating with Latin American countries, and less inclined to lock horns with the country's strong agricultural sector, which was reluctant to liberalise markets.
Conditions in South America also did not favour the negotiations, and key interests were opposed and even explicitly hostile to the project. New leftist leaders had risen to power in the region. More heterodox or nationalist than those in the 1990s, they sought to strengthen the economic role of the state rather than the market, and aspired towards increased intervention, both domestically and internationally, 'in search of a regional affirmation in the South American realm, and greater autonomy in relations with the United States' (Ayerbe 2008: 9). They included Hugo Chavez in Venezuela (1999); Lula da Silva in Brazil (2003), even though economic policies during his first term continued some of those of the previous Cardoso government; Nestor Kirchner in Argentina (also in 2003); and Evo Morales in Bolivia (2005).
Their governments promoted the principle of national sovereignty, which became even more evident with the use of the terms 'post-hegemonic regionalism' or 'post-liberal regionalism.' According to Riggirozzi and Tussie (2012), this type of regionalism works against the hemispheric integration spearheaded by the USA, which was viewed as 'neoliberal,' as well as the 'open regionalism' of the 1990s.
Post-hegemonic regionalism and the ascent of governments with a nationalist-progressive orientation are largely derived from the anti-globalisation movements at the end of the 1990s, following successive financial crises in developing countries such as Russia, Argentina, Brazil, and the Asian Tigers. There were many different international reactions to the supposedly diverse consequences of globalisation and the influence of the USA and IMF. One example was the 'Battle of Seattle' on 30 November 1999, when 40 000 demonstrators confronted leaders of the developed and industrialised world at the WTO Ministerial in the USA in protest against economic globalisation. As noted by Estevadeordal (2012: 23), '... some governments in Latin America, pressured by a public backlash against globalization, turned their backs on open trade policies.' As a result, Latin American regionalism became 'less focused on economic liberalisation, and more political in its orientation' (Nolte and Wehner 2013: 3).
In 2010, the Brazilian minister of foreign affairs, Celso Amorim, declared that the terms of the FTAA negotiations did not chime with Brazilian interests. This included favouring negotiations about services, government purchases, and foreign investment, rather than agricultural subsidies and anti-dumping measures (Amorim 2010). As a result, numerous Latin American countries came to prefer more autonomous processes of integration. Brazil opted for a multilateral strategy via cooperation with its neighbours, with South American integration a primary foreign policy objective (Garcia 2008). 4 Brazil's new-found aversion to hemispheric integration might also have been driven by its desire to become a regional leader. While, for political and diplomatic reasons, it did not say so openly, the Brazilian government believed the failure of the FTAA negotiations worked in favour of South American integration under Brazilian leadership. Other South American leaders also adopted a more hostile attitude towards the USA. Given this, efforts began to establish a Union of South American Nations (UNASUL).
As noted by Saraiva and Velasco Júnior (2016: 301), 'Lula's foreign policy prioritized a South American order under Brazilian leadership, where Brazil would assume the central responsibility for the integration and regionalization process.' Mercosur also began to extend its agenda beyond the purely commercial one of the 1990s, resulting in the establishment of the Fund for Structural Convergence (FOCEM) and the Mercosur Parliament (PARLASUL).
While there are objective reasons for Brazilian regional leadership, including its huge size and its status as the largest economy in Latin America, subjective elements related to the recognition and broad acceptance of its leadership are lacking (Malamud 2011). For example, Argentina and Colombia have not supported Brazil's efforts to gain a seat in the UN Security Council. Moreover, as Spektor (2010: 29) points out, other governments and people in the region do not necessarily believe it would be a 'friendly leader.' While Dilma Rousseff's government sought to maintain the institutions for regional governance formulated and/or consolidated by the Lula government (Saraiva and Velasco Júnior 2016: 301), it increasingly failed to do so. Faced with serious internal economic and political crises, as well as international conditions which were less favourable to Brazilian insertion, it became more engaged with domestic issues. Cervo and Lessa (2014) refer to this moment (2011-2014) as 'the decline of Brazil within international relations,' pointing to a growing inability to maintain external linkages and sustain international relations.
Given this, I argue that conditions did not favour Brazil's ascent to regional leadership in the context of more heterogeneous and autonomous regional integration, reaching beyond Mercosur. I also argue that the constitution and development of the PA express this heterogeneity, as well as the failure of Brazilian aspirations.
The emergence of the Pacific Alliance
The collapse of the FTAA seemed to reduce prospects for introducing and regulating continent-wide 'rules of origin' and for US ties in Latin America. However, the USA still managed to keep up or conclude trade agreements with more globalist and market-oriented Latin American countries, such as Costa Rica, Panama, Colombia, Peru and Chile.
Following the proliferation of trade agreements in Latin America, along with foreign policy shifts, it becomes possible to identify three types or models of regional integration (Nolte and Wehner 2013; Riggirozzi and Tussie 2012). The first is 'trade-driven,' and features agreements with a strong emphasis on commerce and investment. While this model does not imply deeper levels of political integration, it has the potential to develop into commercial multilateralism. Examples are NAFTA, the PA, and the failed FTAA.
The second is a 'hybrid' model, also regarded as 'state-driven' or 'state-led,' which combines commercial cooperation with greater state economic intervention, as well as goals stretching beyond trade. Examples are UNASUL (partly in crisis due to the voluntary suspension of some states), the Andean Community (CAN) (partly obsolete due to the exit of Venezuela in 2006 and the creation of the PA), the Central American Integration System (SICA), the Caribbean Community (Caricom), and Mercosur.
The third model emphasises social and political bonds among member states, involves considerable state economic intervention, and is driven by socialist, anti-imperialist, or anti-hegemonic ideas. The main example is the Bolivarian Alliance for the Peoples of Our America (ALBA), led by Venezuela, and created in explicit opposition to the FTAA (Nolte and Wehner 2013: 3; Riggirozzi and Tussie 2012: 11).
In principle, at least, South America has moved towards becoming a free trade area, most notably through the Economic Complementation Accords among member states of Mercosur and the Andean Community, negotiated under the Latin American Integration Association (ALADI). However, newer commercial issues, such as non-tariff barriers and trade in services, and possible 'trade and investment diversion' among subregional blocs (see Viner 1950), have increased the complexity of regional integration, and called into question whether simply reducing tariffs creates an authentic free trade area.
In the context of global economic shifts and new regionalisms, the PA has gained prominence as a renewed attempt to establish a commercial bloc. Based on a conjunction of objectives with a focus on market economics, and incorporating a large number of previous Free Trade Agreements (FTAs), it has attracted growing attention from markets, the media, and critics.
The initiative began in 2010 when the former Peruvian president Alan García Pérez invited the presidents of Chile, Colombia, Ecuador, and Panama to establish an Area for Deep Integration, encompassing political, economic and technical cooperation. Essentially, it was aimed at enabling Latin America to compete more effectively in the Asia-Pacific, one of the most economically and financially dynamic regions in the world (Ministry of Economy (Mexico) 2012). Ecuador did not respond to the invitation, emphasising its proximity to other blocs such as ALBA and Mercosur. Panama participated as an observer, with the idea that it would eventually become a full member. Chile and Colombia expressed their desire for Mexico, which shares a certain degree of economic and pro-market policy inclination with the three other countries, to become part of the initiative, and this was accepted by Peru (Ministry of Economy (Mexico) 2012).
In June 2012, the Alliance gained a legal personality when the presidents of Chile, Colombia, Mexico and Peru signed a Framework Agreement, supported by the Presidential Declaration of Paranal, in which they reaffirmed their intention to promote mutual trade and investment. A free trade and economic integration agreement was signed in 2014, and took effect in 2016.
The formation of the bloc has attracted significant international attention, including more than 50 observer states from five continents. It has also notched up some achievements in its relatively short life. These include the Latin American Integrated Market (Mercado Integrado Latino Americano, or MILA) of 2011, which integrates the stock markets of the four founding members. The Pacific Alliance Business Council provides business people with a platform for discussion and negotiation, alongside cooperation among export promotion agencies. Joint embassies and business offices have been established in third countries. However, further advances will depend on political, economic and institutional progress in the medium and long terms.
All four member countries have commercial agreements with the USA: an agreement with Mexico under NAFTA became effective in 1994, an agreement with Chile in 2004, with Peru in 2009, and with Colombia and Panama in May and December 2012. The USA officially supported the project.
In 2013, when the USA became an observer, the US Department of State lauded the initiative as an example for Latin American countries, stressing shared values such as a commitment to commercial liberalisation, and the extension of existing economic bonds among Chile, Peru and Mexico within the Trans-Pacific Partnership (TPP) negotiations (US Department of State 2013). In January 2017, however, President Donald Trump withdrew the USA from this mega-regional agreement, marking a turning point for the US government's stance on international trade.⁵
Consolidating the PA as a project for 'deep integration' is an enormous challenge, and there are no guarantees that this ambitious agenda will succeed. Earlier treaties among member states, and between these members and third countries, may limit the rate of growth of interregional trade. Furthermore, if productive integration is understood as one of the central characteristics of regional integration, this appears to be compromised by the lack of productive complementarity among its members. Chile, Colombia, and Peru largely export primary commodities and natural resource-intensive goods, while Mexico largely exports manufactured goods, and is inserted into North American value chains. Moreover, the geographic distance between Mexico and the three South American countries also functions as a natural obstacle to the development of joint production chains.
Mercosur versus the Pacific Alliance
The PA differs from Mercosur both in terms of member states and its objectives. Mercosur is commonly referred to as a 'commercial fortress,' with highly restricted access to international markets, given the limited number of external trade agreements. Members may not negotiate individual trade agreements, as the bloc is actually an (imperfect) customs union. Moreover, the decision 'Conselho do Mercado Comum' Nº 32/00 has reaffirmed that member states may only negotiate joint trade agreements with third countries.
By contrast, the PA is an example of 'open regionalism,' which, ironically, was one of Mercosur's founding principles in the 1990s. This means that member states have greater freedom to formulate external commercial policies, whether on a multilateral, preferential (regional or bilateral) or unilateral basis. At the same time, this degree of freedom may compromise economic and political integration, as well as joint international commitments.
Mercosur has been criticised on the grounds that it reflects a bygone era. Its regulations need to be amended in order to overcome the main obstacles to economic integration, which are more complex than barriers to the flow of goods. The Common External Tariff (CET) is complex and problematic, especially in respect of the double taxation of goods imported internally from other member states, as well as the extensive list of exceptions. Moreover, Mercosur lags behind other blocs in taking account of the newer aspects of global trade, like investments, intellectual property, trade in services, and governmental purchases, and when it does, levels of commitment by member states are relatively low (Pereira 2013; Thorstensen and Ferraz 2014).
According to Lia Pereira (2013: 5), the failure of the FTAA negotiations and the Argentinian crisis of 2001 go some way towards explaining why Mercosur has delayed addressing these aspects, which are 'present in trade agreements of the new generation.' She also highlights that agreements between Mercosur and other South American countries (Chile, Bolivia, Colombia, Ecuador, Peru and Venezuela) are restricted to trade in goods (Pereira 2013: 5). However, the newer aspects of trade have become even more important due to the progress of the digital economy and the fourth industrial revolution, which requires greater regulatory efficiency, and the need for trade in advanced services.
Some international economic indices corroborate the contrast between the two economic blocs. Table 1 compares the PA and Mercosur in terms of two indicators, namely ease of doing business and global competitiveness. PA members outperform Mercosur members on both indicators. Uruguay is Mercosur's best performer,⁶ and Venezuela its worst (in fact, in terms of ease of doing business, it is only ahead of Liberia, Eritrea and Somalia, which are affected by long-term conflicts and a loss of state control over their national territories). However, we need to note that, while protection can undermine competitiveness, radical liberalisation can also hinder deep integration with neighbouring countries, or even mask the failure of other domestic policies.
Regional integration and development are directly influenced by the macroeconomic policies of individual Latin American countries, as they tend to treat regional integration as a second or third priority in times of domestic convulsion. Table 2 lists annual GDP growth per capita and foreign trade as a percentage of GDP for the nine Mercosur and PA member states over the period from 2005 to 2015. It shows that there is no clear correlation between foreign trade (as % of GDP) and economic growth. For example, while Brazil was the second-worst performer in terms of income growth, and the most closed economy, Mexico was the worst performer in terms of income growth, while trade accounted for more than 60% of its GDP. However, Brazil's poor economic performance may be one of the reasons for its poor trade performance, and vice versa, and Mexico might have experienced specific problems surrounding its exports, notably in improving its positioning in regional and global value chains.
As regards trends, while Mexico's overall growth performance was the worst, it achieved 1.8% growth in the post-crisis period of 2010-2015, compared to close to zero in the first five years. Brazil, on the other hand, did better in 2005-2009 (2.5% average growth) than in 2010-2015 (1.2% average growth), largely due to the biggest economic crisis in its history. Countries tend to encourage or discourage regional integration in line with domestic conditions, or perceptions of what would favour domestic growth.
The low growth levels in Brazil and Mexico, in parallel with their political and institutional problems, may reduce their willingness or ability to effectively lead Latin American integration, even though they are the largest economies in Mercosur and the PA respectively. In fact, the major economies, which should be more mature, displayed lower rates of growth than the smaller economies. In Colombia, an exception to the general pattern, higher levels of institutional stability were rewarded by higher rates of economic growth.
Table 3 reflects GDP growth per capita for Mercosur and the PA in the decade from 2005 to 2015. It shows that average growth for both blocs over the whole period was practically the same. However, while 'Mercosur without Venezuela' grew more rapidly over the whole period than the PA, the PA grew more rapidly in the second period, while Mercosur, especially 'without Paraguay and Uruguay,' grew more slowly. This shows that PA members did better during and after the international financial crisis than Mercosur members, which contradicts the notion that they should have been more vulnerable as they were more exposed to global flows of trade. Overall, the tables show that the different approaches embedded in these two integration projects did not result in significantly different levels of per capita growth. International experience suggests that they should seek to improve levels of cooperation. Moreover, the convergence in performance between the PA and Mercosur may enhance opportunities for integration, even though political differences may remain.
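To make the averaging behind Table 3 concrete, the short sketch below reproduces this kind of bloc-level comparison. The series are stylised from the period averages quoted in the text (Brazil: 2.5% then 1.2%; Mexico: roughly zero then 1.8%) rather than the actual annual World Bank data, and all function and variable names are mine.

```python
# Illustrative only: unweighted bloc averages of per-capita GDP growth,
# in the spirit of Table 3. Stylised, not actual, annual series.
from statistics import mean

growth = {  # annual per-capita GDP growth (%), years 2005-2015
    "Brazil": [2.5] * 5 + [1.2] * 6,   # 2005-2009, then 2010-2015
    "Mexico": [0.0] * 5 + [1.8] * 6,
}

def bloc_average(members, series, start, end, base_year=2005):
    """Unweighted mean growth across members for the years start..end."""
    i, j = start - base_year, end - base_year + 1
    return mean(mean(series[c][i:j]) for c in members)

print(bloc_average(["Brazil", "Mexico"], growth, 2005, 2009))  # 1.25
print(bloc_average(["Brazil", "Mexico"], growth, 2010, 2015))  # 1.5
```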
Brazilian exports to Mercosur and the PA
Internationally, Latin American countries continue to compete against China, although less so than in the 2000s. According to Ray and Gallagher (2015: 2), in 2008-2013, 75% of the region's exports of manufactured goods faced threats from China, compared to 83% in 2003-2008. Nonetheless, we argued that, given Chinese trade and investment agreements with those countries, China's competition with regional suppliers of industrialised products, such as Brazil and Mexico, was set to intensify. However, Baumann (2013) has argued that the search for productive complementarities in Latin America may well help the region to become more competitive internationally, and achieve higher levels of economic growth.
Latin America has been an important market for Brazilian value-added goods, absorbing some 44% of total Brazilian exports of manufactured goods in 2017. But this is partly due to Mercosur's CET, trade preferences within ALADI, as well as Brazil's lack of competitiveness in other markets. Some analysts argue that the Mercosur CET has created a 'reserved market,' not least with regard to motor vehicles, and that high levels of regional trade in Brazilian goods do not necessarily reflect their international competitiveness. Moreover, even though Latin America may be Brazil's most important market, there is nothing that stops other countries outside Mercosur from concluding trade agreements with third countries.
Cheap Chinese products have already threatened Brazil's traditional regional market. Moreover, China may well use the agreements it has signed with Argentina since 2015 in various fields, including the telecommunications, agricultural and hydroelectric sectors, to push the latter country towards preferential trade, which would intensify competition with Brazilian exports.
If other Mercosur member states start importing more goods from China rather than from Brazil, this may also happen in other regions, where Brazilian goods are less protected. Table 4 reflects imports by Mercosur and PA member states from Brazil, China and the USA in three five-year periods. Even though 2015 was a year of crisis in Brazil, which impacted significantly on foreign trade, it is still possible to draw some meaningful conclusions.⁷ Mercosur and PA imports from China rose massively over the whole period, far more so than imports from Brazil. Imports from Brazil also flattened out significantly from 2010 to 2015, with all countries except Uruguay importing fewer Brazilian goods.
Mercosur and PA imports from the USA also rose significantly over the whole period, more so than imports from Brazil, and more so for the PA than for Mercosur. From 2010 to 2015, US exports to three Mercosur countries (Paraguay, Brazil and Venezuela) diminished, reflecting the economic crises in the two last-named countries, while exports to Argentina and Uruguay increased. As can be expected, US exports to all four PA countries grew significantly.
The most notable feature may be Brazil's trade with Argentina. While Argentina's imports from Brazil dropped massively from 2010 to 2015, its imports from China and the USA increased by 44% and 26% respectively.
The need for regional integration and decentred economic regionalism
Commodity markets are very volatile, which makes it difficult for countries that depend upon selling primary products to plan their economic futures. A drop in the prices of manufactured goods, possibly due to improved productivity and low production costs in East Asia, and an increase in the prices of primary goods due to the global commodity boom contributed to the trade surpluses of Mercosur and PA member states.
However, following the reasoning of Raúl Prebisch in respect of the 20th century, even when countries exporting primary products benefit from increases in international demand and therefore higher prices, the terms of trade of primary goods (whose prices would show a smaller marginal increase) will continue to deteriorate in the longer term, relative to those of manufactured goods (CEPAL 2017). Moreover, it could make these countries more dependent on primary exports. By contrast, promoting the integration of Latin American markets could promote industrialisation in the region, improving its insertion into the global economy as well.
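To state the mechanism explicitly: the net barter terms of trade are conventionally defined as ToT = (Px / Pm) × 100, where Px and Pm are the price indices of a country's exports and imports. The Prebisch thesis holds that, for primary exporters, Px tends to rise more slowly than Pm over the long run, so ToT trends downward even across commodity booms. This formalisation is the standard textbook definition, added here for clarity rather than drawn from the sources cited.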
In fact, as noted by Feenstra (1998), global trade is largely concentrated within relatively similar industrialised economies, and takes place through inter- and intra-industrial trade and exchanges of intermediate products. By contrast, specialising in the production of primary commodities without significant links to the industrial and services sectors tends to reduce a given country's ability to insert itself into regional and global value chains, which increases the costs of acquiring new knowledge, know-how, and production models. Put differently, the extraction of primary commodities tends to provide less 'production slicing' than sectors with a higher technological content.
Mercosur and the PA could reduce the region's exposure to exogenous price fluctuations by becoming more integrated. According to the WTO (2015), intra-regional trade in South and Central America amounts to about 25% of total trade, which is very low. In the regions where the bulk of value chains are concentrated, notably Europe (68.5%), Asia (52.3%), and North America (50.2%), levels of intra-regional trade are far higher.
Successful regional complexes tend to have lead countries that play a central role in the entry of intermediate goods from other members of the same integration scheme. This process is marked by feedback loops, which make it possible to reduce production costs and to benefit from reduced distances for commercial exchanges. In South America, however, these dynamics are still at an early stage. As we noted in a previous work (Viola and Lima 2017), the Mercosur integration model is 'introspective,' with little effort made towards encouraging productive and technological complementarity among member states in order to improve their global engagement, which would make trade integration more sustainable in the longer term. Mercosur also has few agreements and few productive connections with the rest of the world.
Following Trump's rise to the US presidency, and the new emphasis on economic nationalism and mercantilism, the USA is increasingly questioning its trade relations with Latin American countries, especially with Mexico under NAFTA. Even the Obama government no longer saw Latin America as a priority.⁸ Given this, it has become even more important for Latin American countries to cooperate with one another, as US investment and development assistance may diminish. Moreover, regional integration could play a vital role in promoting the growth of Latin American economies.
Following Buzan (2011), the contemporary international order is marked by a plurality of capitalisms, the end of the age of superpowers, a more regionalised order, and a dense interdependence, which he describes as 'decentred globalism.' He goes on to note that: '[A] world with only great powers is likely to take a more regionalized form; this might produce a quite workable, decentralized, coexistence of international society with some elements of cooperation' (Buzan 2011: 3).
He goes on to say that 'the social foundations for a regionalized order start from a strong anti-hegemonism […] expressed in widespread calls for a more multipolar international system' (2011: 16), and that 'tensions over hegemonic interference would decline if regions were, for better or worse, more in charge of their own affairs' (Buzan and Lawson 2014: 91). Thus, regionalism also becomes a strategy for global engagement, with powerful nations such as Brazil, Russia and China also identifying opportunities for seizing or reinforcing their roles as regional powers or leaders. Often, however, their smaller neighbours have been less enthusiastic about these aspirations.
By contrast, regional integration in Latin America is heterogeneous, with no single state leading or controlling the integration process, or persuading neighbouring states to accept its integration strategy. This contrasts with much of what Brazil wanted in the first decade of the 21st century. The new era of post-hegemonic or post-liberal regionalism emphasised the pursuit of autonomous regional integration without interference from outside powers, especially the USA. Given its economic growth in the early 2000s, and the collapse of the FTAA, Brazil saw an opportunity to establish itself as a regional leader, albeit in the context of post-hegemonic regionalism. The very emergence of the PA, whose members conclude free trade agreements with the USA, adopt more liberal political economic models than their Atlantic neighbours, and have partly reinvigorated open regionalism, suggests that Brazil lacks regional followers for this strategy.
There is, in fact, an anti-hegemonic sentiment in Latin America, manifested not only towards developed countries, but also towards more powerful countries in the region. That is, the asymmetries in Latin America, notably the status of Brazil and Mexico as the largest countries with the biggest economies and the most prominent members of the two regional integration schemes, have become obstacles to more effective regional integration. Their poor economic growth in the period under review and the improved growth of smaller countries have worked to empower the latter politically, and fostered a more democratic and decentralised regionalism.
As regards productive complementarity, which is still poorly explored and exploited in Latin America compared to East Asia, for example, 'core-periphery' relations become important. Unlike traditional patterns of power and coercion, such a relationship should be arranged around virtuous networks, flows and connections, with the bigger economies in the region fostering the performance of smaller ones, and with regional integration seen as a means of making its countries more globally competitive. In this perspective, commercial, productive and, above all, knowledge flows among nations become far more important than positive trade balances. They could even help countries to advance in terms of high technology and artificial intelligence, the sources of future economic growth (Ovanessoff and Abbosh 2017).
It is clear that Latin America exhibits a 'decentred regionalism,' due to the lack of a single nation with the desire, power, and followers to assume leadership and establish a predominant conception of regional integration. Despite this heterogeneity, there is a need for higher levels of cooperation among its regional blocs, aimed at enhancing the global competitiveness of the region as a whole. This requires cooperation rather than enforcement, and networks and connectivity rather than hierarchies.
Final considerations
Latin America is a remarkable example of how the concept of a region is not geographically determined, but shaped and reshaped by interactions among various regional actors, subject to international influences. The diverse regionalisms present in Latin America illustrate this vividly. Therefore, this article also serves as a historical record of the place of Latin America in a world in constant transformation, and at a particular moment of diversity.
Some economic logics of regionalism must be highlighted, although they are not often present in political logics. If trade agreements have the potential to 'divert' trade and investment from other countries and other regions, while not necessarily creating new trade flows, cooperation and even integration among regional blocs become important issues. It is clear that a lack of cooperation among developing countries such as those in Latin America makes them more vulnerable to and dependent on international trade dynamics.
Coordinating collaboration among countries in diverse and heterogeneous regions is not an easy task, especially when they harbour divergent ideas about their national and regional goals, and how to achieve them. However, it is even more important to govern regionalism for its developmental benefits than for its political ones. All integration initiatives face these challenges. When Trump speaks about the need to change international migration and labour regimes, he constantly speaks about NAFTA. Although he has claimed that NAFTA harms the USA, it is Mexico that remains at the lowest end of the North American value chain, with far-reaching social consequences. Following the 2007-2008 financial crisis and the subsequent Euro crisis, the European Union has had to deal with numerous challenges, including Brexit, debt renegotiations in peripheral European countries, and monetary integration without fiscal integration.
Political coordination is needed to address these challenges, and to reduce the endemic asymmetries among neighbouring countries. While centripetal forces have grown too, regional concerts are a good way to deal with globalisation, confronting its adverse effects and capturing its benefits, and to assist qualified regional insertion into a rapidly changing global economy.
Notes
1. 'Latin America' is taken to mean South and Central America plus Mexico. While this term is ideologically laden, and sometimes contested, its scope is embedded in the Pacific Alliance as well as in organisations such as CELAC, which dictates its use for practical reasons.
2. The suspension of some Member States and the lack of consensus in the organisation reinforce the evidence of uncoordinated regional heterogeneity.
3. This has been the subject of some notable books, including Riggirozzi and Tussie (2012) and Vivares (2018).
4. Besides the preference for South American integration, one of the reasons why Brazil did not support the advance and consolidation of the FTAA was its emphasis on multilateral negotiations through the Doha Round of the World Trade Organization, launched in 2001.
5. In April 2018, adding another chapter to the story, Republican senators stated that the USA might reconsider its stance on the TPP, which could be seen as a way of exerting pressure on China in negotiations over trade and technology policies.
6. It is clear from the indicators that Uruguay is the most open country in Mercosur. Interestingly, in 2007, Uruguay signed a Trade and Investment Framework Agreement with the USA, which could be a precursor to a free trade agreement, which would in turn conflict with the Mercosur customs union.
7. A three-year analysis is used to outline and evaluate Brazil's trade preferences, but a more detailed analysis of a broader historical series including other variables may be needed to corroborate it.
8. This was mainly due to the USA's 'Pivot to Asia' policy, a major initiative under which it sought to build closer relations with India and the countries of East Asia. This was aimed at helping to diversify US interests beyond Europe and the Middle East, and possibly at counterbalancing Chinese influence in Asia. On the other hand, the Obama administration promoted the historic US rapprochement with Cuba, even though this has been reviewed by the Trump government.

Decentred Economic Regionalism in Latin America: From the FTAA to the Pacific Alliance

Abstract: In this article, I investigate Latin American regionalism from the collapse of the FTAA project to the emergence of the Pacific Alliance, over the period from 2005 to 2015. For most of the research, I use the region's main economic blocs, Mercosur and the Pacific Alliance, as units of analysis. The main findings are that, since the collapse of the FTAA, integration processes have become more heterogeneous; that Mercosur and the PA contrast with each other in political-economic terms; that the Brazilian project of establishing a post-liberal/post-hegemonic regionalism in South America did not succeed; and that regional demand for Brazilian products risks being displaced towards other markets in the medium to long term, further weakening Brazil's aspirations to regional leadership. All of this points to decentred economic regionalism, that is, a form of regionalism in which no single state sits at its centre in command, or has enough followers to assume leadership and establish a predominant conception of regional integration and cooperation. Other factors contributing to this decentring are the poor economic performance of Brazil and Mexico, and the shift in the US government's stance on trade relations with Latin America. Despite this, I argue that Latin American countries need to strengthen cooperation within and between these regional blocs, focusing on the promotion of their joint global competitiveness. This requires more cooperation and democracy than coercion, and more networks and connectivity than hierarchies.
Figures 1 and 2 record the trade balances of PA and Mercosur member states from 2000 to 2015. Up to the financial crisis in 2007-2008, most countries showed healthy and growing trade surpluses, largely due to growing Chinese demand for primary products. From then onwards, trade balances declined, largely due to reduced demand for metals and minerals and lower prices for agricultural commodities as well as oil (World Bank 2017).
Figure 1: External balance of goods and services for PA member states, 2000-2015 (% of GDP)
Figure 2: External balance of goods and services for Mercosur member states, 2000-2015 (% of GDP)
Table 1: Ease of doing business and global competitiveness in the PA and Mercosur, 2016-2017. Source: compiled by the author, based on World Bank (2017) and World Economic Forum (2017).
Table 2: Average growth per capita and foreign trade as % of GDP in PA and Mercosur member states, 2005 to 2015. Source: compiled by the author, based on World Bank data. Data for Venezuela stretch up to 2014.
Table 3: GDP growth per capita in the PA and Mercosur, 2005-2015. Source: compiled by the author, based on World Bank data. Data for Venezuela stretch up to 2014.
Table 4: Imports by Mercosur and PA member states from Brazil, China and the USA in 2005, 2010 and 2015
Jean Santos Lima is a PhD candidate in International Relations at the University of Brasilia (IREL/UnB), where he has also lectured in Contemporary International Politics and Methods & Techniques of Research. He holds a CAPES doctoral scholarship. He has worked at the Institute of Applied Economic Research (IPEA), and at the Brazilian Agency for Industrial Development (ABDI) on a major EU technical cooperation project in Latin America, supporting the internationalisation of small and medium-sized Brazilian firms. His main research interests include development, international political economy, knowledge economy advancement, regional integration, and Latin American challenges in the middle-income trap.
Effect of Phosphorus, Zinc and Iron on Yield and Quality of Wheat in Western Rajasthan, India
Wheat [Triticum aestivum (L.)] is the second most important food grain crop in India, ranking next to rice (Oryza sativa L.) and contributing about 35% of the food grain production in India. India occupies the second position, next to China, in the world with regard to area (30.96 million hectares) and production (88.94 million tonnes), with an average wheat productivity of 28.72 q ha-1 (Anonymous, 2014-15). In India, the main wheat growing states are UP, Punjab, Haryana, M.P., Rajasthan and Bihar. In Rajasthan, wheat occupies an area of 2.94 million hectares with a production of 9.86 million tonnes. The average productivity of wheat in the state is 33.65 q ha-1 (Anonymous, 2014-15). This clearly indicates that, in spite of considerable improvement in the genetic potential of the crop, productivity is still very poor in the country as well as in the state of Rajasthan. The high
productivity of wheat can only be achieved by the adoption of suitable varieties and improved agronomic practices, with balanced and judicious use of chemical fertilizers in an integrated way.
Among the essential nutrients, phosphorus occupies a key place in intensive agriculture and is considered the backbone of any fertilizer management programme. Application of phosphorus not only increases crop yield but also improves crop quality and imparts resistance against diseases. It is involved in a wide range of plant processes, permitting cell division, development of a sound root system and ensuring timely and uniform ripening of the crop. It participates in metabolic activities as a constituent of nucleoproteins and nucleotides and also plays a key role in the formation of energy-rich phosphate bonds, as in adenosine diphosphate (ADP) and adenosine triphosphate (ATP). It plays a vital role in virtually every plant process, such as photosynthesis, energy storage and transfer, and stimulating root development and growth, giving the plant a rapid and vigorous start leading to better tillering in wheat, and encouraging earlier maturity and seed formation. Therefore, a sufficient quantity of soluble phosphorus fertilizer is applied to achieve maximum plant productivity. However, the applied soluble forms of phosphatic fertilizers rapidly become unavailable to plants by conversion into inorganic P fractions that are fixed by chemical adsorption and precipitation. Similarly, organic P fractions are immobilized in soil organic matter (Sanyal and De Dutta, 1991).
Micronutrients were first recognized as a limiting factor in crop production in the United States, in Florida, during the 1920s. Micronutrients play a vital role in enhancing crop productivity. Intensification of agriculture with high yielding varieties, continuous use of high analysis fertilizers, restricted supply of organic manures and negligible return of crop residues to the soil have led to micronutrient deficiencies. The overall deficiency of micronutrients in Indian soils was found to be 47 per cent for Zn, 2 per cent for Cu, 13 per cent for Fe and 4 per cent for Mn (Sakal and Singh, 2001). The present investigation was carried out to evaluate and describe the effect of fertilizer phosphorus, zinc and iron application on the growth and yield attributes of wheat in the loamy sand soils of western Rajasthan.
Materials and Methods
The experiment was conducted at the Agronomy Farm, College of Agriculture, Swami Keshwanand Rajasthan Agricultural University, Bikaner, during the rabi seasons of 2009-10 and 2010-11. The experimental site is located at 28.01°N latitude and 73.22°E longitude at an altitude of 234.7 m above mean sea level and falls under Agro-ecological Region No. 2 (M9E1) under the arid ecosystem (Hot Arid Eco-region), which is characterized by deep, sandy and coarse loamy desert soils with low water holding capacity and a hot and arid climate.
The field experiment on wheat in the rabi seasons of 2009-10 and 2010-11 was laid out comprising 4 levels of phosphorus (0, 20, 40 and 60 kg ha-1) and 3 levels of zinc (0, 3 and 6 kg ha-1) in main plots and 3 levels of iron (0, 3 and 6 kg ha-1) in sub plots. A total of 36 treatment combinations were tested in a split plot design with three replications. The treatment details are as follows:
Phosphorus levels
P0 = control, P1 = 20 kg P2O5 ha-1, P2 = 40 kg P2O5 ha-1 and P3 = 60 kg P2O5 ha-1
Zinc levels
Zn0 = control, Zn1 = 3 kg ha-1 and Zn2 = 6 kg ha-1
Iron levels
Fe0 = control, Fe1 = 3 kg ha-1 and Fe2 = 6 kg ha-1
Nitrogen was applied @ 120 kg N ha-1 as the recommended dose. Half of the dose was applied as basal through urea after adjusting the quantity of N supplied by DAP. The remaining half dose of N was applied by broadcasting urea in two equal splits just after irrigation at 25 and 75 DAS. Potassium was applied @ 20 kg K2O ha-1 through muriate of potash before sowing. Phosphorus was applied through DAP, zinc through zinc sulphate and iron through ferrous sulphate before sowing, as per treatment. Seeds were treated with thiram (2 g kg-1 seed) as a prophylactic measure against seed borne diseases. The wheat variety 'Raj-3077' was sown by the 'kera' method at a depth of 5 cm in rows spaced 22.5 cm apart on 25th and 28th November in the years 2009-10 and 2010-11, respectively, using a seed rate of 120 kg ha-1.
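As a cross-check on the factorial structure described above, the minimal sketch below enumerates the 36 treatment combinations; the treatment labels follow the notation above, while the helper code itself is illustrative rather than taken from the study.

```python
# Minimal sketch: enumerate the 4 x 3 main-plot (P x Zn) and 3 sub-plot (Fe)
# treatments of the split plot design; purely illustrative.
from itertools import product

P = {"P0": 0, "P1": 20, "P2": 40, "P3": 60}   # kg P2O5 per ha
Zn = {"Zn0": 0, "Zn1": 3, "Zn2": 6}           # kg Zn per ha
Fe = {"Fe0": 0, "Fe1": 3, "Fe2": 6}           # kg Fe per ha

main_plots = list(product(P, Zn))             # 4 x 3 = 12 main-plot treatments
treatments = list(product(main_plots, Fe))    # 12 x 3 = 36 combinations
assert len(treatments) == 36

for rep in range(1, 4):                       # three replications
    for (p, zn), fe in treatments:
        plot_id = f"R{rep}-{p}{zn}-{fe}"      # e.g. 'R1-P2Zn1-Fe0'
```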
The grain yield of each net plot was recorded in kg plot-1 after cleaning the threshed produce and was converted to kg ha-1. Straw yield was obtained by subtracting the grain yield (kg ha-1) from the biological yield (kg ha-1). The harvest index was calculated using the following formula and expressed as a percentage (Singh and Stoskoof, 1971):
Harvest index (%) = (Economic yield / Biological yield) × 100
Fresh leaves were collected at the flowering stage from each plot, washed twice with water and once with distilled water. Treatment-wise, a fresh leaf sample of 0.1 g was taken and ground in 80 per cent acetone, filtered through filter paper No. 42, and the volume was made up to 25 ml. The resultant colour intensity was measured in a UV-VIS spectrophotometer 118 (Systronics) at specific wavelengths (645 nm and 663 nm) to estimate the chlorophyll 'a' and chlorophyll 'b' contents (Arnon, 1949).
Chlorophyll 'a' content (mg g-1) = [(12.7 × A663) − (2.69 × A645)] × v / (a × 1000 × w)
Chlorophyll 'b' content (mg g-1) = [(22.9 × A645) − (4.68 × A663)] × v / (a × 1000 × w)
where a = length of light path in the cell (usually 1 cm), w = fresh weight of the leaf sample (g) and v = volume of the extract (ml).
Total chlorophyll content (mg g-1) = chlorophyll 'a' + chlorophyll 'b'
The protein content in grain was calculated by multiplying the per cent nitrogen content by a factor of 6.25 (A.O.A.C., 1970). The sugar content was determined by the method described by AOAC (1970). The crude fiber content was calculated using the following formula and expressed as a percentage, as described by AOAC (1970):
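A minimal sketch of these Arnon (1949) calculations, based on the reconstructed formulas above; the function and variable names are mine, and the absorbance readings shown are hypothetical.

```python
# Sketch of the chlorophyll calculations; illustrative inputs only.
def chlorophyll_content(a663, a645, v_ml=25.0, w_g=0.1, path_cm=1.0):
    """Return (chl a, chl b, total chlorophyll) in mg per g fresh weight."""
    chl_a = (12.7 * a663 - 2.69 * a645) * v_ml / (path_cm * 1000.0 * w_g)
    chl_b = (22.9 * a645 - 4.68 * a663) * v_ml / (path_cm * 1000.0 * w_g)
    return chl_a, chl_b, chl_a + chl_b

# Hypothetical absorbance readings at 663 nm and 645 nm:
print(chlorophyll_content(a663=0.48, a645=0.22))
```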
Crude fiber (%) = [(Weight of residue − Weight of ash) / Weight of sample taken] × 100
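The remaining derived quantities can be computed the same way; the sketch below implements the harvest index, protein and crude fiber formulas exactly as reconstructed above, with hypothetical example inputs.

```python
# Sketch of the yield/quality formulas given above; inputs are hypothetical.
def harvest_index(economic_yield, biological_yield):
    return economic_yield / biological_yield * 100       # per cent

def protein_content(nitrogen_pct):
    return nitrogen_pct * 6.25                           # per cent (AOAC 1970)

def crude_fiber(residue_g, ash_g, sample_g):
    return (residue_g - ash_g) / sample_g * 100          # per cent

print(harvest_index(4500, 11500))     # grain vs. biological yield, kg/ha
print(protein_content(1.9))           # 1.9% grain N -> 11.875% protein
print(crude_fiber(0.32, 0.05, 2.0))   # residue, ash and sample weights in g
```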
Effect of phosphorus
Application of phosphorus at 40 kg P2O5 ha-1 significantly increased the grain yield, straw yield and biological yield over the control during both years of experimentation and in the pooled analysis (Table 1). The significant increase in the grain yield of wheat due to the application of phosphorus up to 40 kg P2O5 ha-1 was largely a function of improved growth and the consequent increase in different yield attributes. The grain yield of wheat increased by 762 kg ha-1 due to the application of 40 kg P2O5 ha-1 over the control. Jain and Dahama (2006) and Jat et al. (2007) also recorded significant improvement in wheat grain yield with increasing phosphorus levels. The significant increase in straw yield due to the application of phosphorus could be attributed to the increased vegetative growth, as evident from dry matter production and CGR, possibly as a result of the effective uptake and utilization of nutrients absorbed through the extensive root system developed under phosphorus fertilization (Rathi and Singh, 1976).
The biological yield is a function of the grain and straw yields. Thus, the significant increase in biological yield with the application of phosphorus could be ascribed to the increased grain and straw yields. The faster rate of improvement in grain yield as compared to straw yield under phosphorus fertilization led to a significant improvement in biological yield, thereby suggesting a better source-sink relationship. These results are in conformity with those of Jat et al. (2007) and Sepat and Rai (2013). Table 2 reveals that application of phosphorus @ 40 kg P2O5 ha-1 significantly increased the chlorophyll content of wheat at the flowering stage during both years of the investigation and in the pooled analysis. This may be attributed to the increased N content in grain and its uptake by the crop, and to the role of P in energy conservation and transformation. Higher nitrogen content in grain due to P fertilization resulted in higher crude protein content, as nitrogen is an integral part of protein. Such an increase in protein content is due to the reduction of nitrates to ammonia by the activities of complex enzymes, resulting in the production of more amino acids, which are the main constituents of protein. These results corroborate the findings of Azad et al. (2010) and Pingoliya et al. (2015).
Effect of zinc
Application of zinc at 3 kg ha-1 significantly increased the yield of wheat over the control during both years (Table 1). The increase in yield due to zinc application may be attributed to the fact that the initial status of available zinc in the experimental soil was low. The increase in yield attributes may be due to the increased supply of available zinc to plants by way of its addition to the soil, which resulted in proper growth and development. The increase in the yield attributes might also be due to the role of zinc in the biosynthesis of indole acetic acid (IAA), and especially to its role in the initiation of primordia for reproductive parts and the partitioning of photosynthates towards them, which resulted in better flowering and fruiting. The significant increase in straw yield due to zinc fertilization could be attributed to the increased plant growth and biomass production, possibly as a result of the uptake of nutrients. Similar results were reported by Singh et al. (2015) and Arshad et al. (2016). Application of zinc @ 3 kg ha-1 significantly increased the chlorophyll content of wheat at the flowering stage during both years of the study as well as on a pooled basis. The experimental results also showed a significant increase in the protein content of wheat grain due to the application of zinc up to 3 kg ha-1. Application of zinc to the soil increased the availability of zinc in the rhizosphere. The role of zinc in increasing the metabolic and physiological activity of plants is of great significance, as it influences the activities of hydrogenase and carbonic anhydrase, the stabilization of ribosomal fractions and the synthesis of cytochrome (Tisdale et al., 1984). Similar results have also been reported by Shivay et al. (2014) and Paramesh et al. (2014).
Effect of iron
Application of 3 kg Fe ha-1 significantly increased the grain yield (Table 1) over the control but was found statistically at par with 6 kg Fe ha-1. The increase in grain yield may be attributed to the significant increase in the number of effective tillers per plant and the number of grains per ear. Further, the increase in grain yield due to iron application to the soil could possibly be due to the enhanced metabolism of carbohydrates and protein and their transport to the site of grain production. Since iron is a constituent of ferredoxin and the cytochromes, which are involved in photosynthesis, the increase in iron supply could result in enhanced synthesis of carbohydrates. Similarly, a significant increase in straw yield was recorded with the application of 3 kg Fe ha-1. This might be due to increased crop growth and development, viz. dry matter accumulation and yield attributes, under the better nutritional environment created by the application of iron. A significant increase in grain and straw yield due to iron application has also been reported by Habib (2009). The biological yield is a function of the grain and straw yields; thus, the increase in biological yield with the application of iron could be ascribed to the increased grain and straw yields. These results are in conformity with the findings of Gill and Walia (2014).
Application of iron @ 6 kg ha-1 significantly increased the chlorophyll content of wheat at the flowering stage during both years of the study as well as on a pooled basis. Data in Table 2 reveal that application of Fe @ 6 kg ha-1 significantly increased the mean chlorophyll content by 22.20 per cent and 5.77 per cent over the control and 3 kg Fe ha-1, respectively. Application of iron @ 3 kg ha-1 increased the protein content of wheat significantly during both years of experimentation as well as in the pooled analysis. However, the total sugar content and crude fiber content of grain were not affected significantly during both years of the study or in the pooled analysis. Iron might have helped in greater nitrogen uptake by the plant and its translocation to various plant parts, including grain. Since nitrogen is an essential constituent of protein, the increased nitrogen content led to a higher protein content. These results corroborate the findings of Pingoliya et al. (2015).
Monitoring specific antibody responses against the hydrophilic domain of the 23 kDa membrane protein of Schistosoma japonicum for early detection of infection in sentinel mice
Background: Schistosomiasis remains an important public health problem throughout tropical and subtropical countries. Humans are infected through contact with water contaminated with schistosome cercariae. Therefore, issuing early warnings on the risk of infection is an important preventive measure against schistosomiasis. Sentinel mice are used to monitor water body infestations, and identifying appropriate antibody responses to schistosome antigens for early detection of infection would help to improve the efficiency of this system. In this study we explored the potential of detecting antibodies to the hydrophilic domain (HD) of the 23-kDa membrane protein (Sj23HD) and soluble egg antigen (SEA) of Schistosoma japonicum for early detection of schistosome infection in sentinel mice. Results: The development of IgM and IgG antibody levels against Sj23HD and SEA in S. japonicum infected mice was evaluated over the course of 42 days post-infection by enzyme-linked immunosorbent assay (ELISA) and immunoblotting. The Sj23HD and SEA specific IgM and IgG levels in mice all increased gradually over the course of infection, but IgM and IgG antibodies against Sj23HD presented earlier than those against SEA. Furthermore, the rates of positive antibody responses against Sj23HD were higher than those against SEA in the early stage of schistosome infection, suggesting that the likelihood of detecting early infection using anti-Sj23HD responses would be higher than that with anti-SEA responses. The use of immunoblotting could further improve the early detection of schistosome infection due to its greater sensitivity and specificity compared to ELISA. Additionally, the levels of Sj23HD and SEA specific antibodies positively correlated with the load of the cercarial challenge and the duration of schistosome infection. Conclusions: This study demonstrated that antibody responses to the Sj23HD antigen could be monitored for early detection of schistosome infection in mice, especially by immunoblotting, which demonstrated greater sensitivity and specificity than ELISA for detecting Sj23HD antibodies.
Background
Schistosomiasis is an important tropical parasitic disease, with more than 200 million people currently infected among the 779 million people at risk of infection worldwide [1]. In the People's Republic of China (P.R. China), schistosome infection mainly occurs in the marshland and lake regions of Hunan, Hubei, Jiangxi, Anhui and Jiangsu provinces and in the hilly and mountainous regions of Sichuan and Yunnan provinces where the interruption of schistosomiasis transmission has been proven particularly difficult to achieve [2]. At present, approximately 65 million individuals are still at risk of infection in eastern Asia, including P.R. China [3,4], despite significant efforts to control the disease over the past 60 years [5].
Schistosome infections generally peak during the flood season (from May to October) along the Yangtze River, especially in its middle and lower reaches [6-8]. Humans become infected in P.R. China mainly through contact with water infested with Schistosoma japonicum cercariae [9]. Reducing the incidence of infection remains an ongoing aim for schistosomiasis control. Identifying infested water areas, issuing timely infection risk warnings and instituting intervention measures are helpful for preventing infection and controlling the prevalence of schistosomiasis. Traditionally, sentinel mice are used to monitor schistosome infested water bodies [10]. However, the maturation of schistosomes from schistosomula to adult worms takes approximately 22 days [11], and relying on counting the worm burden to determine infection in sentinel mice makes it difficult to provide early warnings to people on the risk of schistosome infection for any given infested water body.
Fortunately, schistosome antibodies (IgM or IgG) to schistosome antigens present in the serum of the host within 1-2 weeks post-infection [12] and, therefore, would be more efficient markers of infection than adult worms in sentinel mice. Although the presence of circulating schistosome antigen would better reflect the infection status of a host [13] than an antigen-specific antibody response, the efficiency of existing detection methods for circulating antigen is low, and they cannot be used for diagnosis of schistosomiasis [14,15]. A series of tests for detecting schistosome specific antibodies have been developed, such as the cercarien-huellen reaction (CHR), circumoval precipitin test (COPT), enzyme-linked immunosorbent assay (ELISA), indirect hemagglutination assay (IHA) and dipstick dye immunoassay (DDIA) [16]. These methods are commonly used for diagnosis and surveillance of schistosome infection, but none of them can be adequately used for early detection. As the protein expression profiles of the different developmental stages of the schistosome differ, antibodies against schistosomula antigens should present first in the sera of sentinel mice after infection and may be used as potential markers for early diagnosis of schistosome infection. It has been shown that the 23 kDa membrane protein of S. japonicum (Sj23) exists in all stages of the parasite except the egg and is notably detected in the lung stage [17]. Sj23 plays an important role in maintaining the growth and development of S. japonicum and is of interest as a potential vaccine candidate [18]. Therefore, detecting antibody responses to the Sj23 protein may be promising for early diagnosis of schistosome infection.
To develop a method for early detection of S. japonicum infection in sentinel mice, the dynamics of specific IgM and IgG antibody responses to the hydrophilic domain (HD) of the Sj23 membrane protein (Sj23HD) and soluble egg antigen (SEA) in mice over the course of 42 days post-infection were systematically investigated in this study. These antibody levels were correlated with the load of cercariae used for infection and with the infection period. The efficiencies of the ELISA and immunoblotting methods for detecting antibodies against Sj23HD and SEA were also compared.
Snails and cercariae
Snails (Oncomelania hupensis) infected with S. japonicum (the typical schistosome species found in China) were provided by the Department of Snail Biology, Jiangsu Institute of Parasitic Diseases, Wuxi, P.R. China. S. japonicum cercariae were induced to hatch from infected snails by immersion in de-chlorinated water with illumination at 25°C for 2.5 h.
Animals
ICR mice (female, 20 g, 6 weeks old) were purchased from the Experimental Animal Center of Yangzhou University, Yangzhou, P.R. China. Japanese white rabbits were provided by the Experimental Animal Facility of Nanjing General Hospital of Nanjing Military Command, Nanjing, P.R. China, and raised at the Department of Experimental Animal, Jiangsu Institute of Parasitic Diseases, Wuxi, P.R. China. The experiments were carried out with approval from the Animal Research Advisory Committee of the Jiangsu Institute of Parasitic Diseases.
Animal infection and serum collection
Experiment I
ICR mice were divided into a control group (n = 5), which received no treatment, and an experimental group (n = 10) in which each animal was infected with 50 cercariae of S. japonicum by abdominal skin exposure [19]. Mouse sera were collected from the tail veins on days 7, 10, 14, 18, 21, 28, 35 and 42 post-infection and stored at -20°C for subsequent analysis. All mice were sacrificed on day 42 post-infection. The adult worms in the mesenteric veins and egg granulomas in the livers were surveyed to confirm that schistosome infections of the mice were successful.
Experiment II
ICR mice were divided into nine groups (A to I; n = 5 per group). Group A served as the control without any treatment. Mice in groups B to I (experimental groups) were infected with 5, 10, 15, 20, 25, 30, 35 and 40 cercariae of S. japonicum, respectively, by abdominal skin exposure. Mouse sera were collected on days 21 and 28 post-infection and stored at -20°C for subsequent analysis. All mice were sacrificed on day 42 post-infection. Schistosome infections of mice were confirmed as described for Experiment I.
Preparation of antigens
The Sj23HD/pGEX-5X-1 plasmid expressing the recombinant fusion protein (GST-HD) of the HD of the 23 kDa membrane protein and glutathione S-transferase (GST) of S. japonicum was constructed by our laboratory and transformed into Escherichia coli BL21 (DE3). The GST-HD fusion proteins expressed by the transformed E. coli were purified using Glutathione Sepharose 4B based affinity chromatography following a previously described protocol [20]. S. japonicum SEA was prepared as described elsewhere [21]. Briefly, Japanese white rabbits were infected with 1,500 cercariae by abdominal skin exposure and dissected on day 45 post-infection. The rabbit livers were homogenized in cold saline (1.2% sodium chloride), and the S. japonicum eggs were collected by filtering the liver tissue homogenates through a 240-mesh nylon net. The liver tissues adhering to the egg surfaces were digested with trypsin. The eggs were then washed with cold PBS and manually homogenized using a glass homogenizer. The egg homogenates were centrifuged at 25,000 × g at 4°C for 20 min. The supernatants containing SEA were collected, and the protein concentrations were determined using a BCA kit (Pierce Biotechnology, Inc., Rockford, IL, USA) and stored at -80°C.
ELISA detection of S. japonicum antigen specific IgM and IgG antibodies in sera of infected mice
One hundred microliters of GST-HD or SEA proteins suspended in coating buffer (0.05 M sodium carbonate solution, pH 9.6) at a concentration of 10 μg/ml were added to each well of a microtiter plate and incubated at 4°C overnight. The wells were blocked for 1 h with 350 μl of PBS containing 0.05% Tween 20 (PBST) with 5% BSA at 37°C. After the plates were washed three times with PBST, 100 μl of serum diluted 1:100 in PBS containing 1% BSA were added to each well and incubated at 37°C for 1 h. After the plate was washed three times as above, HRP-labeled goat anti-mouse IgM secondary antibody diluted 1:3,000 or IgG secondary antibody diluted 1:10,000 was added to each well, and the plate was incubated at 37°C for 1 h. The plates were then washed, and the reaction was developed by adding 100 μl of TMB substrate for 5 min in the dark at room temperature. The color development was stopped by adding 100 μl/well of 2 M sulfuric acid (H2SO4), and the optical density (OD) in each well was measured at 450 nm using an ELISA plate reader. Sample wells with OD450 values above 2.1 times that of the negative control wells were judged positive.
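The positivity rule at the end of this protocol is easy to automate; the helper below is a sketch of that rule (OD450 above 2.1 times the negative-control value) with hypothetical readings, not code from the study.

```python
# Sketch of the stated ELISA cut-off: a sample is positive when its OD450
# exceeds 2.1 times the mean OD450 of the negative-control wells.
from statistics import mean

def elisa_positive(sample_ods, negative_ods, factor=2.1):
    cutoff = factor * mean(negative_ods)
    return [od > cutoff for od in sample_ods]

negative_controls = [0.08, 0.09, 0.10]              # hypothetical OD450 values
samples = [0.12, 0.25, 0.41]
print(elisa_positive(samples, negative_controls))   # [False, True, True]
```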
Detection of S. japonicum specific IgM and IgG antibodies in sera of infected mice by immunoblotting
Approximately 100 μg of recombinant GST-HD fusion proteins or SEA were loaded on a 12% SDS-polyacrylamide gel with a strip format comb with one reference well (1.0 mm thick, Bio-Rad) and transferred onto a nitrocellulose membrane under constant voltage (20 V) for 30 min using a Trans-Blot Semi-Dry Electrophoretic Transfer Cell. The nitrocellulose membrane was blocked with 5% skimmed milk powder in PBST at room temperature for 1 h or at 4°C overnight. The blocked membrane was then cut longitudinally into strips of 3 mm width each. The strips were incubated for 1 h at room temperature with mouse sera (diluted 1:500) collected at different times post-infection. After three washes with TBST (10 mM Tris, 150 mM NaCl, pH 7.6, 0.05% Tween-20) (10 min each time), the strips were incubated with HRP-conjugated goat anti-mouse IgG (diluted 1:10,000) or HRP-conjugated goat anti-mouse IgM (diluted 1:3,000) for 1 h at room temperature. After washing the strips 3 times in TBST, the color reaction was developed by incubating the strips with DAB substrate for 2 min at room temperature and then terminated by washing the strips with distilled water.
Statistical analysis of data
Values for antibody response levels were compared and analyzed with the SPSS software (Statistical Package for the Social Sciences), version 13.0 (SPSS Inc., Chicago, IL, USA). All data are expressed as means ± standard deviations.
Dynamics of Sj23HD and SEA specific antibody responses detected by ELISA
The Sj23HD- and SEA-specific IgM and IgG antibodies in the sera of mice were detected by ELISA and found to increase gradually over the course of infection. Overall, the levels of IgG against SEA were lower than those against Sj23HD (Figure 2). Although the immune responses of individual mice to the schistosome antigens varied considerably, the Sj23HD-specific IgM and IgG antibodies appeared earlier than those specific for SEA (Table 1). These results demonstrated that anti-Sj23HD responses could potentially be used for early detection of schistosome infection.
Dynamics of SEA and Sj23HD specific IgG antibodies detected by immunoblotting
The SEA and Sj23HD specific IgG in pooled sera of infected mice from day 0 to 42 were detected by immunoblotting. While no protein bands were recognized on day 0, the two SEA proteins of 73 and 78 kDa were
Dynamics of SEA and Sj23HD specific IgM antibodies detected by immunoblotting
The SEA or Sj23HD specific IgM in pooled sera of infected mice from day 0 to 42 were detected by immunoblotting. The mouse sera did not recognize any protein bands on day 0, while the two SEA proteins of 73 Table 2). The positive rates of Sj23HD specific antibodies at different times post-infection were higher than those of the SEA specific antibodies. Compared to the positive rates obtained by ELISA, the efficiency of detection by immunoblotting was significantly higher. (The immunoblotting profiles of serum reactivities against the SEA or Sj23HD protein bands of individual mice at different times post-infection are shown in Additional file 1 Figure S1; Additional file 2 Figure S2; Additional File 3 Figure S3 and Additional File 4 Figure S4.).
Correlation between the cercariae load used for infection and serologically positive rate of Sj23HD specific antibodies
To determine the relationship between antibody responses and the cercariae load used for infection, Sj23HD-specific antibodies in mice infected with different numbers of cercariae were detected on days 21 and 28 by immunoblotting. As shown in Table 3, the positive rates of Sj23HD-specific IgM and IgG on day 21 post-infection were 40% and 20% in mice infected with 5 cercariae (group B); 80% and 60% in mice infected with 10 cercariae (group C); both 80% in mice infected with 15 cercariae (group D); and 100% and 80% in mice infected with 20 cercariae (group E), respectively. At the same time, the positive rates of Sj23HD-specific IgG and IgM were 100% in all mice infected with more than 20 cercariae (groups F-I). On day 28 post-infection, the positive rates of Sj23HD-specific IgG and IgM were both 60% in mice infected with 5 cercariae (group B), and 100% and 80%, respectively, in mice infected with 10 cercariae (group C). Meanwhile, all groups infected with more than 15 cercariae (groups D-I) were 100% positive on day 28 for both Sj23HD-specific IgG and IgM. These results indicated that the frequency of animals developing antibodies against Sj23HD was positively associated with the load of cercariae used for the challenge.
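The dose-response pattern described above can be summarized with a rank correlation. A hedged Python sketch using the day-21 positive rates quoted in the text for groups B-E; treating the group-level percentages as paired observations is an illustrative simplification, not the study's analysis:

```python
# Hedged sketch: quantifying the dose-response trend reported above with a
# Spearman rank correlation (scipy). The day-21 positive rates for groups
# B-E are taken from the text; their arrangement into lists is illustrative.
from scipy.stats import spearmanr

cercariae_load = [5, 10, 15, 20]        # groups B-E
igm_positive_rate = [40, 80, 80, 100]   # % positive for Sj23HD IgM, day 21
igg_positive_rate = [20, 60, 80, 80]    # % positive for Sj23HD IgG, day 21

for name, rates in [("IgM", igm_positive_rate), ("IgG", igg_positive_rate)]:
    rho, p = spearmanr(cercariae_load, rates)
    print(f"{name}: Spearman rho = {rho:.2f}, p = {p:.3f}")
```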
Discussion
The current emphasis of schistosomiasis management in P.R. China is to survey areas where transmission has been interrupted and to control schistosomiasis epidemics in areas where the prevalence of this parasitic infection has not been effectively controlled. Identifying susceptible, high-risk environments for schistosome infection, rapidly issuing warnings of infection risk, and taking more targeted prevention and intervention measures to reduce the incidence of infection are key goals for effective schistosomiasis control [22]. Therefore, developing an early diagnostic technique that meets the requirements of the current schistosomiasis prevention and control program is paramount [23,24]. The rapid development of modern genetic engineering techniques [25] makes it possible to isolate and produce specific antigens for the detection of schistosome infections, and recombinant antigens can be used to improve the sensitivity and specificity of diagnostic methods. The recombinant Sj23 membrane protein has been shown to induce strong humoral immune responses and thus can be used as a diagnostic antigen to detect schistosome-specific antibody responses [26,27]. Lu et al. used the recombinant large HD of Sj23 to diagnose S. japonicum infection in buffalo [28]. Yu et al. used the recombinant fusion protein of GST and the large HD of Sj23 (GST-HD) to diagnose S. japonicum infection in humans [21]. The Sj23 membrane protein is present in all stages of the schistosome life cycle [17]; therefore, Sj23-specific antibodies should theoretically appear in the early stage of schistosome infection and would be valuable for early clinical diagnosis, as well as for determining the infection status of sentinel mice.
In this study, the utility of Sj23HD and SEA, a widely used diagnostic antigen, was compared for early detection of schistosome infection in sentinel mice. The recombinant fusion protein GST-Sj23HD and SEA were used to detect specific serum IgM and IgG antibodies in S. japonicum-infected mice at different times post-infection by ELISA and immunoblotting. The results showed that both anti-SEA and anti-Sj23HD IgM and IgG could be detected at days 7 to 10 post-infection, and the antibody titers increased gradually over the course of infection. However, the titers of Sj23HD-specific antibodies increased more quickly than those of the SEA-specific antibodies. The positive rates of Sj23HD-specific antibodies at different times post-infection were higher than those of the antibodies against SEA. At day 21 post-infection, the positive rates of Sj23HD-specific IgG and IgM reached 80% and 90%, respectively, whereas the positive rates of both SEA-specific IgG and IgM reached only 70%. These results suggested that antibody responses against Sj23HD would be more valuable for early diagnosis of schistosome infection than those against SEA, which may be attributed to the fact that Sj23 is a dominant schistosomula antigen. We speculate that there are epitopes within SEA which can induce specific antibodies in the early stage of infection in mice; however, since these are non-dominant antigens in schistosomula, they are unable to induce high-titer antibodies in the early stage of schistosome infection, resulting in only weak immunoreactivity and a low positive rate of detection.
The specificity of an antibody detection method depends on the nature of the antigen used to capture the antibody. If the non-specific epitopes in the antigen are dominant, it would result in non-specific reactivity. In this study, the recombinant antigen Sj23HD was prepared from E. coli, and SEA was prepared from infected rabbit liver tissues, which could be contaminated with non-specific proteins. Thus, these pooled antigens used to detect antibodies by ELISA may result in cross-reactive or false positive responses. However, the immunoblotting assay can separate various components of the pooled antigens, and the specificity of a reaction can be evaluated based upon whether the serum antibodies recognize the protein band(s) of expected size. When using the purified GST-HD fusion protein, only the expected 33.5 kDa protein recognized by sera of mice post-infection could be considered as a positive response. For SEA, five main bands of 55, 73, 78, 84 and 121 kDa recognized by sera of mice post-infection could be judged as positive, especially the 73 and 78 kDa protein bands, which are present in nearly every stage of schistosome infection. Thus, the ability of the immunoblotting method to separate the antigens by size can help to avoid or reduce false positives due to non-specific protein reactions. For these reasons, the specificity of immunoblotting was determined to be higher than that of ELISA for detection of schistosome specific antibodies.
The results of this study also showed that the sensitivity of immunoblotting for detection of specific IgM and IgG was higher than that of ELISA, especially for detecting serum antibodies of mice in the early stage of infection. The lower detection efficiency of ELISA may be ascribed to the low titers of specific antibodies against schistosome antigens in the early stage of infection, as well as the low antigen amounts coated on the wells of the microtiter plates. In contrast, the amount of antigen transferred to the nitrocellulose membrane can be adjusted according to the requirements of the immunoblotting conditions. For example, appropriately increasing the amount of antigen could be helpful in detecting low-titer schistosome-specific serum antibodies. Therefore, the more efficient immunoblotting method would be more suitable than ELISA for early detection of schistosome infections.
Antibody levels are generally positively correlated with the amount and duration of antigen stimulation [29]. Indeed, the anti-Sj23 antibody levels in serum of schistosomiasis patients have been positively associated with the schistosome adult burden and infection duration [30][31][32][33]. Similarly, the results of this study also showed that the positive rates of anti-Sj23HD antibodies in mouse sera positively correlated with the load of S. japonicum cercariae used for the challenge and the duration of infection.
Conclusions
This study demonstrated that the antibody responses to the 23 kDa membrane protein of S. japonicum are useful for early detection of schistosome infection in mice. Compared to the ELISA based method, immunoblotting using the recombinant Sj23HD antigen has improved sensitivity and specificity for detecting specific IgM or IgG antibodies. | 2014-10-01T00:00:00.000Z | 2011-09-10T00:00:00.000 | {
"year": 2011,
"sha1": "53d508a900e0074467c19094cb7f3dfaaeb0d5c2",
"oa_license": "CCBY",
"oa_url": "https://parasitesandvectors.biomedcentral.com/track/pdf/10.1186/1756-3305-4-172",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "53d508a900e0074467c19094cb7f3dfaaeb0d5c2",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
253498397 | pes2o/s2orc | v3-fos-license | Matrix metalloproteinases and their gene polymorphism in young ST-segment elevation myocardial infarction
Background Genetic polymorphisms in MMPs are associated with multiple adverse CV events. There is little evidence regarding the role of MMPs and their genetic polymorphisms in young (<50 years) ST-segment elevation myocardial infarction (STEMI) patients. Methods This study included 100 young (18–50 years) STEMI patients and 100 healthy controls. Serum levels of MMP-3, MMP-9 and TIMP were estimated for both patients and controls. Additionally, genetic polymorphisms in the MMP-9 gene (−1562 C/T and R279Q) and the MMP-3 gene (5A/6A −1612) were evaluated. All patients were followed up for one year and major adverse cardiac events (MACE) were determined. Results Serum levels of MMP-3 (128.16 ± 115.81 vs 102.3 ± 57.28 ng/mL; P = 0.04), MMP-9 (469.63 ± 238.4 vs 188.88 ± 94.08 pg/mL; P < 0.0001) and TIMP (5.84 ± 1.93 vs 2.28 ± 1.42 ng/mL; P < 0.0001) were significantly higher in patients than in controls. Additionally, patients with the 5A/5A, 6A/6A and AG genotypes of the MMP genes had an increased risk of STEMI. Patients with MACE had significantly higher levels of MMP-9 (581.73 ± 260.93 vs 438.01 ± 223.38 pg/mL; P = 0.012). A cutoff value of 375.5 pg/mL for MMP-9 best discriminated STEMI patients with MACE, with a sensitivity of 77.3% and a specificity of 57%. Conclusion Novel biomarkers such as MMP-3, MMP-9 and TIMP and their genetic polymorphisms are associated with susceptibility to STEMI in young individuals. Higher MMP-9 levels in STEMI patients with MACE suggest a potential role in predicting cardiac remodeling and left ventricular dysfunction.
Introduction
The burden of atherosclerotic cardiovascular disease (ASCVD) continues to be significant in low- and middle-income countries. The prevalence of acute coronary syndrome (ACS) in developing countries has been on the rise, especially among individuals <50 years of age. 1 A majority of these ACS events occur due to the rupture of an atheromatous plaque. Vulnerable plaques, which are prone to rupture, are characterized by the presence of a lipid-rich necrotic core, a thin fibrous cap covering the core, inflammatory cell infiltrates and reduced collagen content in the fibrous cap. 2 Matrix metalloproteinases (MMPs), which belong to the family of zinc-dependent proteases, are a group of proteolytic enzymes. They are produced by the inflammatory cells in the atheromatous plaque, leading to degradation of the extracellular matrix, weakening of the cap and its subsequent rupture. MMPs also enable the easy migration of inflammatory cells across tissues, thereby increasing the risk for the development of atheromatous plaques. 3 Two important members of this family are MMP-3 and MMP-9, both of which play an important role in plaque formation and smooth muscle cell migration and proliferation. 3 The activity of MMPs is tightly controlled by the tissue inhibitors of metalloproteinases (TIMPs). TIMPs regulate connective tissue metabolism through the formation of irreversible complexes with MMPs, rendering them inactive. In the inflammatory milieu within an atheromatous plaque, an imbalance occurs between MMPs and TIMPs, destabilizing the plaque and leaving it prone to rupture. 4 Coronary artery disease (CAD) is polygenic in nature, with multiple genes linked to its occurrence. Recent evidence has highlighted that MMP gene polymorphisms are associated with adverse cardiovascular (CV) events. 5 Levels of different MMPs are affected at the transcription level by various genetic polymorphisms. Studies have reported that MMP-3 and MMP-9 gene polymorphisms are associated with CAD and stroke events. 6,7 However, there is little evidence regarding the role of MMPs and their genetic polymorphisms in the occurrence of STEMI in young (<50 years) individuals. This study aimed to determine the levels of novel biomarkers such as MMP-3, MMP-9 and TIMP in young patients with STEMI within three days of symptom onset. Additionally, the distributions of the respective gene polymorphisms in the MMP-9 gene (−1562 C/T and R279Q) and the MMP-3 gene (5A/6A −1612) were evaluated.
Study design
This was a prospective, single-center case–control study at a tertiary care center in Delhi, India over a one-year period. A total of 100 young patients (18–50 years of age) with ST-segment elevation myocardial infarction (STEMI) presenting within 3 days of symptom onset were enrolled. Patients aged <18 years and those with ST-segment elevation due to non-ischemic causes (myocarditis, pericarditis), chronic kidney/liver disease, malignancy, or acute as well as chronic inflammatory conditions were excluded. Age- and gender-matched healthy controls were also included in this study. All patients underwent a detailed clinical evaluation, routine blood investigations, electrocardiography and 2D echocardiography following written informed consent. Five milliliters of peripheral venous blood were collected at the time of the index event: one ml in an EDTA vacutainer for DNA isolation, while the remaining volume in a plain vacutainer was used for serum separation for biomarker analysis.
Biomarker analysis
Serum was separated following clot retraction and centrifugation at 3000 rpm for 10 min. The serum was aliquoted and stored in a deep freezer at −20 °C for batch analysis of the ELISA-based tests. Analysis of MMP-3, MMP-9 and TIMP levels was done on fully automated analysers based on the principle of chemiluminescence. The concentrations of MMP-3, MMP-9 and TIMP in the samples were then determined by comparing the optical density of the samples to the standard curve.
Genomic analysis
DNA was extracted from peripheral lymphocytes using a commercially available nucleic acid isolation kit (QIAamp DNA Mini and Blood Kit; Qiagen, Chatsworth, CA, USA) and stored at −20 °C. The single nucleotide polymorphisms (SNPs) R279Q and −1562 C/T of the MMP-9 gene and 5A/6A −1612 of the MMP-3 gene were amplified by PCR using specific forward and reverse primers as shown in Supplementary Table 1. The PCR-amplified products for each genotype (277 bp, 436 bp and 130 bp, respectively) were genotyped by restriction fragment length polymorphism (RFLP) analysis using the restriction enzymes SmaI, SphI and PsyI, respectively. The band patterns observed after RFLP for the individual genotypes were separated by gel electrophoresis in 2% agarose gel and visualized in a gel documentation system (Fig. 1A, B and C). For the R279Q variant of the MMP-9 gene, homozygous AA showed only a single 277-bp band, homozygous GG showed two bands (96 bp and 181 bp), and heterozygous AG showed all three bands (277 bp, 181 bp and 96 bp). For the −1562 C/T variant of the MMP-9 gene, homozygous TT showed 242-bp and 194-bp bands, homozygous CC showed a single 436-bp band, and heterozygous CT showed all three bands (194 bp, 242 bp and 436 bp). For MMP-3 5A/6A −1612, homozygous 6A/6A showed a single 130-bp band, homozygous 5A/5A a single 110-bp band, and heterozygous 5A/6A both bands (130 bp and 110 bp). Hardy–Weinberg equilibrium was violated for the MMP-3 genotype in cases, and for the MMP-3 and MMP-9 R279Q genotypes in controls.
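The Hardy–Weinberg check mentioned above is a one-degree-of-freedom chi-square test of observed genotype counts against expectations from the allele frequencies. A minimal Python sketch, using the MMP-3 case genotype percentages from the Results as counts out of 100 (scipy assumed available):

```python
# Hedged sketch of the Hardy-Weinberg equilibrium (HWE) check: a chi-square
# test comparing observed genotype counts against counts expected from the
# allele frequencies. Counts below are the MMP-3 case percentages read as
# counts (n = 100), not the study's raw genotyping data.
from scipy.stats import chi2

def hwe_chi_square(n_aa, n_ab, n_bb):
    """Chi-square HWE test for a biallelic marker; returns (chi2, p)."""
    n = n_aa + n_ab + n_bb
    p_a = (2 * n_aa + n_ab) / (2 * n)      # allele A frequency
    p_b = 1 - p_a
    expected = [n * p_a ** 2, 2 * n * p_a * p_b, n * p_b ** 2]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

stat, p = hwe_chi_square(17, 5, 78)  # MMP-3 5A/5A, 5A/6A, 6A/6A in cases
print(f"chi2 = {stat:.1f}, p = {p:.3g}")  # very small p -> HWE violated
```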
Follow-up
All enrolled patients were followed up for a period of one year, both telephonically and in person, to determine major adverse cardiovascular events (MACE), defined as a composite of total death, myocardial infarction (MI), stroke and hospitalization due to heart failure.
Statistical analysis
Continuous data were expressed as mean ± standard deviation (SD) and categorical data were presented as proportions. Normality of distribution of continuous variables was assessed using the Kolmogorov–Smirnov test. Means of continuous variables were compared using Student's t-test or the Mann–Whitney U test as appropriate, while Fisher's exact test or the χ² test was used for categorical variables. Univariate and multivariate logistic regression analyses were performed to determine the independent predictors of STEMI in these patients. The diagnostic sensitivity and specificity of MMP-9 in predicting MACE were calculated from the receiver operating characteristic (ROC) curve. A Kaplan–Meier (KM) curve was plotted for survival analysis. A two-sided P value of <0.05 was considered statistically significant. SPSS version 24.0 (IBM Corp, Armonk, NY) was used for statistical analysis.
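For the ROC step, a common way to pick the "best" cutoff is to maximize Youden's J (sensitivity + specificity − 1). A hedged Python sketch with scikit-learn, using normally distributed values simulated from the group means and SDs reported in the follow-up results rather than the actual patient data:

```python
# Hedged sketch of the ROC analysis: choosing the MMP-9 cutoff that best
# separates MACE from non-MACE patients by maximizing Youden's J.
# `mmp9_levels` and `mace_labels` are simulated placeholders built from the
# reported group means/SDs (438 +/- 223 vs 582 +/- 261 pg/mL; 22 MACE of 100).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
mace_labels = np.array([0] * 78 + [1] * 22)
mmp9_levels = np.concatenate([rng.normal(438, 223, 78),   # no-MACE group
                              rng.normal(582, 261, 22)])  # MACE group

fpr, tpr, thresholds = roc_curve(mace_labels, mmp9_levels)
best = np.argmax(tpr - fpr)                               # Youden's J
print(f"AUC = {roc_auc_score(mace_labels, mmp9_levels):.3f}")
print(f"cutoff = {thresholds[best]:.1f} pg/mL, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```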
Results
A total of 100 patients with STEMI and 100 age- and gender-matched controls were enrolled in the study. The mean age of the study population was 38.3 ± 6.6 years while that of the control group was 37.6 ± 6.3 years (P = 0.444). The majority of the enrolled subjects were males (93%), with co-morbidities such as hypertension (16%) and diabetes (13%). Among the predisposing risk factors for CAD, tobacco use in the form of smoking was significantly more common in the STEMI group than in controls (70% vs 48.2%; P = 0.002). Patients with STEMI had significantly higher levels of total cholesterol, LDL-C and triglycerides, and significantly lower levels of HDL-C, than healthy controls. The demographic features of the enrolled subjects are listed in Table 1. The MMP-3 5A/5A and 6A/6A genotypes were significantly more frequent in young STEMI patients than in controls (5A/5A: 17% vs 10%; 6A/6A: 78% vs 71%; P = 0.04). The proportion of subjects with the MMP-3 5A/6A genotype was significantly lower in young STEMI patients than in controls (5A/6A: 5% vs 19%; P = 0.006). The MMP-9 R279Q AG genotype was the predominant genotype among enrolled STEMI patients compared with controls (AG: 42% vs 22%; P = 0.002), while the MMP-9 R279Q AA and GG genotypes were less frequent in patients than in controls (AA: 36% vs 46%; P = 0.151; GG: 22% vs 32%; P = 0.111). The MMP-9 −1562 C/T genotypes were comparable between patients and controls (CC: 66% vs 60%, P = 0.380; CT: 33% vs 37%, P = 0.553; TT: 1% vs 3%, P = 0.621) (Supplementary Table 2).
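As a worked example of the case-control genotype comparison, the 5A/6A contrast can be reproduced as a chi-square test on a 2×2 table; since each arm has n = 100, the reported percentages double as counts. A minimal sketch (scipy assumed):

```python
# Hedged sketch of the case-control genotype comparison above: a chi-square
# test on a 2x2 contingency table built from the reported MMP-3 5A/6A
# frequencies (5/100 patients vs 19/100 controls).
from scipy.stats import chi2_contingency

table = [[5, 95],    # STEMI patients: 5A/6A carriers vs others
         [19, 81]]   # controls:       5A/6A carriers vs others
stat, p, dof, expected = chi2_contingency(table)
# p comes out well below 0.05, consistent with the reported P = 0.006
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```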
Follow-up
Over the one-year follow-up period, there were 22 MACE (Supplementary Fig. 1), the majority of which were heart failure hospitalizations (n = 18). The mean duration of follow-up was 1.06 ± 0.12 years. STEMI patients with MACE had significantly higher levels of MMP-9 than those without MACE (581.73 ± 260.93 vs 438.01 ± 223.38 pg/mL; P = 0.012). However, there was no significant difference in TIMP (5.65 ± 1.62 vs 5.89 ± 2 ng/mL; P = 0.60) or MMP-3 (122.09 ± 131.15 vs 129.87 ± 111.97 ng/mL; P = 0.78) levels between subjects with and without MACE. ROC curve analysis revealed that a cutoff value of 375.5 pg/mL for MMP-9 best discriminated STEMI patients with MACE from those without, with a sensitivity of 77.3%, a specificity of 57% and an AUC of 0.693 (Fig. 2).
Discussion
The present study showed that young patients with STEMI had significantly higher levels of MMP-3, MMP-9 and TIMP than healthy controls. Our findings also revealed that the MMP genetic polymorphisms, i.e. the 5A/5A, 6A/6A and AG genotypes, were more common in STEMI patients than in controls. Additionally, subjects with MACE, especially heart failure hospitalizations, had significantly higher levels of MMP-9, suggesting a role for MMP-9 in cardiac remodelling and its impact on left ventricular (LV) function.
One of the key pathophysiological mechanisms of ACS is the activation of major MMPs such as MMP-3 and MMP-9. Serum levels of MMPs reflect the vulnerability of the atheromatous plaque to rupture. 3 In our study, serum levels of MMP-3 and MMP-9 were significantly higher in patients with STEMI than in controls. Similar findings were reported in smaller observational studies evaluating the role of MMPs in ACS. 6,8 MMPs such as MMP-9 play an important role not only in the development of ACS but also in the healing process after acute MI. MMP-9 has been implicated in LV remodelling post-acute MI. 9 Studies have shown a correlation between serum MMP-9 levels and echocardiographic parameters of LV dysfunction post MI. 9 In a study among 75 patients with ACS, MMP-9 levels were significantly higher among those with poor disease outcomes in terms of recurrent ischemic attacks, heart failure, or death. 10 Similarly, in the biomarker sub-study of the VIP trial among 225 patients with ACS, MMP-9 levels were the most powerful predictor of MACE. 11 In our study too, MMP-9 levels were an important predictor of MACE, with a cut-off value of 375.5 pg/mL established to differentiate patients with or without MACE. Serum MMP-9 levels can thereby be considered one of the potential biomarkers of poor outcomes in STEMI.
In terms of genetic polymorphism in the MMP-3 gene, in our study the 5A/5A and 6A/6A genotypes were significantly more frequent in subjects with STEMI than in controls. The high-activity 5A allele has been shown to be associated with an increased predisposition to plaque rupture among Chinese 12 and Japanese 13 young STEMI patients. Similarly, the 6A allele has been associated with increased risk for MI, as reported in a cohort of 4152 Japanese subjects with ACS. 14 Other studies have established the role of the 5A and 6A alleles in the development of coronary as well as carotid atherosclerosis. 15 Studies have also shown that in patients with 5A/5A and 6A/6A genotypes, there is heightened MMP-3 expression in serum and tissues, thereby increasing predisposition to plaque rupture and ACS. 16 Studies evaluating the role of genetic polymorphism in the MMP-9 gene in CAD have revealed contradictory findings. In a meta-analysis from China, the authors concluded that the risk of MI was significantly higher in subjects with the T allele (TC and TT genotypes) than in those with the CC genotype of the MMP-9 gene. 17 Moreover, the increased susceptibility was only seen in those of white ethnicity and not in the Asian population. Contrarily, a recent meta-analysis concluded that the MMP-9 (C1562T) SNP led to greater susceptibility for CAD among Asians. 18 Similarly, for the R279Q SNP of the MMP-9 gene too, there have been contradictory results, with published reports from Iran 6 and the United States 19 suggesting its role in increasing susceptibility for MI. In our study among a South-East Asian population, the R279Q SNP of the MMP-9 gene was associated with an increased risk for STEMI compared to controls. However, the 1562C/T SNP was comparable between subjects and controls. These contrasting results are often due to demographic and ethnic differences, study design, sample size and other confounding factors.
One of the important limitations is that this was a single-centre study with a relatively small sample size, so the findings cannot be generalised to the entire population. Another limitation is the relatively short duration of follow-up. Additionally, a comparative evaluation between young and old STEMI patients was not performed in this study. The present study depicted the association between various MMPs and their genetic polymorphisms in ACS patients, as well as the utility of MMP-9 in the prediction of MACE following STEMI. It is difficult to say whether the elevated MMP levels in our study population are related to the disease process per se or to the genetic polymorphisms. To the best of our knowledge, this is one of the first studies evaluating the role of MMPs and their genetic polymorphisms in young patients with STEMI in a South-East Asian population. However, there is a need for large-scale, multi-centre randomised clinical studies to evaluate the role of these novel biomarkers and associated genetic polymorphisms on cardiovascular disease outcomes.
Funding
None.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2022-11-14T06:17:36.057Z | 2022-11-09T00:00:00.000 | {
"year": 2022,
"sha1": "b1dd5aa380b811d8442f18537f2f92ec82cc4dda",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ihj.2022.11.001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6cc441ea0224a325bb13e739c4165a4331ba61cd",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
198913139 | pes2o/s2orc | v3-fos-license | Synergy of therapeutic heterologous prime-boost hepatitis B vaccination with CpG-application to improve immune control of persistent HBV infection
Therapeutic vaccination against chronic hepatitis B must overcome high viral antigen load and local regulatory mechanisms that promote immune tolerance in the liver and curtail hepatitis B virus (HBV)-specific CD8 T cell immunity. Here, we report that therapeutic heterologous HBcore-protein-prime/Modified-Vaccinia-Virus-Ankara (MVA-HBcore) boost vaccination followed by CpG-application augmented vaccine-induced HBcAg-specific CD8 T cell function in the liver. In HBV-transgenic as well as AAV-HBV-transduced mice with persistent high-level HBV replication, therapeutic vaccination and subsequent CpG-application acted synergistically to generate more potent HBV-specific CD8 T cell immunity that improved control of hepatocytes replicating HBV.
Results
In high-titer HBV transgenic mice, CpG-injection improves immune control after heterologous prime-boost vaccination. We used HBV-transgenic mice as a well-characterized preclinical model of HBV persistence 15 to evaluate whether the combination of therapeutic heterologous prime-boost vaccination directed against HBcAg synergizes with CpG-treatment to control HBV infection. We used HBV-transgenic mice with intermediate to high HBsAg levels (50 to 450 IU/ml), where prime-boost vaccination fails to elicit strong HBV-specific CD8 T cell immunity 10 . CpG-application always led to formation of iMATEs, in wildtype C57Bl/6 mice, in HBV-transgenic mice or in vaccinated HBV-transgenic mice with high-level HBV replication (Fig. 1a). First, we determined the optimal time point for CpG application after heterologous prime-boost vaccination. CpG-application within three days after heterologous prime-boost vaccination, but not later, resulted in increased numbers of vaccination-induced HBcAg-specific CD8 T cells (Suppl. Fig. 1, for gating strategy see Suppl. Fig. 2), which led us to apply CpG within the first three days after vaccination in all further experiments (see scheme in Fig. 1b). Since HBcAg-specific CD8 T cells are key for control of HBV infection 10,16 , we determined numbers of vaccination-induced HBcAg-specific CD8 T cells in the presence or absence of CpG-application. We detected an increase in the numbers of HBcAg-specific CD8 T cells after heterologous prime-boost vaccination in HBV-transgenic mice and a further increase upon CpG-application (Fig. 1c). Of note, we also found HBcAg-specific CD8 T cells in the spleen, but these cells were not further increased in numbers after CpG-application (Fig. 1d). This finding suggested hepatic expansion of already activated HBcAg-specific CD8 T cells generated by vaccination, but not local priming of naïve CD8 T cells after CpG application 14 . We found expression of Ki67, a marker for cell proliferation, in hepatic mononuclear cells within iMATEs after CpG-injection by immunohistochemistry in vaccinated mice (Suppl. Fig. 3a). Furthermore, we detected Ki-67 expression in the majority of hepatic CD8 T cells at d3 after CpG injection by flow cytometry (Suppl. Fig. 3b). Consistent with previous reports 14,17 , application of CpG resulted in increased expression of costimulatory molecules like CD80 and CD86 as well as increased MHC-II expression in infiltrating monocytes (Suppl. Fig. 4a).
Figure 1. Intravenous CpG-injection leads to iMATE-formation in HBV-transgenic mice and expansion of HBcAg-specific CD8 T cells in the liver. (a) H&E staining of liver tissue slices detecting iMATEs at d3 after CpG-injection (white circles). (b) Time scheme for therapeutic heterologous prime-boost vaccination against HBcAg and CpG-injection. (c) HBcAg-specific CD8 T cells from liver detected by HBc-specific multimer-staining using flow cytometry at d3 after CpG-injection; numbers denote percentage of total liver CD8 T cells. (d) Absolute numbers of total and HBcAg-specific CD8 T cells in liver and spleen at d3 after CpG-injection. Bars in (d) indicate mean value of n ≥ 3 mice per group + SEM. Statistical analyses were performed using Kruskal-Wallis test with Dunn's multiple comparison correction. Asterisks mark statistically significant differences: *p < 0.05; ns - not significant; n.d. - not detectable.
Following heterologous prime-boost vaccination and subsequent CpG-injection in HBV-transgenic mice, we observed development of increased serum ALT levels compared to mice receiving vaccination alone, indicating increased HBcAg-specific CD8 T cell immunity in the liver (Fig. 2a). Consistent with previous results 10 , in the presence of high HBV expression levels in HBV-transgenic mice, heterologous prime-boost vaccination alone did not reduce serum HBeAg levels, nor did CpG application alone (Fig. 2b). After sequential application of heterologous prime-boost vaccination and CpG injection, however, we found a significant decline in serum HBeAg levels (Fig. 2b). These findings suggested that HBcAg-specific CD8 T cells locally expanded in the liver within iMATEs after CpG-injection, and controlled HBV replication in infected hepatocytes. We next determined the functionality of these HBcAg-specific CD8 T cells from liver or spleen following ex vivo restimulation. HBcAg-specific CD8 T cells isolated from liver or spleen of HBV-transgenic mice after heterologous prime-boost vaccination expressed IFNγ following re-stimulation with an HBcore peptide (Fig. 2c). After sequential application of vaccination and CpG, numbers of IFNγ-producing HBcAg-specific CD8 T cells were 4-fold increased, as were numbers of GzmB-expressing CD8 T cells, now constituting almost 30 percent of all hepatic CD8 T cells (Fig. 2c,d). Of note, vaccine-induced CD8 T cells expressed higher levels of GzmB but not IFNγ per cell (Fig. 2e), suggesting enhanced cytotoxic effector function.
Consistent with enhanced effector function and release of anti-viral cytokines by HBcAg-specific CD8 T cells, liver immunohistochemistry revealed that sequential application of vaccination and CpG, but not vaccination alone, led to a 6-fold reduction in the number of HBcAg-positive hepatocytes (Fig. 3a,b). The relatively low sALT values (see Fig. 2a) support the notion that the reduction of HBcore-positive hepatocytes may also have been achieved by non-cytolytic cytokine activity. Together, these results indicated that CpG-application acted synergistically with therapeutic vaccination to augment both the numbers and the functionality of vaccine-induced HBcAg-specific CD8 T cells in HBV-transgenic mice.
Control of infection in high-titer AAV-HBV infected mice after heterologous prime-boost vaccination followed by CpG injection.
Since elimination of an HBV transgene from hepatocytes is not possible, we employed AAV-HBV infection to deliver HBV genomes into murine hepatocytes in vivo 18 , which leads to persistent HBV replication in hepatocytes 18 . This preclinical model system of persistent HBV replication can be used to characterize whether vaccine-induced HBV-specific CD8 T cells successfully eliminate HBV-replicating hepatocytes in vivo. After AAV-HBV transduction, high serum HBeAg and HBsAg levels were detected, which remained stable until the end of the observation period twelve weeks later, indicating persistent HBV replication (Fig. 4a). These HBsAg levels (200-1200 IU/ml) were comparable to the intermediate to high antigen level groups previously reported 10 where heterologous prime-boost vaccination fails to elicit strong HBV-specific CD8 T cell immunity. Of note, no anti-HBs or anti-HBe antibodies were detected in these mice before the start of vaccination (Suppl. Fig. 5a). In AAV-HBV-transduced mice, CpG-application triggered iMATE-formation independent of heterologous prime-boost vaccination (Fig. 4b). As in HBV-transgenic mice, therapeutic vaccination induced HBcAg-specific CD8 T cells in AAV-HBV transduced mice (Fig. 4c,d). Consistent with a hepatic expansion of vaccine-induced T cells, we found more HBcAg-specific CD8 T cells in the livers but not spleens of AAV-HBV transduced mice after sequential application of heterologous prime-boost vaccination and CpG injection, although this did not reach statistical significance (Fig. 4c,d). Expansion of CD8 T cells after CpG injection in AAV-HBV transduced mice was confirmed by detection of Ki67 expression in hepatic CD8 T cells (Suppl. Fig. 3c,d). Compared to prime-boost vaccination in HBV-transgenic mice, which yielded only very few HBcAg-specific CD8 T cells (see Fig. 1), we detected more HBcAg-specific CD8 T cells in AAV-HBV-transduced mice after vaccination (Fig. 4c,d). This may be the result of the more stringent deletion of HBV-specific T cells through central tolerance in HBV-transgenic mice 19 , rather than changes in the expression of co-stimulatory molecules on monocytes infiltrating the liver and forming iMATEs after CpG injection (Suppl. Fig. 4b). Again, CpG-application without prior vaccination did not lead to any measurable increase in HBcAg-specific CD8 T cells (Fig. 4c,d).
Only the sequential application of heterologous prime-boost vaccination and CpG injection, but neither CpG-injection nor vaccination alone, resulted in increased sALT levels nine weeks after AAV-HBV transduction (Fig. 5a). Numbers of IFNγ-producing and GzmB-expressing HBcAg-specific CD8 T cells were higher after heterologous prime-boost vaccination in AAV-HBV infected mice, were further increased after CpG-injection, and were higher in liver than in spleen (Fig. 5b,c). Furthermore, GzmB but not IFNγ levels per HBcAg-specific CD8 T cell were significantly higher when heterologous prime-boost vaccination was followed by CpG-injection (Fig. 5d). Thus, also in AAV-HBV-transduced mice with high-level HBV replication, CpG-application after vaccination increased both the numbers and the functionality of vaccine-induced HBcAg-specific CD8 T cells.
We then followed AAV-HBV infected mice after vaccination ± CpG-injection for five weeks. Heterologous prime-boost vaccination alone led to a transient decrease of serum HBeAg levels that was not markedly different from non-vaccinated mice (Fig. 6a). In contrast, when vaccination was followed by CpG-injection, serum HBeAg levels remained low over the entire observation period (Fig. 6a). Similarly, HBsAg levels were controlled after the combination of vaccination and CpG-injection (Fig. 6a), even in the absence of seroconversion to anti-HBs or anti-HBe (Suppl. Fig. 5b,c). Liver immunohistochemistry revealed that shortly after CpG-injection in vaccinated AAV-HBV infected mice (d32 after the initial start of vaccination) there was no difference in numbers of HBcAg-expressing hepatocytes compared to mice that did not receive CpG (Fig. 6b,c). However, more than four weeks later (i.e., d63 after start of vaccination), following sequential application of vaccination and CpG-injection, numbers of HBcAg-expressing hepatocytes were reduced by 80%, while no control of HBcAg-expressing hepatocytes was observed after vaccination alone (Fig. 6b,c). At this late time point, only GzmB-expressing HBcAg-reactive CD8 T cells were still increased in AAV-HBV-infected mice that received the combinatorial treatment of therapeutic vaccination followed by CpG-injection (Fig. 6d), suggesting that detection of GzmB expression in virus-specific CD8 T cells may allow for prediction of immune control of HBV-replicating hepatocytes.
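The group comparisons in the figures rely on a Kruskal-Wallis test followed by Dunn's correction. A minimal Python sketch of the omnibus step with scipy; the group values below are placeholders rather than measured T cell counts, and the Dunn post-hoc step (available e.g. in the scikit-posthocs package) is omitted:

```python
# Hedged sketch of the omnibus group comparison used in the figures above:
# a Kruskal-Wallis test across three treatment arms. Sample values are
# placeholders, not the study's measurements.
from scipy.stats import kruskal

untreated = [1.2, 0.9, 1.5, 1.1]         # e.g. HBcAg-specific CD8 T cells (a.u.)
vaccine_only = [2.8, 3.1, 2.2, 3.5]
vaccine_plus_cpg = [6.9, 8.4, 7.7, 9.0]

stat, p = kruskal(untreated, vaccine_only, vaccine_plus_cpg)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")
```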
Discussion
Here we report that therapeutic heterologous protein-prime/MVA-vector-boost vaccination against hepatitis B synergizes with CpG-application to enhance the numbers and functionality of HBcAg-specific CD8 T cells in the presence of high HBV-antigen levels in two preclinical models of persistent HBV infection, i.e. the AAV-HBV model and HBV-transgenic mice. High viral antigen levels prevented formation and expansion of HBcAg-specific CD8 T cells, which are required to gain control of HBV infection 11 . Cytokines released from T cells, like IFNγ/TNF, control HBV-antigen expression through degradation of the HBV persistence form in infected hepatocytes 20,21 . Cytokines also exert non-cytolytic control of HBV antigen expression in preclinical models such as the HBV transgenic mouse 22 and may thereby account in part for the rapid decline of serum HBeAg levels after vaccination in combination with CpG-injection, which leads to release of cytokines from T cells 14 .
Long-term control of persistent HBV infection, however, relies on reduction of the numbers of HBV-infected hepatocytes, which may be achieved either through degradation of the HBV persistence form or through elimination of infected and HBV-replicating hepatocytes. Here, we demonstrate that the combination of therapeutic heterologous prime-boost vaccination with CpG-injection led to a strong reduction in the numbers of HBcAg-expressing hepatocytes together with local hepatic expansion of vaccination-induced HBcAg-specific CD8 T cells. This suggests that CpG-injection, which functions to induce formation of iMATEs and to locally expand already activated CD8 T cells in the liver by providing costimulatory signals 14 , can be employed to increase the efficacy of therapeutic vaccination against HBV even in the presence of high-level viral antigens, which otherwise prevent induction of protective immunity. However, it is also possible that cytokines, e.g. interferons or TNF, induced by CpG-injection, which are known to have anti-viral effects 20,21 , further support the therapeutic effect of the prime-boost vaccination. Since cytokines and in particular interferons are important for induction of CD8 T cells 23,24 , it is not possible to mechanistically dissect non-cytolytic cytokine-mediated from cytolytic control of HBV infection in this experimental setting of therapeutic vaccination in combination with CpG application. Induction of iMATEs through CpG-injection has also been shown to improve CD8 T cell immunity against high-level expression of a transgenic model antigen (ovalbumin) in the liver 25 , to increase anti-viral immunity during lymphocytic choriomeningitis virus infection 14 and even to increase anti-tumor immunity against liver cancer 26 . Such induction of increased T cell immunity is linked to hepatic recruitment of inflammatory monocytes that are functionally distinct from liver-resident macrophages 14,17 .
The generation of HBcAg-specific CD8 T cells by heterologous prime-boost vaccination is therefore nicely complemented by the subsequent local hepatic increase of HBcAg-specific CD8 T cells through CpG-mediated iMATE induction. Thus, our approach of applying therapeutic vaccination first, followed by CpG injection, combines two synergistic principles of immune therapy: first, increasing the numbers of HBcAg-specific CD8 T cells through potent cross-priming in the setting of a therapeutic vaccination; and second, amplifying these vaccine-induced HBcAg-specific CD8 T cells locally in the liver by CpG-mediated iMATE induction. iMATE-induced expansion of vaccine-induced HBV-specific CD8 T cells may thus provide a liver-specific means to increase the efficiency of therapeutic vaccination beyond currently available strategies 27 .
Although we cannot provide formal evidence for the role of GzmB expressed in HBcAg-specific CD8 T cells in the elimination of HBV-infected hepatocytes, our finding that GzmB levels are increased in HBcAg-specific CD8 T cells after successful control of HBV-infected hepatocytes supports the notion that GzmB could be employed as a biomarker for immune control of HBV infection in future immune monitoring strategies during therapeutic vaccination against chronic hepatitis B. GzmB has been identified as an essential downstream effector molecule for T cells to control viral infection 28 . GzmB expression has been shown to be higher in CX3CR1+ CD8 T cells with strong effector function 29,30 . The identification of GzmB+ HBV-specific CD8 T cells after therapeutic vaccination in combination with CpG injection therefore supports the notion that those T cells may have strong anti-HBV activity.
Taken together, our results demonstrate in two preclinical models of persistent HBV infection that the sequential combination of therapeutic vaccination followed by CpG-application was superior to vaccination alone in controlling persistent HBV infection by increasing numbers and functionality of HBcAg-specific CD8 T cells.

Materials and Methods

Experimental animals and AAV-HBV transduction. HBV transgenic mice (strain HBV1.3.32) carrying a 1.3-fold overlength HBV (genotype D, serotype ayw) genome were created on the C57BL/6 background (haplotype H-2 b/b ). Fourteen- to sixteen-week-old female and male HBV transgenic mice were bred at the AVM Animal Facility, Helmholtz Center Munich. Wildtype C57BL/6 mice (haplotype H-2 b/b ) were purchased from Charles River Laboratories, Schulzfeld, Germany. Persistent HBV replication in wildtype mice was established by intravenous injection of 1 × 10 10 genome equivalents (geq) of the AAV-HBV1.2 vector encoding a 1.2-fold overlength HBV genome of genotype D, as reported previously 18 . Mice were kept under specific pathogen-free (SPF) conditions at the Animal Facility, University Hospital Rechts der Isar, Technical University of Munich, or the Helmholtz Center Munich following institutional guidelines. Experiments were performed during the light phase of the day. Animals were bled one day before treatment and allocated into groups with comparable HBsAg and HBeAg levels.
Heterologous protein prime MVA boost vaccination. Mice were immunized with a particulate protein prime followed by a recombinant Modified Vaccinia Ankara virus (MVA) vector boost, following the vaccination scheme described previously 10 . Briefly, mice were immunized intramuscularly (i.m.) into the quadriceps muscles of both hind limbs under isoflurane anesthesia. Protein immunization with 10 µg HBcAg expressed in E. coli (APP Latvijas Biomedicinas, Riga, Latvia), adjuvanted with 10 µg cyclic di-adenylate monophosphate (c-di-AMP) (InvivoGen, San Diego, CA), was given twice at a 2-week interval. Two weeks after the second protein immunization, mice received 1 × 10 7 particles of recombinant MVA expressing HBcAg intramuscularly (i.m.).
Isolation of lymphocytes from spleen and liver and non-parenchymal liver cells
Preparation of single-cell suspensions of splenocytes was performed as described previously 31 . Liver-associated lymphocytes (LAL) were isolated by density gradient centrifugation as described 31 . Briefly, mouse liver was perfused with pre-warmed PBS (to flush blood from the hepatic vasculature) and forced through a 100 µm nylon cell strainer (BD Falcon, Franklin Lakes, NJ). After washing, cell pellets were suspended in 10 ml of prewarmed enzyme solution containing 1 mg/ml of collagenase type IV (Worthington, Lakewood, NJ) in RPMI 1640 medium supplemented with 10% fetal bovine serum (Gibco, Thermo Fischer Scientific, Darmstadt, Germany) and digested for 30 min at 37 °C. Cell pellets were then resuspended in 40% Percoll (GE Healthcare, Munich, Germany), layered on 80% Percoll solution and centrifuged at 1600 × g for 20 minutes without brakes for density separation. Non-parenchymal liver cells were obtained after liver perfusion, mechanical separation of liver tissue and gentle collagenase digestion for 10 minutes, followed by Percoll gradient centrifugation and washing steps as outlined in detail in 14 . Cell yield and viability were determined by trypan blue exclusion.
Immunohistochemistry
Liver tissue samples were fixed in 4% buffered formalin for 48 h and paraffin-embedded. Then, 2-μm-thin liver sections were prepared with a rotary microtome (HM355S, ThermoFisher Scientific, Waltham, USA). Immunohistochemistry was performed using a Bond Max system (Leica, Wetzlar, Germany; all reagents from Leica) with primary antibodies against Ki67 (clone SP6, Abcam ab16667) or rabbit anti-HBcAg primary antibody (Diagnostic Biosystems, Pleasanton, CA; 1:50 dilution) and a horseradish peroxidase-coupled secondary antibody. Briefly, slides were deparaffinized using deparaffinization solution and pretreated with Epitope Retrieval Solution 1 (corresponding to citrate buffer, pH 6) for 20 minutes. Antibody binding was detected with a polymer refine detection kit without post-primary reagent and visualized with DAB as a dark brown precipitate. Counterstaining was done with hematoxylin. Slides were scanned using a SCN 400 slide scanner (Leica Biosystems), and HBcAg-positive hepatocytes were determined based on localization, intensity and distribution of the signal in 10 random view fields (40x magnification). The mean numbers of HBcAg-positive hepatocytes were quantified per mm 2 .
Statistical analyses
Statistical analyses were performed using GraphPad Prism version 5 (GraphPad Software Inc., San Diego, CA). Statistical differences were analyzed using Kruskal-Wallis test with Dunn's multiple comparison correction, 2-way ANOVA with Tukey's multiple comparison correction, Mann-Whitney test and unpaired or paired t test. P-values < 0.05 were considered significant. | 2019-07-27T13:05:06.905Z | 2019-07-25T00:00:00.000 | {
"year": 2019,
"sha1": "5f6d7ecf98a83460f535317784a4af3976cb9e6b",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-47149-w.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5f6d7ecf98a83460f535317784a4af3976cb9e6b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18742491 | pes2o/s2orc | v3-fos-license | Astrocyte-to-neuron conversion induced by spinal cord injury
Harun Najib Noristani and Florence Evelyne Perrin
Spinal cord injury (SCI) triggers pronounced astrocyte reactivity (astrogliosis), including astroglial proliferation and migration toward the injury site, participating in the formation of a glial scar. Since the mid-20th century, SCI-induced astrogliosis was mainly regarded as detrimental to successful axonal regeneration. However, more recent studies have shown that astrogliosis is a multifactorial phenomenon involving specific morphological, molecular and functional alterations in astrocytes that can also exert beneficial effects [1,2]. It was suggested, although not proven, that SCI-induced astrogliosis depends on multiple factors such as time after lesion, injury severity and distance to the lesion site. In a recent study we attempted to uncover the molecular involvement of astrocytes after SCI by studying their transcriptomic alterations at different stages after moderate and severe lesions [3].
Aldehyde dehydrogenase 1 family member L1 (Aldh1l1) is a pan-astrocytic marker; hence, using Aldh1l1-EGFP transgenic mice combined with fluorescence-activated cell sorting (FACS), we isolated pure astrocyte populations at different stages following SCI. Choosing lateral hemisection and complete section of the spinal cord as moderate and severe injury models, we investigated the astrocytic response at 1 and 2 weeks after lesion. We subsequently carried out astrocyte-specific RNA-sequencing and pathway analyses to unveil the molecular signature of injury-induced astrogliosis.
Our transcriptomic analyses demonstrated a dual astrocytic response depending on time post-injury and lesion severity. Following moderate SCI, astrocytes displayed a protective role, showing no changes (1 week) and even down-regulated (2 weeks) expression of transcripts involved in the immune response. On the other hand, the astrocytic response after severe SCI appears detrimental, with an upsurge in the expression of inflammatory genes (1 week) and prevention of extracellular remodeling (2 weeks) [3]. This is the first concrete evidence of a heterogeneous astrocytic response that is driven not only by lesion severity but also by time after injury (Figure 1).
In parallel, using pathway analyses, we also identified in astrocytes the induction of the neural stem cell lineage and the over-expression of the neuronal progenitor gene βIII-tubulin (Tubb3, also known as TUJ1). We confirmed βIII-tubulin protein expression at the tissue level using immunohistochemistry and at the single-cell level using FACS analyses. The sub-population of astrocytes that express βIII-tubulin was only found within 750 µm of the lesion epicenter. Astrocytes co-expressing βIII-tubulin also displayed alterations in their morphology, from the typical stellate shape to that of classical neuronal progenitor cells with bipolar or multipolar processes. Given that SCI induces astrocytic proliferation, we injected BrdU in Aldh1l1-EGFP mice after injury to determine the origin of eGFP/βIII-tubulin co-expressing cells. BrdU incorporation was observed in newly formed astrocytes but not in eGFP/βIII-tubulin-expressing astrocytes. This suggests that it is resident mature astrocytes, rather than newly formed astrocytes, that undergo transdifferentiation towards the neuronal lineage (Figure 1). Time-dependent analyses revealed that astrocytic conversion towards the neuronal lineage starts as early as 72 hours, peaks between 1 and 2 weeks, and continues to a lower degree up to 6 weeks after both moderate and severe SCI. Further immunostaining, using mature neuronal markers, showed that transdifferentiating astrocytes eventually express GABAergic, but not glutamatergic, markers. Moreover, we identified the fibroblast growth factor receptor 4 (Fgfr4) as a potential player responsible for SCI-induced astrocytic transdifferentiation towards the neuronal lineage. Fgfr4 indeed promotes embryonic stem cell differentiation towards the neuronal lineage [4] and showed pronounced over-expression from 72 hours following lesion at both the RNA and protein level.
Although other recent studies had shown limited astrocyte conversion towards the neuronal lineage upon enforced expression of neurogenic factors, none had reported a spontaneous injury-induced astroglial transdifferentiation in vivo [5][6][7][8]. Our results show that, following SCI, resident astrocytes have an intrinsic capacity to undergo transdifferentiation towards the neuronal lineage. Further studies aimed at stimulating this intrinsic pathway in astrocytes to convert a larger population towards a neuronal phenotype may provide a new therapeutic strategy to replace demised neurons and improve functional outcomes after SCI. Our ongoing work involves in-depth investigation of the molecular pathways involved in this intrinsic injury-induced astrocytic conversion towards the neuronal lineage as well as its functional role in SCI pathophysiology.
Figure 1: Schematic cartoon displaying a summary of astrocytic responses following SCI. | 2018-04-03T00:51:50.385Z | 2016-12-02T00:00:00.000 | {
"year": 2016,
"sha1": "a17d8c3adc4971abcd85d638ef04d6c4f3a47076",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.18632/oncotarget.13780",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a17d8c3adc4971abcd85d638ef04d6c4f3a47076",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216198772 | pes2o/s2orc | v3-fos-license | Development and Optimization of Ready to Serve (RTS) Beetroot Drink
Beetroot is rich in various nutrients. Hence, the present study was conducted to develop a ready to serve (RTS) drink using beetroot juice. The juice of beetroot was extracted and added with different concentrations of sugar and citric acid to optimize the best-suited combination of ingredients. Standardization of the RTS was done using a ranking sensory evaluation test. Two variants with ginger and black pepper flavor were also prepared and standardized. The standardized amounts of ingredients after sensory analysis for the RTS were found to be 17.7% juice content, 7.5% sugar and 0.1% citric acid. The black pepper variant was standardized at 0.4% black pepper in the original product and the ginger variant at 1.5% ginger extract. The beetroot drink and its variants having the optimized amounts of ingredients were analysed for their physico-chemical properties. Shelf life analysis for a period of one month was also carried out.
INTRODUCTION
Red beetroot (Beta vulgaris) is a popular vegetable throughout the world (Latorre et al. 2011). Beetroot, grown throughout the Americas, Europe and Asia, is a cultivated form of Beta vulgaris subsp. vulgaris (U.S. Department of Agriculture). Beetroot belongs to the Chenopodiaceae family and has four basic varieties: Crosby Egyptian, Early Wonder, Detroit Dark Red and Crimson Globe (Chawla et al., 2016). Beets are composed of 87.57% water, 9.56% carbohydrates (29.3% fiber and 70.7% sugar), 1.61% protein and 0.17% lipids, in addition to being a source of potassium, choline, vitamin C and niacin (Varner 2014). Beetroot juice contains minerals such as magnesium, calcium, iron, phosphorous, sodium and zinc, and vitamins like biotin, folic acid, niacin and vitamin B6 (Wootton-Beard and Ryan, 2011). Small amounts of hydroxycinnamic acids such as gallic, syringic and caffeic acids, and flavonoids, have been identified in beetroot (Kazimierczak et al. 2014). Sucrose is the main sugar found in beetroot, along with small amounts of glucose and fructose (Bavec et al. 2010). Beetroot also contains raffinose (Mahn et al. 2001). High concentrations of betalains, a group of phenolic secondary plant metabolites and water-soluble pigments, are responsible for the intense red color of beetroot (Georgiev et al. 2010). Betalains are divided into betaxanthins, which give a yellow-orange color, and betacyanins, which give a purple color to beetroot.
Recently, beetroot has gained attention as a health-promoting functional food product (Clifford et al., 2015). Carotenoids, betalains, polyphenols, flavonoids and saponins are the active compounds found in red beetroot (Figiel, 2010). The quantity of these active compounds is influenced by species, variety, cultivation area, ripening period and storage. Beetroot shows antibacterial and antiviral activity due to its strong antioxidant potential (Kowalski and Szadzińska, 2014) and thus can be considered a factor preventing cancer (Figiel, 2010). It also shows anti-inflammatory and hepatoprotective activities (Khan et al. 2011). Beetroot also helps in increasing the resistance of low-density lipoproteins to oxidation (Tesoriere et al. 2003).
Beetroot is used in tomato paste, jam and jellies, ice cream, sweets, sauces, etc., as a red food colorant to improve color (Gokhale and Lele 2011). Commercial beetroot products such as juices and powders act as performance-enhancing legal nutrition supplements for athletes, especially those in endurance sports, due to the presence of inorganic nitrate (NO3−), which bacteria reduce to nitric oxide (NO); NO has positive effects on muscle efficiency and fatigue resistance and also reduces resting blood pressure (Bailey et al. 2009). Thus, it has been suggested that beetroot can prevent and treat hypertension and cardiovascular diseases (Lundberg et al. 2010).
In the food industry, convenience is considered a marketing tool (Drewnowski and Darmon 2005). Beetroot juice has a pleasant taste due to its relatively high sugar content (Thakur and Gupta, 2005). There is a paucity of literature available that confirms the use of beetroot for the development of a ready to serve drink. Tapping the potential of beetroot to be used as a source of various nutrients, the present study was conducted to develop an RTS drink from beetroot. These products contribute positively toward increasing the consumption of polyphenols with ease and convenience.
Raw Material
Fresh beetroot, black pepper powder and ginger rhizomes were purchased from the local market of Narela, Delhi. All the additives, namely citric acid (Saby make), sugar (Madhur make) and gum arabic (Aksharchem), were food grade and were purchased through an online shopping portal. The beetroot had a moisture content of 83.60 ± 0.1% and an ash content of 0.65 ± 0.01%. Sugar had a moisture content of 0.05 ± 0.41% and an ash content of 0.02 ± 0.03%.
Extraction of beetroot juice
Procured beetroot and ginger were washed thoroughly with clean water, followed by peeling and cutting them into small pieces. The juice was extracted from beetroot and ginger with the help of a domestic juicer (USHA Juicer Mixer Grinder, JMG 3442), strained separately with muslin cloth and stored in bottles.
Optimization of the beetroot RTS and its variants
Standardization of the RTS was done by stepwise standardization of beetroot juice, citric acid and sugar (Tables 1-3) using a ranking sensory evaluation test. The samples were evaluated by 15 semi-trained panelists (age group 20-35) consisting of students, faculty members and lab technicians from the National Institute of Food Technology Entrepreneurship and Management (NIFTEM). Since the panelists were not professional sensory analysts, they were made familiar with the procedure of the ranking method. The panelists were asked to rank the product on various parameters and also on overall acceptability (rank 1 for the most preferred sample, rank 2 for the moderately preferred and rank 3 for the least preferred sample) (Sharif et al. 2017).
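A short sketch of how such ranking data can be aggregated is shown below; the panelist ranks are hypothetical, and the convention follows the test just described (a lower rank sum indicates the preferred sample).

```python
# Aggregating ranking-test results: each of 15 panelists assigns ranks
# 1 (most preferred) to 3 (least preferred) to three samples.
# Hypothetical data; the sample with the lowest rank sum is the best fit.
import pandas as pd

ranks = pd.DataFrame(
    {"sample_A": [1, 2, 1, 1, 3, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1],
     "sample_B": [2, 1, 3, 2, 1, 2, 1, 3, 2, 1, 2, 2, 1, 2, 2],
     "sample_C": [3, 3, 2, 3, 2, 3, 3, 2, 3, 3, 3, 3, 3, 3, 3]})

rank_sums = ranks.sum()
print(rank_sums.sort_values())           # lowest rank sum = most preferred
print("preferred:", rank_sums.idxmin())
```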
The ranking test for intensity is a product-oriented sensory test. This method of testing can be used to gather preliminary product difference details or to screen panelists who can distinguish between samples with established differences. The ranking test is able to find noticeable differences between the samples but does not indicate how much difference exists between them (Watts et al. 1989).
Ginger extract was added in different amounts to prepare the ginger variant of the beetroot RTS (Table 4). Black pepper was added in different amounts and later strained to prepare the black pepper variant of the beetroot RTS.
Table 4: Amount of ginger extract varied to select the best fit, keeping the amounts of beetroot juice, sugar and citric acid to be added constant for the ginger variant of beetroot RTS.
Sensory analysis of the final standardised product and the variants
Beetroot RTS and its flavoured variants were prepared using the optimized quantities of the various ingredients, and sensory analysis was carried out using a 9-point hedonic scale, a consumer-oriented sensory test. It helps in measuring the degree of liking of a product by the consumer (Watts et al. 1989). The samples were evaluated by 30 semi-trained panelists (age group 20-35) consisting of students, faculty members and lab technicians from the National Institute of Food Technology Entrepreneurship and Management (NIFTEM). Since the panelists were not professional sensory analysts, they were made familiar with the procedure of hedonic scale testing. The members were made familiar with the parameters to be analysed, namely taste, aroma, appearance, after-taste and overall acceptability. Panelists evaluated the RTS and its variants for these parameters using a 9-point hedonic rating scale (1 = dislike extremely and 9 = like extremely).
Physico-chemical analysis of beetroot RTS and its variants
The RTS samples stored at room temperature were analyzed for various physicochemical parameters as follows:
Water activity
A digital benchtop water activity meter (chilled mirror dewpoint; AQUA LAB, S40002534) was used to measure the water activity. The sample was placed in a small sample cup in the temperature-controlled chamber. The instrument was switched on and allowed to run until the dew point was reached. Readings were taken from the digital display.
Total Soluble Solids (TSS)
A digital benchtop refractometer (ATAGO, 140404N) was used to measure the TSS content. Calibration was done using distilled water, placed over the prism in the chamber. Then the sample was placed on the prism and the chamber was closed. The instrument was allowed to run and the readings were taken from the digital display.
Titratable acidity
The titratable acidity was determined by titration with 0.1 N sodium hydroxide (NaOH) according to AOAC (1995).
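As a worked illustration, the standard relationship behind this titration is % acid = (titre volume × NaOH normality × milliequivalent weight of the acid × 100) / sample volume. The helper below is a hypothetical sketch assuming acidity is expressed as citric acid (milliequivalent weight 0.064 g) and a 10 ml sample; both assumptions are ours, not stated in the source.

```python
# Hypothetical helper for titratable acidity expressed as % citric acid:
# % acid = (mL NaOH * normality * meq. weight of acid * 100) / mL sample.
def titratable_acidity(titre_ml, naoh_normality=0.1,
                       sample_ml=10.0, meq_wt=0.064):  # citric acid: 0.064 g/meq
    return (titre_ml * naoh_normality * meq_wt * 100.0) / sample_ml

# Example: 4.1 mL of 0.1 N NaOH to neutralize a 10 mL sample gives ~0.26%,
# the order of magnitude reported for the fresh RTS in this study.
print(f"{titratable_acidity(4.1):.2f} % citric acid")
```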
pH
A digital pH meter (LABINDIA, PN13330213) was used to determine the pH of the samples. The pH meter was standardized and calibrated with pH 4.0, 7.0 and 11.0 standard solutions.
Microbiological parameters
Total plate count (TPC) and coliform count were analyzed for the beetroot RTS and both flavor variants. The testing was done by an external NABL-accredited lab (Appex lab, Ramesh Nagar, New Delhi), using IS 5402:2012 and IS 5401 (Part 1):2012 as reference methods for TPC and coliform, respectively.
Shelf Life Study of RTS samples
The beetroot RTS and its variants were prepared according to the above-mentioned procedure. Being a moderately acidic product, the RTS was pasteurized and stored in glass bottles for the shelf life analysis. Glass bottles of 200 ml were thoroughly cleaned and boiled in water for 15 minutes to sterilize them. The beetroot RTS and both variants were pasteurized individually by heating to 90-100°C in a pan for 30 minutes. They were hot-filled in different bottles, capped and stored at room temperature for 4 weeks. Physicochemical analysis (water activity, TSS, titratable acidity, color, pH) and microbial analysis (total plate count and coliform) were done on a weekly basis for 4 weeks.
Statistical analysis
For the statistical analysis of the sensory evaluation, a nonparametric test was carried out using the Chi-square test (goodness of fit) on the parameter overall acceptability.
Preliminary sensory analysis
The results of the preliminary sensory analysis carried out to determine the best fit of the various ingredients using the ranking test are presented in Tables 6-11.
Final composition and sensory analysis of beetroot RTS and its variants
Based upon the inferences of the preliminary sensory analysis, the final standardized composition of the RTS and its variants is presented in the corresponding table (*the values depict the number of people who ranked a particular sample). Figs 3 to 5 show the number of people and their hedonic scale ratings for the various sensory parameters. Out of 30 panelists, 18 ranked the beetroot RTS above 7 on the 9-point hedonic scale for overall acceptability, as depicted in Fig 3. The black pepper flavoured beetroot RTS was ranked above 7 on the 9-point hedonic scale by 18 people for the parameter overall acceptability (Fig 4). Seventeen panelists ranked the ginger flavoured beetroot RTS above 7 for the overall acceptability of the product (Fig 5).
Physico-chemical characteristics of beetroot RTS and its variants
The optimized beetroot RTS and its variants were analyzed for TSS, acidity, pH, colour and water activity, and the results are given in Table 13.
Shelf life study of RTS and its variants
The beetroot RTS and its variants were analyzed for TSS, acidity, pH, color and total plate count (TPC) on a weekly basis, and the results are given in Figs 6 to 10.
Changes in acidity
Fig 6 shows the changes in the acidity of the beetroot RTS and its variants over the storage period of 4 weeks. When stored at room temperature, the acidity of the beetroot RTS increased from 0.26% to 0.53% over the period of 4 weeks. Similarly, the acidity of the black pepper variant increased from 0.27% to 0.62% and that of the ginger variant increased from 0.23% to 0.42%. A sharp, notable increase is seen in all the variants after the 2nd week. This may be due to fermentation by microorganisms. Similar results were reported by Kohli et al. (2019) for the storage of sugarcane juice under refrigerated conditions.
Changes in TSS
The TSS decreased from 11.21% to 11.06% in the case of the beetroot RTS over the 4-week storage period at room temperature, as shown in Fig 7. A similar result was observed in the case of the black pepper variant, where the TSS decreased from 11.08% to 11.00%, and the ginger variant, where it decreased from 12.70% to 12.42%. The micro-organisms use sugars present in the RTS as the raw material for fermentation, which leads to the decrease in TSS. Similar results were also seen by Amaravathi et al. (2014) in the case of spiced pineapple juice storage and by Sri Vidhya and Sri (2018) in the case of stored beetroot juice.
Changes in pH
The pH decreased from 3.95 to 3.03 for the beetroot RTS over the 4-week storage period.
Changes in colour
As shown in Fig 9, the colour of the RTS and its variants degraded over the storage period (Herbach et al. 2006). The colour of the RTS juice might have degraded mostly due to factors like light and temperature, as betalain is a very heat-labile compound and thus shows degradation even at room temperature, as observed by Woo et al. (2011).
Microbiological analysis
All three samples of the beetroot drink were analyzed for total plate count and coliform count. TPC was found to gradually increase in all the samples with time. The total plate count was 3210 CFU/ml, 2421 CFU/ml and 3920 CFU/ml at the end of 4 weeks of storage for the plain beetroot RTS, the ginger flavoured drink and the black pepper flavoured drink, respectively. The smaller increase in the case of the ginger flavoured drink could be due to its antimicrobial properties (Nwachukwu and Ezejiaku 2014). As all the samples were only pasteurized, not sterilized, the available fermenting microorganisms could survive and multiply during storage.
Statistical analysis
For the statistical analysis of the sensory evaluation, a nonparametric test was carried out using the Chi-square test (goodness of fit) on the parameter overall acceptability for the beetroot RTS and its two variants. The results are presented in Table 14. The null hypothesis was that the overall acceptability of the product may be accepted. The results were calculated at p < 0.05. It was seen from the results that the tabulated value was more than the calculated value; thus, the null hypothesis may be accepted.
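A minimal sketch of this goodness-of-fit test is given below; the observed counts are hypothetical stand-ins, since the paper's raw frequencies appear in its Table 14.

```python
# Sketch of the chi-square goodness-of-fit test on overall acceptability;
# the observed counts are hypothetical (30 panelists binned by hedonic score).
from scipy.stats import chisquare

observed = [2, 10, 18]        # e.g., scores in bins [1-3], [4-6], [7-9]
expected = [10, 10, 10]       # null hypothesis: scores spread evenly

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.4f}")
# As in the paper, compare the calculated statistic with the tabulated
# critical value at alpha = 0.05 (here df = 2).
```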
CONCLUSION
The beetroot RTS and its flavored variants were developed successfully with a fixed composition and were preferred by many consumers. Sensory evaluation by a semi-trained panel was used at each step of product development to ensure the best fit of ingredients and achieve the most desirable product. Physico-chemical analysis showed that the product has a low pH, preventing spoilage by a diversity of organisms. However, the product may still be susceptible to acid-tolerant bacteria and wild yeasts. The RTS pasteurized and packed in glass bottles did not show much change in the physicochemical parameters until the 2nd week of storage at room temperature. | 2020-04-16T09:06:26.029Z | 2020-03-19T00:00:00.000 | {
"year": 2020,
"sha1": "f2fda16468b8680a53aac5df81aabff07d6dda53",
"oa_license": null,
"oa_url": "https://doi.org/10.18805/ajdfr.dr-1504",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "80c0c6af851f77866af189047b46c43890f884d2",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
219449941 | pes2o/s2orc | v3-fos-license | Bioactive Glasses and Glass / Polymer Composites for Neuroregeneration: Should We Be Hopeful?
Bioactive glasses (BGs) have been identified as highly versatile materials in tissue engineering applications; apart from being used for bone repair for many years, they have recently shown promise for the regeneration of peripheral nerves as well. They can be formulated in different shapes and forms (micro-/nanoparticles, micro-/nanofibers, and tubes), thus potentially meeting the diverse requirements for neuroregeneration. Mechanical and biological improvements in three-dimensional (3D) polymeric scaffolds could be easily provided by adding BGs to their composition. Various types of silicate, borate, and phosphate BGs have been examined for use in neuroregeneration. In general, BGs show good compatibility with the nervous system compartments both in vitro and in vivo. Functionalization and surface modification plus doping with therapeutic ions make BGs even more effective in peripheral nerve regeneration. Moreover, the combination of BGs with conductive polymers is suggested to improve neural cell functions at injured sites. Taking advantage of BGs combined with novel technologies in tissue engineering, like 3D printing, can open new horizons in reconstructive approaches for the nervous system. Although there are great potential opportunities in BG-based therapies for peripheral nerve regeneration, more research should still be performed to carefully assess the pros and cons of BGs in neuroregeneration strategies.
Introduction
The nervous system is vital in the human body and is made of two distinct components, the central nervous system (CNS) and the peripheral nervous system (PNS). Both parts are potentially subject to various acute and chronic diseases and injuries, causing reduced quality of life [1][2][3]. In the case of peripheral nerve injuries, mechanical stress; exposure to heat, cold, and irradiation; tumors; and focal inflammation are the main reasons for partial or total loss of motor, sensory, and autonomic functions [4]. For example, cutting of a nerve as a result of traumatic injury may result in complications like inability to move muscles, dysfunctional or absent normal sensation, and eventually painful neuropathies [5]. To date, a huge number of experimental studies have been conducted in this area [31].
Neurons are specialized cells of the nervous system serving as compartments for receiving and sending signals to other cells and tissues. Transfer of the electrical signals is conducted via neuron extensions, i.e., axons and dendrites. A peripheral nerve contains both dendrites and axons in its structure [32]. Histologically, peripheral nerves are complex tissues which are composed of parenchyma (Schwann cells and axons) and a stroma (a "natural scaffold" made of connective tissue elements) [33]. The nerve fibers are the smallest functional units of a peripheral nerve and can be categorized as myelinated or unmyelinated depending on the presence of Schwann cells. They are responsible for conducting information to (afferent) or from (efferent) the CNS. Due to their extension in the body, peripheral nerves are exposed to a broad range of diseases and injuries, such as neuropraxia, axonotmesis, neural mutilation, and nerve defects [34,35]. For instance, accidental trauma may result in physical and sensory malfunctions of the extremities and cause loss of movement and loss of skeletal muscle mass (muscle atrophy) [36]. Suturing nerve stumps together is commonly used for managing small defects, while the implantation of a substitute is necessary to make a bridge between the broken ends and support naturally regenerating axons to the distal segment. Most in vivo studies focused on peripheral nerve regeneration have been conducted in animal models of the sciatic nerve in the hindlimb; however, reconstruction of cranial nerves is of great importance, too, as their injuries might lead to permanent facial disfigurement [37].
Post-Injury Scenario-A Short Overview
Following the cutting of a nerve (axotomy), a series of molecular and cellular mechanisms, including the activation of cell survival pathways and up-regulation of regeneration-associated genes, happen in temporal and spatial coordination to encourage peripheral nerve regeneration in the injured site [38]. The main signaling pathways involved in peripheral nerve regeneration include phosphoinositide-3-kinase (PI3K)/protein kinase B (PKB or Akt) (PI3K/Akt) signaling (increasing survival, growth, and differentiation of cells), Ras-mitogen-activated protein kinase (Ras/MAPK) signaling (enhancing neurite outgrowth and Schwann cell differentiation and myelination), cyclic AMP (cAMP) signaling (improving guidance, differentiation, neurite outgrowth, and neuronal survival), and Rho/Rho-associated coiled-coil containing protein kinase (Rho/ROCK) signaling (modulating neurite outgrowth) [39,40]. Moreover, some studies emphasize the critical role of angiogenic signaling pathways in improving the quality of peripheral nerve repair [41].
Experimental evidence suggests that all these biomolecular events may result in axon regeneration at a rate of 1-3 mm/day, which significantly decreases with increasing time and distance from injury [42][43][44]. Therefore, peripheral nerves have no adequate regenerative capacity to rebuild their structure and function after severe injuries causing nerve gaps longer than 2 cm [45]. The use of autografts and allografts has a long history in the repair and regeneration of peripheral nerves; however, their extensive use is limited by the shortage of donors, immune rejection, and the transfer of pathogens [46][47][48]. With the advent of the tissue engineering concept, polymers have achieved great success in peripheral nerve regeneration; however, other types of materials (ceramics and glasses) have also been examined for accelerating the healing process [23,[49][50][51]. Specifically, bioactive materials could be applied alone or in combination with polymers; constructs containing cells (e.g., mesenchymal stem cells) and growth factors (e.g., vascular endothelial growth factor (VEGF)) facilitate the restoration of nerve structure and function via distinct approaches such as increasing cell growth, improving angiogenesis, and providing tissue attachment possibilities, in either bare or modified forms [52][53][54][55]. Moreover, the use of materials capable of reducing inflammation and oxidative stress, such as nanoceria, gives further added value to peripheral nerve injury strategies [56,57].
The applicability of different types of BGs, including silicate-, phosphate-, and borosilicate-based glasses, has been assessed through a series of experimental studies [58][59][60][61]. To date, various forms and shapes of BGs have been produced and assessed in vitro and in vivo for nerve repair and regeneration, including glass tubes, glass powder/polymer tubes, and glass fiber/polymer wraps. The concept behind the use of BGs in neuroregeneration relies on their ability to (1) improve cell growth and proliferation, (2) promote angiogenesis, (3) reduce inflammation, and (4) increase mechanical strength [23,[62][63][64][65][66].
Silicate-Based BGs in Peripheral Nerve Regeneration
Looking at the literature, silicate glasses were apparently the first type of BGs to be proposed for the management of peripheral nerve injuries (Table 1). In 2005, Bunting et al. prepared Bioglass® 45S5 fibers (10 mm diameter) and entubulated them within silastic conduits to fabricate scaffolds with the ability to guide the re-growth of peripheral axons of adult rats [60]. The constructs were successfully implanted and allowed axonal regrowth in the animal model; specifically, the postoperative outcome was considered comparable with that obtained by using an autograft. Over time, ion-doped silicate glasses were also investigated as effective materials for accelerating peripheral nerve regeneration [67]. For instance, SiO2-Na2O-CaO-ZnO-CeO2 glasses have been suggested as suitable materials for the repair of peripheral nerve discontinuities due to the beneficial release of Ca2+ (19.26-3130 ppm) and Zn2+ (5.97-4904 ppm) ions stimulating cells towards regenerative pathways [59]. In 2014, the usability of a composite made of 45S5 glass nanoparticles and gelatin was evaluated for peripheral nerve regeneration [20]. The sol-gel glass particles were added to the gelatin matrix, and their interfacial bonding interaction was confirmed in the developed conduits. The in vitro data showed the positive effects of the composite conduits on cell viability, and in vivo implantation of the samples in the sciatic nerve of a male rat resulted in proper nerve regeneration (structurally and functionally) comparable to that of control groups three months after surgery (see Figure 2). More recently, electrospun films made of polypyrrole/collagen polymers and strontium-substituted nano-sized BGs (PPY/Coll/n-Sr@BG) were evaluated for the promotion of sciatic nerve rejuvenation in vivo [68]. These constructs demonstrated a native-mimicking morphology, better porosity, and higher specific surface area than customary nerve channels, thus allowing appropriate transportation of nerve growth factor and glucose, as well as hindering the infiltration of lymphatic tissue and fibroblasts. Furthermore, PPY/Coll/n-Sr@BG provided a suitable substrate for the growth and expansion of neurilemma cells. PPY/Coll/n-Sr@BG resulted in viable recovery of sciatic nerve wounds at 24 weeks post-implantation in rats, which was comparable to the results of autotransplants and superior to those of PPY/Coll groups.
Electrospun composites based on poly(glycolic acid) (PGA)/collagen/nano-sized BG (NBG) were fabricated as potential nerve conduits by Dehnavi et al. [69]. The obtained data proved the superior features of NBG-containing nanofibrous conduits in comparison to glass-free counterparts (PGA or PGA/collagen groups) in terms of mechanical/chemical properties, biocompatibility, and biodegradability.
Silicate-based BGs have also been applied in other shapes, including aligned microfibers. Souza et al. fabricated double-layered conduits by the incorporation of BG microfibers (SiO2-Na2O-K2O-MgO-CaO-P2O5 system) in nanofibrous poly(ε-caprolactone) (PCL) membranes [70]. The PCL polymer was electrospun upon BG microfibers (20 ± 2.3 µm) to make a two-layer bio-composite. The fabricated samples were permeable to water vapor, thus facilitating the exchange of cell metabolites between the inner portion of the nerve guide and the surrounding environment. In addition, a significant decrease in contact angle and an increase in mechanical properties were recorded as a result of the incorporation of BG fibers into the electrospun polymeric scaffolds.
Phosphate-Based BGs in Peripheral Nerve Regeneration
Although most studies involving phosphate-based BGs have addressed hard tissue engineering [71][72][73][74][75], there is recent evidence supporting the suitability of this kind of glass in peripheral nerve regeneration, too (Table 1). In fact, phosphate glass fibers were shown to play an effective role in promoting neurite outgrowth in vitro and peripheral nerve regeneration in vivo [24]. Kim et al. synthesized 50P2O5-40CaO-5Na2O-5Fe2O3 (mol.%) phosphate BG fibers (14.99 ± 2.77 µm in diameter) by using a melt-spinning method and successfully aligned the fibers on a compressed collagen matrix while rolling them into a nerve conduit. These composite tubular scaffolds showed positive effects on the outgrowth of adult dorsal root ganglion (DRG) neurons in vitro. Moreover, the authors observed the ability of the scaffolds to allow axon extension and the recovery of plantar muscle atrophy at 1 and 8 weeks post-implantation in transected sciatic nerves of rats. However, no significant differences were recorded between the BG-containing scaffolds and their glass-free counterparts regarding the number of axons, muscle atrophy, or motor and sensory functions 12 weeks after implantation (see Figure 3).
Full regeneration of a spinal cord injury is one of the ultimate targets in neuroregenerative medicine strategies. To this end, 3D composite hydrogel scaffolds made of phosphate BG and collagen were evaluated in a rodent model of transected rat spinal cord [76]. The glass formulation was 50P2O5-40CaO-5Na2O-5Fe2O3 (mol.%), and aligned fibers with a diameter of 18.49 ± 4.74 µm were produced to examine the healing properties. High levels of calcium and iron in the glass resulted in less soluble fibers to better support neuronal growth. The prepared constructs were implanted into completely transected rat spinal cords, and the outcomes were compared with those of glass-free collagen scaffolds. The obtained results revealed axon growth from the proximal and distal stumps into the glass-containing scaffolds after 12 postoperative weeks due to the directional guidance for outgrowing axons (Figure 4). Additionally, an improvement in locomotion ability was observed in the experimental groups receiving the glass-containing implant as compared to the controls after 12 weeks of implantation. At the same time, the mRNA levels of brain-derived neurotrophic factor (BDNF) in the bladder of rats were increased more in the glass-containing hydrogel scaffolds than in the control.
Figure 4. Reproduced with some modifications from ref. [76].
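As an aside on the compositions quoted in this section, a short sketch converting the 50P2O5-40CaO-5Na2O-5Fe2O3 (mol.%) formulation to weight percent is shown below; the oxide molar masses are standard values, and the conversion itself is ours, not taken from the source.

```python
# Converting the quoted 50P2O5-40CaO-5Na2O-5Fe2O3 (mol.%) glass to wt.%;
# molar masses are standard values in g/mol.
MOLAR_MASS = {"P2O5": 141.94, "CaO": 56.08, "Na2O": 61.98, "Fe2O3": 159.69}

def mol_to_wt(mol_percent):
    masses = {ox: frac * MOLAR_MASS[ox] for ox, frac in mol_percent.items()}
    total = sum(masses.values())
    return {ox: round(100.0 * m / total, 1) for ox, m in masses.items()}

print(mol_to_wt({"P2O5": 50, "CaO": 40, "Na2O": 5, "Fe2O3": 5}))
# -> {'P2O5': 67.9, 'CaO': 21.5, 'Na2O': 3.0, 'Fe2O3': 7.6}
```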
The use of glass fibers for the reconstruction of injured nerves has been a promising approach in tissue engineering strategies due to the relatively easy and precise tailoring of the fiber diameter and shape. As regards these critical issues, Vitale-Brovarone et al. synthesized titanium-containing phosphate glass fibers in a compositional range of 50P2O5-30CaO-9Na2O-3SiO2-3MgO-(5-x)K2O-xTiO2 mol.% (x = 0, 2.5, 5) to evaluate the materials' efficacy in stimulating neuronal polarization and axonal growth along the fiber direction [77]. The degradation rate of the fibers in vitro was dependent on the glass composition, since increasing TiO2 content decreased the glass solubility. In addition, the fiber diameter was reported to be another important parameter in the dissolution kinetics. The cellular assessments showed that an aligned configuration of the glass fibers could provide a directional cue for axonal growth due to the formation of an anisotropic environment (see Figure 5).
Functionalization and surface modification are considered to be very valuable approaches in order to improve the inherent properties of BGs for tissue engineering strategies [78]. In this context, Ahn et al. prepared composites of carbon nanotubes (CNTs) interfaced with phosphate BG microfibers for the regeneration of transected sciatic nerve [79]. For this purpose, the authors chemically bonded aminated CNTs onto the surface of the aligned glasses and subsequently placed them into 3D poly(L/D-lactic acid) (PLDLA) tubes.
The in vitro data demonstrated that these tubular scaffolds significantly improved the outgrowth of neurites of dorsal root ganglia and increased the maximal neurite length. In vivo results showed that the implantation of the scaffolds into a 10 mm transected sciatic nerve could successfully promote the tissue healing process at 16 postoperative weeks (Figure 6). The authors suggested that this improvement was due to the grafting of an electrically conductive nanomaterial (i.e., CNTs) to the BG-containing PLDLA scaffolds.
Table 1 (fragment) — borate-based BGs examined for neural applications:
- Borate glass rods and microfiber/fibrin composites: borate glasses and the composite scaffolds improved neurite extension comparably to control fibrin scaffolds, clarifying the lack of significant effect of the glasses on neuronal health; aligned glass scaffolds could guide neurite extension in an oriented manner [23].
- 13-93 B3 borate glass, 45S5 silicate glass, and a blend of 13-93 B3 and 45S5 glasses, as borate and silicate glass/PCL composites: the composites containing 13-93 B3 borate glass exhibited a higher degradation rate than their counterparts containing only 45S5 silicate glass; none of the glasses caused adverse effects on neurite extension as compared to PCL alone; neurite extension was increased in contact with PCL:45S5 and PCL:13-93 B3 composites after 24 h of incubation [25].
- 13-93 B3 borate glass doped with Ag, Ce, Cu, Fe, Ga, iodine (I), Y, and Zn, as borate glass/PCL composite sheets: Cu-, Fe-, Ga-, Zn-, and Sr-doped glasses promoted the survival and outgrowth of neurons as compared to undoped glasses; the Cu- and Ga-doped glasses showed the lowest average percent survival of support cells.
Borate-Based BGs in Peripheral Nerve Regeneration
The effectiveness of borate BGs for tissue-engineering-based therapies is widely accepted due to the capability of boron ions to improve cell proliferation, promote angiogenesis, and reduce inflammation [80][81][82]. The therapeutic effects of boric acid on peripheral nerve regeneration were previously studied [83]; the administration of 100 mg/kg of boric acid (four times per day) can reduce axonal and myelin damage in the sciatic nerve injury model in rats. Moreover, it has been reported that piezoelectric stimulation mediated by boron nitride nanotubes can be applied to improve the neurite length of PC12 cells in vitro [84]. Recent studies showed that boron can be easily loaded into chitosan/collagen hydrogels and promotes key biological functions such as angiogenesis [85].
Borate BGs have recently attracted much attention in the context of soft tissue healing applications [86]. In vitro experimental data confirmed that borate-based glasses have the ability to induce stem cells to secrete a series of proteins, including collagen, angiogenin, and VEGF [87]. There is a paucity of studies focused on nerve regeneration, but the early available results are promising (see Table 1) and justify further investigations. The cytocompatibility of 13-93B3 glass rods and microfibers (formulation: 53B2O3-20CaO-6Na2O-12K2O-5MgO-4P2O5 wt.%) with neuronal cells was documented by Marquardt et al. in 2013 [23]. Similar to silicate- and phosphate-based glasses, the borate glass scaffolds could increase the percentage of living neurons in vitro. Moreover, the aligned glass rods supported oriented neurite growth (see Figure 7).
Figure 7. (A) 13-93 B3 glass in rod and microfiber forms. (B) Microscopic images of embryonic chick dorsal root ganglia (DRG) cells alone (control groups a1/a2 and d1/d2) or treated with 13-93 B3 rods (b1/b2 and e1/e2) and microfibers (c1/c2 and f1/f2). The images were captured after 6 days of culture; the culture medium was exchanged every other day (transient condition) in the two top rows, while no medium was exchanged throughout the experiment (static condition) in the two bottom rows. Scale bar is 100 µm. Reproduced with some modifications from ref. [23].
Several experiments have documented that borate BGs show better mechanical, physicochemical, and biological properties after the incorporation of metallic cations (e.g., Ti4+, Cu2+, Zn2+, etc.) into their network [88,89]. These dopants typically increase borate glass durability upon contact with aqueous fluids. Ion-doped borate BGs have also been developed for potential use in peripheral nerve regeneration. Gupta et al. synthesized a series of borate-based BGs containing various therapeutic elements, including Ag, Ce, Cu, Fe, Ga, I, Y, and Zn [58]. The results showed that Fe-, Ga-, and Zn-doped BGs could successfully promote the survival and outgrowth of neurons, whereas I-doped BGs were the most detrimental to neurons.
Prior studies have shown that borate-based BGs can be mixed with polymers to fabricate electrospun composites as 3D biomimetic structures [90]. In addition, borate glasses have been successfully mixed with different polymers to prepare injectable hydrogels, which can be utilized in tissue engineering approaches [91][92][93]. With respect to neuroregeneration applications, the potential of composite sheets produced by adding borate-based BGs to PCL was examined in vitro [25]. Three different composites were prepared including 50 wt.% of PCL plus (I) 50 wt.% of 13-93 B3 borate BG, (II) 50 wt.% of 45S5 silicate BG, or (III) 50 wt.% of a blend of (I) and (II) in a 1:1 ratio. A higher degradation rate was recorded when 13-93B3 borate BG was added to PCL as compared to the composites containing only 45S5 silicate BGs; however, no adverse effects on neurite extension were observed after culturing of dorsal root ganglia (DRG) cells isolated from embryonic chicks on all the composites.
Conclusions and Future Perspectives
Although the research field of BGs for neuroregeneration is still in its infancy, these bioactive materials have shown great promise for the treatment of peripheral nerve and spinal cord injuries. For this purpose, silicate-, phosphate-, and borate-based BGs have been mainly proposed in the form of fibers and/or combined with soft polymers for the development of tubular devices to promote axonal directional growth. At present, BGs have proved suitable for repairing small-to-mid peripheral nerve defects (typically < 2 cm) in animal models, but no experiments in human patients have been performed yet. Looking at the question posed in the manuscript title, the answer is "yes"-we should be hopeful about the use of BGs in this new and partially unexplored scenario.
The challenges to be tackled in the future will concern the improvement of tissue regeneration even for long nerve gaps, as well as investigation of the potential suitability of BG-based biomaterials in the repair of cranial nerves. The implementation of additive manufacturing technologies (AMTs), which have already been widely applied in bone tissue engineering [94], will indeed carry significant added value in this field, too. In fact, AMTs allow for (i) achieving accurate control of the geometry/porosity of the biomaterial/scaffold/fibrous construct and (ii) combining different biomaterials (e.g., BGs, polymers) and even cells, which can be printed simultaneously (biofabrication [95]).
Selection of the basic glass composition along with appropriate therapeutic dopants will be key to postoperative success. Metallic dopants can affect the chemical stability of the glass network and, hence, the BG reactivity in contact with biological fluids. In this regard, the resorption rate of the BG-based conduit should be comparable to the rate of nerve regeneration. Furthermore, the ion dissolution products released from BGs can have a direct effect on neural cell activity via pH variations and/or signaling pathway activation. At present, there is a paucity of studies on this topic; some ions, such as Ga 3+ and Zn 2+, seem to have stimulatory effects on neurons [58], but the biomolecular picture is still profoundly incomplete due to the complexity of interactions among the multiple ions which are typically released from BGs (not only the dopants but also silicate and phosphate species, Ca 2+, and Na +).
The common components of BG structures are generally recognized to be safe for the human body. Although there have been no specific studies addressing the neuroregeneration scenario, the available results are quite promising. Lai et al. [96] described the removal pathways of silicon from silicate BG granules implanted in rabbits over a six-month follow-up period and analyzed the animal brain, heart, kidney, liver, lung, lymph nodes, spleen, and thymus by using biochemical and histopathological assays. It was shown that the average excretion rate of silicon was 2.4 mg/day, and all the implanted silicon was excreted by 19 weeks after implantation. No increased concentration of silicon ions was detected at the implant site (rabbit paraspinal muscle) or in the internal organs/structures mentioned above after 24 weeks, and no abnormal histological appearance was detected. It was also reported that both silicon derived from implanted silicate BG and boric acid derived from implanted borate BG are harmlessly excreted from the body through urine in metabolic processes [97,98].
Specific metallic cations with the ability to activate the signaling pathways involved in peripheral nerve regeneration deserve to be incorporated in BGs and investigated in the near future. Mg 2+ ions were reported to promote the expression of nerve growth factor (NGF) and neurotrophic factor 3 (NTF3) in Schwann cells [99]. Strontium [100] and gadolinium [101] can activate the Ras/MAPK and PI3K/Akt signaling pathways, respectively.
It is also worth mentioning that BGs could have a pivotal role in nerve repair due to their proven pro-angiogenic properties [63], since some studies have recently highlighted the critical role of angiogenic signaling pathways in improving the quality of peripheral nerve repair [41].
The optimization of the glass-to-polymer ratio in composite biomaterials is also crucial for tailoring the physical, biological, and mechanical properties of the nerve guidance conduit. The use of electrically conductive biodegradable polymers [102] as matrixes for the production of BG/polymer composites also deserves to be investigated; in this way, the electrical insulation properties of BGs would be somehow compensated.
Another strategy to improve the outcome of nerve guidance conduits could be the incorporation of chemicals, drugs, or exogenous neurotrophic factors to be released once the biomaterial has been implanted [103,104]. Such drugs could be introduced into the lumen of hollow resorbable BG fibers undergoing progressive dissolution after implantation, thus allowing biomolecules to be released. Mesoporous bioactive glasses (MBGs) [105] could also be considered as versatile platforms for the controlled release of neural drugs and growth factor. For example, there is abundant literature showing that MBGs are excellent carriers for ibuprofen, which is commonly used due to its anti-inflammatory properties [106]; furthermore, this drug was reported to improve nerve functional recovery and increase the area and thickness of myelinated axons, too [107,108], and it could therefore be encapsulated in MBG-based drug delivery systems addressing neuroregeneration. | 2020-05-21T09:07:47.072Z | 2020-05-15T00:00:00.000 | {
"year": 2020,
"sha1": "f194bef25dd54c96c39a3ab1dd2c7d311c86dc4f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/10/10/3421/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "92b9e552797ea04385cb81ec80db8c44973c4ae5",
"s2fieldsofstudy": [
"Biology",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
17891975 | pes2o/s2orc | v3-fos-license | A PRIM approach to predictive-signature development for patient stratification
Patients often respond differently to a treatment because of individual heterogeneity. Failures of clinical trials can be substantially reduced if, prior to an investigational treatment, patients are stratified into responders and nonresponders based on biological or demographic characteristics. These characteristics are captured by a predictive signature. In this paper, we propose a procedure to search for predictive signatures based on the patient rule induction method. Specifically, we discuss selection of a proper objective function for the search, present its algorithm, and describe a resampling scheme that can enhance search performance. Through simulations, we characterize conditions under which the procedure works well. To demonstrate practical uses of the procedure, we apply it to two real-world data sets. We also compare the results with those obtained from a recent regression-based approach, Adaptive Index Models, and discuss their respective advantages. In this study, we focus on oncology applications with survival responses.
Introduction
There is an increasing need to develop predictive signatures to identify the right patient population for a treatment. By enriching responders in a target population, signature-based patient stratification reduces the attrition rate of drug development projects in clinical phases and at the same time helps maximize the benefit patients receive from pharmaceutical intervention. In general, a signature captures some biological or demographic characteristics of patients. A signature-positive group is a population that satisfies certain criteria based on a signature. A population that does not meet the criteria is defined as a signature-negative group. In this paper, we consider a two-arm design situation where patients in the treatment arm are treated by an investigational treatment and patients in the control arm receive a standard of care (SOC). We say that a signature has predictive value if patients in a signature-positive group respond better in the treatment arm than in the control arm, and the treatment effect for patients in the signature-positive group is greater than the one for signature-negative patients. Therefore, a predictive signature identifies a subset of patients who should be treated by an investigational treatment rather than an SOC and attempts to maximize treatment effect in a signature-positive population. Consider an example (Figure 1) where the treatment and control cohorts show no difference in terms of patients' survival. After a predictive signature is learned and applied to patients' data at baseline, as shown in Figure 2, the signature-positive patients in the treatment arm have prolonged survival compared with those in the control arm, but we see a reverse pattern for the signature-negative patients. This example will serve as a case study discussed in detail in this paper (Section 5). A promising method that can be applied to signature discovery is the patient rule induction method, or PRIM, proposed by Friedman and Fisher [1]. PRIM aims at finding bumps in a population 'space'-a bump is defined by a subgroup of the population if the subgroup has a relatively high mean value of an objective function that describes a certain characteristic of the population. When efficacy is the characteristic of interest, bumps or subgroups in PRIM's formulation should correspond to signature-positive groups. The way PRIM naturally addresses the patient-stratification problem makes the method a good candidate approach for learning predictive signatures. Moreover, because PRIM describes a subgroup by a set of decision rules based on variables obtained for the population, these rules directly define a signature-this simplicity makes them easily applicable in clinics, which is another desirable property in signature development. An example of such rules would be as follows: A patient is signature-positive if his or her target gene's expression is greater than a threshold and his or her safety biomarker's protein level is less than a cutoff. Finally, we note that the word 'patient' in PRIM is an adjective, rather than a noun. It indicates that the rule induction method is not hasty or impulsive, in contrast to the aggressive behaviors of other methods (for example, classification and regression trees (CART)), which have been discussed and compared with PRIM in [1].
Rather than taking a large step that seems optimal for the current search iteration, PRIM adopts a smaller step that may be less optimal; by doing so, it increases the likelihood that later steps can compensate for previous mistakes or exploit structures discovered by earlier steps. Such patience helps the method induce rules superior to those produced by aggressive approaches.
Many efforts have been made to directly apply or adapt PRIM for finding prognostic rules in different biomedical areas. Because a dose-intensive treatment may only target patients at high risk due to its associated toxicity, LeBlanc et al. [2] tried to identify these patients by a PRIM-based method with survival data and six demographic or biomarker variables. They proposed two major operations beyond PRIM: additional variable selection and making the search follow a pre-determined direction of a variable, where the direction indicates whether a variable is positively or negatively correlated with responses based on regression. In our study, we neither assume that such a direction is known a priori nor determine it in advance, and we do not impose the constraint that the search should follow only one direction of a variable. Later, LeBlanc et al. [3] simplified their algorithm and changed their objective function from previously employed hazard rates to hazard ratios based on Cox proportional hazards regression models. Liu et al. [4] applied PRIM to tissue microarray data on eight biomarkers of patients with renal cell carcinoma for identifying high-risk patients. They proposed to use deviance residuals (based on martingale residuals of an intercept-only Cox regression model) as the objective function for PRIM to optimize. Dyson et al. [5] employed PRIM to choose combinations of genetic and environmental risk factors that define groups of individuals having significantly different risk levels of ischemic heart disease. Using PRIM, Nannings et al. [6] discovered subgroups at a very high risk of dying in the population of very elderly intensive-care patients and revealed important prognostic factors from demographic, diagnostic, physiologic, laboratory, and discharge data. For a modified version of PRIM, Polonik and Wang [7] presented a theoretical characterization of its outcomes and derived its convergence rates.
In drug development, the value of a signature substantially increases if it can predict drug response as opposed to just predicting disease risk. However, it has not been well studied how PRIM can be properly applied in predictive-signature development. Kehl and Ulm [8] made an attempt to apply PRIM for identifying such signatures. Nevertheless, their method relies on a strong assumption that a good prognostic model can be built for a control arm; martingale residuals from fitting the prognostic model in a treatment arm are then used to indicate efficacy, which is optimized by PRIM. Our approach employs a different objective function and thus avoids making that assumption. With respect to simulation design and case studies, the previous work was concerned with cardiology, while our study sheds light on PRIM application in oncology trials. There are many tree-based methods for patient stratification; they can be better contrasted with our approach after readers have a good understanding of our objective function and search algorithm, so we defer the related discussion to Section 6.
In this study, we make the following unique contributions to develop a PRIM-based procedure searching for predictive signatures with survival data as the measure of clinical outcome: (1) choosing an appropriate objective function together with a constraint for the procedure and comparing them with the objective function employed by Adaptive Index Models, or AIM [9], to highlight the key advantage of our choice; (2) developing the procedure with an automatic parameter-tuning step and coupling the procedure with a resampling scheme to help PRIM achieve more effective signatures; (3) investigating the procedure's performance in some typical scenarios of oncology clinical trials by simulation and thus characterizing conditions under which the procedure functions well; (4) demonstrating the applicability of the procedure on two real-world data sets and comparing its stratification results with those produced by AIM to present their respective advantages.
This paper is organized as follows. Section 2 considers objective functions and a related constraint for PRIM in the context of discovering predictive signatures. We then describe our search procedure based on PRIM's framework in Section 3. We present results from a simulation study in Section 4 and from two case studies of real-world data sets in Section 5. We conclude this paper with a discussion in Section 6.
Objective function
We begin this section by introducing a model formulation to motivate an objective function and a related constraint we adopt for PRIM and then compare them with AIM's objective function to reveal their different implications for identifying predictive signatures. We refer to variables that define a signature as signature variables and other irrelevant variables as noise variables. Because we focus on applications with survival data, we describe the formulation with a proportional hazards regression model, $\lambda(t \mid T, X) = \lambda_0(t)\exp(L)$, where $t$ is time, $T$ is a treatment factor, and $X$ denotes signature variables. For a patient indexed by $i$, a linear hazard score is modeled as follows:

$L_i = \beta_1 T_i + \beta_2 T_i Z(X_i) + \beta_3 Z(X_i)$,  (1)

where $T_i = 0$ indicates that the patient is in the control arm and $T_i = 1$ that the patient is in the treatment arm under a two-arm design, and the signature indicator $Z(X_i) = 0$ if the patient is stratified into a signature-negative group and $Z(X_i) = 1$ if the patient is in a signature-positive group. Accordingly, $\beta_1$ indicates the treatment effect for the signature-negative group, and $\beta_1 + \beta_2$ the treatment effect for the signature-positive group. In principle, any signature can define a signature-positive group (and thus a signature-negative group with complementary rules) as long as it describes some characteristics of patients, but the signature may not be predictive. To define a predictive signature, we need to discuss the following two conditions on the treatment effects: (1) $\beta_1 + \beta_2 < 0$, the treatment-effect condition; (2) $\beta_1 + \beta_2 < \beta_1$, which reduces to $\beta_2 < 0$, the interaction-effect condition.
The first condition is required to ensure that signature-positive patients respond better to an investigational treatment compared with an SOC. The second condition means that the treatment effect in the signature-positive group should be greater than that in the signature-negative group; that is, the hazard ratio in the signature-positive group is smaller than the one in the signature-negative group. On the other hand, in practice, estimates $\hat{\beta}_1$ and $\hat{\beta}_2$ satisfying the inequality $\hat{\beta}_1 + \hat{\beta}_2 < \hat{\beta}_1$ do not guarantee that the statistical significance of $\hat{\beta}_1 + \hat{\beta}_2$ is greater than the significance of $\hat{\beta}_1$. If the sample size of the signature-positive group is small and thus leads to a large standard error of $\hat{\beta}_1 + \hat{\beta}_2$, the significance of $\hat{\beta}_1 + \hat{\beta}_2$ can be less than that of $\hat{\beta}_1$, suggesting an undesirable patient stratification. Therefore, to avoid this case, we need the following constraint: the signature-positive group's treatment effect should be more significant than that of the signature-negative group. We call this the interaction-effect constraint. Similarly, the treatment-effect condition leads to the requirement that $\hat{\beta}_1 + \hat{\beta}_2$ should be significantly smaller than zero. We refer to this as the treatment-effect requirement. For a signature to be predictive in practice, it should satisfy both the treatment-effect requirement and the interaction-effect constraint.
To avoid making assumptions of specific models, we adopt the approach of directly employing p-values of two-sample comparisons to indicate treatment effects. In applications with survival data, we use one-sided log-rank tests for comparisons. This approach was proposed by Lin et al. [10], but they were only concerned with the treatment-effect requirement and did not consider the interaction-effect constraint. We describe our objective function as follows. Let pv+ indicate the significance of a one-sided test that examines whether signature-positive patients respond better to an investigational treatment compared with an SOC; denote by pv− the significance of the same test for patients in a signature-negative group. pv+ and pv− can be used to capture the essence of the treatment-effect requirement and the interaction-effect constraint. Previously, we specified the two criteria in the setting of regression; now, we conceptually map the treatment-effect requirement to a small pv+ and map the interaction-effect constraint to the constraint pv+ < pv−. To achieve a maximally beneficial treatment effect in the signature-positive group, we choose pv+ as the objective function for PRIM to optimize. To drive the search toward satisfying the interaction-effect constraint, we enforce pv+ < pv− in PRIM's search process. Such enforcement is not redundant. It is true that the constraint is automatically assured if a minimal pv+ achieved by PRIM is the global minimum; however, in cases where the minimal value is a local mode, pv+ could be greater than pv−, violating the interaction-effect constraint. Although PRIM can end at a local optimum with respect to pv+, its stratification is still useful if the interaction-effect constraint holds. Therefore, enforcing the constraint in the search process helps generate the desired stratification. Moreover, when initial search steps start to explore a search space, it is possible that a minimal pv+ used for making local decisions is greater than pv−, which drives the search in a potentially less meaningful direction. Readers may understand this statement better after reading through the search procedure in Section 3. Although the aforementioned patient property of PRIM allows later steps to remedy mistakes made by previous search steps, these mistakes may still lead to less optimal solutions. We will demonstrate this point in our case study of a real-world data set.
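For concreteness, the following R sketch computes pv+ and pv− for a candidate stratification via one-sided log-rank tests. The data layout (columns time, event, trt, and a positive-group indicator z) is an assumption made here for illustration, not part of the procedure's specification.

library(survival)

# One-sided log-rank p-value that the treatment arm (trt = 1) has better
# survival than the control arm (trt = 0).
one_sided_logrank <- function(time, event, trt) {
  sd <- survdiff(Surv(time, event) ~ trt)
  p2 <- pchisq(sd$chisq, df = 1, lower.tail = FALSE)  # two-sided p-value
  # Fewer observed than expected events in the treatment arm indicates benefit.
  if (sd$obs[2] < sd$exp[2]) p2 / 2 else 1 - p2 / 2
}

# pv+ and pv- for a stratification z (1 = signature-positive, 0 = negative):
# pv_pos <- with(subset(df, z == 1), one_sided_logrank(time, event, trt))
# pv_neg <- with(subset(df, z == 0), one_sided_logrank(time, event, trt))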
Tian and Tibshirani [9] developed AIM for stratifying a population into different risk groups and for detecting treatment-marker interactions. AIM searches for $K$ covariates $x_1, \ldots, x_K$ and corresponding cutoffs $c_1, \ldots, c_K$ to construct an index score $w = \sum_{j=1}^{K} I(x_j^* > c_j)$, where $I(\cdot)$ is a binary indicator function and $x_j^*$ is either a covariate $x_j$ or its negative value $-x_j$. To detect a possible treatment-marker interaction, AIM maximizes a test statistic for the treatment-score interaction term $Tw$ in the linear hazard score $L = \gamma_1 T + \gamma_2 T w$, where $T$ is a treatment factor with the same definition as in Eq. 1 and $w$ is the aforementioned index-score variable. The authors suggested that patients can be stratified into a low-score group and a high-score group by comparing their index scores with the median of all index scores. The high-score group defines a signature-positive group given a negative coefficient of the interaction term, with the remaining patients forming a signature-negative group; in case the coefficient sign is positive, the low-score group defines the signature-positive group. In this way, the score-based patient stratification essentially defines $Z$ in Eq. 1, with $Z = 1$ for patients in the signature-positive group and $Z = 0$ for the signature-negative patients. Given this definition, AIM's formulation can be mapped or transformed into Eq. 1. Because such transformation does not affect any conclusion we draw, we will refer to the linear hazard score in Eq. 1 as the formulation for further discussion to maintain notational consistency. Also, for simplicity, unless there is a need for detailed specification, we will use the terms treatment-effect condition and interaction-effect condition to indicate the two general requirements of a predictive signature instead of referring to the various statistics employed by different approaches for these two conditions. By focusing on the treatment-score interaction term, AIM directs the search to optimize the interaction-effect condition. However, a detected interaction effect may or may not lead to a predictive signature because the treatment-effect condition is ignored. Specifically, if $\beta_1 + \beta_2 \geq 0$, as illustrated by the thicker line in Figure 3 (left), the investigational treatment is no better than the SOC in the signature-positive group. In this situation, the interaction effect can still be significant if the investigational treatment is significantly worse than the SOC in the signature-negative group, as shown in Figure 3 (right). Therefore, the resulting signature-positive group is not useful for identifying responders to the investigational treatment compared with the SOC; rather, the resulting signature-negative group reveals patients to whom the investigational treatment is even more harmful. If in this case a predictive signature exists but its interaction effect is less significant than the one just demonstrated, the signature will be missed by the search in AIM. Hence, AIM, or in general a method only optimizing the interaction-effect condition, has limited utility for discovering predictive signatures. On the contrary, our choice of objective function aims at enriching responders to an investigational treatment in a signature-positive group and thus does not have such a limitation.
Moreover, even if we assume that AIM or a similar method can ensure the treatment-effect condition is satisfied while optimizing the interaction-effect condition, its resulting signature can be less desirable than one produced by a method optimizing the treatment-effect condition while making sure the interaction-effect condition is satisfied. Consider the following example. Imagine that there exist two predictive signatures for a data set: signature A has the maximum (treatment-score) interaction effect but a small beneficial effect of the investigational treatment over the SOC in its signature-positive group; signature B leads to a much larger, or the largest, beneficial treatment effect for signature-positive patients but has a smaller interaction effect than signature A. Clearly, signature B is more helpful than signature A in identifying patients for maximizing the efficacy of the investigational treatment. However, signature A would be reported by any method whose result is driven by the optimality of the interaction-effect condition. Therefore, in order to detect a predictive signature such as signature B, we recommend the approach that treats the treatment-effect condition as the primary objective function to be optimized and imposes the interaction-effect condition as a constraint that should not be violated, as opposed to an approach that treats the interaction-effect condition as the primary objective function and the treatment-effect condition as the constraint.
Search procedure
In this section, we present our procedure based on PRIM's framework in the context of searching for predictive signatures. We also propose an automatic parameter-selection step and a resampling scheme to improve search performance. For the sake of simplicity, this study is concerned with continuous signature variables. This is not a restriction: with the same objective function, it is easy to extend the procedure to handle categorical variables in the way discussed by Friedman and Fisher [1].
The framework
Let us first introduce the notation involved in the procedure description (Algorithm 1). Assume that there are $p$ variables, and let $x_j$ denote a variable indexed by $j$, for $j = 1, \ldots, p$. Let $x_{j\min}$ and $x_{j\max}$ be the minimum and maximum values of $x_j$, respectively. $x_{ij}$ is the value of $x_j$ for patient $i$. The set of indices of patients in a signature-positive group is denoted by $G_+$. For patients indexed by $G_+$, $x_{j(\alpha)}$ denotes a quantile of their $x_j$, corresponding to a probability $\alpha$ in the lower tail. Following this convention, $x_{j(0)}$ is the minimum value of $x_j$ and $x_{j(1)}$ the maximum value of $x_j$. For a group of patients indexed by $G$, $P_G$ denotes the p-value of a one-sided test that examines whether patients receiving an investigational treatment respond better than patients treated by an SOC. Because the algorithm description employs set operations, we clarify the relevant symbols here. Given two sets $A$ and $B$, $A \cap B$ denotes the intersection between $A$ and $B$; $A \cup B$ is the union of the two sets; $A \setminus B$ denotes the set of elements that belong to $A$ but not to $B$; and $\bar{A}$ is the complement of $A$. With the aforementioned notation, we are ready to present the algorithm. In line 1 of Algorithm 1, PRIM first splits the whole population in a study into two sets, $D_1$ and $D_2$. In $D_1$, it learns a series of candidates for a signature-positive group, as detailed by lines 2-18. Then, one of the learned candidates is chosen to be reported if its corresponding grouping in $D_2$ achieves the best stratification, as indicated by line 19. At this step, by treating decision rules associated with candidates as models, PRIM essentially utilizes data in $D_2$ to select a final model. We will discuss more on this issue in Section 3.2. In our simulation study and case studies, we assign an equal number of samples to $D_1$ and $D_2$.
Learning candidates consists of three processes: peeling (lines 4-8), pasting (lines 9-13), and dropping (lines 14-18). While peeling aims at shrinking a candidate to generate a new one, the other two processes attempt to create new candidates via candidate expansion. Specifically, starting with a trivial candidate containing all patients (line 3), PRIM tries to peel different subsets of patients who have extremely small or large values of a variable in lines 5-6. A parameter $\alpha$ ($0 < \alpha < 1$) specifies the proportion of patients considered to be peeled from a current candidate group. A peeling occurs if its resulting candidate has the best stratification, as shown by lines 7-8. The peeling process is repeatedly applied to newly produced candidates until a pre-defined minimum support (or sample size) of a candidate is reached. Then, from the smallest or largest values of a variable in the current candidate, PRIM tentatively pastes patients who have immediately smaller or larger values back to the group, as indicated by lines 10-11. The amount to be pasted is up to $\alpha$ of the current group size. A pasting is actually made if it improves the stratification most, as suggested by lines 12-13, and pastings are repeated until no improvement can be gained. Furthermore, PRIM drops a rule that defines the current candidate and thus includes patients who were previously excluded according to the rule, as shown by lines 15-16. In lines 17-18, a rule is chosen to be dropped if its removal produces a candidate with the best stratification. Rules are sequentially dropped in this fashion to generate new candidates until no rule can be further dropped. The stop of the dropping process completes the candidate generation (lines 3-18) for a specific $\alpha$ value. Candidate generation continues for other $\alpha$ values as indicated in line 2.
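To make the peeling step concrete, the following R sketch evaluates all candidate peelings for one step and picks the best admissible one. It reuses the one_sided_logrank helper sketched in Section 2 and the assumed column names (time, event, trt); it simplifies Algorithm 1 by omitting pasting and dropping.

# One peeling step: 'pos' is a logical index of the current candidate group,
# 'vars' the names of input variables, 'alpha' the peeling proportion.
peel_once <- function(df, vars, pos, alpha = 0.05, min_support = 20) {
  best <- list(pv = Inf, pos = NULL)
  for (v in vars) {
    x <- df[[v]][pos]                      # values within the current candidate
    for (side in c("low", "high")) {
      cut <- quantile(x, probs = if (side == "low") alpha else 1 - alpha)
      keep <- if (side == "low") df[[v]] > cut else df[[v]] < cut
      cand <- pos & keep                   # peel an alpha-tail of one variable
      if (sum(cand) < min_support) next
      pv_pos <- with(df[cand, ], one_sided_logrank(time, event, trt))
      pv_neg <- with(df[!cand, ], one_sided_logrank(time, event, trt))
      # Enforce the interaction-effect constraint pv+ < pv- during the search.
      if (pv_pos < pv_neg && pv_pos < best$pv) {
        best <- list(pv = pv_pos, pos = cand)
      }
    }
  }
  best  # best$pos is NULL when no admissible peeling exists
}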
In the peeling process, the number of possible peelings (until all data are consumed by peeling) is around $(\log C_0 - \log n)/\log(1 - \alpha)$, where $C_0$ is a pre-defined minimum support or sample size of signature-positive groups and $n$ is the sample size of a study; for example, with $n = 3200$, $C_0 = 20$, and $\alpha = 0.05$, this allows roughly 99 peelings. Because there are $O(\log n)$ peelings and $p$ potential signature variables to be examined in one peeling, the number of computing operations is in the order of $O(p \log n)$. The same computational complexity holds for pasting. For the dropping process, the complexity is $O(p)$ because at most $2p$ decision rules are sequentially dropped. After dropping, $O(p \log n)$ candidate rule sets need to be tested in $D_2$. Therefore, the complexity of the algorithm is $O(p \log n)$. At the end of Section 4, we will present PRIM's running time in a simulation scenario given different numbers of variables and different sample sizes.
Parameter and candidate selection
A final candidate is selected among all candidates learned in the following process: (i) given a value of $\alpha$, the parameter controlling the number of patients to be peeled and pasted, the process of peeling, pasting, and dropping learns a series of candidates (lines 3-18 of Algorithm 1); (ii) different $\alpha$ values induce different series of candidates by repeating the aforementioned learning (line 2). PRIM's inventors suggested that a pre-determined $\alpha$ value between 0.05 and 0.10 tends to work well because a small value encourages the search procedure to be patient, the key feature making PRIM superior to other aggressive approaches. In this case, only the first learning component generates candidates.
Alternatively, they recommended that after applying PRIM with different $\alpha$ values, the user can choose a value that produces a candidate striking a trade-off between a p-value indicating the treatment effect in a signature-positive group (pv+ for patients in the withheld data $D_2$) and the corresponding group size, or a trade-off between the p-value and the number of corresponding rules. The former trade-off intends to increase a signature's prevalence by sacrificing stratification performance, because a larger p-value may allow more patients to be included in a signature-positive group; the latter prefers a simpler rule set over the one achieving the smallest p-value. With subjective judgment on these trade-offs, the user can select the candidate and a corresponding $\alpha$ value.
These choices, however, rely on subjective judgment and can be suboptimal in terms of stratification performance. Our strategy is to generate multiple series of candidates corresponding to different $\alpha$ values (line 2 of Algorithm 1) and then select the $\alpha$ value leading to a candidate that obtains the best stratification performance indicated by pv+ for patients in $D_2$ (line 19). In this way, the parameter value and the candidate can be automatically decided. We prescribe the following values for $\alpha$: 0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.20, 0.30, 0.40, and 0.50. A finer resolution is used between 0.05 and 0.10 because small values are more likely to encourage PRIM's patience and thus lead to a better solution. For an illustration, Figure 4 shows the minimal pv+ value (among pv+ values of a series of candidates) given each pre-specified value of $\alpha$ in a case-study data set (Section 5); no stratification was generated given $\alpha = 0.50$, and thus no associated pv+ is visualized. In this case, $\alpha = 0.09$ and the corresponding candidate achieving the smallest pv+ were selected according to our strategy. It is of note that because pv+ for patients in $D_2$ is employed for the aforementioned selection, it should not be used as a measure of PRIM's predictive performance; instead, we will describe a cross-validation (CV) measure in Section 5 to indicate the generalizability of a learning method on future data.
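The automatic selection can be expressed as a simple loop; in this sketch, learn_candidates() and evaluate_pv_pos() are hypothetical stand-ins for lines 3-18 of Algorithm 1 run on $D_1$ and for applying a candidate's rules to $D_2$, respectively.

alphas <- c(0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.20, 0.30, 0.40, 0.50)
best <- list(pv = Inf, candidate = NULL, alpha = NA)
for (a in alphas) {
  for (cand in learn_candidates(D1, alpha = a)) {  # one series per alpha
    pv <- evaluate_pv_pos(cand, D2)                # pv+ in the withheld data
    if (pv < best$pv) best <- list(pv = pv, candidate = cand, alpha = a)
  }
}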
Multiple rule sets via covering
In the aforementioned subsections, we have explained how the procedure in Algorithm 1 finds a single set of conjunctive rules for defining a signature-positive group. As suggested by Friedman and Fisher [1], the same procedure can be applied repeatedly to discover multiple rule sets via a rule induction approach called covering [11]. These rule sets can collectively define a signature-positive group. Specifically, we first exclude from the data signature-positive patients who satisfy existing rules and then apply the search procedure to the remaining patients to learn another set of conjunctive rules. The disjunction of the newly discovered rules and the previously reported rules defines a new signature-positive group, which is the union of patients satisfying the new rules and patients satisfying the existing rules. Such repeated application stops when no rules can be found or the treatment effect in a resulting signature-positive group is less significant than the treatment effect in the original population.
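The covering loop can be sketched as follows, assuming the hypothetical helpers prim_rules() (Algorithm 1 returning one conjunctive rule set, or NULL when none is found) and applies_to() (a logical vector of patients satisfying a rule set), together with the one_sided_logrank helper from Section 2:

covering <- function(df) {
  pv_orig <- with(df, one_sided_logrank(time, event, trt))
  rule_sets <- list()
  pos <- rep(FALSE, nrow(df))
  repeat {
    rs <- prim_rules(df[!pos, ])          # search among uncovered patients
    if (is.null(rs)) break                # no further rules can be found
    new_pos <- pos | applies_to(rs, df)   # disjunction with existing rules
    pv_new <- with(df[new_pos, ], one_sided_logrank(time, event, trt))
    if (pv_new > pv_orig) break           # stop if the treatment effect degrades
    pos <- new_pos
    rule_sets <- c(rule_sets, list(rs))
  }
  list(rules = rule_sets, positive = pos)
}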
Resampling
As we will see in the simulation study (Section 4), PRIM's performance degrades as the number of noise variables increases. To help the algorithm focus on searching cutoffs for relevant variables, we propose the following resampling scheme to reduce the search scope: PRIM is repeatedly applied to random samples of the original data, and only the top $k$ variables that are most frequently returned by PRIM are selected as candidate variables for further consideration. PRIM then searches for signatures in the original data with the selected candidate variables as input variables.
Specifically, let $S_1, S_2, \ldots, S_{100}$ be 100 random samples of the original data. Sampling is performed without replacement. In each sample, we draw 63.2% of the original observations; that is the same as the average number of distinct observations in a bootstrap sample [12]. Bootstrapping is not directly utilized because replicated values would cause peelings not to exclude the expected number of patients controlled by $\alpha$. Given a random sample $S_h$, PRIM proceeds as usual by first splitting $S_h$ into $D_1$ and $D_2$ data sets and then searching for signatures. If PRIM reports $x_j$ as one of the signature variables for $S_h$, the selection count of $x_j$ is incremented by one; after all 100 samples are processed, variables are ranked by their selection counts, and $x_{(j)}$ denotes the variable with its selection frequency at the $j$-th rank. Given the ranking, $x_{(1)}, \ldots, x_{(k)}$ are chosen as the input variables for PRIM, and PRIM is applied to the original data. The scheme was motivated by our observation that although PRIM cannot exactly detect a complete set of true signature variables in the presence of noise variables, it can frequently reveal some of them. The underlying assumption of the scheme is that variables repeatedly selected by PRIM in random samples of a population are likely to be true signature variables. For later reference, we call the search procedure coupled with the resampling scheme Re-PRIM.
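The variable-screening step of Re-PRIM can be sketched as below; prim_signature() is a hypothetical wrapper that runs Algorithm 1 (including its internal $D_1$/$D_2$ split) and returns the names of the variables appearing in the reported rule set.

select_top_k <- function(df, vars, k, n_rep = 100, frac = 0.632) {
  counts <- setNames(integer(length(vars)), vars)
  for (h in seq_len(n_rep)) {
    idx <- sample(nrow(df), size = floor(frac * nrow(df)))  # without replacement
    chosen <- prim_signature(df[idx, ])
    counts[chosen] <- counts[chosen] + 1
  }
  names(sort(counts, decreasing = TRUE))[seq_len(k)]  # top-k ranked variables
}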
Simulation setup
To study the procedure's performance under different scenarios, we first describe a simulation setup as a baseline scenario and then compare it with other scenarios having different parameter settings. For signature-positive patients, survival time $S$ in the control arm follows an exponential model with parameter $\lambda^{+}_{ctl}$, $S \sim \exp(\lambda^{+}_{ctl})$, and survival time in the treatment arm $S \sim \exp(\lambda^{+}_{trt})$. For signature-negative patients, $S \sim \exp(\lambda^{-}_{ctl})$ for the control arm and $S \sim \exp(\lambda^{-}_{trt})$ for the treatment arm. Survival time is randomly right-censored with probability 0.2. We assume $\lambda^{+}_{trt} = 0.05$ and $\lambda^{+}_{ctl} = \lambda^{-}_{trt} = \lambda^{-}_{ctl} = 0.1$. The hazard ratio $\lambda^{+}_{trt}/\lambda^{+}_{ctl} = 0.5$ indicates a reasonable treatment effect for the signature-positive group, while the ratio $\lambda^{-}_{trt}/\lambda^{-}_{ctl} = 1$ represents no treatment effect for the signature-negative group.
Two signature variables were simulated from a uniform distribution: $X_1, X_2 \sim U(0, 1)$. The conjunctive rules $0.2 \leq x_1 \leq 0.9$ and $0.2 \leq x_2 \leq 0.9$ define a patient to be signature-positive if his or her $x_1$ and $x_2$ values fall into these ranges. Later, we also examine a situation where the number of signature variables is increased to four. The percentage of signature-positive patients is known as prevalence; the prevalence given the aforementioned rules is around 50%. In addition to the signature variables, we also considered some noise variables as input variables of PRIM. A noise variable is generated from the same uniform distribution but is not involved in the signature definition. Denote by $p_n$ the number of noise variables. We examined settings where $p_n$ = 0, 2, 4, 6, 8, 32, 128. With 32 or 128 noise variables, we tested the algorithm at the limit of its working conditions. It is less feasible to involve many more variables than this range for PRIM to properly identify predictive signatures in the settings of clinical trials, due to the challenges of limited sample sizes and realistic effect sizes in these applications. On the other hand, in our experience, it is not atypical for an analysis task to request a signature learned from 8 or 10 variables. We will also evaluate PRIM's performance with eight variables on two real-world data sets later. In another study of rule-based subgroup identification [13], Lipkovich et al. conducted their simulation study on a similar scale in terms of the number of variables (given a sample size of 900) with continuous responses, reflecting the same challenges we face. We consider the total number of patients, or the sample size, n = 200, 400, 800, 1600, and 3200. For every setting, an equal number of patients were assigned to each arm in each signature group. The range of sample sizes reflects situations of large clinical phase II or III studies. As we will see later in this section, fewer than 200 samples are not sufficient for PRIM to work in most settings involving noise variables.
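A minimal R sketch of this data generation follows, under one simple reading of the censoring mechanism (an observation is flagged as censored with probability 0.2 at its observed value); the function name and column layout are ours for illustration.

simulate_trial <- function(n) {
  x1 <- runif(n); x2 <- runif(n)
  pos <- x1 >= 0.2 & x1 <= 0.9 & x2 >= 0.2 & x2 <= 0.9  # true signature rules
  trt <- numeric(n)                 # balanced arms within each signature group
  trt[pos]  <- sample(rep(0:1, length.out = sum(pos)))
  trt[!pos] <- sample(rep(0:1, length.out = sum(!pos)))
  rate <- ifelse(pos & trt == 1, 0.05, 0.1)  # lambda+trt = 0.05, others = 0.1
  data.frame(x1, x2, trt,
             time  = rexp(n, rate = rate),
             event = rbinom(n, 1, 0.8))      # censored with probability 0.2
}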
We refer to the aforementioned parameter settings as scenario 1. Later, we will report results on scenarios with a different number of signature variables, different prevalence, and a different effect size; see Table I for reference. The settings of these scenarios will be detailed when their results are presented. For each scenario, we simulated 1000 data sets and provide a summary of their performance. We compared the approach of collecting a single set of conjunctive rules (by applying the search procedure once) with the approach of collecting multiple rule sets by covering; they share similar performance in the simulation study. We will discuss results from the former approach because it allows us to directly compare estimated lower and upper bounds for signature variables with their true values. The minimum support of signature-positive groups was set at 20 to control when peeling stops. To give an idea of how fast the implemented procedure is, we recorded its running time for all the data sets in scenario 4 and will report a timing summary when we discuss the results of that scenario.
Performance measures
After PRIM is applied to a simulated data set, one can examine how well the learned rules recover the true signature at three levels, referred to as exact, inclusive, and marginal detection in the results below (with $n_E$ and $n_I$ denoting the numbers of runs achieving exact and inclusive detection, respectively). As an overall measure for patient stratification, classification measures such as sensitivity or recall $r_{sens}$, specificity $r_{spec}$, and precision $r_{prec}$ are reported. In the framework of a two-class problem, signature-positive patients are defined as observations in a positive class (or success class), and signature-negative patients are labelled as observations in a negative class (or failure class). Given these two classes, $r_{sens}$ denotes the proportion of true signature-positive patients detected among true signature-positive patients; $r_{spec}$ is defined as the proportion of true signature-negative patients detected among true signature-negative patients; $r_{prec}$ is the proportion of true signature-positive patients detected among signature-positive patients claimed by the procedure. Note that $r_{sens}$, $r_{spec}$, and $r_{prec}$ were computed in testing data rather than training data: a signature was first learned from one data set and then applied to the other data sets in the same simulation setting, and performance measurements in testing data sets were averaged to evaluate the generalizability of a method. For example, given a scenario of 200 samples in a data set, a signature is learned from the data set and then used to stratify samples in the other 999 data sets under the same simulation parameter setting (with 200 samples in each of the testing data sets). After stratification, the numbers of true/false positives and true/false negatives are collected for each testing data set. Based on the classification results, sensitivity, specificity, and precision are calculated; these three numbers correspond to the signature learned from one data set. Because there are 1000 data sets in each parameter setting, this process is repeated for 1000 learned signatures, and the results are averaged over the 1000 signatures.
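For reference, the three classification measures on a testing data set can be computed as in this sketch, where truth and pred are logical vectors of true and claimed signature-positive status (the names are ours):

strat_measures <- function(truth, pred) {
  c(sens = mean(pred[truth]),    # detected among true signature-positives
    spec = mean(!pred[!truth]),  # detected among true signature-negatives
    prec = mean(truth[pred]))    # true positives among claimed positives
}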
Because the goal of patient stratification is to identify a subpopulation having an improved treatment effect, a direct performance check is to examine whether the p-value indicating the treatment effect is improved in a signature-positive group. Specifically, let pv denote the p-value of a one-sided test that examines whether patients receiving an investigational treatment respond better than patients treated by an SOC; given pv+ and pv−, respectively, generated from the same test for patients in a signature-positive group and in a signature-negative group (as previously defined in Section 2), we check whether pv+ is smaller than pv, that is, whether we observe better efficacy in the signature-positive group; we also calculate pv− to see whether the interaction-effect constraint pv+ < pv− holds in a stratification. Consistent with the calculation of the classification measures, p-values were computed in each testing data set, resulting in 999 p-values of each type (for example, pv+) for a learned signature and thus 1000 × 999 p-values of each type for the 1000 learned signatures in each parameter setting. Medians of the p-values of each type were reported as performance measures because of the skewness of p-value distributions.
A baseline scenario
When no noise variable is involved (settings with $p_n = 0$ in Table II), even with a sample size n = 200, PRIM can detect the true signature variables and their corresponding lower and upper bounds exactly. However, when noise variables are also given as input, the number of exact detections $n_E$ substantially drops. For example, there are only 45 hits of 1000 runs in exact detection for n = 200 and $p_n = 4$. To achieve a reasonable exact-detection rate and accurate bound estimation in the presence of noise variables, sample sizes need to be no less than 3200 for $p_n \leq 8$, as highlighted by bold fonts in Table II. On the other hand, requiring a smaller sample size for inclusive detection, the procedure can return true signature variables and their bounds in a considerable number of runs, for example, $n_I = 714$ for the setting with n = 400 and $p_n = 2$, and $n_I = 702$ for n = 800 and $p_n = 4$. There is also a good number of hits ($n_I = 786$) with the sample size 3200 for 32 noise variables. This partially correct detection can also be observed for marginal detection. Because of the partial detection, the procedure achieves reasonably high $r_{sens}$, $r_{spec}$, and $r_{prec}$ (see bold fonts in Table III). As a reference for comparison, stratification results from a random procedure are listed in braces in the table. That procedure randomly selects variables and their bounds to create signatures under the constraint of the same minimum support of signature-positive groups as the one specified for PRIM. In this comparison, PRIM is much more sensitive and more precise than the random procedure while being reasonably specific. Compared with the sample sizes needed for good classification results, a larger sample size is required to observe improved efficacy in a signature-positive group as indicated by pv+ < pv (Table III). For example, in the case of no noise variable, n = 800 rather than n = 200 is necessary for pv+ to be less than pv. For $p_n = 8$, n = 1600 is required. When $p_n$ goes up to 32 and 128, n = 3200 becomes the only sample size that makes it possible to observe improved treatment effects for signature-positive patients. We observed pv+ < pv− for all sample sizes, which indicates that the interaction-effect constraint generally holds in the results.
Resampling in the baseline scenario given 32 noise variables
As shown in Table IV, when Re-PRIM with k = 2 is applied to the cases of $p_n = 32$, it substantially improves the performance of PRIM under every sample-size condition listed in Table II. Viewed another way, Re-PRIM needs fewer samples to make accurate detection: for exact detection, with n = 1600 instead of n = 3200, the method can detect the signature in 600 out of 1000 runs. With respect to stratification accuracy and p-values, the performance with n = 1600 in Table V is also much superior to the one (with n = 1600 and $p_n = 32$) in Table III, where resampling was not employed. These results represent an ideal situation where the number of true signature variables is assigned to k, the parameter of Re-PRIM that determines the number of selected variables used as the final input of PRIM. If a larger k value is pre-specified, results are expected to be no better than those obtained with an equivalent number of input noise variables. When the number of true signature variables is greater than the prescribed value of k, the resampling scheme induces bias by enforcing rule simplification while it reduces instability; their trade-off decides whether the scheme can enhance PRIM's performance. In practice, cross-validation can be employed to choose a parameter value that is optimal in terms of predictive performance.
A scenario with more signature variables
We next investigate a scenario where the number of signature variables increases from two to four (scenario 2 in Table I). $x_3$ and $x_4$ are the additional signature variables. To maintain prevalence around 50%, the rules are tuned to be $0 \leq x_i \leq 0.85$, for $i = 1, 2, 3, 4$. In this situation, a much larger sample size is required for PRIM to return proper results for both exact detection and inclusive detection (Table VI) in comparison with the baseline scenario (Table II). For example, n = 3200 (instead of n = 200) is needed for detection given no noise variable. Even this sample size does not suffice for exact or inclusive detection once four or more noise variables are added. A minimal sample size of 1600 is also required for marginal detection given $p_n \leq 8$, and n = 3200 is needed given $p_n = 32$ (Table VII). To achieve satisfactory classification performance, n = 800 is necessary for the cases of $p_n \leq 8$, and again, n = 3200 is demanded when $p_n$ is increased to 32. To achieve improved p-values in signature-positive groups, the method needs at least 1600 samples given $p_n \leq 8$ and 3200 samples for $p_n = 32$. As previously seen in the baseline scenario, Re-PRIM with k = 4 considerably boosts exact detection (Table VIII) in comparison to the results without resampling-based variable selection (Table VI). The method also substantially enhances marginal detection and stratification performance (Table IX).
A scenario with increased prevalence
To cover the scenario with a larger signature-positive population, we increased prevalence from 50% to 72% by decreasing the lower bound of a signature variable from 0.20 to 0.05 while keeping other parameter values the same (see scenario 3 in Table I). The increased positive signals in the data lead to the following changes in stratification results: on average, $r_{sens}$ and $r_{prec}$ are increased by 9% and 20%, respectively, while $r_{spec}$ is decreased by 9%. Other results are similar to those in the baseline scenario (Tables II and III).
A scenario with a relatively large effect size
We change $\lambda^{+}_{trt}$ from 0.05 to 0.025, creating a scenario where the effect size is twice as large as in the baseline scenario. This change decreases the hazard ratio in the signature-positive group from 0.5 to 0.25 (scenario 4 in Table I). As highlighted in Table X, PRIM only needs n = 800 to achieve results similar to those in Table II for $p_n = 2$ or 4 in exact detection; that is, only one quarter of the previously required sample size is needed. Similarly, it requires n = 1600, one-half of the previous sample size, to make better detection given $p_n = 6$ or 8. With 1600 samples, the method can achieve good results for $p_n = 32$ or 128, which is not obtainable even with 3200 samples in the baseline scenario. Sample sizes are also reduced by at least one-half for inclusive detection. A similar situation holds for marginal detection and stratification (Table XI). For example, given $p_n = 8$, n = 400 is sufficient for satisfactory results; that is, PRIM works well with one-fourth of the corresponding required sample size in Table III. With many noise variables, as in the case of $p_n = 128$, the method also performs reasonably given n = 1600. Figure 5 shows the running time of the method, averaged over the 1000 data sets in each parameter setting of scenario 4. We conducted the simulations with R version 3.0.2 on computing servers that have dual Intel E5-2650L processors and at least 64 GB of memory. Our program can finish within a few minutes given $p_n = 8$ and n = 3200 and returns results in a couple of hours when $p_n$ increases to 128. As suggested by the algorithmic complexity $O(p \log n)$ (see the discussion at the end of Section 3.1), the running time shows approximately logarithmic growth as the sample size increases and linear growth in the number of variables.
We summarize PRIM's performance in the simulation study as follows. PRIM can perform well in exact detection with hundreds of samples given no noise variables, but this becomes less impressive as the number of true signature variables increases. However, given a few thousand samples, which might be available in large phase III or even phase II trials, PRIM is capable of detecting at least some of the true signature variables and thus stratifying a good number of patients into the right groups in the presence of a moderate number of noise variables (up to 32 noise variables in our simulation study). Coupled with the proposed resampling scheme, PRIM can achieve satisfactory results with substantially fewer samples. In scenarios having a relatively large but still realistic effect size, PRIM requires no more than 1000 samples to accurately detect cutoffs and a few hundred samples to reasonably stratify patients given a small number of noise variables; moreover, it needs fewer than 2000 samples to perform well given 100 noise variables or so. Overall, the simulation study provides a general idea of the conditions that enable PRIM to propose a relevant stratification of patients for therapy, such as a manageable number of input variables and the required sample sizes given different effect sizes.
The data
We apply PRIM, Re-PRIM, and AIM to two real-world data sets collected by Loi et al. [14] and Lenz et al. [15], respectively. As part of the first study, gene expression was measured on an Affymetrix whole-genome microarray platform for 414 patients with estrogen receptor (ER)-positive breast carcinomas. Among them, 137 patients received no systemic adjuvant treatment, and 277 patients received adjuvant tamoxifen only. To phrase the case in our terms, we refer to the untreated population as the control arm and the tamoxifen-treated population as the treatment arm, even though the cohorts involved are not part of a single randomized trial. Relapse-free survival with right censoring was used as the clinical endpoint. We excluded 21 patients from analysis because their event indicators were missing. All the data are available at the Gene Expression Omnibus (GEO) database with ID GSE6532. The other retrospective study [15] measured gene expression for patients with diffuse large B-cell lymphoma who were treated with either CHOP chemotherapy or CHOP plus rituximab (R-CHOP). In the same terms, we refer to the group treated with CHOP as the control arm and the group treated with R-CHOP as the treatment arm. Clinical responses are given by overall survival with right censoring. These data can also be downloaded from the GEO database with ID GSE10846.
Procedure setup
We first present results of PRIM and Re-PRIM obtained by employing only a single set of (conjunctive) rules and then discuss the utility of covering (multiple rule sets). The number of selected variables for Re-PRIM, k, is set at two given the limited sample sizes of these two data sets. We applied the AIM implementation in the published R package AIM: to allow AIM to have an option analogous to pasting in PRIM, we specified AIM's parameter backfit = TRUE; to permit two splits for each variable as in PRIM, the parameter maxnumcut was set at two; we assigned 0.05 to mincut to make AIM's minimum cutting proportion comparable with the minimum value of $\alpha$ in PRIM; other parameters kept their default values. These parameter settings make the AIM procedure more flexible than its default version and thus lead to a fair comparison with PRIM. To stratify patients by AIM, we followed the approach suggested by its inventors (see details in Section 2): given index scores computed by AIM, patients are stratified into a low-score group and a high-score group depending on whether a patient's score is greater than the median score or not. These two groups are defined to be signature-positive and signature-negative groups according to the coefficient sign of the treatment-score interaction term in AIM's regression model.
Candidate gene selection
As mentioned before, it is a typical scenario that a few biomarker candidates are pre-determined based on prior knowledge for predictive-signature development. To mimic this scenario, we randomly drew a subset of observations from a data set and selected eight candidate genes by genome-wide analysis of the association between gene expression and clinical responses in the subset. This subset of data was not utilized any further after the selection of the candidates. Predictive signatures were developed with the selected candidates in the remaining data. We refer to the remaining data as the ER data set and the CHOP data set for the two cases, respectively. The following selection procedure was employed to choose eight candidate genes based on the data of 196 patients randomly sampled from the first study [14]: (i) a Cox proportional hazards model was fitted with the expression profile of a gene as a single predictor for the patients in the treatment arm and in the control arm separately; (ii) a gene was included for further consideration if its hazard ratio was ⩽ 0.5 or ⩾ 2 for the treatment arm, but for the control arm, its hazard ratio was between 0.5 and 2 and the p-value for the regression coefficient was ⩾ 0.5 with a two-sided test; (iii) the genes retained were then ranked according to their p-values in the treatment arm, and the top eight genes were selected. The conditions in the second step are intended to select genes whose expression profiles are substantially associated with the patients' survival in the treatment arm but not associated with the response in the control arm. These genes have the potential to interact with treatments and thus meet the interaction-effect condition. For the other study, eight candidate genes were similarly chosen based on the data of 207 patients randomly drawn. They served as the basis of predictive-signature development in the remaining data.

Figure 1 shows that the two arms have no differentiation in responses in the ER data set (log-rank p-value = 0.54). Based on a predictive signature learned by PRIM, patients were stratified into signature-positive and signature-negative groups. The signature-positive patients tend to respond better to the investigational treatment than the SOC, while the signature-negative patients reverse the pattern (Figure 2). It is of note that p-values from two-sample tests on these signature-positive or signature-negative patients are not valid measures to quantify the predictive performance of PRIM because the data had already been explored for signature learning; the separation between the curves simply illustrates training results. We adopt p-values based on 5-fold CV to quantify a method's predictive performance. In the CV process, a data set is randomly split into five subsets. In each fold, a method learns a signature from four subsets, and then, based on the signature, patients in the remaining subset are labelled as either signature-positive or signature-negative. Specifically, PRIM splits the four subsets into $D_1$ and $D_2$ to learn a signature from them, and similarly, the data in the four subsets serve as the input of Re-PRIM for signature learning; after a signature is obtained, it is used to stratify patients in the remaining subset. To stratify all patients in the five subsets, the learning is repeated five times.
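The 5-fold labelling can be sketched as below, reusing the hypothetical prim_rules() and applies_to() helpers from earlier sketches; in each fold, learning (including PRIM's internal $D_1$/$D_2$ split) touches only the four training subsets.

cv_labels <- function(df) {
  fold <- sample(rep(1:5, length.out = nrow(df)))  # random 5-fold split
  pos <- logical(nrow(df))
  for (f in 1:5) {
    rs <- prim_rules(df[fold != f, ])              # learn on four subsets
    pos[fold == f] <- applies_to(rs, df[fold == f, ])
  }
  pos                                              # signature-positive labels
}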
Performance measures
After all patients are stratified into signature-positive or signature-negative groups, a p-value is calculated for each group based on a one-sided two-sample test that examines whether an investigational treatment is better than an SOC. We refer to such p-values as CV p-values. CV p-values are similar to p-values presented in a pre-validation scheme by Tibshirani and Efron [16], which attempted to quantify significance of learned predictors and facilitate a fair comparison between a learned predictor and pre-defined covariates. Because of variability in random splittings, 5-fold CV is repeated for 100 random splits. In addition to p-values, we calculated hazard ratios for signature-positive or signature-negative groups in the CV process because they are helpful references for comparing an investigational treatment with an SOC. We refer to such hazard ratios as CV hazard ratios.
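Given the pooled labels, a CV p-value and a CV hazard ratio for, say, the signature-positive group can be obtained as in this sketch, which reuses the earlier helpers and assumed column names (time, event, trt); the loop over 100 random splits is omitted.

library(survival)
pos <- cv_labels(df)                       # pooled signature-positive labels
d_pos <- df[pos, ]
pv_cv_pos <- with(d_pos, one_sided_logrank(time, event, trt))       # CV p-value
hr_cv_pos <- exp(coef(coxph(Surv(time, event) ~ trt, data = d_pos)))  # CV HR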
As presented earlier, the data explored for candidate gene selection were not used later. Therefore, the aforementioned CV was only applied to data that were never used for pre-selection of candidate genes; if it were applied to data including the samples from which candidate genes were selected, CV p-values would be a biased measure of the predictive performance of a method. In addition, for CV p-values to be a proper measure of the predictive performance of Re-PRIM, all steps in Re-PRIM, including the resampling procedure for selecting candidate signature variables and PRIM for learning a final signature, should only be applied to training data (four subsets of data in the case of a 5-fold CV) rather than to all data in every fold of CV.
P-values and hazard ratios calculated in the aforementioned CV process help reduce overoptimistic estimation caused by using the same data twice (for both training and testing) and thus provide realistic estimation of the predictive performance of a method. However, because these CV quantities are essentially based on retrospective analysis, they cannot replace the role of p-values or hazard ratios calculated from randomized controlled trials (RCTs). To obtain a valid p-value or hazard ratio to confirm efficacy in a population with signature-positive status, investigators should conduct an RCT for that population. To validate the predictive value of a signature, an RCT is also needed to examine the lack of efficacy in a population with signature-negative status.

Figure 6 shows the distributions of 100 CV p-values for signature-positive groups and 100 CV p-values for signature-negative groups from PRIM's results in the ER data set. Compared with the original p-value of 0.54, the CV p-values for pv+ substantially shift to smaller values (with 93% of the CV p-values less than 0.54). This indicates that the stratification can potentially improve efficacy. In addition, the majority of CV p-values for pv+ are less than 0.2, while the majority of CV p-values for pv− are greater than 0.8. This demonstrates that the procedure is able to enrich responders to the investigational treatment in signature-positive groups while it includes most responders to the SOC in signature-negative groups. Because of the skewness of the distributions, which is typical according to our empirical observations, we recommend reporting the median of CV p-values to represent their center and the median absolute deviation to indicate variation. Similarly, we also report the median and median absolute deviation of CV hazard ratios. Denote by $p_{mcv+}$ the median of the CV p-values for signature-positive groups and by $p_{mcv-}$ the one for signature-negative groups. Let $HR_{mcv+}$ be the median of the CV hazard ratios for signature-positive groups and $HR_{mcv-}$ the one for signature-negative groups. They are listed in Table XII for reference. Figure 7 illustrates the performance of Re-PRIM: it is not as good as PRIM, with more large CV p-values for signature-positive groups and more small CV p-values for signature-negative groups. Re-PRIM's $p_{mcv+}$ is also substantially larger than PRIM's $p_{mcv+}$ while its $p_{mcv-}$ is smaller; neither makes the method more favorable in this case. Figure 8 shows the results from AIM. The distributions share similar skewness with those in Figure 6. Compared with the distribution of CV p-values for signature-positive groups resulting from PRIM in Figure 6, the distribution from AIM significantly shifts to larger values (p-value = 7.05 × $10^{-6}$ by the Wilcoxon rank-sum test). This indicates that the treatment effect is much less obvious in AIM's signature-positive groups than in PRIM's signature-positive groups. Consistently, we also observed that AIM resulted in larger $p_{mcv+}$ and $HR_{mcv+}$ than PRIM (Table XII). Therefore, PRIM is more desirable in maximizing efficacy for signature-positive patients. On the other hand, with respect to CV p-values for signature-negative groups, AIM produces significantly larger values than those obtained by PRIM (p-value = 3.34 × $10^{-7}$). This suggests that although the signature-negative patients defined by both AIM and PRIM tend to respond better to the SOC than to the investigational treatment, this response difference is considerably larger in AIM's stratification than in PRIM's.
In line with these observations, $HR_{mcv-}$ is substantially higher for AIM than for PRIM (Table XII). These results reflect the discussion in Section 2 that AIM only focuses on the treatment-score interaction without considering which signature group leads to that interaction. We also report $n_{m+}$, the median of sample sizes of signature-positive groups, in Table XII. AIM often generates a larger signature-positive group ($n_{m+} = 148$) in contrast to PRIM ($n_{m+} = 80$). This is useful when larger prevalence is required by real-world applications.
The results
In the CHOP data set, Re-PRIM is able to reduce the number of signature variables from six in PRIM's rules to two with similar performance, as indicated by $p_{mcv+}$, $p_{mcv-}$, $HR_{mcv+}$, and $HR_{mcv-}$ (Table XII). Suggesting a stratification that can enhance efficacy, 73% of Re-PRIM's CV p-values for pv+ are less than 0.025, the p-value indicating the significance of the original treatment effect. The CV p-values for pv+ are also significantly smaller than those from AIM (p-value = 7.99 × $10^{-11}$), demonstrating better enrichment of responders to the treatment in signature-positive groups than in AIM's results. Similar to the situation in the ER data set, AIM generated larger CV p-values for pv− than those of Re-PRIM and PRIM, and it produced considerably larger signature-positive groups in this case (see $n_{m+}$ in Table XII). Note that we only discuss valid results returned by AIM from 85 of the 100 random splits because the procedure in the AIM package exited with errors for the other random splits of the data set in CV.
Employing the covering strategy, PRIM included more patients in signature-positive groups for the two data sets while maintaining treatment effects similar to those obtained by a single set of rules (see PRIM (M) in Table XII). In contrast to Re-PRIM using a single set of rules, Re-PRIM with multiple sets of rules achieved similar results (except higher prevalence) in the ER data set (see Re-PRIM and Re-PRIM (M) in Table XII), but in the CHOP data set, the method attained significantly better results, as indicated by higher prevalence (p-value = 3.24 × $10^{-11}$) and smaller CV p-values for signature-positive groups (p-value = 2.48 × $10^{-4}$), along with larger CV p-values (p-value = 4.43 × $10^{-6}$) and hazard ratios (p-value = 1.97 × $10^{-4}$) for signature-negative groups. Although the increased prevalence from Re-PRIM (M) is still lower than the one from AIM, its enlarged CV p-values for signature-negative groups are significantly greater than those obtained by AIM (p-value = 2.48 × $10^{-2}$), along with significantly larger CV hazard ratios (p-value = 4.45 × $10^{-3}$). Overall, we observed some case-dependent advantages of employing multiple rule sets generated by the covering strategy.
As mentioned in Section 2, the interaction-effect constraint is not redundant, partially because, without enforcing the constraint, the search procedure can initially be misled by a local decision based on a minimal pv+ with pv+ > pv− and end up with a less optimal solution. In the search for a signature in the CHOP data set, such a situation indeed occurred (with $\alpha = 0.2$): while the original signature or rule set (selected according to $P_G$, the stratification significance in $D_2$ in line 19 of Algorithm 1) has pv+ = 0.016 and pv− = 0.62, pv+ increases to 0.036 with pv− decreasing to 0.52 for the signature obtained by the search procedure without enforcing the interaction-effect constraint. This is because, without the enforcement, the procedure chose a signature-positive group that violated the constraint at the first search step. Therefore, it is necessary to ensure the constraint is satisfied in the search process.
Discussion
In the simulation study, we presented a unimodal situation where signature-positive patients are concentrated in one location of the population space. A much more challenging situation would be multimodal, with signature-positive patients located in more than one region. Besides the factors we considered that affect the performance of PRIM, the number of modes and their relative positions and magnitudes may also impact the final results. Although a single set of conjunctive rules is easier to implement and thus may be preferred by users in clinics, it is clear that such rules are not expressive enough to describe a multimodal situation well. It would be interesting for a future study to examine whether the covering strategy with multiple rule sets can capture multiple modes accurately.
Another type of p-value examines whether signature-positive patients in the treatment arm respond better than signature-negative patients in the control arm. Such a p-value is relevant because, in an ideal situation, a patient should receive a treatment based on the positiveness of a predictive-signature-based diagnostic test: he or she should be treated with an investigational treatment only if the signature test is positive, and may need to receive an SOC given a negative test result. Denote a p-value of this type by pv_e. Given a predictive signature, we cannot always expect pv_e to be small and its corresponding test to be significant. It is true that pv_e will be small if signature-negative patients treated with the SOC share the same survival profiles as signature-positive patients receiving the SOC; however, pv_e can be large if the former tend to live longer than the latter (for example, the case illustrated in Figure 2), that is, when signature-positiveness actually indicates poor prognosis. In our simulation study, signature-positive patients have the same survival profiles as signature-negative patients in the control arm, indicating that pv_e should be similar to pv+ and thus should be less than pv.
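For concreteness, the sketch below computes pv+, pv- and pv_e as two-sample log-rank tests with the lifelines package; the data frame layout and column names are our own assumptions, not the paper's code.

```python
# A minimal sketch (hypothetical column names) computing pv+, pv- and pv_e
# as two-sample log-rank tests with the lifelines package.
import pandas as pd
from lifelines.statistics import logrank_test

def lr_p(a: pd.DataFrame, b: pd.DataFrame) -> float:
    return logrank_test(a["time"], b["time"],
                        event_observed_A=a["event"],
                        event_observed_B=b["event"]).p_value

def signature_pvalues(df: pd.DataFrame) -> dict:
    """df columns: time, event (1 = event observed), arm ('trt'/'soc'),
    positive (bool result of the signature test)."""
    pos, neg = df[df["positive"]], df[~df["positive"]]
    return {
        # treatment effect within the signature-positive group
        "pv_plus": lr_p(pos[pos["arm"] == "trt"], pos[pos["arm"] == "soc"]),
        # treatment effect within the signature-negative group
        "pv_minus": lr_p(neg[neg["arm"] == "trt"], neg[neg["arm"] == "soc"]),
        # treated signature-positive vs signature-negative patients on SOC
        "pv_e": lr_p(pos[pos["arm"] == "trt"], neg[neg["arm"] == "soc"]),
    }
```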
Many tree-based methods have been developed to identify subgroups with maximal differential treatment effects by recursively partitioning the population; several examples are Negassa et al. [17], Su et al. [18], and Lipkovich et al. [13]. They essentially aim at maximizing the significance of the interaction-effect condition (either through test statistics or p-values), which does not always lead to a predictive signature, as discussed in Section 2. Besides this category of objective functions, in the subgroup identification based on differential effect search (SIDES) method, Lipkovich et al. [13] proposed a different optimization criterion for splits inside a tree: the significance of the treatment-effect condition in either of the two child nodes generated by a split. This is similar to our objective function. However, this objective function ignores the interaction-effect condition. Although they suggested a hybrid approach that attempts to incorporate this condition by maximizing either the aforementioned significance or the significance of the interaction-effect condition, the goal of this approach is a mixture and is less clear than that of our constrained-optimization approach. Unlike the interaction-trees approach [18], which is concerned with the treatment effect on the entire covariate space, SIDES focuses on the treatment effect in specific areas of interest (and forgoes complete estimation in the rest of the space); such focal search is in essence similar to PRIM's bump hunting. To restrict the search space, SIDES only allows a variable to define a subgroup in one direction relative to its cutoff; PRIM permits both directions and is thus more expressive. Working with binary responses, Foster et al. [19] proposed a method called 'Virtual Twins' (VT) to identify a subgroup with an improved treatment effect. Under the same two-arm design as in our study, VT first estimates, for each individual, a responding probability given the treatment he or she actually received and a responding probability in the hypothetical scenario where he or she received the other treatment. Such estimation is performed for all patients through a random-forest model with the treatment factor and other covariates as input. Given the estimates, the difference between the two probabilities of an individual can be calculated as a new response indicating the estimated treatment effect for that individual. Finally, a CART model is built on the covariates to predict these new responses, thereby specifying decision rules for identifying a subgroup with an enhanced treatment effect. The idea of the VT approach is very interesting, but it faces the great challenge that the random-forest model needs to accurately estimate responding probabilities under both the real treatment and the counterfactual one.
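The sketch below outlines the VT idea under these assumptions (binary response, two arms) with scikit-learn; all names are hypothetical and the original method includes refinements omitted here.

```python
# A minimal sketch of Virtual Twins with scikit-learn; binary response y,
# treatment indicator treat (0/1), covariate matrix X. All names are ours.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeRegressor

def virtual_twins(X, treat, y, max_depth=3, seed=0):
    rf = RandomForestClassifier(n_estimators=500, random_state=seed)
    rf.fit(np.column_stack([treat, X]), y)
    # Responding probability under each arm (factual and counterfactual).
    p1 = rf.predict_proba(np.column_stack([np.ones(len(y)), X]))[:, 1]
    p0 = rf.predict_proba(np.column_stack([np.zeros(len(y)), X]))[:, 1]
    z = p1 - p0  # estimated individual treatment effect
    # A shallow regression tree on z gives interpretable subgroup rules.
    tree = DecisionTreeRegressor(max_depth=max_depth, random_state=seed)
    tree.fit(X, z)
    return tree
```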
Drug development is paying increasing attention to predictive signatures that stratify patients into groups, with the hope that signature-positive patients respond better to investigational treatments than to SOC regimens. PRIM is a natural approach to this problem because its bump-hunting formulation fits exactly into the scenario of patient stratification: bumps correspond to signature-positive patients, and the associated rules define a signature. It is also attractive because it returns simple rules and has a patient-search property: the former makes rules discovered by PRIM easily and directly applicable to patients by clinicians or other medical practitioners; the latter induces better decision rules than aggressive approaches. In this study, we proposed a search procedure based on PRIM's framework for predictive-signature development and suggested a parameter-selection step and a resampling scheme to improve the search. We investigated the procedure's performance by simulating typical situations where the procedure is expected to be applied and provided guidance on the conditions under which the procedure can find relevant rules and reasonably stratify patients into different signature groups. By searching for signatures in two real-world data sets, we demonstrated that PRIM has good potential for patient stratification in practice. In addition, we discussed the advantage of the objective function we adopted for PRIM by contrasting it with AIM's objective function, compared the results of these two methods on the real-world data, and illustrated their respective superiorities in these scenarios. In summary, this paper provides a general and practical recipe for applying PRIM to predictive-signature development in oncology studies with survival responses. | 2018-04-03T00:41:57.434Z | 2014-10-27T00:00:00.000 | {
"year": 2014,
"sha1": "38a5023b0e604d616651b61e6154d3c7a1c8798a",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/sim.6343",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "38a5023b0e604d616651b61e6154d3c7a1c8798a",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258437408 | pes2o/s2orc | v3-fos-license | Gabapentin for chronic refractory cough: A systematic review and meta-analysis
Objective To evaluate the efficacy and safety of gabapentin in the treatment of chronic refractory cough by meta-analysis. Methods Literature was retrieved from PubMed, Embase (Ovid), the Cochrane Library, CNKI, VIP, the Wanfang Database and SinoMed, and eligible prospective studies were screened. Data were extracted and analyzed using RevMan 5.4.1 software. Results Six articles (2 RCTs and 4 prospective studies) with 536 participants were finally included. Meta-analysis showed that gabapentin was better than placebo in cough-specific quality of life (LCQ score, MD = 4.02, 95% CI [3.26, 4.78], Z = 10.34, P < 0.00001), cough severity (VAS score, MD = -29.36, 95% CI [-39.46, -19.26], Z = 5.7, P < 0.00001), cough frequency (MD = -29.87, 95% CI [-43.84, -15.91], Z = 4.19, P < 0.0001) and therapeutic efficacy (RR = 1.37, 95% CI [1.13, 1.65], Z = 3.27, P = 0.001), and comparable in safety (RR = 1.32, 95% CI [0.47, 3.7], Z = 0.53, P = 0.59). Gabapentin was similar to other neuromodulators in therapeutic efficacy (RR = 1.07, 95% CI [0.87, 1.32], Z = 0.64, P = 0.52), but its safety was better. Conclusion Gabapentin is effective in the treatment of chronic refractory cough in both subjective and objective evaluations, and its safety is better than that of other neuromodulators.
Introduction
Chronic cough is defined in the ERS guidelines as a cough that lasts for more than 8 weeks in adults and more than 4 weeks in children [1,2], although the situation is more complicated in clinical work. The morbidity of chronic cough in adults is about 7%-11%, with an average of 10% [3,4]. The disease is more prevalent in Europe, America and Oceania than in Asia and Africa. Patients with chronic cough are mainly female, and the most common age of presentation is in the sixth decade [5]. Up to 40% of these patients can be refractory.
Chronic refractory cough is defined in the ERS guideline as a type of chronic cough that persists despite investigation and treatment according to published practice guidelines [1]. Chronic refractory cough is not a serious or fatal disease, but rather a symptom of various diseases, so it had not attracted enough attention in the past. However, it has a significant impact on patients' daily life, mental health and social communication, and can even cause incontinence and cough syncope when severe. Nowadays, with patients' rising expectations for quality of life, the treatment of chronic refractory cough has become a challenge in clinical practice, especially when classic treatment is not effective.
In the past decade, many researchers have indicated that chronic refractory cough, especially cough without obvious respiratory disease, may be induced by more than just respiratory or throat diseases. They have noticed that chronic cough is very similar to chronic pain and, like chronic pain, may also be a neurological disease in some respects [6,7]. Therefore, some drugs for neurological diseases, especially for chronic pain, have the potential to treat chronic cough [1,6-8]. Gabapentin, one of these drugs, is a typical medication for chronic pain and epilepsy and was first investigated for the treatment of chronic cough in 2005 [9]. Because of its potential therapeutic mechanism against central sensitization and its fewer adverse effects, it has attracted increasing attention in the treatment of chronic refractory cough in recent years. However, the clinical trials of gabapentin for chronic refractory cough have some defects compared with trials of gabapentin for chronic pain and epilepsy: fewer participants, lower study quality and greater bias. The aim of this paper is to provide a theoretical basis for clinical treatment and further research through a systematic and comprehensive analysis of the reported trials of gabapentin for chronic refractory cough.
Search strategy
We searched PubMed, Embase (via the Ovid platform), the Cochrane Library, CNKI, VIP, the Wanfang database and SinoMed from inception to September 28th, 2022. The search keywords included Gabapentin or 1-(Aminomethyl)cyclohexaneacetic Acid or Neurontin or Gabapentin Hexal Convalis or Gabapentin-Ratiopharm or Gabapentin Ratiopharm or Novo-Gabapentin or Novo Gabapentin or NovoGabapentin or PMS-Gabapentin or Apo-Gabapentin or Apo Gabapentin or ApoGabapentin or Gabapentin Stada, cough or coughs, chronic cough and chronic refractory cough on the English platforms, and 加巴喷丁 (gabapentin), 1-(氨甲基)环己基乙酸 (1-(aminomethyl)cyclohexylacetic acid), 咳嗽 (cough), 慢性咳嗽 (chronic cough) and 慢性难治性咳嗽 (chronic refractory cough) on the Chinese platforms. The retrieval process included comprehensive retrieval of the literature, deletion of duplicates, rough screening by reading titles and abstracts, screening by reading full texts, discussion of disputed articles, and finally inclusion of the eligible literature.
Selection criteria
The inclusion criteria were as follows: 1. prospective study; 2. the language of publication was Chinese or English; 3. the research subjects were adults and met the diagnostic criteria of chronic refractory cough or unexplained chronic cough (UCC); 4. the intervention included gabapentin (if the intervention included other treatments, the control group also needed the corresponding treatments), and the control measure could be placebo or another neuromodulator; 5. the outcomes included one or more of the following: cough-specific quality-of-life score (LCQ score), cough severity score (VAS score), cough frequency, drug efficacy and adverse effects; 6. relevant data could be extracted or transformed.
The exclusion criteria were as follows: 1. abstract in Chinese or English but full text in another language; 2. subjects included pregnant or lactating women; 3. gabapentin was used in both groups; 4. data could not be extracted even after transformation.
Data extraction
Literature screening, data extraction and quality evaluation were performed independently by two researchers (Sheng Xie and Meiling Xie) according to the inclusion and exclusion criteria described above. Any disputes were resolved by discussion or, failing that, referred to a third researcher (Yongchun Shen). Data on the type of study, number of participants, demographic baseline, course of disease, intervention, treatment, follow-up time and outcomes were extracted and analyzed.
Quality assessment
The RoB 2 tool (version 6.2) was adopted for the assessment of literature quality and risk of bias. The tool includes 7 items: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other biases (e.g., residual effects in crossover trials, recruitment bias in cluster randomized trials). According to the specific situation, each study was evaluated as low risk, high risk or uncertain risk on each item.
Data analysis
Review Manager 5.4.1 software was used for data analysis. The statistical strategies were as follows: continuous data were represented by the weighted mean difference (WMD) and 95% confidence interval (95% CI), while dichotomous data were represented by the relative risk (RR) and 95% CI. Descriptive analysis was used when studies could not be analyzed statistically. When the heterogeneity test suggested heterogeneity between studies (P ≤ 0.05, I² ≥ 50%), a random-effects model was adopted for the pooled analysis, and sensitivity analysis was performed to locate the source of heterogeneity. When the heterogeneity test indicated homogeneity between studies (P > 0.05, I² < 50%), a fixed-effects model was adopted. Funnel plot analysis was performed for publication bias when necessary.
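For illustration, the sketch below reproduces the core of this workflow for continuous outcomes: inverse-variance pooling of mean differences with Cochran's Q and I²; the study-level inputs are hypothetical, not data from the included trials.

```python
# A minimal sketch: fixed-effect inverse-variance pooling of mean
# differences plus Cochran's Q and I-squared; inputs are per-study arrays
# (hypothetical numbers, not data from the included trials).
import numpy as np

def pool_mean_difference(m1, sd1, n1, m2, sd2, n2):
    md = np.asarray(m1, float) - np.asarray(m2, float)
    se2 = np.asarray(sd1, float) ** 2 / np.asarray(n1, float) \
        + np.asarray(sd2, float) ** 2 / np.asarray(n2, float)
    w = 1.0 / se2                        # inverse-variance weights
    pooled = np.sum(w * md) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (md - pooled) ** 2)   # Cochran's Q
    df = len(md) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, i2
```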
Literature screening
Through comprehensive retrieval of the databases mentioned above, a total of 1186 records were identified, and 6 full manuscripts were included after screening [10-15]. The retrieval and screening process is shown in Fig. 1.
Characteristics of included citations
All six included articles were prospective studies published from 2012 to 2019. A total of 536 adult participants were included, with study sample sizes ranging from 27 to 217. Five of the studies used gabapentin alone (maximum dose 1800 mg/d, minimum dose 900 mg/d) in the intervention group, and one used gabapentin combined with a PPI (gabapentin 900 mg/d + PPI 40 mg/d). The duration of treatment and follow-up ranged from 4 weeks to 6 months. The basic characteristics of the six studies are shown in Table 1.
Quality assessment and bias risk assessment
Among the six included studies, two explicitly reported the randomization method [11,12], two were neither randomized nor allocation-concealed [10,13], and two did not report the method of randomization and allocation concealment [14,15]; two studies were not blinded during the experimental process [12,14], and two were not blinded in outcome measurement [14,15]; in one study [14], the excluded and lost-to-follow-up population was not reported or explained; and in one study [15], the outcome indicators were not clear. The quality assessment and bias risk assessment are shown in Figs. 2 and 3.

Cough-specific quality of life (LCQ score)

We chose the LCQ score as the cough-specific quality-of-life score, and three studies were included [11,14,15]. In the study of Nicole M Ryan et al. [11], extractable data were divided into central sensitivity (CS) and non-central sensitivity (no CS) subgroups, so we included both in the meta-analysis separately. The heterogeneity test showed no statistical significance (P = 0.47, I² = 0%), so the fixed-effects model was adopted. As shown in Fig. 4, the difference in LCQ score between the two groups was statistically significant (Z = 10.34, P < 0.00001), indicating that gabapentin could significantly improve the cough-specific quality of life of patients with chronic refractory cough compared with placebo.
Cough severity (VAS score)
We chose the VAS score as the cough severity score. A total of 3 studies were included [11,14,15], and the random-effects model was adopted (P = 0.001, I² = 85%). As shown in Fig. 5, the difference in VAS scores between the two groups was statistically significant (Z = 5.7, P < 0.00001), indicating that gabapentin could significantly improve the subjective cough severity of patients with chronic refractory cough compared with placebo.
Cough frequency
Three studies were included for cough frequency [11,14,15], and the random-effects model was adopted (P < 0.00001, I² = 99%). As shown in Fig. 6, there was a statistically significant difference (Z = 4.19, P < 0.0001) in cough frequency between the two groups, indicating that gabapentin could significantly reduce the objective cough frequency of patients with chronic refractory cough compared with placebo.
Therapeutic efficacy
Depending on whether placebo or another neuromodulator served as the control, we divided the eligible studies into two groups for meta-analysis. In two studies placebo was the control [11,14], and the fixed-effects model was adopted (P = 0.42, I² = 0%). As shown in Fig. 7, there was a statistically significant difference (Z = 3.27, P = 0.001) in therapeutic efficacy between the two groups, indicating that gabapentin was more effective than placebo in the treatment of chronic refractory cough.
In three studies other neuromodulators were the control [10,12,13], and the fixed-effects model was adopted (P = 0.93, I² = 0%). As shown in Fig. 8, there was no statistically significant difference (Z = 0.64, P = 0.52) in therapeutic efficacy between the two groups, indicating that gabapentin and other neuromodulators have similar efficacy in the treatment of chronic refractory cough.
Adverse effects

Five studies reported adverse effects; three of them selected placebo as the control [11,14,15] (details in Table 2). Meta-analysis was conducted with the random-effects model (P = 0.009, I² = 79%). As shown in Fig. 9, there was no statistically significant difference (Z = 0.53, P = 0.59) in adverse reactions between the two groups, indicating that gabapentin did not increase the incidence of adverse effects compared with placebo.
Two studies compared the adverse effects of gabapentin and other neuromodulators [10,12] (details in Table 3). Meta-analysis was conducted with the fixed-effects model (P = 0.32, I² = 0%). As shown in Fig. 10, the difference in adverse effects between the two groups was statistically significant (Z = 6.41, P < 0.00001), indicating that gabapentin had a lower incidence of adverse effects than other neuromodulators.
Heterogeneity analysis
In the analysis of cough severity (VAS score), there was significant heterogeneity (P = 0.001, I² = 85%) among the included studies. Sensitivity analysis showed that the heterogeneity decreased markedly (P = 0.022, I² = 33%) when the study of Nicole M Ryan et al. was excluded. We presumed that the sources of heterogeneity may include: 1. literature quality; 2. differences in the race of the participants. However, no matter which study was removed, the outcome remained unchanged, suggesting that the results are relatively robust.

Fig. 4. The LCQ score after using gabapentin and placebo. * refers to participants with CS; ** refers to participants with no CS.
In the analysis of cough frequency, there was obvious heterogeneity (P < 0.00001, I² = 99%) among the included studies. Sensitivity analysis was performed by removing the studies one by one; the heterogeneity decreased significantly (P = 0.84, I² = 0%) when the study of Nicole M Ryan et al. was excluded. We presumed that the sources of heterogeneity may include: 1. literature quality; 2. differences in the race of the participants. However, no matter which study was removed, the outcome remained unchanged, suggesting that the results are relatively robust.
Publication bias
Due to the small number of studies included for each outcome, formal assessment of publication bias was not carried out. We presumed that publication bias may exist.
Discussion
Based on the above analysis, we can conclude that gabapentin is superior to placebo in therapeutic efficacy, improvement of cough-specific quality of life, reduction of cough severity and reduction of cough frequency in the treatment of chronic refractory cough, with adverse effects comparable to placebo. Compared with other neuromodulators, gabapentin was equivalent in treatment success rate for chronic refractory cough but had a lower incidence of adverse effects.
Gabapentin is a typical medicine for chronic pain and epilepsy. Why is it effective in treating chronic refractory cough? The answer is related to the neural basis of cough. Below, we discuss the nerve conduction pathway of cough, the cough regulation mechanism and the role of gabapentin.
The conduction pathway and regulation of normal cough reflex
Cough is a protective mechanism of the airways. The conduction pathway of the cough reflex is shown in Fig. 11, and the receptors and regulation of the cough reflex in Fig. 12.

Fig. 9. The side effects of gabapentin and placebo.

Fig. 11. Conduction pathway of the cough reflex [7,8,16-19]. The underlined parts can be affected by gabapentin. The red parts refer to the central nervous system, and the blue parts refer to the peripheral nervous system. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
Cough hypersensitivity syndrome (CHS)
Although multiple factors can lead to chronic refractory cough, cough hypersensitivity syndrome is one of the most important mechanisms.
If all or part of the receptors in the cough reflex are sensitized and respond to weak stimuli that do not cause cough under normal conditions, or over-respond to stimuli above the threshold, the body is in a state of increased cough sensitivity, which is called cough hypersensitivity syndrome [25]. This sensitive state may be triggered when patients develop a (respiratory) disease and may persist after the disease is cured. In the state of CHS, cough can be provoked by talking, cold air and other stimuli that usually do not cause cough [20]. Patients with CHS usually describe a sensation of itchiness, irritation and unpleasantness in the throat region, or even describe it as "something physically stuck in the throat" [10,11,26], similar to laryngeal hypersensitivity syndrome [6].
So far, the exact mechanism of CHS has not been fully understood. Neuroinflammation may be one of the main mechanisms of CHS and is related to functional changes of TRPV1, TRPA1 and P2X3 [6]. The pathophysiological basis may be related to acid or non-acid fluid and gas reflux [20]. Existing studies have indicated that CHS has two aspects: peripheral sensitization and central sensitization. Peripheral sensitization refers to the increased excitability of peripheral sensory nerves to chemical and other stimuli. It may be related to the increase of endogenous inflammatory factors and the depolarization of downstream ion channels excited by G-protein-coupled receptors, as well as the up-regulation of receptors [7]. The up-regulated receptor is mainly TRPV1 [27], whose expression is five times higher in patients with chronic cough than in normal people [28]. TRPV1 is a chemically gated calcium ion channel that is sensitive to capsaicin; other factors that can stimulate it include pH change, temperature change and so on [29,30]. TRPV1 mainly exists in C fibres [7] and is also one of the downstream ion channels of G-protein-coupled receptors. Central sensitization refers to the over-response of the central cough neural network to peripheral stimuli. The mechanism of central sensitization in chronic cough is not very clear, but the process is similar to that in chronic pain. It is related to substance P, central glial cells, sensitization of secondary neurons and changes in the central descending system. The vagus nerve can also release some neurokinins under long-term inflammatory stimulation, which may also be related to central sensitization [7]. At the receptor level, central γ-aminobutyric acid (GABA) receptors and N-methyl-D-aspartate (NMDA) receptors are involved in central sensitization [31].

Fig. 12. The receptors and regulation of the cough reflex [20-24]. The underlined parts can be affected by gabapentin. The red parts refer to the central nervous system, and the blue parts refer to the peripheral nervous system. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
Chronic (refractory) cough and chronic pain
As described above, chronic (refractory) cough, like chronic pain, is a hypersensitivity disorder of the nervous system. The similarities between the two diseases support the use of gabapentin in chronic (refractory) cough. Previous studies have found the following similarities.
Mechanism
Central and peripheral sensitization exist in both chronic pain and chronic cough [6]. Similar to referred pain and allodynia, symptoms such as allotussia and hypertussia can be found in a variety of cough phenotypes [20]. The mechanism of referred pain and allodynia is mainly central sensitization, and inflammation of the lung and esophagus can also lead to discomfort in the larynx through central sensitization: C fibres of the esophagus and lung are stimulated and release transmitters and peptides in the brain stem that sensitize the cough reflex originating in the larynx, resulting in laryngeal discomfort [19,37].
Neuroreceptors
A series of neuroreceptors exists in both chronic cough and chronic pain; details are shown in Table 4.
Mechanism of gabapentin
Gabapentin (1-(aminomethyl)cyclohexylacetic acid), chemical formula C9H17NO2, molecular weight 171.237 g/mol, is a lipophilic structural analogue of the neurotransmitter γ-aminobutyric acid (GABA) [53], with central action and a peripheral analgesic effect [54,55]. Gabapentin is absorbed orally by diffusion and by the carrier-mediated L-amino acid transport system. The bioavailability of gabapentin is about 60% for a 300 mg dose, 40% for a 600 mg dose, and 35% for a 1600 mg dose [56]. The decrease in bioavailability with increased dose may be due to saturation of the transport system [56]. Its bioavailability is not affected by food ingestion but can be decreased by antacids (by 20%) when they are taken simultaneously or up to 2 h after gabapentin administration. Gabapentin does not bind to plasma proteins and can cross the blood-brain barrier. The concentration of gabapentin in CSF equals 20% of the plasma concentration, and the concentration in brain tissue is about 80% of the corresponding plasma concentration. Gabapentin is not metabolized and is excreted unchanged in urine. The elimination half-life is about 5-9 h. Gabapentin requires frequent administration due to its slow absorption, the non-linear relationship between plasma concentration and dosage, and its short half-life [57]. No significant pharmacokinetic interactions have been reported. The plasma gabapentin concentration of patients over 65 years old is twice that of young people [58]. Abrupt discontinuation of high-dose gabapentin can cause withdrawal reactions (irritability, agitation, anxiety, palpitation, diaphoresis, etc.) for 1-2 days [56]. Dose escalation is recommended for initiation and dose tapering for cessation.
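As a rough illustration of these pharmacokinetic figures (not a dosing model), the sketch below combines the dose-dependent bioavailability quoted above with first-order elimination at an assumed half-life of 7 h, the middle of the quoted 5-9 h range:

```python
import math

# Bioavailability figures quoted above (fraction absorbed by oral dose).
BIOAVAILABILITY = {300: 0.60, 600: 0.40, 1600: 0.35}

def remaining_amount(dose_mg, hours, half_life_h=7.0):
    """Absorbed amount (mg) remaining after first-order elimination.
    half_life_h defaults to the middle of the quoted 5-9 h range."""
    absorbed = dose_mg * BIOAVAILABILITY[dose_mg]
    return absorbed * math.exp(-math.log(2) * hours / half_life_h)

# e.g. remaining_amount(300, 8) = 180 mg * 0.5**(8/7), roughly 81 mg
```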
The mechanism of gabapentin has not been fully understood. So far, its pharmacological effects are thought to arise from binding to the α2δ-1 auxiliary subunit of voltage-gated calcium channels (CaV), producing an analgesic effect. This binding inhibits the transmembrane transport of the α1 pore-forming units of CaV (mainly CaV2.2, N-type, existing in both central and peripheral synapses [59]) in the presynaptic membrane, thereby reducing the influx of calcium ions and the resulting release of neurotransmitters [20].
Furthermore, gabapentin also inhibits the axoplasmic transport of α2δ-1 auxiliary subunits [20]. Gabapentin can also modulate TRP channels, NMDA receptors, protein kinase C, and inflammatory cytokines. It can reduce the levels of TNF-α and IL-6 in the spinal cord of L5-ligated rats, with a dose-dependent effect [60]. It can also reduce the expression of TRPV1, substance P and calcitonin gene-related peptide in the lung tissue of post-infection cough rats and reduce their airway neurogenic inflammation [61]. Gabapentin can act on supra-spinal regions to stimulate noradrenaline-mediated descending inhibition, which contributes to its anti-hypersensitivity action in neuropathic pain [62]. The functional region of gabapentin in the central nervous system is not well defined, but it has been studied in the literature. An fMRI study showed that gabapentin acted on multiple cortical and subcortical regions in rats, including the thalamus, periaqueductal gray matter (PAG), tegmental area, ectorhinal cortex, subiculum and amygdaloid nucleus, and that gabapentin can change the neural activity of brain regions involved in nociceptive processing [63]. Another fMRI study demonstrated complex effects of gabapentin on brain activation, the most pronounced of which was a reduction in stimulus-induced brain deactivation following central sensitization [64].
Considering the similarities between chronic cough and chronic pain and the similar cortical responses to airway stimulation and pain [65-67], the mechanisms by which gabapentin inhibits cough and pain may be similar. In the central nervous system, gabapentin can change the neural activity of the cerebral cortex involved in cough through NMDA receptors and CaV channels and activate the descending inhibitory system, decreasing the central sensitivity of the cough reflex. In the study of Bowen AJ et al., gabapentin was more effective in UCC patients with paroxysmal spasms, dysphonia or cough triggered by talking; considering that throat irritation and talking-triggered cough are symptoms indicating high sensitivity of the central nervous system, this outcome may reflect the central mechanism of gabapentin in chronic refractory cough [10,11]. Meanwhile, gabapentin could reduce the peripheral sensitivity of the cough reflex by modulating peripheral TRP channels and inflammatory factors at cough-related sites.

Table 4. The receptors in chronic pain and chronic cough [6,16,28,38-49].
TRPV1. In chronic pain: several highly selective TRPV1 antagonists have been investigated for chronic pain. In chronic (refractory) cough: one highly selective TRPV1 antagonist is being investigated for chronic refractory cough.
NK1. In chronic pain: NK1 receptor antagonists can block behavioural responses to noxious stimuli (animal). In chronic (refractory) cough: NK1 receptor antagonists can block behavioural responses to capsaicin-induced cough (animal).
P2X3. In chronic pain: a research target for the treatment of chronic pain, arthritis and other diseases; P2X3 has been proved to be related to chronic pain (animal). In chronic (refractory) cough: P2X3 antagonists have an antitussive action that is absent in capsaicin-induced cough, indicating that the action is independent of suppressing the capsaicin effect (animal).
NMDA/non-NMDA. In chronic pain: the antagonist ketamine is an analgesic with both acute and prolonged effects on chronic neuropathic pain syndromes and on symptoms of allodynia and hyperalgesia. In chronic (refractory) cough: NMDA receptor activation plays a predominant role in cough, and a modulatory role of non-NMDA receptors has also been found.
Others (e.g., NaV). In chronic pain: the antagonist lidocaine is an analgesic and anaesthetic. In chronic (refractory) cough: the up-regulated NaVs on the vagus nerve (NaV 1.7-1.9) increase the sensitivity of the cough reflex.

Chronic (refractory) cough also has similarities with other neurological diseases besides chronic pain (see Table 5).

Regarding safety, the concentration of gabapentin in brain tissue equals 80% of that in plasma, so its adverse effects mainly affect the central nervous system. Studies have shown that gabapentin is safe in most cases. The most serious adverse effects reported were rhabdomyolysis and acute renal insufficiency in two diabetic patients, both of whom recovered after treatment [68,69]. In a retrospective case series, 48 patients who overdosed on gabapentin (doses varying from 600 mg to 3000 mg) suffered no or only mild adverse effects, and the highest tolerated dose was 48 g [70]. Gabapentin is therefore safe in oral administration, but more attention should be paid when it is used in the elderly, in patients with renal insufficiency, and together with antacids.
A safe dose of gabapentin must be ensured; on the other hand, a sufficient dose may also be an important factor affecting its efficacy. In the study of Bowen AJ et al. [10], 59% of patients in the gabapentin group completed the 6-month treatment when the drug dosage was increased (the main reason for withdrawal was tachyphylaxis rather than side effects), and the average increase in LCQ score (5.4) was more than double that of the 2-month treatment (2.48). It was also higher than the average increase in LCQ score of the 2-month tricyclic antidepressant group (3.46). Only three patients (20%) in the tricyclic antidepressant group of that study completed the 6-month treatment (also mainly because of tachyphylaxis), reflecting that the tolerance and long-term efficacy of gabapentin are better than those of other neuromodulators, particularly tricyclic antidepressants.
There were some shortcomings when gabapentin was used for chronic refractory cough. In the study of Ryan et al., changes in LCQ score and cough frequency returned to baseline after treatment withdrawal, and the VAS score was even higher than baseline, indicating that the efficacy of gabapentin was poorly sustained after withdrawal [11]. However, in the study of Dong Ran et al., the recurrence rate in the gabapentin group was only 3% two weeks after complete cessation of gabapentin [12]. Therefore, the sustainability of gabapentin's efficacy after withdrawal needs more research. Furthermore, the efficacy of gabapentin may decrease after a period of use, which may result from the down-regulation of glutamate transporter 1 (necessary for initiating noradrenergic signalling) in the locus coeruleus [71]. Therefore, gabapentin is not as efficient in long-term treatment as in initial use. In addition, SF-36 scores in the gabapentin group were lower than those in the control group in the study of Ryan et al., suggesting that while cough-specific quality of life improved, patients' overall quality of life decreased. Ryan et al. did not explain this result, but we assume it may be related to the adverse effects of gabapentin.
There are several limitations to this study. Very little eligible literature was included, and even less for each meta-analysis. Moreover, due to the small number of RCTs studying gabapentin in the treatment of chronic refractory cough, the included studies were not all RCTs, and the quality of some studies was not high, so there are risks of publication bias and other biases. The basic disease status of patients was not the same or similar across studies, which may increase the risk of bias. However, as patient status varied and the results were robust in the sensitivity analysis, the results may apply to a wider range of patients.
Several aspects of the use of gabapentin for chronic refractory cough need to be studied in future. The first is the peripheral effect of gabapentin in chronic refractory cough. The study of Ryan NM et al. showed that gabapentin had a poor effect on capsaicin-induced cough, which was ascribed to a poor effect on peripheral sensitivity [11]. However, in the study of Dong R et al., C2 and C5 were significantly increased in the gabapentin group, indicating that gabapentin could decrease the sensitivity of the peripheral nervous system [12]. Although Dong R et al. ascribed the discrepancy to differences in the recruited patients, the role of gabapentin in the peripheral sensitivity of chronic refractory cough, and whether gabapentin is more suitable for patients with peripheral sensitization than for those without, should be studied in subsequent trials. The recurrence rate of cough after drug withdrawal also differed between studies, as mentioned above, so the sustainability of gabapentin's treatment efficacy can also be part of future study. In Dong R et al.'s study, there was no statistically significant difference in the efficacy of gabapentin between cough patients with acid and non-acid reflux. Yu Y et al. found that gabapentin was more effective in patients with RSI > 19, and patients with RSI > 19 had more proximal reflux (closer to the throat), non-acid reflux and gas reflux [13]. These differences and mechanisms may also provide directions for future research.
In conclusion, we suggest that gabapentin can be used as a first-choice treatment for chronic refractory cough because of its good efficacy and few adverse effects. However, as there are still some discrepancies between studies and too few RCTs, further research is needed to support this conclusion.
Author contribution statement
Sheng Xie: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Meiling Xie, Yongchun Shen: Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Deyun Cheng: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Data availability statement
Data will be made available on request. | 2023-05-03T05:06:59.261Z | 2023-04-18T00:00:00.000 | {
"year": 2023,
"sha1": "83bb207c887d2b78e90d9fc715ba389b3872e556",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "83bb207c887d2b78e90d9fc715ba389b3872e556",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
125384887 | pes2o/s2orc | v3-fos-license | Non-negative Matrix Factorization and Co-clustering: A Promising Tool for Multi-tasks Bearing Fault Diagnosis
Classical bearing fault diagnosis methods, being designed for one specific task, always pay attention to the effectiveness of the extracted features and the final diagnostic performance. However, most of these approaches suffer from inefficiency when multiple tasks exist, especially in real-time diagnostic scenarios. A fault diagnosis method based on Non-negative Matrix Factorization (NMF) and a Co-clustering strategy is proposed to overcome this limitation. Firstly, high-dimensional matrixes are constructed using Short-Time Fourier Transform (STFT) features, where the dimension of each matrix equals the number of target tasks. Then, the NMF algorithm is carried out to obtain different components in each dimension direction by optimizing a matching criterion, such as the Euclidean distance or the divergence distance. Finally, a Co-clustering technique based on information entropy is utilized to classify each component. To verify the effectiveness of the proposed approach, a series of bearing data sets was analysed in this research. The tests indicated that although the diagnostic performance on a single task is comparable to that of traditional clustering methods such as the K-means algorithm and the Gaussian Mixture Model, the accuracy and computational efficiency in multi-task fault diagnosis are improved.
Introduction
As one of the critical components in rotary machines, the rolling element bearing occupies a significant position in modern mechanical equipment, which has prompted a series of studies on bearing fault mechanisms, fault detection and fault prevention [1]. To highlight the weak fault information contaminated by noise or to reduce the influence of external factors, time-frequency analysis methods have been studied to effectively extract bearing fault features, including the wavelet transform [2], matching demodulation transform [3], principal components analysis (PCA) [4], etc. Meanwhile, modified classifiers have been designed with the goal of improving bearing fault diagnostic performance, such as manifold learning classifiers [5], support vector machines (SVM) [6] and neural network methods [7]. Other researchers have even explored the detection of multiple faults appearing in rotary machines; for example, Tang [8] demonstrated the high potential of kurtosis deconvolution for detecting compound faults of rolling bearings.
However, the approaches mentioned above suffer from inefficiency when facing complicated or crossed diagnostic tasks. For the multi-task issue, a general strategy is to consider the different tasks individually (e.g., crack size and crack location), which increases the computational load and therefore cannot meet the requirement of real-time diagnosis. Although some multi-task strategies have been applied in other fields, such as diffusion least mean squares (LMS) [9] for network node recognition and multi-task learning in context classification [10], their applicability to bearing fault diagnosis is still unknown.
To introduce the concept of multi-task fault diagnosis, a novel idea combining non-negative matrix factorization (NMF) and Co-clustering is proposed in this paper. Different from the global characteristics of vector quantization (VQ) and PCA, NMF gives a good description of local features and specializes in extracting small-scale information across several tasks. Meanwhile, since the concept of Co-clustering was put forward by Higbee [11] in 1996, several notable algorithms have been developed, including CTWC (Coupled Two-Way Clustering), Crossing Minimization and BCCA (Bi-Correlation Clustering Algorithm) [12], which cluster the rows and columns of a 2D matrix at the same time to meet special requirements.
The basic principle of multi-task fault diagnosis
A challenge in multi-task bearing fault diagnosis is how to classify both fault locations and fault severity levels simultaneously. To tackle this challenge, two strategies are considered in traditional methods: 1) recognize the two tasks one by one, which ignores the link between the two purposes; 2) subdivide the multi-task problem into a mixture-type classification model (for example, 4 types for task 1 and 5 types for task 2 mean 20 types for the mixture task), which adversely affects diagnostic accuracy. Therefore, a multi-task classifier is developed in this paper to overcome the weaknesses of the methods above.
In preparation for the multi-task classifier, the short-time Fourier transform (STFT) is adopted for feature extraction because of the non-stationary characteristics of vibration signals:

STFT_x(t, f) = \int x(\tau)\, w(\tau - t)\, e^{-j 2\pi f \tau}\, d\tau

where w(\tau) is the window function and x(\tau) is the collected vibration signal. The result of the STFT reflects the energy distribution in both the time and frequency domains.
In the STFT-based feature extraction strategy, the time-frequency power spectrum graph is segmented by windows, and the maximum value in each window is chosen as a feature, yielding M × N features in total, where M and N denote the numbers of frequency-domain and time-domain segments, respectively. A chirp signal and its feature graph are shown in Figure 1 as an example; the selection of the M and N values depends on the non-stationarity in the time-frequency domain, and more fluctuation in the curves requires more segments. Then, we assume C_m·C_n·x² samples chosen from the two task sets: categories 1~C_m for task 1 and categories 1~C_n for task 2, where x² is the number of samples in each sub-category. The process of multi-task diagnosis is shown in Figure 2 and described as follows: 1) A group of feature matrixes is constructed using the STFT features, where each matrix V_f collects the f-th feature of all C_m·C_n·x² samples arranged as an (xC_m) × (xC_n) array. 2) A Co-clustering classifier based on information entropy is carried out on each 2D feature matrix after the NMF algorithm, and the clustering matrixes are obtained, in which the two task types are divided into n and m groups along the horizontal and vertical grids, respectively; it is possible that a few categories are missing in some matrixes, as shown in Figure 2(c). 3) The final categories are confirmed by fusing the result matrixes, for example by weighted fusion.
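A minimal sketch of this feature-extraction step with scipy, writing the segmentation counts as M and N; the default values follow the 100 × 2 grid used later in the experiments, and the helper name is ours:

```python
import numpy as np
from scipy.signal import stft

def stft_max_features(x, fs, M=100, N=2, nperseg=512):
    """M x N grid of window maxima over the STFT power spectrum.
    Assumes the spectrogram has at least M frequency bins and N time bins."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    power = np.abs(Z) ** 2
    feats = np.empty((M, N))
    for i, row_block in enumerate(np.array_split(power, M, axis=0)):
        for j, cell in enumerate(np.array_split(row_block, N, axis=1)):
            feats[i, j] = cell.max()  # max power in this time-frequency cell
    return feats.ravel()  # M*N features per signal
```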
The Non-negative Matrix Factorization
Since the energy features are non-negative, we consider the NMF algorithm for separating the two tasks within the same non-negative matrix [13]: the feature matrix V is approximately factorized into two non-negative sub-matrixes W and H,

V ≈ WH
where the dimension of V is (xC_m) × (xC_n), W is an (xC_m) × r matrix and H is an r × (xC_n) matrix. Usually r is chosen to be smaller than xC_m and xC_n, so that W and H are smaller than the original matrix; the factorization results are thus two compressed versions of the original data matrix.
To find an approximate factorization V ≈ WH, cost functions are defined to quantify the degree of approximation. Such a cost function can be constructed using a distance measure between two non-negative matrixes A and B:

1) Euclidean distance: \|A - B\|^2 = \sum_{ij} (A_{ij} - B_{ij})^2

2) Divergence distance: D(A \| B) = \sum_{ij} \left( A_{ij} \log \frac{A_{ij}}{B_{ij}} - A_{ij} + B_{ij} \right)

Both distances are lower bounded by zero, with equality if and only if A = B. In addition, the divergence distance is asymmetric in its two arguments, i.e., D(A‖B) ≠ D(B‖A). The NMF is then formulated as an optimization problem using these two distances: minimize \|V - WH\|^2 or D(V \| WH) with respect to W and H, subject to W, H ≥ 0. To solve this optimization problem, traditional numerical optimization approaches such as gradient descent and conjugate gradient can be applied to find local minima, but the convergence of the former is slow and the computation of the latter is complex. Therefore, a multiplicative update rule is adopted in this paper, which provides a good compromise between speed and ease of implementation.
For the divergence D(V‖WH), the update rules of W and H can be designed as [14]:

W_{ia} \leftarrow W_{ia} \frac{\sum_{\mu} H_{a\mu} V_{i\mu} / (WH)_{i\mu}}{\sum_{\nu} H_{a\nu}}, \qquad H_{a\mu} \leftarrow H_{a\mu} \frac{\sum_{i} W_{ia} V_{i\mu} / (WH)_{i\mu}}{\sum_{k} W_{ka}}

Note that the divergence is invariant under these updates if and only if W and H are at a stationary point of the divergence. Each update consists of multiplication by a factor, and it is straightforward to see that this multiplicative factor is unity when V = WH. As an example of NMF results for a two-task matrix, the full matrix was defined as a 300 × 200 task matrix in which each 100 × 100 block represents a different category, in both the horizontal and vertical directions. As shown in Figure 3, the dimensions of the sub-matrixes W and H are 300 × r and r × 200, respectively. When the dimension r is set small (e.g., r = 10), the classification result is pure and the computational load is also small. However, if the types of the full matrix are not clear, the dimension of the sub-matrixes must be increased to achieve higher classification accuracy in the next classifier.
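A compact numpy sketch of these divergence-based multiplicative updates (the random initialization and the small constant eps guarding against division by zero are our own choices):

```python
import numpy as np

def nmf_divergence(V, r, n_iter=200, eps=1e-10, seed=0):
    """Factorize non-negative V (m x n) into W (m x r) and H (r x n) with
    the multiplicative update rules for the divergence objective."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```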
The Co-clustering Based on Information Entropy
After the completion of NMF, the matrix V has been divided into two sub-matrixes W and H in the horizontal and vertical directions, respectively. To maximize the gap between different categories and minimize it within the same categories, we design an information-entropy-based clustering model for Co-clustering.
In this model, we denote the classification of task 1 as {w_1, w_2, ..., w_n}, where n is the total number of categories in task 1, and the classification of task 2 as {h_1, h_2, ..., h_m}, where m is the total number of categories in task 2. The mutual entropy I(W; H) between task 1 and task 2 can be calculated as

I(W; H) = \sum_{w} \sum_{h} p(w, h) \log \frac{p(w, h)}{p(w)\, p(h)}

where the discrete random variables W ∈ {w_1, ..., w_n} and H ∈ {h_1, ..., h_m}; p(w, h) is the joint probability distribution of W and H; p(w) is the probability distribution of W; and p(h) is the probability distribution of H.
Reference [15] proved that, at the optimum of the Co-clustering, the loss in mutual information equals the Kullback-Leibler (KL) distance between the original joint distribution p(w, h) and the distribution q(w, h) induced by the clustering. In this paper, we therefore use the KL distance to represent this loss:

D_{KL}(p \| q) = \sum_{w, h} p(w, h) \log \frac{p(w, h)}{q(w, h)}

which represents the relative entropy between p and q and can be rewritten in a row-wise form (equation (13)) or a column-wise form (equation (14)). Therefore, the smallest loss of mutual information can be obtained by minimizing the distance between p and q in either form, giving the best mapping function. The detailed procedure is as follows: 1) Initialize the probability density functions: the sub-matrixes W and H are divided into k and l groups according to the farthest-segmentation criterion [16], and the initial probability distributions p and q are calculated.
, ( ) ( )- Meanwhile, update the probability distribution ( ) with (12); 3) Update the column clustering: search a new category label j for each column using the constraint condition (16), to reduce the KL distance of equation (14) as far as possible.
, ( ) ( )- Meanwhile, update the probability distribution ( ) with (12); 4) Compare the mutual entropy ( ) with a threshold value ( ), if ( )< ( ), the program outputs the result of Co-clustering, otherwise, return to step (2) for next update. Particularly, a disturbance search algorithm [16] is adopted during the update process in row clustering or column clustering to avoid the local optimum. Finally, a result fusion based on the weight of each feature is carried out to obtain the final diagnosis result: where means the number of frequency domain segmentation of STFT power spectrum graphs; means the number of time domain segmentation of STFT power spectrum graphs; and denote the weight of the f th feature in task 1 and task 2, respectively.
Experiments and Performance Analysis
The bearing data set from the Western Reserve University Bearing Data Center website was used for the experimental study. As shown in Figure 4, the bearing apparatus consists of a drive motor, a torque transducer and a dynamometer. Sixteen channels of vibration data were collected using accelerometers attached with magnetic bases to the drive end and fan end of the motor housing. In addition, speed and horsepower data were collected using the torque transducer/encoder, all with a 12 kHz sampling frequency.
Figure 4. The bearing test system (drive motor, torque transducer and dynamometer).

Firstly, we extracted and compared the STFT features of the different diagnostic tasks. Figure 5 illustrates the power spectrum graphs of the fault severity task (fault location: inner race; load: 0 horsepower; rotating speed: 1797 rpm). Figure 6 gives the power spectrum graphs of the fault location task (fault severity: 0.533; load: 0 horsepower; rotating speed: 1797 rpm). From Figures 5 and 6, we can see that the power spectrum graphs of different types differ along the frequency axis but are stable along the time axis. Obviously, the influence of fault severity and fault location on the frequency domain is larger than that on the time domain. Based on the observed power spectrum graphs, we segmented them using a 100 × 2 window grid, where 100 is the number of segmentations in the range 0~500 Hz and 2 is the number of windows from 0 s to 120 s. Therefore, the dimension of the feature vector of each sample is 200, and we chose 100 (10 × 10) samples in each fault category for the NMF algorithm.
Figure 5. The power spectrum graphs of the fault severity task (categories C1-C5).
Figure 6. The power spectrum graphs of the fault location task.
Secondly, the NMF and Co-clustering strategy was carried out for the 100 (5 + 6) samples. We compared the bearing fault diagnosis performance as well as the computational load as the r value of W and H increases from 1 to 100; the fault diagnosis accuracy curve along with the relative time cost is shown in Figure 7. It can be seen that the diagnosis accuracy grows from 74.16% (①) to 97.08% (③) when the dimension of the NMF sub-matrixes rises from 1 to 100; meanwhile, the computational load exhibits an exponential increase from approximately 0 to 100. According to Figure 7, a balance point is found at r = 39 (②), where the diagnosis accuracy is satisfactory (96.04%) while the relative time cost stays at a low level (15.21%). Therefore, the dimension of the NMF is set to 39 when the proposed method is applied to the Western Reserve University bearing data set. To verify the effectiveness of the proposed approach, we compared its performance with classical clustering algorithms, such as the K-means algorithm and GMM (Gaussian Mixture Model). The fault diagnosis results are listed in Table 2, from which several conclusions can be drawn: 1) The diagnostic accuracy for fault severity (91.16%) is slightly lower than that for fault location (94.08%) in all five strategies, which means that task 1 mainly relies on the time-domain features while task 2 depends on the frequency-domain features; increasing the time-domain segmentation number of the NMF (to more than 2) is therefore a good measure to improve the classification performance of task 1. 2) Among the classical clustering algorithms, although the accuracy of the "Task 1 + Task 2" strategy is higher than that of the mixed "Task 1 × Task 2" strategy, the time cost of the former is also larger than that of the latter (by about 30%); meanwhile, the GMM algorithm requires more time, even though it performs better than the K-means approach. 3) The NMF-based Co-clustering offers good fault diagnosis performance compared with the classical clustering algorithms. In addition, task 1 and task 2 are classified at the same time, which guarantees a low computational load, only 70.7% of that of K-means.
Conclusion
A fault diagnosis method based on NMF and a Co-clustering strategy is proposed for multi-task bearing fault diagnosis. High-dimensional matrixes are constructed using the STFT features, where the dimension of each matrix equals the number of target tasks. With the NMF method, both the fault severity and the fault location of bearings can be identified at the same time. The time cost of the proposed method is reduced by about 30% compared with classical clustering algorithms such as K-means and GMM, with acceptable diagnostic performance. Meanwhile, the suitable dimension of the NMF sub-matrixes is 39; in that case, both the classification accuracy and the time cost are satisfactory, making the method appealing for multi-task application in real-time bearing fault diagnosis systems. | 2019-04-22T13:06:29.931Z | 2017-05-01T00:00:00.000 | {
"year": 2017,
"sha1": "3d4601c2d9109dd201ba78ef041a7820e3dc7955",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/842/1/012046",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "44719d9c9b7d44398dc93e049e366f56082eca67",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
238805553 | pes2o/s2orc | v3-fos-license | Financial and Labour Obstacles and Firm Employment: Evidence from Europe and Central Asia Firms
This paper examines how obstacles in access to finance, labour regulations, and employment quality affect employment growth and the permanent worker ratio at the firm level. Using firm-level data on 11,691 firms in 33 low-income and middle-income countries in Europe and Central Asia, where unemployment rates are the highest worldwide, this paper demonstrates that obstacles in access to finance and employment quality hinder employment growth. The paper also shows that the greater the obstacles in access to finance and labour regulations, the lower the permanent worker ratio. The findings are robust to applying a two-stage least-squares method to address endogeneity issues. Furthermore, quantile regression analysis shows that access-to-finance obstacles impede the lowest-growth firms the most and the highest-growth firms the least. Our results indicate that significant financial and regulatory reforms are needed to spur sustainable employment growth.
Introduction
The literature on business obstacles has examined how they affect firms' growth and decisions. In particular, most theoretical and empirical studies aim to measure the effects on investment decisions and sales growth (for example, see [1][2][3]). However, studies on how business obstacles affect employment decisions are scant.
The current literature indicates that obstacles in access to finance, labour regulation, and employment quality have a direct impact on employment decisions at the firm level. Specifically, [4][5][6], among others, find that access to finance is a crucial determinant of employment growth and of firms' choice between permanent and non-permanent workers. Employment protections at the national level are well documented in the literature as constraints on employment growth and the choice between permanent and non-permanent workers. Refs. [7,8] have reported a positive impact of labour market liberalisation on employment growth and the use of permanent workers at the firm level. Hence, it is important to examine how labour regulation obstacles at the firm level affect employment growth and the permanent worker ratio. Furthermore, recent studies show that there is a positive correlation between the quality of the labour force and employment growth, as well as the proportion of permanent workers, at the country level. Average unemployment rates are higher for those with a lower education level because the labour market is systematically oversupplied with low-education labourers. In addition, non-permanent workers are relatively less educated than permanent workers [9,10]. The current literature has focused heavily on barriers to employment growth and labour structure with regard to access to finance, but has largely ignored the impact of labour regulation and the quality of labour at the firm level on employment decisions. Thus, this paper contributes to the literature by exploring how labour regulation and employment quality obstacles affect employment growth and the permanent worker ratio at the firm level.
Our study utilises firm-level data from 33 low- and middle-income countries in Europe and Central Asia, where the unemployment rates are the highest and labour force participation rates are the lowest in the world. A significant increase in employment and workforce participation rates in these areas is particularly important to achieve the sustainable growth objectives highlighted in the 2030 Agenda for Sustainable Development [11,12]. By showing how business obstacles affect employment growth and the permanent worker ratio, our study provides policymakers with empirical evidence to tackle the current employment and labour structure issues impeding this sustainability target.
We propose instrumental variables (IVs) regression models to address potential endogeneity concerns in the literature. The existing studies employ a standard ordinary least-squares (OLS) regression model to explore the effect of business obstacles on firm growth and performance [13][14][15]. However, the OLS model may suffer from endogeneity: a correlation between the error term and one or more obstacle variables, which creates biased and inconsistent estimates [16,17]. To overcome the endogeneity issue, this paper uses the industry-country averages of obstacles as instrumental variables to break the correlations between the error terms and the obstacle variables. These IVs are carefully selected and satisfy two conditions for good IVs: (1) they are uncorrelated with the error term, and (2) they are partially or fully correlated with the obstacle measures once other independent variables are controlled for. The results of the F-tests in the first-stage regressions were significant, indicating that our IVs are relevant. Furthermore, our chosen IVs may help to isolate the exogenous variation of obstacles because the causality is likely to run from average obstacles to individual firms, not vice versa. Finally, this paper utilises the quantile regression method to explore the differing effects of the business obstacles across employment growth quartiles. Thus, this paper complements the work of [18,19], who found that the partial impacts of reported obstacles differed across different segments of employment growth.
This paper investigates how obstacles in access to finance, labour regulation, and employment quality affect employment growth and the permanent worker ratio in 33 low- and middle-income countries in Europe and Central Asia. Our results show that access to finance obstacles have a significant and adverse impact on employment growth and that this effect varies across employment growth quantiles. Specifically, access to finance obstacles constrain the lowest-growth firms the most and the highest-growth firms the least. This paper also shows that employment quality obstacles weaken employment growth. Finally, we found that the higher the level of access to finance and labour regulation obstacles, the lower the permanent worker ratio at the firm level.
The rest of the paper proceeds as follows. In the next section, we review the relevant literature and discuss the development of our hypotheses. Then, the data sample and variable estimations are described in Section 3. Section 4 discusses the methodology. Regression results are analysed in Section 5, and Section 6 offers conclusions.
Literature Review and Hypothesis Development
This paper relates to two streams of the business obstacles and firm performance literature. The first stream focuses on the relationship between business obstacles and growth at the firm level. The second stream explores the link between the obstacles and the choice between permanent and non-permanent workers.
Several studies employ aggregate measures of national development and firm-level survey data to examine the economic and institutional effects on firm growth. These studies report a positive correlation between financial and institutional development and sales growth at the firm level. Their findings all emphasize the importance of financial and institutional development at the macro level in driving growth and performance at the firm level [20][21][22][23][24][25][26][27].
Other studies explore firm-level survey data, mainly from the World Bank, to examine how reported obstacles affect firms' growth and operation. These studies utilise firms' responses on the extent to which various obstacles affect their business operations and performance [13,28,29]. Existing studies show that the impact of reported obstacles on firm growth is unclear. Differences in economic and institutional development across countries are the main reason for the differences in the impact [13]. In countries with less developed systems, firms are affected by all obstacles to a greater extent than firms in countries with a more advanced legal and financial system and less corruption. Refs. [15,30,31], among others, reported that access to finance, as well as legal and corruption obstacles, hinder firm growth. Ref. [30] investigated the correlation between firm performance and the business environment in various developing countries, including Bangladesh, China, India, and Pakistan. Business environments were measured based on the number of days required for customs clearance (import/export), the number of days without power during the year, and the number of days required to set up a landline. They found that a poor business environment was correlated with lower productivity, profits, and employment growth at the firm level. Higher power outages and longer customs clearance times reduced productivity and profitability. Firms with easier access to financial services showed higher growth in assets, employment, and output. Ref. [32] found that access to finance obstacles negatively affected firm growth in four out of five countries in the Euro area after controlling for growth opportunities, characteristics of the firms, time, and industry effects. Ref. [33] investigated the impact of financial development on labour participation and employment ratios in China and found that the effects differed across regions. Ref. [15] did not find an impact of access to finance obstacles on the sales and employment growth of 27 Eastern European and Central Asian countries from 2002 to 2009. Ref. [34] observed that firms that experienced financial distress showed reductions in both employment and wages.
Empirical studies on the link between reported labour regulation and employment quality obstacles and firm performance have received little attention. Ref. [28] used the World Bank Enterprise Survey (WBES) of 30 African countries and found that labour regulation and employment quality obstacles have a significant adverse impact on employment growth. Ref. [35] investigated the link between labour skill deficits and firm performance in Tanzania and found that firms with a higher proportion of skilled workers were more productive. Ref. [36] explored the employment implications of the severance payment policy in China and found that the policy of increased severance payments led to an increase in median firm size. Based on the above analysis, we propose the first hypothesis: Hypothesis 1 (H1). Firms with higher reported obstacles have lower employment growth.
Investigating the impact of obstacles on low-growth and high-growth firms allows us to determine which factors help explain why some firms grow slower or faster than the average [37,38]. Moreover, recent studies show differing effects of reported obstacles on employment growth across segments of employment growth [18,19,39]. Ref. [39] investigated the effects of access to finance on firm growth across growth quantiles during the global financial crisis and observed differences in the growth dynamics between high-growth and low-growth firms. The credit crisis in Europe after 2008 seriously affected low-growth firms, whereas high-growth firms were barely affected. Therefore, we propose the second hypothesis: Hypothesis 2 (H2). The impact of business obstacles varies across different segments of employment growth.
A higher level of obstacles in access to finance, on the one hand, requires firms to increase labour productivity to raise the profitability of the capital they have raised, leading to an increase in demand for permanent workers. On the other hand, firms facing greater obstacles in access to finance are uncertain about their ability to attract capital in the future, leading to a lower demand for permanent workers to enable higher flexibility [40]. Refs. [41,42] show that firms tend to increase non-permanent workers when facing higher financing obstacles. The financial crisis in 2008 had a positive effect on Germany's employment status, whereas the opposite effect was found in relation to women and young people with disabilities in Spain [43,44].
Labour regulations, which often aim to protect permanent workers, create substantial layoff costs for permanent workers compared to those of non-permanent workers. These costs include firing costs (e.g., separation pay, costs associated with lawsuits), search costs (e.g., fees to recruitment agencies, advertising costs), recruitment costs (e.g., reviewing applications, conducting interviews), and training costs for new workers [4]. Refs. [45][46][47], among others, observed that a greater number of labour protection regulations causes higher layoff costs and thus reduces permanent employment at the firm level. Ref. [47] also showed that labour regulation reforms in Europe that aim to relax restrictions on layoffs raise the proportion of permanent workers. Ref. [48] found evidence of a substantial increase in the permanent worker ratio after reforms of employment protection that lowered the firing costs in Italy in 2015.
Based on these theoretical and empirical observations, we hypothesize the following: Hypothesis 3 (H3). Firms with higher business obstacles reduce their share of permanent workers.
Data Description and Variable Construction
This study used data from the most recent World Bank Enterprise Surveys (WBES) of 33 countries in Europe and Central Asia: http://www.enterprisurveys.org (accessed on 11 July 2020). The World Bank's sampling method aims to achieve two main goals: first, to benchmark the business of individual economies worldwide; and second, to conduct performance analyses of how obstacles in business affect productivity and job creation. To achieve these two goals simultaneously, the World Bank proposes two sampling principles: (1) to create a sample that represents the entire private non-agricultural economy, and therefore includes service and other related sectors; and (2) to generate sample sizes that are large enough for selected industries to perform robust statistical analyses with accuracy levels of at least 7.5 percent precision for 90 percent confidence intervals.
The World Bank uses stratified random sampling to select firms for the sample. First, the World Bank divides all firms in each country into stratified groups based on size (small, medium, and large), business sector (manufacturing, retail, and other services), and geographic region within a given country. Then, the surveyed firms are selected by means of simple random sampling within each group. This technique ensures that the sample represents the population of firms by size, industry, and geographic region.
The initial sample includes 21,459 firms; however, some firms did not answer all the questions used in the empirical analysis, so we exclude firms with missing values for any explanatory and control variables. In addition, since statistical measures such as the mean and standard deviation are sensitive to outliers, we trimmed values less than the 2.5th percentile or greater than the 97.5th percentile of the independent variables. The final sample used in the empirical analysis included 11,691 firms across 33 countries in Europe and Central Asia.
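A hedged pandas sketch of this cleaning step (listwise deletion followed by trimming at the 2.5th/97.5th percentiles) might look as follows; the DataFrame and column names are hypothetical, not the authors' replication code.

    import pandas as pd

    def clean_sample(df, analysis_vars):
        # Drop firms with missing values for any analysis variable.
        df = df.dropna(subset=analysis_vars)
        # Trim observations outside the 2.5th-97.5th percentile range
        # of each independent variable.
        for v in analysis_vars:
            lo, hi = df[v].quantile([0.025, 0.975])
            df = df[df[v].between(lo, hi)]
        return df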
The number of firms surveyed in each country depended on its gross national income (GNI) in 2008. Accordingly, the World Bank selected 150 firms in very small economies (GNI < USD 15 billion); 360 firms in small economies (GNI from USD 15 billion to 100 billion); 1000 firms in medium economies (GNI from USD 100 billion to 500 billion); and 1320 firms in large economies (GNI > USD 500 billion). Table 1 presents the number of firms, the macroeconomic indicators, and the mean values of the obstacles in each country after removing missing values and outliers. The number of sampled firms varied across countries in the region. Seven major economies, including Portugal, Uzbekistan, Russia, Ukraine, Kazakhstan, and Turkey, had 4431 chosen firms, accounting for more than 36 percent of the sample size. At the other extreme were the four smallest economies (Azerbaijan, Tajikistan, Montenegro, and Kosovo), from which fewer than 120 firms were included for each country. This study used macro-level data on GDP growth, GDP per capita, and inflation as country-level controls in each country. We selected annual GDP growth and GDP per capita because they positively correlate with investment opportunities at the firm level. We also included the inflation rate, as it is an indicator of whether the local currency provides a stable measure of value in contracts between firms [49,50]. Country-level variables are the averages of the values for the three years prior to the survey year. Detailed definitions of these measures and data sources are presented in Table A1 in Appendix A. Countries in the sample showed significant differences in GDP per capita. The countries with the lowest per-capita GDP were the Kyrgyz Republic and Tajikistan, with an average income of around USD 1000 per year, compared to approximately USD 30,000 in the two highest per-capita income countries, Italy and Cyprus.
The World Bank collected firm owners' opinions on 15 different business obstacles to determine their perceptions of how the obstacles constrained their growth and performance by answering the following question: "To what extent is ________ an obstacle to the current operations of this establishment?" The blank space contained each of the 15 obstacles, which were access to finance, access to land, business licensing and permits, corruption, courts, crime, trade regulations, electricity, employment quality, labour regulations, political instability, practices of the informal sector, tax administration, tax rates, and transportation. Firm owners rated obstacles on the same scale from 0 to 4, indicating no obstacle (0), small obstacle (1), moderate obstacle (2), large obstacle (3), and very serious obstacle (4). Information about these perceptions is helpful because it implicitly provides a measure of impact. Firms are asked to assess how each of the obstacles affects their ability to operate and grow. Therefore, areas that may involve a significant increase in costs but receive little attention, or where the firm already has alternatives, will be unlikely to rate highly in the obstacle ranking [20].
Answering the question of how reported obstacles affect employment growth and the permanent worker ratio is significant in practice. It helps policymakers to prioritise the obstacles that need to be improved to create more incentives for firms. Even if firms benefit from improvements in all aspects of the business environment, addressing them all at once would be challenging for any government. This paper focuses on three main elements of the business environment: access to finance, labour regulation, and employment quality. Table 1 reports the average level of owners' perceptions of the access to finance, labour regulation, and employment quality obstacles. Interestingly, none of the country averages was two or higher, suggesting that all three obstacles were low to moderate at the country level.
See Table A1 for variable definitions and data sources. Figure 1 shows the relationship between economic development, measured by GDP per capita, and the average level of obstacles at the country level. Each circle represents a country in the sample. The size of the circles shows the relative number of firms included in each country: the bigger the circle, the more firms are included in the sample. Figure 1 shows that firms tended to report lower levels of access to finance obstacles in more developed countries. A similar pattern is shown when using the ratio of private sector loans to GDP as a proxy for each country's level of financial market development. An interesting finding is that in more developed economies, firms reported higher levels of labour regulation obstacles. This result is consistent with the literature showing that less developed economies are more likely to relax regulations on worker protection [51]. The correlation between GDP per capita and employment quality obstacles is relatively weak. Finally, we found a higher variation in the level of the reported obstacles in less developed economies. One potential shortcoming when the owners' perceptions are used in the analysis is that unsuccessful owners may blame business obstacles for their failure [17,52]. However, the prime purpose of the WBES was to assess the business environment, not the firm's performance. Accordingly, the interviewees only answered questions on their operations after completing questions in the business environment section. This sequence reduces respondents' ability to justify their unsuccessful performance, having answered earlier questions about the business environment. We acknowledge that bias in self-reported data cannot be eliminated. However, it is less likely to be a significant source of bias (see [13] for a detailed discussion).
Table 2 presents summary statistics of the firm-level variables used in our analysis. Employment growth is the percentage change in the number of permanent workers the firm has at the surveyed time compared to three years prior. Permanent workers are utilised because their growth reflects firm performance and is of interest to policymakers [14]. Employment growth is calculated as follows:

Employment growth_it = 100 × (l_{i,t} − l_{i,t−3}) / [(l_{i,t} + l_{i,t−3}) / 2],

where Employment growth_it is the employment growth of firm i in year t, and l_{i,t} and l_{i,t−3} are the numbers of permanent workers of firm i in year t and three years earlier (t − 3), respectively. The main advantage of using the average number of permanent workers in the denominator is that it results in the same absolute value of employment growth between two numbers of workers, regardless of whether there is an increase or decrease in workers. Employment in a firm includes permanent and non-permanent workers. Non-permanent workers, who generally have lower job satisfaction and receive less training and income than permanent workers, are less desirable [53,54]. We measured the permanent worker ratio by calculating the percentage of permanent workers out of the total full-time equivalent (FTE) as follows:

Permanent worker ratio = 100 × l_full / (l_full + Σ_{i=1}^{N} t_i / 12),

where t_i is the number of months that non-permanent worker i worked in the last 12 months, and N and l_full are the total numbers of non-permanent and permanent workers, respectively. Table 2 shows that the employment growth and permanent worker ratio averages were 1.956 percent and 96 percent, respectively. Employment growth in Europe and Central Asia was much lower than in other regions, which is consistent with the literature [55]. On average, the levels of access-to-finance, labour regulation, and employment quality obstacles were 1.148, 0.962, and 1.448, respectively. The employment quality obstacles in Europe and Central Asia were much higher than in other regions and the global average, which is consistent with other studies showing that the proportion of trained workers and the labour participation rate in these regions are well below those of the rest of the world [11]. Table A2 in Appendix A presents the Pearson's pairwise correlation matrix of the 16 firm-level variables listed in Table 2. All three obstacles were positively correlated, implying that firms that reported a higher level of one obstacle were also more likely to face higher constraints in the others.
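A short sketch of the two outcome variables as reconstructed above; the column names (l_t, l_t3, l_full, temp_months) are assumptions for illustration only.

    import pandas as pd

    def employment_growth(l_t, l_t3):
        # Midpoint growth: symmetric for increases and decreases in workers.
        return 100 * (l_t - l_t3) / ((l_t + l_t3) / 2)

    def permanent_worker_ratio(l_full, temp_months):
        # temp_months: total months worked by all non-permanent workers in
        # the last 12 months; dividing by 12 converts them to FTE.
        return 100 * l_full / (l_full + temp_months / 12)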
Methodology
This paper explores the effect of access-to-finance, labour regulation, and employment quality obstacles on employment growth and the permanent worker ratio. All regressions are estimated using firm-level data across 33 countries and country-year fixed effects. The inclusion of country-year fixed effects allows the model to capture time-varying and country-specific unobservable factors such as national culture. We also control for country- and firm-specific variables, including firm size and age, CEO experience, ownership, certified financial statements, labour costs per sales, and sales growth. These control variables are widely used in the literature (see [56][57][58][59][60][61]; Shibia and Barako, 2017; Grazzi and Moschella, 2018; Di Cintio, Ghosh and Grassi, 2017). We follow the World Bank classification system to divide firms into three groups based on the number of workers: small (<20), medium (20-99), and large (≥100), with large firms as the reference group. We include two dummy variables, small and medium, to control for firm size. We measure the CEO's experience by the number of years of management experience in the relevant business. Audited financial statements are more reliable and informative, which significantly influences the decisions of investors and creditors. We include a dummy variable as an indicator of the reliability of a firm's financial statements. This variable takes the value of one if the financial statements were audited by an independent auditing firm and zero otherwise. We also control for the effect of ownership by including two dummy variables, government and foreign. Finally, we control for other firm-level variables, including industry, firm age, labour costs per sales, and sales growth. Table A1 in Appendix A provides detailed descriptions and the sources of each variable.
To assess the impact of access-to-finance, labour regulation, and employment quality obstacles on employment growth and the permanent worker ratio, we estimated the following regression:

y_ijt = α + Σ_k β_k Obstacle_k,ijt + γ′ X_ijt + λ_j + η_t + ε_ijt,    (1)

where y_ijt are the dependent variables of interest, which are either the employment growth or the percentage of permanent workers of firm i in country j and year t, Obstacle_k,ijt are the three reported obstacles, X_ijt is a vector of firm- and country-level controls, λ_j and η_t are country and year fixed effects, respectively, and ε_ijt denotes the error term.
We first run a standard ordinary least-squares (OLS) regression to find the mean relationship between the regressors and the outcome variables based on the conditional mean E(y_ijt | X). Estimates of Equation (1) may encounter an endogeneity problem because of the non-random assignment of obstacles. For example, unobserved firm characteristics may cause some firms to grow faster and employ more permanent workers, and these characteristics may not be distributed randomly. As a result, OLS might produce biased and inconsistent estimates [62]. The source of the endogeneity problem in this case is the correlation between the error term and one or more obstacle variables.
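A minimal OLS benchmark for the reconstructed Equation (1), assuming statsmodels and illustrative variable names; country-year fixed effects enter via dummies for a combined country-year identifier, with heteroscedasticity-robust errors as in the OLS columns of the results tables.

    import statsmodels.formula.api as smf

    # "cy" is a combined country-year identifier (assumed column names).
    df["cy"] = df["country"].astype(str) + "_" + df["year"].astype(str)
    formula = ("emp_growth ~ fin_obstacle + labour_obstacle + quality_obstacle"
               " + small + medium + firm_age + ceo_experience + audited"
               " + government + foreign + labour_cost_sales + sales_growth"
               " + C(cy)")
    ols = smf.ols(formula, data=df).fit(cov_type="HC1")
    print(ols.summary())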
Our strategy to solve the problem is to apply instrumental variables (IVs) with two-stage least-squares (2SLS) estimates to break the correlation between the error term and the independent variables. To use the 2SLS method, we need to find instrumental variables that are, firstly, uncorrelated with the error term and, secondly, partially or fully correlated with the predictive variables. This paper follows [63] in using the average values of obstacles in each country-industry group as instrumental variables. In practice, we are unable to verify the first condition, as the error term is unobservable. The F-statistics in the first-stage regressions are all greater than 100 and statistically significant in all models, indicating that the industry-country averages of obstacles satisfy the second condition for good instruments. Moreover, instrumenting the obstacles with the averages in each industry group helps to isolate the exogenous part because, when obstacles are measured at the aggregate country-industry level, causality is likely to run from average obstacles to individual firms, not vice versa.
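The 2SLS strategy described above could be sketched with linearmodels as follows; the groupby step builds the industry-country average instruments, and all names are assumptions rather than the paper's replication code.

    import statsmodels.api as sm
    from linearmodels.iv import IV2SLS

    obstacles = ["fin_obstacle", "labour_obstacle", "quality_obstacle"]
    # Industry-country averages of the reported obstacles serve as instruments.
    for v in obstacles:
        df["iv_" + v] = df.groupby(["country", "industry"])[v].transform("mean")

    controls = ["small", "medium", "firm_age", "ceo_experience", "audited",
                "government", "foreign", "labour_cost_sales", "sales_growth"]
    res = IV2SLS(dependent=df["emp_growth"],
                 exog=sm.add_constant(df[controls]),
                 endog=df[obstacles],
                 instruments=df[["iv_" + v for v in obstacles]]
                 ).fit(cov_type="robust")
    print(res.first_stage)  # first-stage F-statistics check instrument relevance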
Refs. [18,19] find that the partial impact of reported obstacles on employment growth differs across different segments of employment growth. We follow the quantile estimation method of [62] to investigate such effects. The regression model for quantile level τ of the response is as follows:

Q_τ(y_ijt | X) = α(τ) + Σ_k β_k(τ) Obstacle_k,ijt + γ(τ)′ X_ikt + λ_j + η_t,

where Q_τ(y_ijt) is the τth percentile of the employment growth of firm i in country j and year t, and X_ikt is a vector of firm-level control variables. In the quantile regression, the estimated slope and intercept coefficients β_i(τ) depend on τ. Unlike the OLS estimator, we estimate the parameters of the conditional quantile function by means of the quantile regression estimator β̂_τ that minimises the objective function

Q(β̂_τ) = Σ_{i: y_i ≥ x_i′β̂_τ} τ |y_i − x_i′β̂_τ| + Σ_{i: y_i < x_i′β̂_τ} (1 − τ) |y_i − x_i′β̂_τ|.

In the quantile regression, the slope coefficient for a predictor x_i, β̂_iτ, indicates the amount of change in the conditional quantile τ of y, Quant_τ(y|x), associated with a one-unit change in x.
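A sketch of the quantile regressions for τ = 0.25, 0.50, 0.75 using statsmodels' QuantReg; the formula reuses the illustrative names above and is a simplified version of the full specification.

    import statsmodels.formula.api as smf

    formula = ("emp_growth ~ fin_obstacle + labour_obstacle + quality_obstacle"
               " + small + medium + firm_age + sales_growth + C(cy)")
    for tau in (0.25, 0.50, 0.75):
        qr = smf.quantreg(formula, data=df).fit(q=tau)
        print(tau, qr.params[["fin_obstacle", "labour_obstacle",
                              "quality_obstacle"]])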
We also estimate the economic impact of the obstacles at the sample mean by multiplying the corresponding β coefficients by the sample means of the obstacles. This result measures the total effect of the obstacles on employment growth and the permanent worker ratio, considering both the magnitude of the average obstacles and the size of the corresponding coefficients.
To test the hypothesis that a reported obstacle influences firm employment growth and the permanent worker ratio, we performed a t-test to determine whether its corresponding coefficient is significantly different from zero.
Results and Analysis
Table 3 reports the impact of access-to-finance, labour regulation, and employment quality obstacles on employment growth at the firm level. Columns (1) and (2) provide estimates using 2SLS (IV) and OLS for the conditional mean, respectively. Columns 3-5 are estimates for the 25th, 50th, and 75th percentiles. The IV estimates show that access-to-finance obstacles hinder employment growth and that the impact is statistically significant. On average, each additional level of access-to-finance obstacles reduces firm employment growth by 0.812 percent. This result is consistent with our first hypothesis that firms with higher reported obstacles have lower employment growth. Note: The OLS t-statistics (in parentheses) are robust to heteroscedasticity. The quantile regression estimates, along with t-statistics (in parentheses), were obtained using Stata 14.0. * significant at 10%; ** significant at 5%; *** significant at 1%.
We computed the economic impact of each obstacle by multiplying the coefficients of the obstacle variables by the mean level of the reported obstacles. Estimates of the economic impact of the access-to-finance obstacles at the sample mean also show a reduction of 0.93 percent in employment growth. Our results show that the access-to-finance obstacle coefficient is large enough to affect the employment growth of firms at the sample mean level. Our findings are consistent with the current literature, confirming that access-to-finance obstacles are a relevant factor in explaining firm growth [32,64]. The quartile estimates in columns 3-5 show that access-to-finance obstacles affect employment growth differently across quartiles. Access-to-finance obstacles constrain the lowest-growth firms the most and the highest-growth firms the least. On average, each additional level of access-to-finance obstacles reported by the firms reduced employment growth for quartile 1 and median firms by 0.043 percent and 0.039 percent, respectively.
The labour regulation obstacle was positively correlated with firm growth. This observation seems paradoxical at first glance. However, when answering the business environment questions, firms ranked the obstacles in the context of how they affected their business. Accordingly, firms that do not need to expand their labour force are less concerned with the labour regulation obstacle and will likely score it low. Finally, an increase in the employment quality obstacle by one level causes a 0.536 percent reduction in employment growth.
When considering the effects of the control variables on employment growth, the results of our estimates are consistent with the literature. The employment growth of small firms is lower than that of large firms (the control group). State-owned firms have lower employment growth rates. Young firms have a higher growth rate than mature and old firms. One interesting feature of the estimates is that labour-intensive firms, as measured by the ratio of labour costs to sales, have higher employment growth rates.
Figure 2 summarises our estimates of the access-to-finance, labour regulation, and employment quality obstacle coefficients across employment growth quantiles. The absolute values of the estimated coefficients of all three obstacles around the median of employment growth were statistically significant. However, the sizes of the coefficients were relatively small, suggesting that their economic effects are marginal. Access-to-finance and employment quality obstacles hindered the lowest-growth firms the most and the highest-growth firms the least. The labour regulation obstacle was positively correlated with slow-growth firms, as discussed above.
Table 4 reports how these obstacles affected the permanent worker ratio at the firm level. We estimated five model specifications for robustness checks. We first ran the regression using the 2SLS-IV (1) and OLS (2) methods, in which all three obstacles (access to finance, labour regulation, and employment quality) were included. We then investigated the individual effects of each obstacle using the OLS method (models 3-5). Access-to-finance obstacles were negatively correlated with the permanent worker ratio. Each additional level of access-to-finance obstacles lowered the permanent worker ratio by 1.92 percent. When entered individually, both labour regulation and employment quality obstacles negatively affected the permanent worker ratio, as expected. However, the employment quality coefficient lost its significance in the presence of the remaining two obstacles, suggesting that the access-to-finance and labour regulation obstacles played a more significant role in determining the permanent worker ratio at the firm level. This result is also consistent with the work of Ferreira (2017) and the prediction of [4] that the access-to-finance obstacle and other obstacles that relate to permanent hiring and firing costs are the main determinants when firms choose the form of their employment contracts. Our results suggest that increased labour regulation obstacles may cause an increase in the hiring and firing costs of permanent workers. As a result, firms use fewer permanent workers in order to minimise these costs. Our estimation results support the third hypothesis, that firms with higher business obstacles use fewer permanent workers.
When looking at the control variables, smaller firms used relatively more permanent workers than large firms. Mature firms, old firms, and firms with audited financial statements employed fewer non-permanent workers, since their growth and operations were more stable. Other control variables had a marginal influence on the structure of labour.
Low-income and middle-income countries in Europe and Central Asia share some common features: a large state-owned sector, undeveloped financial systems, a shortage of educated workers, and high unemployment rates [12,65]. Thus, promoting employment is a priority for countries in this region. Table 5 summarises the challenges that firms in this area are facing, focusing on SMEs. This table also highlights some opportunities and solutions for these nations to facilitate firms' financial and human resource demands.
Conclusions
This study investigated how access-to-finance, labour regulation, and employment quality obstacles affect employment growth and the composition of permanent workers. When deciding on the optimal permanent worker ratio, firms weigh the demand for labour and productivity against the requirement of labour flexibility. Greater obstacles in access to finance require firms to increase labour productivity, whereas future financing obstacles increase the need for labour flexibility. Furthermore, when firms face more significant labour regulation obstacles, the hiring and firing costs of permanent workers tend to be higher.
This paper focuses on 33 low-income and middle-income countries in Europe and Central Asia, where unemployment rates are the highest worldwide. Creating more and better jobs is arguably the most critical challenge in relation to promoting the prosperity of countries in this region.
Our results show that access-to-finance obstacles and employment quality obstacles constrained employment growth at the firm level. Our findings suggest that easing access to finance is vital in order to promote employment growth and increase the permanent worker ratio at the firm level.
Figure 1. GDP per capita and reported obstacles at the firm level.
Figure 2. Coefficients of the obstacles to employment growth by quantile of the employment growth distribution. The dashed and dotted lines are the coefficient estimates and their 95% confidence intervals, respectively.
Table 1. Economic indicators of the selected countries in the sample.
Table 3. Employment growth and firm-level obstacles.
Table 4. Permanent employment and firm-level obstacles.
Table 5. Summary of barriers, drivers, and solutions relating to the promotion of employment growth and job security. | 2021-09-27T20:48:04.503Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "38efd9c4bf5b41b45bea49073320c1f8a718d059",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/15/8650/pdf?version=1641355678",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e62ab80b91a051fe177fdaaba792fe6544a058d2",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
205416138 | pes2o/s2orc | v3-fos-license | Evidence for Magnetic Weyl Fermions in a Correlated Metal
Recent discovery of both gapped and gapless topological phases in weakly correlated electron systems has introduced various relativistic particles and a number of exotic phenomena in condensed matter physics. The Weyl fermion is a prominent example of three dimensional (3D), gapless topological excitation, which has been experimentally identified in inversion symmetry breaking semimetals. However, their realization in spontaneously time reversal symmetry (TRS) breaking magnetically ordered states of correlated materials has so far remained hypothetical. Here, we report a set of experimental evidence for elusive magnetic Weyl fermions in Mn$_3$Sn, a non-collinear antiferromagnet that exhibits a large anomalous Hall effect even at room temperature. Detailed comparison between our angle resolved photoemission spectroscopy (ARPES) measurements and density functional theory (DFT) calculations reveals significant bandwidth renormalization and damping effects due to the strong correlation among Mn 3$d$ electrons. Moreover, our transport measurements have unveiled strong evidence for the chiral anomaly of Weyl fermions, namely, the emergence of positive magnetoconductance only in the presence of parallel electric and magnetic fields. The magnetic Weyl fermions of Mn$_3$Sn have a significant technological potential, since a weak field ($\sim$ 10 mT) is adequate for controlling the distribution of Weyl points and the large fictitious field ($\sim$ a few 100 T) in the momentum space. Our discovery thus lays the foundation for a new field of science and technology involving the magnetic Weyl excitations of strongly correlated electron systems.
Traditionally, topological properties have been considered for systems supporting gapped bulk excitations 1. However, over the past few years, three-dimensional gapless systems such as Weyl and Dirac semimetals have been discovered, which combine the two seemingly disjoint notions of gapless bulk excitations and band topology [2][3][4][5]. In 3D inversion or TRS breaking systems, two nondegenerate energy bands can linearly touch at pairs of isolated points in the momentum (k) space, giving rise to Weyl quasiparticles. The touching points, or Weyl nodes, act as unit strength (anti)monopoles of the underlying Berry curvature [4][5][6][7], leading to protected zero energy surface states also known as Fermi arcs 4,5,7, and many exotic bulk properties such as a large anomalous Hall effect (AHE) 12, optical gyrotropy 13, and the chiral anomaly 6,[14][15][16][17][18][19]. Interestingly, Weyl fermions can describe the low energy excitations of both weakly and strongly correlated electron systems. In weakly correlated, inversion symmetry breaking materials, where the symmetry breaking is entirely caused by the crystal structure rather than by the collective properties of the electrons, ARPES has provided evidence for long-lived bulk Weyl fermions and the surface Fermi arcs 4,5.
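As a toy numerical illustration of such a node (a generic two-band model, not the Mn3Sn band structure), consider a Bloch Hamiltonian H(k) = d(k)·σ linearised around a candidate Weyl point: the gap 2|d(k)| closes only at the node, and the chirality is the sign of det(∂d_i/∂k_j). The node position and velocity below are arbitrary assumptions.

    import numpy as np

    K0 = np.array([0.1, 0.0, 0.0])  # assumed node position (arbitrary units)

    def gap(k, v=1.0):
        # Eigenvalues of d(k).sigma are +/-|d(k)|, so the gap is 2|d(k)|.
        d = v * (np.asarray(k) - K0)
        return 2 * np.linalg.norm(d)

    # The gap closes linearly only at the node:
    for dk in (0.0, 0.01, 0.05):
        print(f"|k - k0| = {dk:.2f}  gap = {gap(K0 + [dk, 0.0, 0.0]):.3f}")

    # Chirality of this linear node: sign of det of the velocity matrix
    # v_ij = d(d_i)/d(k_j); here v_ij = v*delta_ij, so chirality = +1.
    # Flipping the sign of v along one axis yields the partner node of
    # opposite chirality.
    print("chirality:", int(np.sign(np.linalg.det(np.eye(3)))))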
On the other hand, magnetic Weyl fermions have been predicted for several spontaneously TRS breaking phases of strongly correlated materials since the early stage of the search 7,9,10, but so far they have evaded experimental detection. In correlated electron systems, where the spectroscopic detection of coherent Weyl quasiparticles and Fermi arcs can be complicated by the interaction-induced suppression of bandwidth and lifetime, it becomes essential to perform complementary measurements of bulk physical quantities such as the AHE and the chiral anomaly, which are sensitive to the underlying topology.
Among the candidates, Mn3Sn is the only compound that exhibits a spontaneous Hall effect 11. Mn3Sn is a hexagonal antiferromagnet (AFM) with a stacked kagome lattice. The geometrical frustration leads to a 120° structure of Mn moments, whose symmetry allows a very small spin canting and thus a macroscopic magnetization (Fig. 1a) 21,33. This is the first AFM that exhibits a surprisingly large AHE below its Néel temperature of 430 K, which is comparable with or even exceeds the AHE of ferromagnets, even though the magnetization is negligibly small. Such a large AHE in an AFM at room temperature is not only potentially relevant for technological applications, but also indicates a novel mechanism 22 that induces a large Berry curvature Ω(k) 23, whose scale reaches ∼ a few 100 T 24. In fact, a recent band calculation has shown that the antiferromagnetic state can support Weyl fermions and a large Ω(k) 25. Here, we show by ARPES measurements that Mn3Sn is strongly correlated. Moreover, we provide evidence for magnetic Weyl fermions by combining the observations of a large AHE 11 and the concomitant positive longitudinal magnetoconductance (LMC) and negative transverse magnetoconductance (TMC) originating from the chiral anomaly.
We first present in Fig. 1b an overview of the band structure calculated with spin-orbit coupling (SOC) for the magnetic configuration shown in Fig. 1a. The TRS breaking lifts the spin degeneracy and leads to band crossings at a number of k points, resulting in various pairs of Weyl nodes at different energies. Among them, the most relevant for transport and other macroscopic measurements are the Weyl points closest to the Fermi energy, E_F. In accordance with the previous prediction 25, such Weyl points are found along the K-M-K line (dotted rectangle in Fig. 1b); an electron band and a hole band centered at M intersect slightly above E_F, forming the Weyl points. Moreover, we show in the following that the magnetic texture sensitively determines the presence or absence of a gap at the band crossing.
In Fig. 1c, we summarize the k_x-k_y locations of the above-mentioned Weyl nodes. Without SOC, the electron-hole band crossings form a nodal ring surrounding the K points (dashed circles).
Inclusion of SOC opens a gap along the nodal ring, except at a pair of points corresponding to Weyl nodes of different chirality. Notably, in Mn3Sn, the direction of the sublattice moments can be controlled by a small magnetic field of ∼ a few 100 Oe (Supplementary Note 1 and Figure S1).
Thus, by rotating a magnetic field in the x-y plane, the Weyl nodes, whose locations are determined by the spin texture and thus by the magnetic field, may be moved along the hypothetical nodal ring.
As an important consequence of the magnetic symmetry, the electronic structure becomes orthorhombic even in the hexagonal crystal system. The presence of the mirror symmetry ensures that pairs of Weyl nodes appear along a k line parallel to the local easy axis (i.e., the x axis).
To demonstrate this, we show in Fig. 1d an enlarged view of the band structure cut along the two distinct lines, K-M-K and K-M′-K (Fig. 1c). For K-M-K, the electron-hole band crossing Fig. 2c). This clarifies that the incident hν of 103 eV selectively detects the bulk band dispersion at k_z = 0, where the Weyl nodes should exist near E_F. The contour of the photoelectron intensity clarifies the location of the Fermi surfaces in the k_x-k_y plane at k_z = 0 (Fig. 2b).
The experiment clearly captures the six main elliptical contours centered at the M points, which have the same topology as the Fermi surfaces (solid circles) predicted by DFT. This agreement is significant, as it is this electron band that creates the Weyl points at its intersection with the other hole band (Fig. 1d).
According to the theory, the Weyl points on the K-M-K line are located slightly above E_F (Fig. 1d). Here we show the ARPES images observed at 60 K along several k_x cuts (Fig. 2j inset) around the K and M points in Figs. 2f-i, obtained before (left) and after (right) dividing the intensities by the Fermi-Dirac (FD) distribution function to detect thermally populated bands above E_F. In Fig. 2i, we find two strong intensity regions around the M point, just above E_F and ∼ 40 meV below E_F. With increasing k_y from −0.80 Å−1 (Fig. 2i), the two regions become closer in energy. In Fig. 2k, on the other hand, we observe additional anomalies arising from the crossings between the electron and hole bands along the K-M-K line (Fig. 2g). Comparing with theory, we note that the peak (red bar) at k_x ∼ 0.3 Å−1 (−0.3 Å−1) between the K and M points most likely comes from the dispersion in the immediate vicinity of the Weyl point W+1 (W−2) (Fig. 1c), which should be located ∼ 8 meV above E_F, given the strong band renormalization. The peak (red bar) at k_x ∼ 0.5 Å−1 (−0.5 Å−1) between the K and Γ2nd points corresponds to a large electron band, which crosses another band and forms the Weyl point W−1 (W+2) of different chirality.
The single peak at k_x ∼ 0.1 Å−1 (blue bar in Fig. 2k) has been shifted from k_x = 0 by the intensity gradient and would arise from the flat hole band (blue curve in Fig. 2g) centered at M. To identify the effects of the chiral anomaly, we prepared single crystals with more Mn. To clarify the anisotropic character, we scanned the angle θ between the B and I directions in the magnetoconductance measurements under a field of 9 T, much higher than the coercive field. Unless it is necessary to specify the exact composition of the crystals used for measurements, we use "Mn3Sn" to refer to our crystals for clarity throughout the paper. The sample temperature was kept at 60 K during the measurement to avoid a cluster glass phase with a ferromagnetic moment, which appears below 50 K 11. To prepare a single magnetic domain, we applied a field of 2000 Oe along the x-axis at room temperature before cleaving the crystals.
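Under the standard chiral-anomaly phenomenology, the θ scan described above can be modeled by a positive conductance term proportional to (B·E)², i.e. B² cos²θ, on top of an ordinary negative orbital term; the sketch below uses purely illustrative coefficients, not fitted Mn3Sn values.

    import numpy as np

    def magnetoconductance(B, theta, sigma0=1.0, a=0.02, b=0.005):
        # a*B^2*cos^2(theta): chiral-anomaly term, maximal for B || I (theta=0);
        # -b*B^2: conventional negative magnetoconductance, angle-independent.
        return sigma0 + a * B**2 * np.cos(theta)**2 - b * B**2

    thetas = np.linspace(0.0, np.pi, 7)
    print(magnetoconductance(9.0, thetas))  # positive LMC near theta = 0, pi;
                                            # negative TMC near theta = pi/2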
Supplementary Note 3: Results of soft X-ray angle-resolved photoemission spectroscopy. Since the spectral weight in ARPES with vacuum ultraviolet light is generally governed by the surface signal, we performed more bulk-sensitive soft X-ray ARPES (SX-ARPES). In the photoelectron distribution at E_F on the wide k_x-k_y sheet at k_z = 0 in the 7th Brillouin zone (hν = 330 eV; Supplementary Fig. S4a), we see strong photoelectron intensity only around the zone boundary, as observed using hν of 103 eV (Fig. 2b in the main text). This result strongly supports our conclusion that ARPES with hν of 103 eV detects the bulk electronic structure.
In Supplementary Fig. S4b, we summarize the photoelectron intensity near E_F as a function of hν.
The intensity jumps at hν ∼ 640 eV and 652 eV, corresponding to the energies associated with the Mn 2p-Mn 3d excitations. The variation of the peak intensity with hν can be described by a Fano line shape, as expected for resonance behavior, which is a consequence of the localized character of the Mn 3d states. From this resonant photoemission, we unambiguously show that the electronic structure near E_F is predominantly formed by Mn 3d orbitals, which is consistent with theory, as shown in Supplementary Fig. S4c. Therefore, the Weyl fermion states in Mn3Sn should be predominantly formed by the Mn 3d orbitals. In conventional dirty semimetals and semiconductors, weak localization effects can cause negative magnetoresistance. Weak localization describes a decrease in conductivity arising from constructive interference between two electron waves that travel in opposite directions along a closed path and are scattered off the same impurities. Since an external magnetic field causes a phase difference between the two waves, it disrupts the constructive interference, leading to enhanced conductivity, or negative magnetoresistance 41. Just like any localization-related effect, negative magnetoresistance due to field-induced suppression of weak localization is generically a low temperature effect. In addition, for three-dimensional materials, it has been demonstrated by Kawabata that both the LMR and TMR become negative in the weak localization regime 42 | 2017-10-17T08:54:17.000Z | 2017-09-25T00:00:00.000 | {
"year": 2017,
"sha1": "009195607252e056aa8b8bde7f2323af86176b42",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1710.06167",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "009195607252e056aa8b8bde7f2323af86176b42",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
60847788 | pes2o/s2orc | v3-fos-license | Do You Have an Institutional Data Policy? A Review of the Current Landscape of Library Data Services and Institutional Data Policies
Briney, K., Goben, A., & Zilinski, L. (2015). Do You Have an Institutional Data Policy? A Review of the Current Landscape of Library Data Services and Institutional Data Policies. Journal of Librarianship and Scholarly Communication, 3(2), eP1232. http://dx.doi.org/10.7710/2162-3309.1232
INTRODUCTION
Many research institutions' libraries have developed research data services and hired dedicated data librarians in order to collaborate with researchers to improve data management. However, while current published research data management curricula for librarians mention data policy, the focus has primarily been on funder, government, and scholarly journal policies. As such, the place of library data services within the larger framework of research data support at the institutional level and in shaping emerging policies is often unclear.
Though major funding agency policies have received wide recognition, institutional-level data policies are less well known, if they even exist. As a result, researchers may misunderstand their home institution's policies with regard to research data. This, in turn, leads to difficulties when a researcher wishes to or is required to share data; is attempting to coordinate policies in advance of a collaboration with researchers at another institution; or needs guidance setting policy for a department, unit, or lab.
The object of this study is to provide a comprehensive exploration of whether institutional research data policies can be located for larger United States (US) research institutions and to determine any correlation between policy existence and either library data services or the presence of a data librarian. The results of this study will call attention to areas for future research in institutional data policies and the academic library's role.
Research data has become a topic of increased concern for librarians over the past 15 years and, in response, there has been an increase in resource and service development. Tenopir, Birch, and Allard (2012) provided the most recent comprehensive review of data services in academic libraries in a white paper for the Association of College and Research Libraries (ACRL). The review identified that while few academic libraries offered data services in 2012, many were planning to launch them, with larger institutions leading the development of these services. Though the study identified a need for policies, including around data retention, policy development was not identified as a major activity for librarians.
As these services have emerged, Antell, Foote, Turner, and Shults (2014) note that library publications in this area have typically been case studies focused on librarians offering data services and assisting faculty with research management. Though more recent papers have mentioned the need for librarian training in policy (Information School, University of Sheffield, Cox, Verbaan, & Sen, 2014; University of Manitoba & Ishida, 2014), they focus only on policies from funding agencies and other public policies without including institutional policies. Additionally, the role of librarians in creating and influencing policy is not explored in this research, leaving a gap in the opportunities for librarian influence. One of the popular data management curricula presently available for librarians, the New England Collaborative Data Management Curriculum (Sheridan et al., n.d.), includes a section about policies, but its usefulness is limited in that it primarily focuses on sharing data and does not go into detail about reviewing institutional data concerns or clarifying ownership and retention at the institutional level.
Presently, there do not appear to be any reviews of the challenges surrounding institutional data policies. In a recent book chapter, MacKenzie Smith (2014) outlines many of the general policy issues by examining what she calls data governance: lack of clarity in terms of licensing, inability to stack licenses to combine data sets, general practice in science versus vague policy, different types of data, security and privacy for sensitive data (e.g., human subject data), issues with creative works as a data type, and software used in data creation and analysis. Though she briefly touches on ownership at the university level, her focus is primarily on overarching themes such as terms of use, copyright and ownership licenses, the public domain, and variances across international borders. According to Smith, these issues are impeding data sharing and reuse and institutions' abilities to create or improve tools in this area. Similarly, Krier and Strasser's book (2013) on data management for libraries only briefly covers institutional policies. Krier and Strasser recognize that many researchers start working at an institution without any knowledge of the institution's policies and that librarians must provide guidance in policy awareness and development. However, the brevity of the text does not provide sufficient direction for librarians looking to this avenue to effect change.
Research data ownership is particularly unclear and is frequently influenced by intellectual property laws, which are complicated by a variety of institutional, state, and country jurisdictions (Borgman, 2007; Jahnke & Asher, 2013). Some researchers assert that their data belong entirely to them (McGlynn, 2014), while others understand that their institution owns their data (Faulkes, 2014). However, many researchers are less clear about general data ownership rights (Murray-Rust, 2010). This can be further complicated by the variety of people who may be gathering data, including faculty, staff, research employees, students of all levels, fellow researchers, and collaborators at other institutions (Evola, 2013).
Law and government policy at the state, national, and international levels heavily influence research data, especially in terms of access and intellectual property retention. In 2007, the OECD Principles and Guidelines for Access to Research Data from Public Funding (Pilat & Fukasaku, 2007) provided recommendations on access to data created with public funding for the 30 participating member governments, which included the United States. The Bayh-Dole Act, an intellectual property law that frequently arises for the US, gives universities control of inventions funded by federal research money. Another frequently cited federal requirement is Circular A-110 from the Office of Management and Budget (1999), which was recently folded into the Office of Management and Budget's new Uniform Guidance (2014). This guidance states that the federal government has rights, including reproduction and publication, to data created from a federal grant. Finally, the Office of Science and Technology Policy released the "Increasing Access to the Results of Federally Funded Scientific Research" memorandum (Holdren, 2013), which directed US federal agencies to develop plans to improve public access to federally funded research, including research data. During the spring of 2015, policies began to be released by the federal agencies (Adler, 2015; Whitmire et al., 2015). International research collaboration further complicates compliance, as policies may or may not comprehensively exist.
Funding agency policies on research data are probably the best known by researchers due to the potential financial repercussions. One longstanding policy is the National Institutes of Health (NIH) Data Sharing Policy (Section 8.2.3.1, 2010), which supports data sharing following acceptance of the primary research findings for publication. More recently, effective in 2011, the National Science Foundation (NSF) added to their data sharing policy (2010), requiring a data management plan in which researchers outline how they will comply with the NSF sharing policy. Federal funding agencies have great variance in their policies (Dietrich, Adamus, Miner, & Steinhart, 2012), but a recent paper suggested that grant reviewers are not putting particular value on these data sharing plans (Pham-Kanter, Zinner, & Campbell, 2014). An analysis by Mischo, Schlembach, and O'Donnell (2014) revealed no significant difference in plans for funded and unfunded proposals. Various private funders have also adopted policies surrounding research data. One prominent example, which took effect on January 1, 2015, was the Bill & Melinda Gates Foundation (n.d.) requirement that all newly funded agreements would immediately make published results and the supporting research data accessible to all. Further data policy developments from funders are expected in the coming years.
Scholarly journals have emerging and evolving policies regarding sharing and access to data that also affect researchers. A 2007 editorial ("Got data?," 2007) reminded authors that data sharing was part of the Nature author guidelines and recommended specific national repositories for consideration, though deposit is not mandatory for all dataset types (Nature Publishing Group, 2014). In 2014, PLOS took a stronger stance by requiring authors to make the data underlying their manuscripts publicly available or the article would be rejected (PLOS One, n.d.). This firm policy is similar to one long held by the American Geophysical Union for their accepted publications (American Geophysical Union, 2013). A more conservative example is the Journal of the Medical Library Association's policy, which encourages data deposit and provides storage space for authors, but primarily focuses on retention (Medical Library Association, 2013). Many journals do not have an explicit data sharing policy. For further examples and reference points, Strasser provides short summaries about the Joint Data Archiving Policy (2012) and the Journal Research Data Policy Bank (2013).
Despite the many issues surrounding research data policy, there has been a limited amount of research focused on the content of those policies. Bohémier et al. (2011) surveyed 20 institutions with a Carnegie Classification of Very High to identify available data policies and their contents. However, the results included policies that were not institution-wide and did not address ownership of data, and the small sample size limits the generalizability that can be extrapolated. In 2013, researchers for the DataRes project (Keralis, Stark, Halbert, & Moen, 2013) briefly reviewed 197 institutions selected for high NIH or NSF funding to determine available policies. While this was a more comprehensive sample size, the researchers' returns were limited by the use of only a few search terms to locate policies, and their analysis did not deeply explore the content or impact of the policies retrieved; instead, Keralis et al. summarize the policies as mostly ineffectual. The ARL SPEC Kit 334: Research Data Management Services (Fearon, Gunia, Lake, Pralle, & Sallans, 2013) provides a small selection of sample policies from major universities but presents older versions that did not yet respond to the emerging mandates.
Few papers provide the context and framework that could be drawn upon in creating or updating an institutional policy. Erway (2013) develops a detailed list of the stakeholders at institutions who need to be at the table when policies are being developed (e.g. IT, researchers, the Library, Office of Research), as well as identifying a variety of questions that need to be discussed over the course of policy development. Borgman (2012) reviews the primary reasons for the creation of data policy from the perspective of sharing. She also addresses an additional area: the reward motivation for researchers. Borgman points out that researchers are most likely to follow research data policies when there is a specific benefit to them. Furthermore, Keralis et al. (2013) point out that institutions are likely to emulate funding agency policies, and the lack of momentum by the latter may be leading to a lack of policy development by these institutions. Given the continually evolving nature of the data policy landscape, more work needs to be done to provide clarity and guidance in this area.
METHODS
The authors collected information on 206 American universities, their libraries' data services support efforts, and their institutional data policies. The list of universities chosen for this study was drawn from the July 2014 Carnegie list of universities in the United States with a "Very High" or "High" research activity designation. Of the 206 universities, 107 were also identified as members of the Association of Research Libraries.
Student assistants gathered basic information on the 206 universities from the universities' websites, including type (public or private), total student population, faculty size, and research funding expenditures. This information was collected in June and July of 2014 and, wherever possible, data was from the 2012-2013 academic year. Two different student assistants collected data for each university to aid with accuracy. The authors cleaned the data by grouping faculty size and research funding into general categories to aid with analysis (Table 1, see Results). When there was disagreement as to which category a school should be placed into for a particular variable, the authors reexamined the university websites to decide proper categorization.
The authors gathered information from August to November of 2014 on which data support services the university libraries offered, including whether the library offered data services, had a data librarian on staff, maintained a repository specifically for data, or accepted data into their regular institutional repository (IR). Initially, all of the authors collected data on the same three universities to ensure consistency in collection. During this process, it was agreed that universities must offer services such as consultation or training to qualify as having data services (a data management LibGuide alone was not counted as data services); that a data librarian had data defined in her title or as a primary job duty; and that IR documentation must specifically mention data as an acceptable type of content for deposit. Beyond the initial three schools, data for the other 203 universities was collected by only one of the three authors.
Concurrent with collecting information on library data services, the authors searched for publicly available university data policies. Search phrases such as "data policy," "data retention," "data management," "data ownership," and "data stewardship" were used on university websites. Each university's intellectual property (IP) policy was also reviewed for explicit mention of "data." Policies that focused on institutional data (e.g. the common data set) or categorized levels of data requiring extra security, such as student information, were excluded, as these policies only applied to a subset of research data when it was covered at all. Similarly, data policies covering one department or college were not included. Policies that covered a whole university system were also excluded unless they were specifically hosted on an individual university's website. This method of identifying policies was chosen in order to estimate the ability of a researcher at the institution or an outside collaborator to locate a publicly-accessible policy. The authors did not attempt, for this project, to gain access to policies only available on intranets or behind other university log-ins. As with library services, all three authors coded the same three universities to ensure consistency in data gathering before having one researcher per university collect data. Gathered information on each policy included a link to the policy, a screenshot or saved pdf of the policy, and a record of which department or administrative office hosted the policy on the university website. The final portion of data collection involved coding the policies for their content, specifically looking to see if the policy included any of the following:
• Policy defines research data in any way
• Policy states a retention requirement of specific duration (and what that duration is)
• Policy states a retention requirement but does not specify duration
• Policy defines a data owner (and who the data owner is)
• Policy designates a data steward to administrate the data
• Policy states who is allowed/required to have access to the data
• Policy defines what happens to the data when the researcher leaves the university
• Data fall under the university's regular IP policy
As with previous data collection, inter-coder reliability was verified with a sample of the data before individual authors coded a subset of the universities. Coding was arranged so that the researcher who initially collected the policy information on the university website did not code the policy contents.
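The study verified inter-coder consistency on a shared sample but does not name the statistic used. As a hedged illustration, one conventional way to quantify agreement between two coders on binary policy-content codes is Cohen's kappa; the sketch below assumes hypothetical codes and variable names, not the study's data.

```python
# Hypothetical sketch: quantifying inter-coder agreement on binary
# policy-content codes (1 = element present, 0 = absent).
# The original study verified consistency on a shared sample but does
# not specify the statistic; Cohen's kappa is one standard choice.
from sklearn.metrics import cohen_kappa_score

# Codes assigned by two coders to the same ten policies for one element,
# e.g. "policy defines research data" (illustrative values only).
coder_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
coder_b = [1, 1, 0, 0, 0, 0, 1, 1, 1, 1]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 1 indicate strong agreement
```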
The authors collected all of the classification and coding data into a master spreadsheet covering all 206 universities. OpenRefine was then used to standardize the data and determine the final counts presented in the results section. Variables were counted individually (e.g. the total number of universities with library data services) as well as in combination (e.g. the number of universities with data services that also have a data policy). The authors performed a chi-squared test for all variable correlations, using a threshold of p = 0.05 as a measure of significance.
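As a concrete illustration of the chi-squared tests reported in the Results, the following minimal sketch builds a 2×2 contingency table (e.g., ARL membership versus offering library data services) and tests for association with SciPy. The counts shown are placeholders, not the study's data.

```python
# Minimal sketch of the chi-squared association test used in this study.
# Counts below are placeholders, not the actual study data.
import numpy as np
from scipy.stats import chi2_contingency

#                 data services: yes   no
table = np.array([[80, 27],            # ARL member
                  [23, 76]])           # non-ARL

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
# With the study's threshold of p = 0.05, a p-value below the threshold
# would indicate a significant association between the two characteristics.
```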
RESULTS
The authors examined 206 universities by 24 different characteristics relating to university type, library data services, policy type, and policy contents. To best present this information, correlations are shown for the following sets of characteristics: university type and data services, university type and policy type, data services and policy type, and policy type and policy contents. These correlations correspond to the four subsections of the Results section. Each subsection contains tables of university counts and corresponding percentages for a set of characteristics with respect to another characteristic defined at the beginning of each row.
Library Data Services by University Type
Half (50%) of all the universities studied offered some data services through their library, with larger, more research-focused universities being more likely to offer data services than their counterparts. These findings are shown in Figures 1 and 2. In particular, universities with a higher Carnegie classification (χ² = 63.13, p < 0.01), ARL membership (χ² = 28.03, p < 0.01), higher research expenditures (χ² = 58.91, p < 0.01), and larger faculty size (χ² = 23.65, p < 0.01) (as broken down in Table 1) are all more likely to offer data services through their university libraries. Tables 2 and 3 break down these findings in more detail. There is no significant difference between the percentage of public and private universities that offer data services (χ² = 0.96, p = 0.33). Fewer universities have a data librarian on staff (37%) than offer data services. As with data services, the tendency is for larger universities conducting more research to employ a data librarian. This is true for universities with a higher Carnegie classification (χ² = 50.45, p < 0.01), ARL membership (χ² = 30.01, p < 0.01), higher research expenditures (χ² = 55.23, p < 0.01), and larger faculty size (χ² = 31.54, p < 0.01). Again, there is no significant difference between public and private universities (χ² = 0.06, p = 0.80).
Table 1. Faculty Size Categories
More than half of all libraries (65%) offer a place to host research data, either in an IR or in a repository specifically for data. However, fewer universities (11%) have dedicated data repositories as compared with IRs that accept data (58%). Notably, all universities with over $1 billion per year in research expenditures offer data services and a place to host data. The vast majority (89%) of these institutions also have a data librarian on staff. Additionally, this group shows the highest chance of having a data repository (33%).
Data Policy by University Type
Out of 206 universities, only 90 had some type of university-level policy covering research data (44%). One-third of these university policies are IP policies that specifically include data (15% overall), while the remaining two-thirds are standalone data policies (29% overall). One university in particular, Johns Hopkins, has both an independent data policy and an IP policy that covers research data. Due to having both types of policies, Johns Hopkins' data is included in both the IP policy and the data policy columns of Tables 4 and 5 for all of the categories to which the school belongs (Carnegie "Very High", ARL member, private university, over $1 billion in research expenditure, and between 501-1000 faculty).
Data Policies with Respect to Library Data Services
Nearly half (44%) of all universities studied have some type of policy covering research data, as reported in Table 6. Half of all libraries with data services have some data policy, but this is not a significant difference from the average (χ² = 3.40, p = 0.07). However, universities employing a data librarian are statistically more likely to have some type of data policy (χ² = 7.38, p < 0.01). There is no significant difference in policy numbers for universities hosting data in a repository of any type. Breaking the numbers down further by policy type, standalone data policies are more likely to be found at universities with data services (χ² = 4.23, p = 0.04) and a data librarian (χ² = 5.76, p = 0.02), but not at those that host data in any repository (χ² = 0.01, p = 0.93) or specifically a data repository (χ² = 0.68, p = 0.41). Universities with data under their standard IP policy, however, show no significant difference in offering data services (χ² = 0.11, p = 0.74), having a data librarian (χ² = 1.29, p = 0.26), or hosting data in any repository (χ² = 0.32, p = 0.57) or a data-specific repository (χ² = 0.05, p = 0.83). Figure 4 shows the percentage of universities with different data services that also have a data policy, both standalone and IP policies covering data.
Again, as Johns Hopkins has both an IP policy covering data and a separate data policy, it is counted in both of these columns in Table 6 for all of the categories to which it belongs (data services, data librarian, and data repository).
Data Policy Contents
For the 90 universities that had data policies, the policy contents were coded by whether the policy defined data, identified a data owner, designated a non-owner responsible party (or data steward), required a retention period (either specific or vague), identified who is allowed access to the data, and described what happens to the data when a researcher leaves the university. Correlation statistics were not performed for these values as this was deemed outside the scope of the current article. The authors plan to do a more comprehensive content analysis, including text analysis, in the future. Table 7 breaks down the contents of the policies by type of policy and Figure 5 displays the prevalence of the coded characteristics for the different policy types. Note that in this case, the Johns Hopkins data is not included in the IP policy row as it was coded primarily by its standalone data policy.
Overall, over half of the policies designated an owner of research data generated at the university (67%) and required that the data should be retained for some period of time (43% for a specific period and 9% for a vague period).
There is a noticeable difference between the contents of IP policies that cover data and standalone data policies. IP policies primarily covered data ownership (76%), with little attention given to other data management issues. Standalone data policies, on the other hand, covered many topics. Over half of the data policies defined data (61%), identified a data owner (62%), stated a specific retention time (62%), identified who can have access to the data (52%), and described what happens to the data when a researcher leaves the university (64%). Almost half of the policies (46%) also designated a data steward.
DISCUSSION
Library Data Services
The 2011 implementation of the NSF data management plan requirement was the impetus for a significant number of university libraries to create data services. Within only a few years of the requirement going into effect, half of the major research universities now offer data services. This is a large increase from the approximately 20% of ACRL libraries offering data-related services previously observed by Tenopir et al. (2012). However, this study's numbers are fairly consistent with their projection of research libraries that would offer data services in the near future. The comparison is not perfect, as the numbers in this study capture only overall data services while Tenopir et al. looked at individual services that may not be fully discoverable via the public websites for the libraries. Still, there is an overall increase in library data services being offered at research institutions. These results also confirm that the more research a university does, the more likely its library is to offer data services. This demonstrates a general expectation for libraries to provide data services when their patrons conduct high levels of research.
While not as prevalent as data services, a large number of academic libraries have a data librarian on staff. This is again an increase over what was observed by Tenopir et al. (2012), who found that less than 10% of ACRL libraries had a dedicated data librarian/specialist. Duties for such librarians vary from offering data management support to assisting patrons with finding data for reuse, but the increasing number of data librarians is another sign that libraries are taking the new focus on data seriously.
It is somewhat surprising that more universities offer a place to host data than offer data services. An especially large portion of hosting comes from institutional repositories that accept some smaller data sets for deposit. Based on observations during data collection, the authors suspect that these large numbers are mainly due to hosted repository platforms, such as BePress, that state in their default guidance that the repository accepts data. While repository documentation states that data is accepted (and it was thus counted for this study), it is not fully clear whether those libraries are actively acquiring data. Therefore, it is likely that the number of libraries actively working to collect research data is smaller than the reported 65%.
Unlike IRs, only a small number of universities offer data repositories and/or promote them on their library webpages. While not statistically significant, the authors do note that public universities are almost twice as likely to have a data repository as private universities (see Table 2). This difference is likely due to university systems that offer a data repository for the member universities. For example, the University of California (UC) system runs the Merritt data repository, which is directly linked to many of the individual UC campuses' library websites. Given the expense of maintaining a data repository, system-wide data repositories represent a good use of resources for the universities that can leverage such connections.
Overall, there is a general trend toward libraries offering more data services, with larger research universities leading the way. Based on the percentage of libraries offering data support for the six funding levels examined in this study (see Table 3), and estimating growth from previous studies, the authors expect that offering data services, having a dedicated data librarian, and providing research data hosting will become standard for all research-intensive academic libraries. This research suggests that as institutional research output grows, opportunities for the development of library data services will grow with it.
Data Policies
This study found that just under half of the universities examined (44%) had a data policy of some sort; two-thirds of these are standalone data policies (29% overall) and one-third are intellectual property policies that cover data (15% overall). These numbers are larger than the 18% prevalence of university data policy reported by the DataRes project (Keralis et al., 2013). DataRes studied the top 197 NSF and NIH university grant awardees, searching Google and individual university websites for the keywords "data management" and "policy". While that study is comparable in size, more policies were located in the present study than in the DataRes project, and several reasons have likely led to this, including the use of more search terms and broader keywords, the specific inclusion of university IP policies in this study, and an expected growth in research data policy as universities have increasingly focused on research data management in the past three years.
Ignoring IP policies, this study measures a 10% difference from the Keralis et al. (2013) project in the prevalence of standalone university data policies. Assuming at least some of this represents an overall increase in data policy, a question that remains is "Where is this difference coming from?" While specific universities cannot be isolated to identify growth in this area, the data from this study show that universities with data services and a data librarian are statistically more likely to have a standalone data policy. Therefore, the growth in data policy could reasonably correlate with the observed growth in data services, as universities focus more on research data overall. However, no study currently addresses the direct influence of the creation of data services on the creation of policy or vice versa. In seeing these strong correlations, the authors hope to discover some causation in future studies.
The greatest challenge in gathering these policies, as also found by the Bohémier et al. (2011) study, lies in the inconsistency across the universities and the inability of the authors to see policies that exist on internal websites. Though it is probable that other data policies exist for these institutions, the myriad locations (Office of Research, Office of the Provost, Library, Office of Sponsored Research, etc.), variety of terms used, and requirements for authentication prevent the casual seeker from easily discovering the policies for a given institution. As the data policy landscape continues to evolve, the authors hope that some standardization will occur in this area.
Data Policy Content
When data policies were discoverable, policy content included definitions of research data, data ownership, data retention, and terms surrounding the separation of a researcher from the institution. However, only data ownership was found in more than half of the policies. These findings differ from previous studies on data policy. In particular, this study found that a significant number of research universities (15%) use their IP policies to cover research data, and these IP policies focus almost solely on ownership as compared with standalone data policies. Previous data policy studies did not specifically examine this source of data policies.
The most notable difference between the policy content analysis in this research and the DataRes project is that the policies discovered in this present study are more specific. The DataRes project provided little content analysis of the policies and summarized them as being advisory statements as opposed to actual institutional policy. Interestingly, Keralis et al. (2013) mention that their survey respondents perceived data management as a development not fully invested in and that policies were therefore not a priority. The current landscape identified here contradicts this and suggests that universities are attempting to get ahead of data policies.
Standalone data policies appear to cover the different areas of data management more broadly, though the focus on retention and researcher separation suggests a significant university concern about legal repercussions rather than about the sharing and dissemination of data. This focus differs from that of funding agency data policy. The DataRes project, for example, used a word count analysis to suggest that the NIH data policy focuses primarily on data sharing while the NSF policy focuses on disseminating results (Keralis et al., 2013).
If universities are creating data policies in response to funding agency mandates, there is clearly a difference in priorities between the two groups. This difference, along with the continued evolution of funder plans for policy, may create challenges in developing institutional data policies.
For the purposes of this study, a comprehensive textual analysis was not undertaken. However, further analysis by the authors is forthcoming to identify specific ownership practices, highlight commonalities in retention, and suggest best practices for institutions developing data policy. Further work also needs to be done to compare US institutional data policies to those from British, European, Australian, and Canadian institutions.
Opportunities for Librarians
Data services at libraries have passed the point of novelty and are becoming mainstream, as seen in the results of this study. As these services further develop, librarians have a particular opportunity to inform researchers about the policies at their institutions and, where none exist or where the policies are unclear or deemed damaging to research, to argue for and contribute to their development or modification. The findings suggest that librarians are already doing work in this area, as universities with data services and/or a data librarian are statistically more likely to have standalone policies for research data. By becoming well versed in present institutional policies, librarians providing data services can assist researchers in navigating policies at their institutions to meet the requirements of research funders and journals. This assistance is particularly necessary in a data policy landscape that is still in flux. Better knowledge of these policies in their present forms will also allow for improved advocacy to streamline data sharing and reuse.
Academic libraries and librarians are in a unique position to provide insight and guidance in the development and revision of institutional data policies and services. The development of data repositories, increases in the scope of library-provided data services, and partnerships with other offices and divisions on campus (Office of Research, IT, Sponsored Programs) can further help support the overall data management efforts on campus. By leading the discussion on institutional and researcher needs, librarians can assist in creating policy as part of the overall data management infrastructure.
CONCLUSION
Data services as offered by the library and data librarians are becoming a standard at major research institutions. However, institutional data policies continue to be difficult to identify and many times provide an additional layer of confusion for researchers. In the present landscape, the trend of library-offered data services and the hiring or designation of a data librarian will become typical at major research institutions. Importantly, standalone data policies significantly correlate with the presence of data services and a data librarian. While further research is needed to fully analyze the content of these policies, ownership, retention, and access will remain the primary topics of institutional data policies. Funder, government, and journal policies continue to emerge and evolve. Institutions are developing or revising policies, providing an important opportunity for libraries and librarians to be at the table in the shaping of these institutional policies and improving researcher awareness and compliance.
Figure 1. Existence of Library Data Services in Different Types of Universities
Figure 2.
Figure 3. Existence of Data Policy in Different Types of Universities
Figure 4. Existence of Data Policy for Universities with Library Data Services
Table 2. Library Data Services by University Type
Table 3. Library Data Services by University Size
Table 4. Data Policy Type by University Type
Table 5. Data Policy Type by University Size
Table 6. Data Policy Type by Library Data Services
Table 7a. Policy Contents by Data Policy Type
Table 7b. Policy Contents by Data Policy Type
Figure 5. Policy Contents for Standalone Policies and IP Policies Covering Research Data | 2017-08-08T22:18:02.565Z | 2015-09-22T00:00:00.000 | {
"year": 2015,
"sha1": "907d136a67df81d9564ed99fe811d5632954fd03",
"oa_license": "CCBY",
"oa_url": "https://www.iastatedigitalpress.com/jlsc/article/id/12753/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "907d136a67df81d9564ed99fe811d5632954fd03",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
245072293 | pes2o/s2orc | v3-fos-license | Agro-Morphological, Biochemical and Antioxidant Characterization of a Tunisian Chili Pepper Germplasm Collection
Pepper species have been described as being highly sensitive to climate change. Here, we discuss the variability of the agro-morphological and phytochemical responses of pepper cultivars in the context of ongoing climate change during seven stages of maturity, including heat stress. The effects and interactions were calculated to determine the source of variation according to rising temperature. Capsaicin content (CAP), total phenolic (TPC) and flavonoid (TFC) levels and antioxidant activity (AA) were also determined at different harvest times (at 10, 25, 40, 55, 70, 85 and 100 days after anthesis, DAA). Agro-morphological data showed that the highest variation was recorded for fruit traits compared to flower and plant ones. In particular, calyx shape margin, calyx annular constriction, fruit shape at blossom end and fruit size had a significant impact on the morphological diversity among accessions. Levels of bioactive compounds and antioxidant activity depended on the genotype and the harvest time. TPC and AA increased at 100 DAA, while TFC was highest at the early harvest. Principal component analysis (PCA) allowed us to separate three clusters with well-defined biochemical traits. In particular, the Baklouti Chébika, Baklouti Sbikha and Chaabani accessions presented higher levels of TPC, TFC and AA regardless of the considered harvest time. In conclusion, high genetic variability was noted within the analyzed pepper germplasm, thus suggesting the need for major consideration of both agro-morphological and biochemical traits for pepper breeding programs. The current research was conducted to facilitate better management under high-stress conditions due to global warming.
Introduction
The atmospheric CO2 concentration has increased by approximately 50%, now exceeding 400 ppm, as human activities have increased the abundance of heat-trapping gases in the atmosphere [1]. The average surface air temperature has risen by 1.8 degrees Fahrenheit. The recent increase in average temperature and atmospheric CO2 concentration is already leading to severe global abiotic changes in climate. Therefore, plants such as pepper are expected to face increasing abiotic stresses [2,3].
The genus Capsicum comprises approximately 35 taxa, of which only five cultivated species have been identified [4,5]. Pepper fruits are widely appreciated for their high nutritional value, as they are a known source of alkaloids with various pharmacological properties and of polyphenols, especially phenolic acids and flavonoids, which reduce the risk of some cancer types and cardiovascular diseases [6-8]. Tunisia is the largest producer of chilies on the African continent [9], where the versatility of pedo-climatic conditions favors the cultivation of a large number of specifically adapted pepper accessions that differ in fruit characteristics. These accessions have been partially investigated, and data have revealed that some of these landraces contain remarkable amounts of capsaicin and antioxidant compounds, which are useful for the processing industries [10]. Specific consideration must be given to the genotype and harvest time, which are important factors in determining the antioxidant compounds present in fruits and vegetables [11-13]. With the target of enhancing these bioresources, the present study aimed to characterize eleven Tunisian chili pepper accessions for their agro-morphological and biochemical traits, evaluating the variation in antioxidant compounds in relation to harvest time.
Plant Material
Eleven autochthonous chili pepper accessions, derived from four sites located in the north and middle of Tunisia, were studied: 'Baklouti Chébika' (BaklC), 'Beldi' (Bel), 'Chaabani' (Chba), 'Sisseb Chébika' (SisC), 'Bkalti' (Bka), 'Knaiss' (Kna), 'Baklouti Sbikha' (BaklS), 'Sisseb Sbikha' (SisS), 'Fort Menzeltemim' (FkbM), 'Fort de Korba' (FkbK) and 'Corne de Gazelle' (CGaz). Details on the native sites of these accessions were described in our previous paper [10]. The present trial was carried out at the experimental field of the University of Tunis, Faculty of Sciences, located in Northern Tunisia (N 36°81′88″, E 10°16′6″, 7.1 m a.s.l.), during the crop years 2016-2017. The experimental site is characterized by a Mediterranean maritime climate (humidity: 73%; annual mean temperature: 20.1 °C; annual mean total rainfall: 415 mm) and exhibits agro-climatic differences among the four sites of material collection (Figure 1). The seeds were sown under greenhouse conditions and the seedlings were transplanted in the experimental field in a complete randomized block design with three replications, for a total of 55 plants per plot. Providing peppers with adequate water is essential, with 1 inch of water per week, and it is necessary to adjust the amount or frequency during hot, dry periods, after rainfall, or if the soil is sandy and drains fast. Drip irrigation is ideal for growing peppers because it is economical and less labor-intensive, without wetting the foliage and possibly triggering diseases. In particular, organic fertilizers, completely fermented and decomposed, such as compost and manure, were supplemented together with ammonium nitrate at three time points: early season (2-30 days from planting; 2 kg ha−1), main season (31-70 days from planting; 132 kg ha−1) and end season (71-130 days from planting; 145 kg ha−1). Pest control was undertaken according to the recommendations authorized by the Department of Agriculture, Ministry of Agriculture and Cooperatives. The applied rate of pesticide was determined according to the relative pest, followed by a survey of the disease infestation on all the plant parts and fruits.
Agro-Morphological Characterization
Agro-morphological data were collected from 10 plants and 20 fruits per replicate. Sixteen descriptive traits associated with chili pepper plant, flower and fruit were evaluated on the basis of the Capsicum descriptors developed by the International Plant Genetic Resources Institute [14]. The plant traits included: growth habit (GH), leaf density (LD), leaf pubescence (LP), nodal anthocyanin (NA). The flower traits consisted of corolla color (CC), flower position (FP), calyx shape margin (CSM), calyx annular constriction (CAC). Fruit characters included: fruit color in mature stage (FCM), fruit color in immature stage (FCI), fruit position (FrP), fruit shape at pedicel attachment (FSP), fruit shape at blossom end (FSB), fruit cross-sectional corrugation (FCC). Biometric traits were fruit fresh weight (FFW, expressed as g) measured with an FA-G series electromagnetic balance, fruit length (FL, expressed as cm) and fruit diameter (FD, expressed as cm) determined with a digital caliper. The harvest was performed from mid-July to mid-September and was recorded as the number of fruits per plant (NF/Pl).
Biochemical Analysis
Pepper fruits were harvested every 15 days (at 10, 25, 40, 55, 70, 85 and 100 days after anthesis (DAA)) and the daily mean temperatures were calculated from measurements taken before each harvest (Table 1). Physiological maturity has been defined by Watada et al. (1984) as "the stage of development when a plant or plant part will continue ontogeny even if detached", while horticultural maturity has been defined as "the stage of development when a plant or plant part possesses the prerequisites for utilization by consumers for a particular purpose". In this study, we propose an integrative approach using days after anthesis and post-harvest studies to define physiological and horticultural maturity, as used by Neves et al. [15]. Samples were collected (Table 1) from all cultivars at different harvest times and stages of horticultural maturity: 10 DAA (unripe, fully green), 25 DAA (half ripe, fully green), 40 DAA (ripe, fully green), 55 DAA (ripe, orange), 70 DAA (ripe, red), 85 DAA (over ripe, red) and 100 DAA (over ripe, dark red), and were dried at 30 °C until a constant weight was reached before being subjected to biochemical analyses. The capsaicin (CAP) content was evaluated in the chili pepper fruits following the procedure reported in previous works [10,16]. Briefly, two grams were crushed in 10 mL of acetone using a pestle and mortar. The obtained solution was then centrifuged, and the supernatants were recovered. The supernatants were evaporated to dryness and re-suspended in 0.4 mL of NaOH and 3 mL of 3% phosphomolybdic acid. After shaking, the resulting solution was allowed to stand for 1 h at room temperature, filtered and centrifuged. A UV-Vis spectrophotometer (BioSpectrometer® Series, Hamburg, Germany) was used to measure the absorbance of the clear blue solution obtained at 650 nm. The CAP amount was expressed as mg of CAP equivalents g−1 of dry weight (DW).
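Expressing the A650 readings as mg CAP equivalents g−1 DW implies conversion through a capsaicin standard curve. The sketch below shows that conversion step under stated assumptions (a linear calibration with placeholder standards; extract volume and sample mass taken from the protocol above); it is illustrative, not the authors' exact calculation.

```python
# Illustrative conversion of A650 readings to mg CAP equivalents per g DW.
# Assumes a linear standard curve with placeholder calibration data;
# the original paper does not report these numbers.
import numpy as np

# Calibration: absorbance of capsaicin standards (mg/mL) -- placeholder data.
std_conc = np.array([0.0, 0.05, 0.10, 0.20, 0.40])
std_a650 = np.array([0.00, 0.11, 0.21, 0.43, 0.85])
slope, intercept = np.polyfit(std_conc, std_a650, 1)

def cap_mg_per_g_dw(a650: float, extract_ml: float = 10.0, mass_g: float = 2.0) -> float:
    """Concentration from the standard curve, scaled by the extract volume
    (10 mL of acetone, per the protocol) and the sample mass (2 g)."""
    conc_mg_per_ml = (a650 - intercept) / slope
    return conc_mg_per_ml * extract_ml / mass_g

print(f"{cap_mg_per_g_dw(0.35):.2f} mg CAP eq. g^-1 DW")
```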
Extracts for the determination of total phenols (TPC), total flavonoids (TFC) and antioxidant activity analysis were prepared according to Lahbib et al. [10]. The TPC was determined by the method of Folin-Ciocalteu [17] and expressed as mg gallic acid equivalents (GAE) g −1 DW. For the TFC, the method of Um and Kim [18] was adopted, reading the absorbance at 430 nm with a UV-Vis spectrophotometer (BioSpectrometer ® Series, Hamburg, Germany). TFC content was expressed as mg naringin equivalent (NAE) g −1 DW.
Statistical Analysis
All traits were subjected to univariate and multivariate analysis. Biometric data were examined by analysis of variance (ANOVA) using a 95% confidence interval, followed by means comparison with Duncan's test and correlation analysis to determine associations among the studied traits. Descriptive characters were expressed as relative frequencies overall and for each accession. Principal component analysis (PCA) and hierarchical cluster analysis (HCA) were used to obtain a general overview of the variation among the studied chili pepper accessions. All the statistical analyses were performed using XLSTAT for Windows (Addinsoft, New York, NY, USA).
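The authors used XLSTAT; an equivalent open-source workflow for the PCA and hierarchical clustering described here can be sketched in Python as below. The trait matrix and the three-cluster cut are hypothetical stand-ins, chosen only to mirror the accession × trait layout and the three groups reported later.

```python
# Open-source equivalent of the PCA + hierarchical clustering workflow
# (the study used XLSTAT). The trait matrix below is a hypothetical
# stand-in for the accession x trait data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(11, 6))            # 11 accessions x 6 traits (placeholder)
Xs = StandardScaler().fit_transform(X)  # standardize before PCA

pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)
print("explained variance (%):", np.round(100 * pca.explained_variance_ratio_, 1))

# Ward linkage on the standardized traits; cut the tree into 3 clusters,
# mirroring the three groups reported in the paper.
Z = linkage(Xs, method="ward")
clusters = fcluster(Z, t=3, criterion="maxclust")
print("cluster assignment per accession:", clusters)
```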
Results and Discussion
The Bel, Kna and Bka accessions exhibited the highest NF/Pl (73.3, 44.9 and 44.9, respectively). Contrastingly, it was observed that both the FFW and the FL/FD ratio were lower in Bel (8.6 g, 3.0), Bka (9.7 g, 2.4) and Kna (9.8 g, 2.8) than in FKbK (12.8 g, 8.6) and FKbM (12.1 g, 8.5) (Table 3). From the PCA analysis (Table 4), the agro-morphological characteristics of the fruit were the greatest contributors to genetic diversity and were associated with PC1. Similar trends have also been reported for other Capsicum species and closely related horticultural crops, including tomato [5,21,22]. Indeed, PC1 (41.7%) was associated with calyx shape margin (CSM), calyx annular constriction (CAC), fruit shape at pedicel attachment (FSP), fruit shape at blossom end (FSB), nodal anthocyanin (NAg), fruit weight (FW) and number of fruits per plant (NF/Pl). PC2 (21.2%) was correlated with flower position (FP), nodal anthocyanin (NA), growth habit (GHp), leaf density (LD) and the fruit length/fruit diameter ratio (FL/FD) (Table 4). A higher degree of dispersion was observed between and within the clusters formed (Figure 2). Such a distribution was attributed to the influence of the different morphotypes of the studied accessions, as observed by different authors [4,23,24]. From the cluster analysis (Figure 2), no relation was found between the origin of the cultivars and the cluster pattern. This suggests that the Tunisian accessions could easily adapt to different cultivation areas, as the climatic conditions of the experimental field were different from those of the original sites. This indicates genetic variability within the cultivars, which would be helpful in selecting desirable chili pepper phenotypes.
Capsaicin Content
The cultivars under study had CAP content varying from 0.20-0.26 mg CAP g−1 DW (at 10 DAA) to 0.32-0.63 mg CAP g−1 DW (at 100 DAA). The CAP content increased until 40 DAA, reaching high values in Chba (0.83 mg CAP g−1 DW), BaklC (0.82 mg CAP g−1 DW) and BaklS (0.80 mg CAP g−1 DW). It significantly decreased from 40 to 85 DAA (Figure 3a), while at 100 DAA the CAP content increased significantly. The fluctuation in CAP content that occurred during ripening in the tested accessions may have been due to variation in the levels of peroxidase (POD) enzymes, as a result of environmental factors such as temperature and soil moisture.
Total Phenol Content
As observed in Figure 3b, the TPC varied with the harvest time. In relation to each harvest time, TPC oscillated from 2.01-4.10 mg GAE g−1 DW (at 10 DAA) to 2.60-5.80 mg GAE g−1 DW (at 100 DAA). At 100 DAA, FKbM and BaklC had the highest levels (5.80 and 5.70 mg GAE g−1 DW, respectively), while CGaz had the lowest (2.60 mg GAE g−1 DW) (Table 1). In particular, from 10 to 40 DAA, the TPC amount did not vary significantly (Figure 3b), probably due to the vigorous growth of the pepper plants at this time. By contrast, wide variation was observed from 55 to 100 DAA as a result of the availability of precursors of polyphenols at higher temperatures and the establishment of different antioxidant compounds with different degrees of antioxidant activity [29].
Total Flavonoid Content
The TFC significantly decreased from the early to the later harvest times (Table 4). In particular, a significant decrease in the TFC concentration was observed from 55 (0.18-0.40 mg NAE g−1 DW) to 100 DAA (0.11-0.33 mg NAE g−1 DW). Both BaklC and BaklS showed the highest concentration of TFC at 10 DAA (0.44 mg NAE g−1 DW). Our results corroborate those of previous studies by Ghasemnezhad et al. and Howard et al. [30,31]. Some environmental factors, such as light conditions, water status and temperature, influence flavonoid amounts [32]. According to the present study, lower temperatures appeared favorable to flavonoid accumulation, as also reported by Pandino et al. [33] for globe artichoke.
Analysis of Variance and Interaction
All traits exhibited wide diversity among genotypes, even though the effect of the harvest stages had a greater influence on bioactive compound variation. Harvest time (H) and accession × harvest time (A × H) effects accounted for 16.4 to 79.9% and 0.6 to 1.6% of the total variance, respectively. Such results suggest the influence of the related air temperature values evidenced for the seven stages (Table 5). Indeed, the temperature, light intensity, plant nutrition and degree of maturation of the fruit are factors interfering with the metabolic activities during the development of the plant and are able to influence the concentration of phytochemical components [35]. Meckelmann et al. [36] concluded that the production of flavonoids depends strongly on the growing conditions, but that total polyphenols and antioxidant capacity (TEAC) do not vary to a large extent. Previous studies [35,37-41] reported a major role of genotype in capsaicinoid variation in several Capsicum spp. plants cultivated in different environments across different locations. Our observation for capsaicin content was probably due to the extent of the trial period, allowing environmental fluctuations that influenced the biosynthesis of capsaicin and other compounds. In previous studies by other authors on pepper, nor-dihydrocapsaicin exhibited similar trends, suggesting a role of the environmental conditions in its accumulation [40,42].
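The reported effect shares (e.g., harvest time explaining 16.4-79.9% of total variance) can be reproduced from a two-way ANOVA decomposition; a hedged sketch with statsmodels follows. The long-format data frame, file name and column names are assumptions, not the authors' files.

```python
# Sketch of the variance partitioning behind the reported effect shares.
# The long-format data frame and its column names are assumptions.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Expected columns: 'accession', 'harvest' (DAA stage), 'tpc' (trait value).
df = pd.read_csv("pepper_traits_long.csv")  # hypothetical file

model = ols("tpc ~ C(accession) * C(harvest)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)

# Percentage of total sum of squares per source: A, H, A x H, residual.
share = 100 * anova["sum_sq"] / anova["sum_sq"].sum()
print(share.round(1))
```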
Chemometrics Analysis
According to the PCA representations (Figure 4a-c), TPC acted as the greatest contributor to the antioxidant activity at all the harvest times, strengthened by the CAP content contribution at 10, 25 and 40 DAA. PC1 was correlated with TPC and with FRAP and DPPH antioxidant activity. BaklS, BaklC, Chba, FkbM and FkbK were associated with PC1. CAP contributed moderately to the antioxidant activity in comparison with TPC, as it was situated far from all the parameters studied. PC1 was dominated by AA-FRAP, AA-DPPH, TPC and TFC, which were positively correlated with BaklS, BaklC, Chba and FkbM. In contrast, FkbK and Kna were correlated essentially with the CAP content. At 55, 70, 85 and 100 DAA, TPC was the greatest contributor to the total antioxidant activity, with the TFC content contributing to a lesser extent, while the CAP content seemed to be negligible, as it was situated in the extreme part of the biplot (Figure 4d-g). In addition, at 100 DAA, CGaz formed a separate group, showing the lowest levels of bioactive components and antioxidant activity (Figure 4g).
Based on the seven PCA (Figure 4) and HCA (Figure 5) representations, CGaz, SisS, SisC, BKa and Bel showed the lowest levels of bioactive components and antioxidant activity, while BaklS, BaklC and Chba presented the highest values. FKbM, FKbK and Kna showed intermediate levels at all the harvest times. TPC and antioxidant activity by both the DPPH and FRAP assays were strongly correlated at each harvest time, and this correlation was enhanced as the temperature increased. Indeed, many authors have reported a positive correlation between antioxidant activity and TPC accumulation during temperature exposure, including Kishore et al. and Kumari et al. [43,44]. The type of extraction solvent and the drying method, as well as the antioxidant assays used and the interactions of the antioxidant components, similarly influence the biochemical compound content [45].
Correlations
The correlations among the agronomic and biochemical traits are reported in Table 6. The NF/Pl was negatively correlated with FFW, FL/FD and, particularly, with all the biochemical traits. FFW and NF/Pl were the major traits that directly contributed to yield, as observed by Bozokalfa and Kilic [46]. It also appeared that when pepper plants produced fewer fruits, the biochemical levels increased, along with the FFW. This was probably related to the source-sink balance in the plant, namely to the competition for assimilates that occurred between fruit yield and fruit dimensions. Indeed, due to this competition, high flower abortion occurs when fast fruit growth takes place, and fruits from the bottom of the plant develop rather than those from the upper parts, as observed by Wubs et al. and Zewdie and Bosland [47,48]. Accordingly, plants with a greater fruit size and higher biochemical amounts presented the lowest number of fruits per plant.
A weak, negative correlation between average fruit width, FFW and CAP content indicated that selection in favor of larger fruit might result in reduced CAP content [49]. Moreover, the accessions tested could provide genes for larger fruits, which is an essential market quality [46]. As expected, considering the phytochemical traits, a high correlation of TPC was detected with antioxidant activity, measured using both the DPPH and FRAP assays. This is in agreement with previous results for other crops [50,51].
Conclusions
Clear variability existed in terms of plant morphology, with fruit shape and size being the highest contributors to this variation. The high degree of genotypic variability was more pronounced for the biochemical parameters, such as total phenol content and DPPH scavenging activity. Nodal anthocyanin, calyx shape margin, calyx annular constriction and fruit shape at blossom end were discriminative for the tested accessions and were more pronounced within the cultivars Kna, Bka, CGaz and SisS. On the other hand, the cultivars BaklS, BaklC, Chba, FkbM and FkbK showed high levels of TPC and of FRAP and DPPH antioxidant activity throughout the harvest period. PCA and cluster analysis indicated that both agro-morphological and biochemical traits influenced diversity among the accessions. Biometric and descriptive traits related to the fruit were the most important and valuable traits, useful for further breeding programs. A combination of molecular markers and phenotypic data would be the best choice to further explore the exploitability of local genetic resources and to predict methods to preserve chili pepper germplasm from genetic erosion.
"year": 2021,
"sha1": "13f6c000fd884aea03cd22017dd4b9a41b810757",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0472/11/12/1236/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "dc0e8504fd3555c481ab03ae6cbc8cd291c3923a",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
212725686 | pes2o/s2orc | v3-fos-license | Complementary lateral-spin-orbit building blocks for programmable logic and in-memory computing
Current-driven switching of nonvolatile spintronic materials and devices based on spin-orbit torques offers fast data processing speed, low power consumption, and unlimited endurance for future information processing applications. Analogous to conventional CMOS technology, it is important to develop a pair of complementary spin-orbit devices with differentiated magnetization switching senses as elementary building blocks for realizing sophisticated logic functionalities. Various attempts using external magnetic fields or complicated stack/circuit designs have been proposed; however, simpler and more feasible approaches are still strongly desired. Here we show that a pair of locally laser annealed perpendicular Pt/Co/Pt devices with opposite laser track configurations, and thereby inverse field-free lateral spin-orbit torque (LSOT) induced switching senses, can be adopted as such complementary spin-orbit building blocks. By electrically programming the initial magnetization states (spin down/up) of each sample, four Boolean logic gates of AND, OR, NAND and NOR, as well as a spin-orbit half adder containing an XOR gate, were obtained. Moreover, various initialization-free, working current intensity-programmable stateful logic operations, including the material implication (IMP) gate, were also demonstrated by regarding the magnetization state as a logic input. Our complementary LSOT building blocks provide an applicable route towards future efficient spin logics and in-memory computing architectures.
Introduction
For more than half a century, conventional microelectronic logic circuits based on complementary metal-oxide-semiconductors (CMOS), i.e. the electron (n-) and the hole (p-) type charge conduction devices, have been developed to assemble the present von-Neumann computing architecture. Generally, the information represented by charge carriers is volatile and has to be transported frequently between the logic processing unit and the memory devices, consuming substantial unnecessary power while generating undesirable Joule heating. As a promising solution to these problems, spintronic devices that utilize the nonvolatile electron spins in a ferromagnet have been suggested by the community over the past decades [1][2][3]. Particularly, technologies of spin-transfer torque (STT) [4] and then spin-orbit torques (SOTs) [5][6][7] not only offer fast data processing speed and low power consumption, but also provide capabilities of programmable spin-logic operations [8][9] as well as non-von-Neumann in-memory computing applications [10][11]. Analogous to CMOS technology, it is important to develop complementary spintronic logic building blocks [12][13][14][15], i.e. two types of basic spintronic devices that respond distinctly to the same input signal, for facilitating complex logic functions with a simplified circuit design.
Typically, SOT-induced magnetization switching with perpendicular magnetic anisotropy requires the assistance of an in-plane external magnetic field [5], the direction and the magnitude of which determine the switching direction and the critical switching current density. Inspired by this unique feature of SOT switching, approaches for external magnetic field-dependent complementary spin-orbit logic devices have naturally been proposed [8,16]. Recently, more scalable SOT technologies with external magnetic field-free switching have also been successfully demonstrated, in which the magnetization switching direction can be controlled by various methods, such as introducing a built-in in-plane exchange magnetic field [17][18] and adjusting its direction, creating a spin current gradient [19][20] and tuning its polarity, manufacturing a lateral wedge oxide [21][22] and engineering its tilting orientation, and so on. Following these routes, field-free complementary spin-orbit logic pairs can reasonably be proposed; however, the existence of an unscalable in-plane coupling ferromagnetic layer, the fussy multi-terminal (terminal number > 3) SOT-MTJ device design, or the incompatibility with the standard magnetic tunnel junction (MTJ) together with the difficulties in the manufacturing procedure make those potential complementary spin-orbit logic proposals impractical for industrial realization. Thus, magnetic field-free complementary spin-orbit logic pairs with integration-friendly approaches are strongly desired.
Recently, a novel lateral spin-orbit torque (LSOT) induced field-free deterministic magnetization switching has been demonstrated in a locally laser annealed perpendicular magnetic anisotropy (PMA) Pt/Co/Pt structure, where the switching orientation depends on the relative local annealing location of the in-plane current (for example, along the x direction) and the laser track (also along the x direction, but lying on either the -y or the +y side of the sample) [23]. Inspired by this integration-friendly approach, here we show how a pair of magnetic field-free complementary LSOT logic devices can be demonstrated as building blocks for programmable and stateful logic operations. By setting the polarity of the initialization electric current, basic Boolean logic gates of AND/OR (NAND/NOR) were programmed in a single -y (+y) side laser annealed LSOT device, and a half adder containing the linearly inseparable XOR gate was assembled from a complementary pair, as detailed below.

[Figure 1: Demonstration of initialization current-programmable Boolean logic gates using the two complementary devices. Binary logic inputs of two current pulses I_A and I_B (-/+4 mA stands for logic value "0"/"1") were applied along the Hall bar channel simultaneously. The resulting nonvolatile current-induced magnetization down/up state, represented by the negative/positive V_Hall, was regarded as logic output value "0"/"1". A necessary initialization current I_init of -8 mA (orange pulses in (d)) or +8 mA (purple pulses in (g)) was applied on the devices before each operation, the polarity of which defined the type of the logic gate. For I_init = -8 mA, the -y and the +y side locally laser annealed devices showed (e) AND and (f) NAND gates, respectively. Meanwhile, for I_init = +8 mA, the -y side and the +y side locally laser annealed devices showed (h) OR and (i) NOR gates, respectively. The x-axes of (d-i) are operation procedures with the same scales and values.]

The inverse switching senses of the two devices originate from the opposite lateral Pt-Co asymmetry after laser annealing [23].
However, single-device implementations of n different logic gates require devices with at least n different working modes in total. Hence, in order to realize the 4 logic gates shown in Figure 1a, another binary variable, i.e. the initial magnetization state, was introduced by an initialization current. Remarkably, for initialization current I_init = -/+8 mA, complementary Boolean logic gates of AND/OR and NAND/NOR were realized in the -y and the +y laser annealed LSOT devices, respectively. It is worth noting that the demonstrated NOR gate can also act as a NOT gate of input I_A when I_B is fixed at "0". A CMOS-based half adder consists of as many as 18 transistors [14]. As shown in Figure 2a, the simplest half adder design incorporates an XOR gate for SUM and an AND gate for CARRY. Unlike the linearly separable logic gates shown in Figure 1, the XOR gate is a linearly inseparable logic function that requires two logic thresholds to be defined for a device with a monotonic input-output response, and is thereby hardly possible to implement with a single device or a simple circuit.
Nevertheless, the XOR gate can be formed by an OR gate and a NAND gate.
Following this approach, two complementary -y and +y side laser annealed LSOT devices, denoted respectively as P and Q in Figure 2b, were connected for the XOR implementation. When programming the initialization currents of P and Q to be I_init(P) = +8 mA and I_init(Q) = -8 mA, OR and NAND gates were obtained, respectively, and the synthetic output (V_Hall(P) + V_Hall(Q)) is shown in Figure 2c. Binary outputs of around 0 (defined as logic value "0" here) or 40 μV (logic value "1") were found, corresponding to the resulting magnetization states of either P being spin down/up and Q being spin up/down, or both P and Q being spin up. Thus, the linearly inseparable logic gate of XOR was realized, which can work as SUM for a half adder. Together with another -y side laser annealed LSOT device, denoted as S, which performed the AND gate for CARRY, a spin-orbit half adder was completed. The logic gates demonstrated so far take current pulses as inputs and the magnetization (resistance) state as output, and were referred to as I-R logics. However, future in-memory computing, which aims to eliminate the memory wall problem [24] in the von-Neumann computing architecture, requires R-R logic gates in which the processing devices can not only store output data but also perform stateful logic operations by regarding their initial states as input variables at the same time [25][26][27][28]. In the following section, we first demonstrate such stateful R-R logic operations. A key difference between the I-R and the R-R logic gates is the role of the initial magnetization state, which acts as the programming term and the input variable for the I-R and the R-R gates, respectively. On the one hand, this makes stateful R-R logic gates naturally free from initialization operations; on the other hand, other programming methods have to be involved for assembling multi-functional R-R gates in a single device, or the device would only perform as one specific gate. As shown in Figure 3, a working current pulse I_w was simultaneously applied with I_A to program the overlapped I_ovlp = I_A + I_w. Particularly, five working modes, with the respective relationships between I_ovlp and the critical switching current ±I_c shown in Figure 3a and 3c, were derived for I_A = +6 mA (as logic value "0") or +12 mA (as logic value "1"). The working paradigm of the above I-R and R-R spin logic gates is thought to be applicable to other types of current-driven magnetization switching devices as well. However, the advantages of the complementary LSOT devices used here should be underlined, owing to their capability of significantly enriching the in-memory functionalities and thereby enabling more straightforward circuits. For example, a -y side laser annealed LSOT device can act as an in-memory 3-input majority gate (MAJ) [10] if I_w is also considered as a logic input. A MAJ returns "1" if and only if the majority (more than half) of its inputs are "1". Referring to the AND gate (I_A ∧ R_Hall) shown in Figure 3b3 and the OR gate (I_A ⋁ R_Hall) shown in Figure 3b5, when I_w = -15/-3 mA is defined as logic input "0"/"1", the logic output can be expressed as <I_A, I_w, R_Hall>, where "< >" is the logic operator for MAJ. Together with the +y side laser annealed LSOT device, which can be programmed to the functionally complete IMP gate [29] as shown in Figure 3d3, spin-orbit in-memory computing circuit designs with versatile reconfigurable operations are promising.
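As a plain illustration of the logic identity exploited here (our sketch, not part of the original work), the following Python snippet verifies that a 3-input majority gate collapses to AND or OR of the two remaining inputs when the third input is fixed, which is exactly how the working current I_w programs the in-memory gate:

# Majority identity behind the I_w-programmable gate:
# <a, 0, c> = a AND c, and <a, 1, c> = a OR c.
def maj(a, b, c):
    return 1 if a + b + c >= 2 else 0

for a in (0, 1):
    for c in (0, 1):
        assert maj(a, 0, c) == (a & c)  # I_w = "0": AND gate
        assert maj(a, 1, c) == (a | c)  # I_w = "1": OR gate
print("MAJ reduces to AND/OR as claimed")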
Conclusion
In summary, a pair of magnetic field-free complementary LSOT devices with opposite laser track configurations, and thereby inverse switching senses, has been demonstrated as elementary building blocks for programmable Boolean logic, a spin-orbit half adder, and stateful in-memory logic operations. | 2020-03-17T01:00:52.364Z | 2020-03-14T00:00:00.000 | {
"year": 2020,
"sha1": "34b6edd52e8852f53bfe21a47d11d79d855f6f9e",
"oa_license": "CCBY",
"oa_url": "https://nottingham-repository.worktribe.com/preview/4659619/Manuscript.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fc429ece684ab8ea78fe43fe921404d1a098cf40",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
215408350 | pes2o/s2orc | v3-fos-license | Activation of cannabinoid receptor type 1 impairs spatial and temporal aspects of episodic-like memories in rats
The endocannabinoid system modulates many brain functions, including episodic memories, which contain memories of time and places. Most studies have focused on the involvement of the endocannabinoid system in spatial memory; however, its role in temporal memory is not well understood. Few studies have tested whether the unilateral endocannabinoid system is sufficient to modulate memory retrieval. Here, we tested whether type 1 cannabinoid receptors in the right hippocampal cornu ammonis area 1 region are enough to modulate the retrieval of episodic memories, specifically their spatial and temporal components. Because rats have innate preferences for displaced or old familiar objects, we changed the locations of "old familiar" and "recent familiar" objects in an open field and measured the rats' exploration times to evaluate spatial and temporal memory. To address the influence of the type 1 cannabinoid receptors on the retrieval of episodic-like memories, two doses of arachidonylcyclopropylamide, a selective type 1 cannabinoid receptor agonist, were infused into the cornu ammonis area 1 of rats ten minutes before the discrimination trials. We observed that rats injected with a low dose of arachidonylcyclopropylamide spent less time investigating displaced objects, suggesting spatial memory impairment, whereas those receiving a high dose explored old familiar objects less frequently, suggesting temporal memory impairment. This indicates that unilateral activation of type 1 cannabinoid receptors in the cornu ammonis area 1 impairs the spatial and temporal aspects of episodic memories. This research mimics the influence of marijuana intoxication effects in humans, such as spatial and temporal disintegration.
Introduction
The endocannabinoid system (ECS) in the brain modulates various functions, including motor (Martinez et al., 2012), sensory (Green et al., 2003), and cognitive (Green et al., 2003; Jacobus et al., 2009; Messinis et al., 2006) functions. Memory, regardless of whether it is declarative or nondeclarative, is a cognitive function that has been frequently reported to be modulated by the ECS. In human subject studies, declarative memory is usually impaired by marijuana intoxication (Ranganathan and D'Souza, 2006). In rodent models, whether the ECS facilitates or impairs declarative memory is inconclusive; it seems that the administration method profoundly influences the results of behavioral tasks. To test the mechanisms of ECS-modulated memory processing, some reports used systemic injections (Deadwyler et al., 2007; Lichtman, 2000; Lichtman and Martin, 1996), while others used local infusion (Atsak et al., 2012; de Oliveira Alvares et al., 2005, 2006). Interestingly, opposite effects can be demonstrated with different administration routes. However, the route of administration does not always result in opposite effects. For example, systemic or local injections of a fatty acid amide hydrolase inhibitor, which increases endogenous levels of the cannabinoid receptor agonist, into the cornu ammonis area 1 (CA1) or basolateral amygdala decrease the retrieval of fear memory (Segev et al., 2018). The abovementioned literature raises an interesting question: which brain areas contribute to marijuana intoxication?
The hippocampus is a brain area that expresses a high density of type 1 cannabinoid (CB1) receptors (Herkenham et al., 1990; Mailleux and Vanderhaeghen, 1992; Tsou et al., 1998), which are the major receptors that bind endocannabinoids. In the hippocampus, CB1 receptors are mainly distributed in the synaptic terminals of gamma aminobutyric acidergic (GABAergic) (Chen et al., 2003; Katona et al., 1999) and glutamatergic neurons (Kawamura et al., 2006). Substantial literature indicates that the ECS in the hippocampus modulates memory processing from acquisition, consolidation, and retrieval to extinction (Marsicano and Lafenetre, 2009). In laboratory animal studies, hippocampus-dependent memory tasks could be impaired by systemically administering CB1 receptor agonists (Deadwyler et al., 2007; Lichtman and Martin, 1996), and CB1 receptor antagonists could facilitate memory processing (Deadwyler et al., 2007; Lichtman, 2000). It is nearly impossible to locally inject drugs into the human brain to study anatomically specific mechanisms of marijuana-related memory effects. However, anatomically restricted studies on cannabinoid-modulated memory in rodents are also limited (Quillfeldt and de Oliveira Alvares, 2015). Although systemic cannabinoid effects could arise from multiregional actions, the hippocampus is hypothesized to be an important mediator of marijuana-related memory effects, owing to its abundant CB1 receptors (Herkenham et al., 1990; Mailleux and Vanderhaeghen, 1992; Tsou et al., 1998). Also, memory is widely defined as "past experiences that persist over time" (Buzsaki, 2006). Different brain regions contribute to processing different types of memory. For example, procedural memory depends on the striatum and cannot be retrieved consciously. Therefore, this kind of memory is classified as nondeclarative memory. On the other hand, declarative memories, which are believed to be hippocampus-dependent, can be retrieved consciously and thus declared. In a simplified stage model of memory, "encoding," "storage," and "retrieval" are necessary for processing memory (Melton, 1963).
This research is focused on the retrieval of declarative memory. We locally infused CB1 receptor agonists into the hippocampal CA1 in rats to show whether the memory-retrieval functions were impaired by activation of CB1 receptors in the CA1. Regarding the infusion location, Klur et al. (2009) reported that the dorsal hippocampus might show behavioral lateralization of memory retrieval. Their results suggest that inactivation of the right or both hippocampi disrupts retrieval in a spatial water maze task. Unilateral microinjections of endocannabinoid-related drugs into the central nucleus of the amygdala (right side) (Hsiao et al., 2012) and ventral hippocampus (right side) (Roohbakhsh et al., 2009) showed effects in behavioral tests. It would also be interesting to test whether activation of the CB1 receptors in the right CA1 impairs memory retrieval. Therefore, we targeted the right CA1 and tested whether the right-side ECS in the CA1 is sufficient for disrupting memory retrieval. We hypothesized that this study would show whether the right CA1-based ECS affects memory retrieval.
Although there are conflicting reports on the role of CB1 receptors in learning and memory (de Oliveira Alvares et al., 2005, 2006; Deadwyler et al., 2007; Lichtman, 2000; Lichtman and Martin, 1996), the inability to estimate time, also referred to as "temporal disintegration," is consistently found in marijuana users, mostly in the form of overestimating the amount of time that has elapsed (Atakan et al., 2012). Nevertheless, temporal information is one of the critical elements of episodic memory, in addition to spatial memory and object recognition. In the brain, the hippocampus is a vital brain region that processes episodic memory, supported by the discovery of place preference (O'Keefe and Dostrovsky, 1971) and time preference (MacDonald et al., 2011; Pastalkova et al., 2008) cells in this region. Also, lesions of the hippocampus have been shown to impair time estimation in rats (Meck et al., 1984). The abovementioned findings support the idea that the ECS in the hippocampus may play a critical role in modulating episodic memory (another type of declarative memory), including temporal information. The mechanisms involved in processing spatial and temporal memory are complex.
To mimic the influences of marijuana intoxication on the aspects of spatial and temporal memory, we subdivided memory into three components: type (episodic), phase (retrieval), and brain area (hippocampal CA1). We adapted Dere et al.'s (2005) three-trial object exploration task, which can be used to evaluate episodic memory in terms of recognition (what), spatial (where), and temporal (when) information. We targeted the effects on memory retrieval because human studies have revealed that cannabis users are less impaired when learning new information but have difficulty recalling newly acquired information (Ilan et al., 2004; Miller et al., 1977). However, the timing of drug injection profoundly affects the outcome of behavioral performance (de Oliveira Alvares et al., 2008). For example, infusing drugs immediately after training is reported to manipulate memory consolidation, whereas administration before the test trial influences memory retrieval. Thus, we infused a CB1 receptor agonist into the CA1 before the discrimination trials. Our results partially mimic the influence of marijuana intoxication on spatial and temporal disintegration.
Animals
Fourteen male Sprague-Dawley rats (250-300 g; BioLASCO Co., Ltd, Taiwan) were randomly separated into two groups (low-dose group n = 7; high-dose group n = 7). Before surgery, the animals were housed in home cages that were placed in a temperature-maintained room (23 ± 1°C) with a 12:12 h light:dark cycle. Food and water were available ad libitum. After undergoing surgery, the animals were individually housed in home cages in the same room. To avoid the influence of sleep deprivation on hippocampus-dependent memories (Hagewoud et al., 2010), all experiments were performed during the dark period.
Surgery
Animals were sedated with 5% isoflurane and subcutaneously injected with an analgesic (buprenorphine, 0.03 mg/kg) and atropine (0.04 mg/kg) to prevent the accumulation of saliva. Isoflurane (1.5 to 2.5%) was used for the maintenance of anesthesia during surgery. Five stainless steel screws were surgically anchored onto the frontal, parietal, and interparietal bones. Klur et al. (2009) have reported that inactivation of the right dorsal hippocampus disrupts memory retrieval in a spatial water maze task, which suggests lateralization of the hippocampus for memory retrieval. Therefore, a microinjection guide cannula (26 gauge, O.D. 0.46 mm, I.D. 0.24 mm; Plastics One, Roanoke, USA) was implanted above the right CA1 (AP, -3.8 mm; ML, 3 mm; DV, 2.5 mm relative to bregma; Fig. 1A). The coordinates were adopted from the Paxinos and Watson rat atlas (Paxinos and Watson, 2008). The screws and cannula were then cemented to the skull with dental acrylic (Tempron, GC Co., Tokyo, Japan). At the end of the surgery, the incision was treated topically with gentamicin. Carprofen (5 mg/kg) was given subcutaneously for postsurgery analgesia. The animals were allowed to recover for seven days before the initiation of experiments. All procedures performed in this study were approved by the National Taiwan University Animal Care and Use Committee.
Histology
At the end of the experiments, the rats were anesthetized by an intraperitoneal injection of a cocktail of Zolitel (40 mg/kg; Virbac, Carros, France) and Xylazine (10 mg/kg; Sigma-Aldrich, St. Louis, Missouri, USA). Then, the animals' reflexes were checked by pinch tests. After the rats lost withdrawal reflexes, they were perfused with 4% formalin through the heart, and their brains were collected and sliced to confirm the location of the microinjection cannula (Fig. 1A). The brains were cut into 30 µm coronal sections in a cryostat microtome and then treated with Nissl stain. Rats were excluded if cannula implantation missed CA1 or resulted in severe lesions in CA1. Fourteen of the 15 animals were included.
Apparatus and objects
All experiments were executed in a temperature- (23 ± 1°C) and illumination- (140 ± 10 lux) controlled behavioral room with a digital camera on the ceiling. The open field was surrounded by four plastic boards (60 cm × 50 cm), which were attached to the floor, resulting in a 60 cm × 60 cm × 50 cm space. The floor was covered with a black, nonreflective material. The plastic boards were decorated with some visual cues. The plastic boards and visual cues used in the control sessions were different from those used in the ACPA sessions. Object sets (four identical objects in each set; two sets in the control sessions and two sets in the ACPA sessions; Fig. 2) were assembled from LEGO-like bricks. These objects were attached to the open field with double-sided tape during the experiments so that the rats could not move them. The positions of novel objects were randomized when executing the experiments on different rats to eliminate the potential confounding factor of area preferences. The floor, the walls of the open field, and all objects were first cleaned with water and then with 75% ethanol solution at both the beginning and the end of each trial to eliminate odor cues left by previous subjects. The double-sided tape was also replaced when cleaning. The investigators wore lab gowns, masks, and gloves during the whole experiment.

Figure 2. Schematic drawing of the protocols of the three-trial object exploration task. The subjects had two encoding trials with different objects placed in different locations. Ten minutes before the discrimination trial, a CB1 receptor agonist or PFS was infused into CA1. One week later, a similar protocol was conducted, but with different object sets. Moreover, if the subjects received PFS before, they were alternately injected with the CB1 receptor agonist and vice versa. The walls of the open field and their visual cues used in the control sessions were different from those in the ACPA sessions.
Experimental procedures
The experimental procedures were adapted from Dere et al. (2005) (Fig. 1B and 2). Briefly, an experimental session consisted of two encoding trials and one discrimination trial. One microliter of PFS or ACPA was chosen at random and alternately administered into the right CA1 10 minutes before the discrimination trials. Each rat received PFS in the control session and 3.125 ng/µL or 12.5 ng/µL ACPA in the ACPA session, depending on its group. Therefore, the order of executing the control session or the ACPA session was random for each rat. Sessions were carried out at least one week apart. During the experiment, the rats were allowed to freely explore the open field for 10 minutes in each trial. Between trials, the rats waited for 50 minutes in their home cages. The open-field arena was then divided into a 3-by-3 grid, and four identical copies of the object were placed into the grids during the encoding trials. The specific locations of the objects are detailed below. In the first encoding trial, the objects were arranged in a triangle-shaped spatial configuration. The original article described these four places (Fig. 1B) as the center of the northern wall (NC), the center of the southern wall (SC), the southwest corner (SW), and the southeast corner (SE). In the second encoding trial, each of the four objects in the other set was placed in the four corners (northwest (NW), northeast (NE), SW, SE). In the discrimination trial, the objects in the NE and SW corners were replaced with objects from the first trial.
Data collection and statistics
The object exploration time was defined as the amount of time the rats directed their noses toward objects at a distance of less than 2 cm (Leger et al., 2013;Lueptow, 2017). The time spent climbing or leaning on objects was not included, unless the rats also directed their noses toward the objects. The object exploration time was measured offline by well-trained investigators using stopwatches. A single-blind procedure was used for measuring the exploration time to minimize subjective bias. All values acquired from video files are presented as the means ± standard error of the means (SEMs) for the indicated sample sizes. The statistical analyses were performed with SPSS (Version: 10.0.7, IBM, New York, USA). A two-tailed paired t-test was performed to test for significant differences in within-subject data, and an unpaired t-test was used to test significant differences in between-subject data. P < 0.05 was considered to indicate a statistically significant difference.
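As an illustration of the statistics described above, here is a minimal sketch of the paired and unpaired t-tests in Python; the exploration-time arrays are invented placeholders, and the published analysis was performed in SPSS.

import numpy as np
from scipy import stats

# Hypothetical exploration times (sec), one value per rat.
old    = np.array([40.5, 35.2, 52.1, 38.9, 44.0, 41.3, 36.6])
recent = np.array([23.7, 25.1, 30.4, 22.8, 21.9, 26.5, 24.0])

t_w, p_w = stats.ttest_rel(old, recent)    # two-tailed paired t-test (within-subject)
t_b, p_b = stats.ttest_ind(old, recent)    # unpaired t-test (between-subject)
sem = old.std(ddof=1) / np.sqrt(old.size)  # standard error of the mean

print(f"paired t = {t_w:.2f}, p = {p_w:.3f}")
print(f"unpaired t = {t_b:.2f}, p = {p_b:.3f}; SEM(old) = {sem:.2f}")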
Estimates of episodic memory for "What & When" and "What & Where" discrimination
To test whether a single infusion of ACPA disturbs memory retrieval, we planned to observe the rats' tendency of exploration for old familiar, recent familiar, stationary, and displaced objects after microinjecting ACPA or PFS into the right CA1. Two doses of ACPA (low dose, 3.125 ng/µL; high dose, 12.5 ng/µL) were tested in separate groups of rats. Moreover, two control experiments, a low-dose control (PFS) and a high-dose control (PFS), were needed to test whether the baseline innate preferences of these groups were different. The order for executing the control session or ACPA session was random for each rat. According to Dere et al. (2005), the baseline innate preferences are stronger for old familiar and displaced objects. The coordinates of the novel objects are illustrated in Fig. 1B. Alternating the objects among different places across different trials created place, sequence, and object differences (Dere et al., 2005). Fig. 2 displays the experimental protocols. Injections in the right CA1 (Fig. 1A) were performed 10 minutes before the discrimination trials (Fig. 2). The infusion of either PFS or ACPA was chosen at random, and one week later, a similar protocol was performed, but the alternate injection type was administered (i.e., drug or vehicle). Although within-subjects designs have the advantage of using fewer lab animals due to their high power and less variability in detecting significance, habituation of the experiment or prelearned effects could still be confounding factors for the subjects' exploration times. Therefore, we first measured the difference between the exploration time of the animals that received PFS first (n = 8) and that of the animals that received PFS after finishing an ACPA experiment (n = 6). We found no difference with regard to the order of administration (old familiar T(12) = 1.25, P = 0.24; recent familiar T(12) = 1.59, P = 0.14; displaced T(12) = 1.75, P = 0.11; stationary T(12) = 0.20, P = 0.85). We then inspected recency discrimination in the control experiments. By comparing exploration in the SW+NE regions to that in the NW+SE regions (Fig. 3A and 3B, top illustration, black objects), we measured the rats' interests in old familiar objects (SW+NE) and recent familiar objects (NW+SE) during the last trial, i.e., the discrimination trial. The rats that received vehicle injections spent significantly more time investigating older objects than recent objects (Fig. 3A, left 2 bars; vehicle, old: 40.53 ± 9.30 sec vs. recent: 23.65 ± 5.25 sec, T(6) = 2.78, P < 0.05; Fig. 3B, left 2 bars; vehicle, old: 15.70 ± 2.91 sec vs. recent: 10.63 ± 2.70 sec, T(6) = 2.91, P < 0.05). Then, the rats' spatial discrimination ability (Fig. 3C and 3D) was estimated by observing the duration they spent exploring objects in the NE (displaced object) or the SW (stationary object) regions. Both control groups showed significant interest in displaced objects (Fig. 3C, left 2 bars; vehicle, displaced: 55.74 ± 13.60 sec vs. stationary: 25.32 ± 7.09 sec, T(6) = 2.71, P < 0.05; Fig. 3D, left 2 bars; vehicle, displaced: 26.49 ± 5.64 sec vs. stationary: 4.91 ± 1.28 sec, T(6) = 3.76, P < 0.01).
The findings from the control experiments suggest that the injection of the vehicle into the right CA1 did not impair memory for differentiating objects' recency and location. We also noticed that the exploration time of the low-dose controls was longer than that of the high-dose controls. Although we randomized the experimental order of the PFS and ACPA tests, we did not randomize the experimental order between the low-dose group and the high-dose group. In this case, the high-dose ACPA group and its control test were finished earlier. For some unknown reasons, the high-dose group of animals showed less interest in objects (this will be explained in the Discussion section). However, the confounding factor did not affect the trend of the control results; that is, rats with vehicle injections in the low- or high-dose group all had more significant interest in old familiar objects (Fig. 3A and 3B, left 2 bars) or displaced objects (Fig. 3C, left 2 bars).
ACPA impairs temporal and spatial memory
We further tested the effects of the hippocampal CB1 receptors on memory retrieval. After infusing 3.125 ng ACPA into CA1, the rats still spent a significant amount of time exploring old familiar objects (Fig. 3A, right 2 bars; low dose, old: 46.65 ± 11.72 sec vs. recent: 24.52 ± 3.40 sec, T(6) = 2.56, P < 0.05). These results suggest that a low dose of a CB1 agonist may not disrupt the retrieval of temporally associated memories. In contrast, the retrieval of spatial memories may be influenced by a low dose of ACPA because in their control periods (1 µL of PFS), the animals spent more time exploring displaced objects (Fig. 3C, left 2 bars); however, the same group of animals spent a similar amount of time investigating stationary and displaced objects after receiving low-dose ACPA (Fig. 3C, right 2 bars). Dose-dependent cannabinoid effects on memory have been reported in both human (Ranganathan and D'Souza, 2006) and animal studies (Atsak et al., 2012; de Oliveira Alvares et al., 2005, 2006). We hypothesized that the level of intoxication results in different percentages of CB1 receptors being activated, further leading to different influences on memory. Therefore, we tested rat memory retrieval functions after a high dose of ACPA. The data showed that during their control periods, the rats explored old familiar objects much more often than recent familiar objects (Fig. 3B, left 2 bars), but this effect was inhibited by high-dose ACPA (Fig. 3B, right 2 bars; high dose, old: 18.85 ± 2.81 sec vs. recent: 23.07 ± 4.29 sec, T(6) = -1.49, P = 0.19), which suggests that the retrieval of temporal memories may be disrupted by high doses of CB1 receptor agonists in the right CA1. ACPA rats even spent more time exploring recent familiar objects than the control rats (Fig. 3B; control recent vs. ACPA recent; T(6) = -2.52, P < 0.05). However, a high dose of ACPA had little effect on spatial memories, given that we still observed a long duration of exploring the displaced objects (Fig. 3D, right 2 bars).
Experimental design and the results
Our results demonstrate that a high dose of the CB1 agonist impaired the retrieval of sequentially ordered events, partially mimicking the temporal disintegration effects seen in marijuana intoxication (Atakan et al., 2012), and a low dose of ACPA disrupted the retrieval of spatial memories. Our data support previous findings showing that marijuana intoxication damages memory retrieval (Curran et al., 2002). We also demonstrated that activation of the right CA1 is sufficient for impairing memory retrieval. Although the rodent hippocampus might lateralize the function of memory retrieval (Klur et al., 2009), the compensatory and remaining functions of the contralateral CA1 still cannot be examined in the present study. Nevertheless, two key points need to be discussed: (1) we reused the animals to test exploration times after administering PFS (or ACPA), and (2) the total exploration time was lower in the high-dose group than in the low-dose group. Although testing the vehicle effects in a new group could have ruled out the potential habituation or prelearned effects from the reused animals, we chose to retest the same animals since this design is less affected by individual variation.

Figure 3. Exploration times during the discrimination trials. The top panel is an illustration showing which exploration times were compared (exploration times of the black objects were calculated). The time the rats spent exploring the old familiar and recent familiar objects demonstrated their ability to discriminate object sequences that require memories of "what" and "when" (A and B). Rats require memories of "what" and "where" to differentiate displaced and stationary objects (C and D). (A) and (C) depict the results for the low-dose group and (B) and (D) depict those of the high-dose group. The bars depict the means ± SEMs. * represents a significant difference, P < 0.05.
To minimize the abovementioned retesting confounding factors, we randomized the PFS and ACPA administration order. Moreover, we randomized the objects, the walls of the open field, and the environmental cues. Also, we compared the object exploration times between rats infused with PFS first and those infused with PFS later; we found no significant difference between these two groups. By retesting the same animals, measuring the vehicle effects of the low-dose and high-dose groups provides the levels of baseline interest of each group. Therefore, the data represent the tendency in exploration time after ACPA administration (Table 1). We think it is critical to obtain the baseline interest because the between-subjects variability might mask the results if the experiment is not controlled perfectly. Taking our data as an example, the total exploration time was lower in the high-dose group than in the low-dose group (Fig. 3A and 3B).
In our opinion, this result was caused by not randomizing the testing order between groups. We finished the experiment of the high-dose group first (including the ACPA and PFS tests) and then performed the same procedure for the low-dose group. For some unknown reasons, the high-dose group did not explore for a similar amount of time as the low-dose group. We suspect that the low-dose group had more time to adapt to the investigators and showed more natural, innate responses to the objects. However, the data still provide some evidence of the ACPA effects, because we tested the vehicle effect of the two groups and demonstrated the same tendency of exploring objects after the same manipulation (Table 1). Our data also show that the low-dose group had impaired spatial memory and that the high-dose group had disturbed temporal memory. We suspect that different memories may engage CB1 receptors on different neuron types. Although it is difficult to test this hypothesis, some reports have proposed hypothetical mechanisms for how CB1 agonist doses influence neurons differently (Busquets-Garcia et al., 2018).
One of the causes of this phenomenon might be linked with the distributions of the CB1 receptor on different cell types. CB1 receptors are present in the synaptic terminals of GABAergic (Chen et al., 2003; Katona et al., 1999) and glutamatergic neurons (Kawamura et al., 2006) of the hippocampus. Comparing the CB1 receptor densities on the GABAergic and glutamatergic neurons in CA1, the GABAergic neurons present a higher density (Kawamura et al., 2006). Specifically, CB1 receptors are mainly present on perisomatic interneurons that contain cholecystokinin (CCK) (Katona et al., 1999). CCK-positive interneurons serve to fine-tune glutamatergic neurons in the CA1 (Freund, 2003), so activation of CB1 receptors may disinhibit CA1 functions. However, CB1 density is not the only factor that determines which CB1 receptor-expressing neurons are engaged. Steindel et al. (2013) reported that the CB1 receptors in hippocampal glutamatergic neurons have a higher functional efficacy than those in nearby GABAergic neurons. Although the mechanisms of the differential recruitment of CB1 receptors in different neuron types remain unclear, different doses of CB1 receptor agonists theoretically can lead to different or even opposite functions.
CB1 receptors modulate spatial memories

Riedel and Davies (2005) reviewed several rodent studies of CB1 receptor-modulated spatial memories. The tools used to study this effect include water mazes, radial arm mazes, T and Y mazes, and delayed match-to-position tasks. Most studies administered drugs systemically and showed impairment of memories following the administration of CB1 receptor agonists (Riedel and Davies, 2005). Lichtman et al. (1995) showed that both the systemic and intrahippocampal administration of a CB1 agonist impairs spatial memories.
In this study, we used ACPA (in Tocrisolve™100) dissolved in normal saline, which has better binding potential and selectivity for CB1 receptors and rules out the vehicle effects that water-insoluble CB1 agonists typically possess. In our results, the injection of 12.5 ng ACPA into CA1 still preserved the knowledge of "where". Robbe et al.'s (2006) finding also hints that the activation of CB1 receptors impairs sequential memory, but that spatial memory may still be intact. They reported that CB1 receptor agonists impair spike timing coordination but not the firing rate of place cells, which encode spatial information about the environment (Robbe et al., 2006). Dere et al.'s (2005) three-trial object exploration task can also test the effects of drugs on temporal memories, which have been less often addressed by other reports. Thus, we are more interested in the aspects of CB1 receptor-modulated temporal memories.
CB1 receptors modulate temporal memories
Impairment in recalling the sequential order of events has been tested in place cell studies (Robbe and Buzsaki, 2009; Robbe et al., 2006), not only at the behavioral level, as we presented above. Robbe et al.'s (2006) influential results indicated that CB1 receptor agonists disrupt the coordination of CA1 cell assemblies, including the discharge time of theta sequences, and destroy their firing patterns in a theta cycle (i.e., theta precession interference). The firing time of a theta sequence corresponds to the order in a place field; therefore, the activation of CB1 receptors impairs the order of memories. Robbe et al. (2006) utilized an intraperitoneal injection approach; thus, it is still unclear whether the local ECS in the hippocampus contributed much to their findings and which phase of memory processing was affected. Hajos et al. (2000) immersed hippocampal slices in a solution containing a CB1 receptor agonist and subsequently abolished kainic acid-induced CA3 gamma oscillations. This result suggests that CB1 receptors negatively modulate the retrieval of spatial and temporal memories because CA3 gamma oscillations occur during memory retrieval (Bieri et al., 2014). Some investigators trained animals to test the memory of sequences. Farovik et al. (2010) trained rats to recognize 10 two-odor sequences and showed that both CA1 and CA3 were essential to distinguish ordered events, but CA1 is especially crucial for differentiating two-odor sequences at longer intervals, such as 10 sec. Another report illustrated that damaged hippocampi significantly decreased the rate at which rats could correctly distinguish odors in a sequence task but not in an odor recognition task (Fortin et al., 2002). The present method uses rats' innate behaviors to test sequential order memory and saves time in training animals, but other "when-related" functions, such as timing, must be investigated using more complex tasks. According to our results, we speculate that the ECS in the hippocampus is involved in processing time information in the brain.
Limitations of the three-trial object exploration task
This task has a possible confounding factor because the exploration time of old familiar objects is different from that of both the stationary and displaced objects. Therefore, Dere et al. (2005) further compared the time spent exploring the stationary old familiar object with the mean time spent exploring the two recent familiar objects. Their results indicated that rats spent significantly more time exploring the stationary old familiar object (Dere et al., 2005), but we did not see this tendency in our studies. We suggest that to see a significant difference between the time rats spent exploring the old stationary object and the mean time spent exploring recent familiar objects, the rats must spend much more time exploring the old stationary objects. However, it is not practical to expect the rats not to show more attention to old displaced objects, since rats also have innate preferences toward displaced objects. Also, Dere et al. (2005) used a one-tailed t-test to measure the significance of these factors. This may imply that the difference in exploration between the old stationary objects and the recent objects is weak.
Conclusions
Our main findings suggest that activation of CB1 receptors in the right CA1 disrupts the retrieval of episodic memories and that different doses of ACPA may result in the impairment of spatial or temporal memories.
Ethics approval and consent to participate
All procedures performed in this study were approved by the National Taiwan University Animal Care and Use Committee (Approval No: NTU107-EL-00108). | 2020-04-08T19:10:46.903Z | 2020-03-30T00:00:00.000 | {
"year": 2020,
"sha1": "7a4ee73722e83c1c2b8d180c8ea49436f2824224",
"oa_license": "CCBYNC",
"oa_url": "https://jin.imrpress.com/EN/article/downloadArticleFile.do?attachType=PDF&id=3697",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d2f9dfabb26219d7c713905e0cc30cdc2873fce2",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
119358494 | pes2o/s2orc | v3-fos-license | Non-Gaussian perturbations from multi-field inflation
We show how the primordial bispectrum of density perturbations from inflation may be characterised in terms of manifestly gauge-invariant cosmological perturbations at second order. The primordial metric perturbation, zeta, describing the perturbed expansion of uniform-density hypersurfaces on large scales is related to scalar field perturbations on unperturbed (spatially-flat) hypersurfaces at first- and second-order. The bispectrum of the metric perturbation is thus composed of (i) a local contribution due to the second-order gauge-transformation, and (ii) the intrinsic bispectrum of the field perturbations on spatially flat hypersurfaces. We generalise previous results to allow for scale-dependence of the scalar field power spectra and correlations that can develop between fields on super-Hubble scales.
I. INTRODUCTION
Inhomogeneities in the distribution of matter and radiation in the universe today provide valuable information about the dynamical history and physical processes in the very early Universe. In particular temperature anisotropies in the cosmic microwave background (CMB) sky provide detailed information about the primordial density perturbations at the time of last scattering of the CMB photons. At present the simplest explanation for the origin of these perturbations seems to be that they originated as quantum fluctuations in one or more light fields during a period of inflation in the very early Universe [1,2]. These vacuum fluctuations on small (sub-Hubble) scales were stretched to large (super-Hubble) scales by the accelerated expansion, where they subsequently evolved as effectively classical perturbations.
Vacuum fluctuations of massless scalar fields in a de Sitter geometry enter a squeezed state on scales larger than the Hubble length where the decaying mode can be neglected. Thus they can be treated as an effectively classical distribution on large scales with Gaussian statistics. For a Gaussian random field the power spectrum is sufficient to describe all the statistical properties of the distribution. But non-linearities in the evolution of the initial fluctuations will lead to a non-Gaussian distribution. Until recently studies of non-Gaussian perturbations from inflation were largely restricted to self-interactions of scalar fields [3,4,5] neglecting non-linear gravity. Gravitational effects can be included in a stochastic approach using local equations to study the evolution of the coarse-grained inflaton field on long wavelengths [6,7,8,9].
Non-Gaussianities inevitably arise at second-order in cosmological perturbation theory but the nature of the non-Gaussianity will depend upon the choice of variables [10]. Acquaviva et al. [11] constructed a gauge-invariant quantity at second-order on large scales, but its physical interpretation was left open and it was only approximately constant during slow-roll inflation [10,12]. Maldacena [13] gave an analysis of second-order perturbations from slow-roll inflation working in terms of the field perturbation on unperturbed spatial hypersurfaces and relating these to the curvature perturbations on uniform field hypersurfaces during single field inflation. Both these quantities have a clear physical definition and hence are implicitly gauge-invariant. He showed that there is a curvature perturbation, which becomes constant on large scales and whose bispectrum for squeezed triangles (k₁ ≪ k₂, k₃) is proportional to the tilt of the scalar power spectrum, and hence small. Maldacena's result has subsequently been verified by a number of authors [14,15,16,17,18], going beyond slow-roll [14], and extending it to loop corrections [19] and non-minimally coupled fields [15]. For a recent review see Ref. [20].
Only multiple field models of inflation seem likely to generate significant non-Gaussianity in the density perturbation [5,16,21,22,23,24,25,26]. In particular Lyth and Rodriguez [22] have recently emphasized that the non-Gaussianity of the curvature perturbation may be simply written down in the so-called "δN-formalism" [27,28,29,30] where one uses the "separate universes" approach [31] to follow the non-linear evolution on super-Hubble scales in terms of locally homogeneous solutions.
In this paper we show how the non-linear evolution of primordial perturbations from inflation may be characterised in terms of manifestly gauge-invariant perturbations at second-order. We adopt the formalism of Malik and Wands [32] who showed that there is a conserved curvature perturbation at second-order for adiabatic density perturbations on long wavelengths. In section II we define the gauge-invariant curvature perturbation at second-order and give the leading order expressions for the power spectrum and bispectrum. In section III we specialise to the case of adiabatic perturbations on large scales from single-field inflation. We relate the conserved curvature perturbation on large scales to the gauge-invariant field perturbations on spatially flat hypersurfaces during slow-roll inflation, consistent with the δN -formalism. In particular we distinguish the purely local form of the bispectrum generated by the second-order gauge transformation from the intrinsic bispectrum of the field perturbations on unperturbed spatial hypersurfaces. We use a simple argument to estimate the bispectrum of field due to large-scale variations in the local Hubble rate during inflation, and hence recover Maldacena's result for the non-linearity of the curvature perturbation. We generalise these results to the multiple-field case in Section IV, allowing for scale dependence of the power spectra and cross-correlations between the fields which may develop on large scales. These results can be simplified by decomposing the field perturbations into an inflaton component (along the instantaneous background trajectory) and orthogonal isocurvature perturbations [33]. We conclude in Section V.
II. GAUGE-INVARIANT PERTURBATIONS
Discussions of relativistic perturbations have historically been plagued by gauge-dependence of apparently physical quantities such as the density perturbation. For instance, in a spatially homogeneous background where the density evolves in time, the inhomogeneous density perturbation depends on the choice of time coordinate at first-order, and spatial coordinates at second-order. This "gauge-problem" can be avoided by using a physical choice of gauge and building gauge-invariant definitions of the perturbations in that gauge. Thus Bardeen constructed gauge-invariant cosmological perturbations at first-order [34] and Malik and Wands have shown how to construct gauge-invariant cosmological perturbations at second-order [32]. This still leaves a number of different possible gauge-invariant definitions of the density perturbation, for example, corresponding to different physical gauge choices, though the perturbations are nonetheless gauge-invariant.
We will split any scalar φ into a spatially homogeneous background and first- and second-order inhomogeneities:

φ(t, x) = φ₀(t) + δ₁φ(t, x) + ½ δ₂φ(t, x) .

The split between first and second order inhomogeneities for any given initial perturbation is arbitrary, so we can use this freedom to specify that the first-order perturbations are Gaussian random fields and any non-Gaussianity is described by the second-order perturbation. In particular we will take the Gaussian part of the perturbations to originate from free field fluctuations at early times during inflation, and assume the second-order perturbations are vanishing in this early time limit. Under a temporal gauge transformation, α = α₁ + (α₂/2), a scalar φ transforms at first and second order as

δ₁φ → δ₁φ + φ̇₀ α₁ ,
δ₂φ → δ₂φ + φ̇₀ α₂ + α₁ ( φ̈₀ α₁ + φ̇₀ α̇₁ + 2 δ₁φ̇ ) ,

where spatial gradient terms, negligible on the large scales of interest, have been dropped. We will adopt the method of Malik and Wands [32] to define gauge-invariant cosmological perturbations up to second order by constructing a physical perturbation in a physically defined coordinate system. There are (at least) two different ways of writing the perturbed line element on any spatial hypersurfaces:

ds²_(3) = a²(t) [ (1 + 2ψ) γ_ij + 2F_(i,j) + h_ij ] dx^i dx^j = a²(t) [ e^{2δN} γ_ij + 2F_(i,j) + h_ij ] dx^i dx^j ,

where F_i is the divergence-free, vector perturbation, and h_ij is the transverse and trace-free tensor perturbation. One may refer to ψ as the curvature perturbation and δN as the perturbed expansion, though the two are trivially related as ψ = (e^{2δN} − 1)/2. To first and second order we write

ψ = ψ₁ + ½ ψ₂ , δN = δN₁ + ½ δN₂ ,

and hence

ψ₁ = δN₁ , ψ₂ = δN₂ + 2 δN₁² . (9)

The curvature perturbation, ψ or δN, on uniform-density hypersurfaces is non-linearly conserved on large scales (where gradient terms can be neglected) for adiabatic perturbations as a direct consequence of local energy conservation [29,30,31,32] (see also [35,36]). Thus we will be particularly interested in calculating this perturbation.
Most authors [13,29,30,37] have worked in terms of δN on uniform-density hypersurfaces, which we denote by

ζ ≡ δN |_ρ .

This definition leaves a residual dependence at second order on the choice of spatial gauge, i.e., the choice of spatial coordinates threading the uniform-density hypersurfaces. This can be eliminated by a physical choice of spatial gauge at first-order [32], such as working with worldlines orthogonal to the uniform-density hypersurfaces. However in what follows we will consider perturbations on large scales where in any case the residual spatial gauge dependence becomes negligible [29].
Malik and Wands [32] gave a gauge-invariant definition of the curvature perturbation at second order on uniform-density hypersurfaces, ψ̃|_ρ, and showed this is conserved for adiabatic perturbations on large scales. This is identical to −ζ defined above at first-order, ψ̃₁|_ρ = −ζ₁, but differs at second order simply due to the difference in choice of curvature or expansion perturbation (9). Thus we have

ζ₂ = −ψ̃₂|_ρ − 2 ζ₁² . (10)

Hence both ζ and ψ̃|_ρ are conserved at second order for adiabatic perturbations on large scales. For a stochastic random field, as would be expected to arise from vacuum fluctuations during inflation in the early universe, the perturbations are most usefully described in terms of their power spectrum, and higher-order moments, in Fourier space. At linear order the Fourier coefficients of ζ are defined by the same equation (10) as in real space, while at second order we have

ζ₂(k) = −ψ̃₂(k)|_ρ − 2 (ζ₁ ∗ ζ₁)(k) ,

where "∗" represents a convolution,

(ζ₁ ∗ ζ₁)(k) ≡ ∫ d̃³k′ ζ₁(k′) ζ₁(k − k′) ,

where d̃³k represents d³k/(2π)³. We will be using the corresponding notation δ̃³(k) to represent (2π)³ δ³(k). The power spectrum of the perturbation is given by

⟨ζ(k₁) ζ(k₂)⟩ = P_ζ(k₁) δ̃³(k₁ + k₂) .

For an isotropic distribution the power spectrum is a function solely of k ≡ |k|. Note that the variance per logarithmic interval in k-space is given by

𝒫_ζ(k) = (k³/2π²) P_ζ(k) .

The bispectrum of the curvature perturbation is

⟨ζ(k₁) ζ(k₂) ζ(k₃)⟩ = B_ζ(k₁, k₂, k₃) δ̃³(k₁ + k₂ + k₃) , (17)

which by the requirement of statistical isotropy is non-zero only for a triangle of modes, k₁ + k₂ + k₃ = 0. The bispectrum is zero for Gaussian perturbations and hence to leading order we have

B_ζ(k₁, k₂, k₃) δ̃³(k₁ + k₂ + k₃) = ½ [ ⟨ζ₂(k₁) ζ₁(k₂) ζ₁(k₃)⟩ + ⟨ζ₁(k₁) ζ₂(k₂) ζ₁(k₃)⟩ + ⟨ζ₁(k₁) ζ₁(k₂) ζ₂(k₃)⟩ ] .

If the second order part of the curvature perturbation can be written purely in terms of the square of the first order part in real space,

ζ₂ = (6/5) f_NL ζ₁² ,

where f_NL is a function of background parameters, then the bispectrum will be given by

B_ζ(k₁, k₂, k₃) = (6/5) f_NL [ P_ζ(k₁) P_ζ(k₂) + P_ζ(k₂) P_ζ(k₃) + P_ζ(k₃) P_ζ(k₁) ] , (21)

calculated for closed triangles, k₁ + k₂ + k₃ = 0. f_NL is then a quantifier of this form of "local" non-Gaussianity and is called the non-linearity parameter.
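As a quick numerical illustration of the local form above (our sketch; the amplitude A, tilt n_s and f_NL are arbitrary placeholder values, and the 6/5 normalization follows the f_NL convention written above), the bispectrum for a squeezed triangle can be evaluated in Python as:

import math

A, n_s, f_NL = 2.1e-9, 0.96, 1.0   # placeholder values, not fitted to data

def P_zeta(k):
    # Power-law spectrum chosen so that k^3 P / (2 pi^2) = A k^(n_s - 1).
    return 2.0 * math.pi**2 * A * k**(n_s - 4.0)

def B_local(k1, k2, k3):
    # Local-form bispectrum: B = (6/5) f_NL [P1 P2 + P2 P3 + P3 P1].
    return 1.2 * f_NL * (P_zeta(k1) * P_zeta(k2)
                         + P_zeta(k2) * P_zeta(k3)
                         + P_zeta(k3) * P_zeta(k1))

print(B_local(0.002, 0.05, 0.05))  # squeezed limit: k1 << k2 = k3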
III. SINGLE-FIELD INFLATION
We will now calculate the bispectrum for the scalar field perturbations on large scales in single-field models of inflation, and hence reproduce the bispectrum for the large-scale curvature perturbation, ζ [13].
A. Curvature perturbations from inflaton perturbations
During single field inflation, driven by the field ϕ, density perturbations become adiabatic in the large-scale limit at first [33] or second order [12]. An important consequence is that ζ is conserved and the uniform-density hypersurfaces coincide with uniform-field hypersurfaces on large scales. Thus, applying the gauge transformations of Section II with the time shift α₁ = −δ₁ϕ/ϕ̇₀ that takes spatially flat (δN = 0) hypersurfaces to uniform-field hypersurfaces, we have

ζ₁ = −(H/ϕ̇₀) δ₁ϕ , (22)
ζ₂ = −(H/ϕ̇₀) δ₂ϕ + ( Ḣ − H ϕ̈₀/ϕ̇₀ ) ( δ₁ϕ/ϕ̇₀ )² + 2 (H/ϕ̇₀²) δ₁ϕ δ₁ϕ̇ , (23)

where the field perturbations are evaluated on spatially flat hypersurfaces. These expansion perturbations on uniform-field hypersurfaces are closely related to the field perturbations on uniform-expansion hypersurfaces, which have the gauge-invariant definition

δ₁Q ≡ δ₁ϕ − (ϕ̇₀/H) δN₁ ,

together with its second-order counterpart δ₂Q, constructed in the same way from the second-order gauge transformation of Section II. At first-order δ₁Q is the usual Mukhanov-Sasaki variable [38,39] used to quantise the semi-classical linear perturbations during inflation, and δ₂Q is its natural extension to second-order [13,32].
We can re-write the metric perturbation ζ in Eqs. (22) and (23) somewhat more succinctly in terms of these gauge-invariant field perturbations (see also Ref. [40]). Introducing the slow-roll parameters, and using the fact that ζ̇₁ = 0 for adiabatic perturbations on large scales, we finally obtain the second-order expression. We can show that the non-linear adiabatic curvature perturbation on large scales during inflation can simply be written as the perturbed expansion history, N = ∫H dt, due to the gauge-invariant field perturbations [22,29]. This is a non-linear extension of the δN-formalism [27,28] for calculating perturbations from inflation. Hence to first and second order we obtain the expansion given below.
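In the single-field case the first- and second-order terms of this δN expansion take the standard form (our transcription, with N,φ ≡ ∂N/∂ϕ evaluated along the background trajectory):

\[
\zeta_1 = N_{,\varphi}\,\delta_1 Q , \qquad
\zeta_2 = N_{,\varphi}\,\delta_2 Q + N_{,\varphi\varphi}\,(\delta_1 Q)^2 , \qquad
N_{,\varphi} = \frac{H}{\dot{\varphi}} .
\]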
B. Non-Gaussianity of field perturbations
During slow-roll inflation the initial state of the scalar field perturbations can be set by imposing the Minkowski vacuum state on small scales (much smaller than the cosmological Hubble scale, k/a ≫ H) for the scalar field perturbations on spatially flat hypersurfaces, δN = 0. For any weakly coupled scalar field this gives an effectively Gaussian distribution of initial field fluctuations. For any light field (with effective mass less than the Hubble scale) living in a de Sitter spacetime, the Hubble damping leads to a squeezed state on scales larger than the Hubble length, and hence we can treat the field as a classical random field with P_ζ(k_*) = (H/2π)², where a ∗ subscript denotes Hubble crossing, k_* = aH. We will give a simple extension of the argument due to Maldacena [13,14] to calculate the amplitude of the second-order (non-Gaussian) contribution to the scalar field perturbation due to gravitational coupling at Hubble exit.
It will be convenient to write the first-order field perturbation at Hubble exit as proportional to ê_k, where ê_k is a classical Gaussian random variable with unit variance. However, at second order there is a local correction to the amplitude of vacuum fluctuations at Hubble exit due to first-order perturbations in the local Hubble rate, H̃. This is determined by the local scalar field value due to longer-wavelength modes that have already left the horizon, where k_c < k_* is a cut-off wave-number.
Thus for k_* ≫ k_c we have, up to second order, an expression from which we can identify the second-order scalar field perturbation at Hubble exit, where we have taken δ₁Q_{k_*−k′} = (H/2π) ê_{k_*−k′} for k_* ≫ k′ and a "•" represents a cut-off convolution, that is, a convolution-type integral with k_c ≪ k. This effect gives a non-zero bispectrum (17) for the scalar field perturbations. In the squeezed triangle limit, k₂ ≃ k₃ and k₁ ≪ k₂, we evaluate the correlation of the field perturbations at the point when the two smaller-scale modes cross the horizon (k₂ = aH) to leading order. Note that the second-order part of the perturbation on the larger scale, δ₂Q_{k₁}, will be uncorrelated with the modes on smaller scales. Substituting our expression (42) for the second-order field perturbation at horizon crossing into Eq. (44) and comparing with the definition of the bispectrum in Eq. (17) gives the squeezed-limit result. Note that this is not strictly of the local form given in Eq. (21), due to the asymmetry between the long wavelength, k₁, and the shorter wavelengths, k₂ and k₃. However, P(k) ≫ P(k_*) for k ≪ k_*, since 𝒫_ζ(k) ∝ k³P_ζ(k) is nearly scale-invariant for spectra generated during slow-roll inflation, and hence this difference is not significant in the squeezed triangle limit. One can verify that this coincides, in the limit k₁ ≪ k₂, k₃, with the bispectrum for scalar field perturbations given by Seery and Lidsey in Ref. [16] for the case of a scale-invariant spectrum.
C. Non-Gaussianity of curvature perturbations
The power spectrum of the curvature perturbation due to inflaton fluctuations during single-field inflation is given by the first-order relation (26). Substituting this into Eq. (18) gives the curvature power spectrum. The power spectrum of scalar field perturbations at Hubble exit is given by Eq. (37), and thereafter the non-linear curvature perturbation remains constant on super-Hubble scales for adiabatic perturbations, even after slow-roll inflation ends [29]. Thus for super-Hubble modes with k ≪ aH we can write the power spectrum in terms of quantities at Hubble exit. Slow roll gives an almost scale-invariant power spectrum of curvature perturbations on super-Hubble scales, whose tilt is given by Eq. (50). We can write the bispectrum (17) for the curvature perturbation in terms of a local transformation between second-order gauge-invariant variables, Eq. (31), and the intrinsic bispectrum of the scalar field perturbations. Substituting Eqs. (26) and (31) into Eq. (19), we see that if the field perturbations on spatially flat hypersurfaces were strictly Gaussian at second order, B_Q = 0, then the second-order curvature perturbation ζ₂ would be of the "local" form defined in Eq. (20) with f_NL = 5(η − 2ε)/6. In the squeezed triangle limit, k ≪ k_*, the scalar field bispectrum at Hubble exit is given by (47), and thus, using P_ζ(k) ≫ P_ζ(k_*) for a nearly scale-invariant spectrum and comparing with Eq. (21) in this squeezed triangle limit, we can identify [13] f_NL = (5/12)(−6ε + 2η).
As noted by several authors [14,17] this gives a consistency relation between the primordial non-Gaussianity and the tilt of the power spectrum (50) which does not rely on slow-roll during inflation, but does rely on the adiabaticity of the perturbations on super-Hubble scales and thus the existence of a conserved curvature perturbation after Hubble-exit.
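As a quick numerical illustration of this consistency relation (a minimal sketch; the slow-roll values are invented, and the overall sign of f_NL depends on the convention adopted for ζ):

```python
# Illustrative slow-roll parameters (not taken from any specific model)
eps, eta = 0.01, 0.015

n_s_minus_1 = -6*eps + 2*eta            # tilt of the curvature power spectrum
f_NL = (5.0/12.0)*(-6*eps + 2*eta)      # squeezed-limit non-linearity parameter

# The consistency relation ties f_NL to the tilt alone
assert abs(f_NL - (5.0/12.0)*n_s_minus_1) < 1e-15
print(n_s_minus_1, f_NL)                # -0.03 -0.0125
```

Both quantities are first order in the slow-roll parameters, which is why the predicted non-Gaussianity from single-field slow-roll inflation is unobservably small.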
IV. MULTI-FIELD INFLATION
This discussion of the non-Gaussianity of primordial curvature perturbations from inflation can be readily extended to the multi-field case [22] by generalising Eq. (32), where we identify ζ with the perturbed expansion of a uniform-density hypersurface with respect to an initial uniform-expansion hypersurface during inflation, and we define N,_i ≡ dN/dϕ_i. Hence we obtain ζ to first and second order. The gauge-invariant definition of field perturbations on uniform-expansion hypersurfaces, δQ, was given in Eqs. (24) and (25).
It is important to realise in the multi-field case that the primordial curvature perturbation is evaluated on a uniform-density hypersurface at some primordial epoch after inflation has ended. Thus the integrated expansion to this uniform-density hypersurface as a function of the local initial field values, N(ϕ_i), must in general be calculated not only during inflation but throughout the subsequent cosmological history, including reheating. Only in the case of adiabatic perturbations on super-Hubble scales, where the perturbed evolution follows the background trajectory in phase space [31,33], can we equate δN with the perturbed expansion at Hubble exit, independently of the subsequent expansion history, as was done in Eqs. (32)-(36) for the single-field case. Nonetheless, with this caveat we can formally write down the primordial perturbation in terms of the integrated expansion N and its dependence on the different fields.
Thus substituting Eqs. (56) and (57) into the definitions of the power spectrum (18) and bispectrum (19), we obtain expressions involving the multi-field cross-correlation and bispectrum at leading order. Equation (59) allows for an arbitrary bispectrum, B_Q^{i,j,m}, for the field perturbations on spatially flat hypersurfaces, and generalises the result of Lyth and Rodriguez [22] to allow for correlations between the fields on super-Hubble scales. In principle it does not rely on slow roll, only on the assumption that the subsequent expansion can be expressed as a function solely of the local field values. Nonetheless, in the rest of this paper we will restrict our analysis to slow roll in order to calculate the bispectrum of the field perturbations.
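A minimal numerical sketch of how expressions of this kind are used in practice, assuming uncorrelated Gaussian field perturbations with a common spectrum P_* = (H_*/2π)² (so that cross-correlations and the intrinsic bispectrum B_Q are dropped, reducing the result to the Lyth-Rodriguez form; all δN derivatives below are invented illustrative values):

```python
import numpy as np

# Hypothetical delta-N derivatives for a two-field model (illustrative only)
N_i  = np.array([5.0, 15.0])                     # N_,i  = dN/dphi_i
N_ij = np.array([[1.0, 0.0],
                 [0.0, 2.5]])                    # N_,ij = d^2N/dphi_i dphi_j

P_star = (1.0e-5)**2                             # stand-in for (H_*/2pi)^2

P_zeta = (N_i @ N_i) * P_star                    # P_zeta = sum_i N_,i^2 P_*
f_NL = (5.0/6.0)*(N_i @ N_ij @ N_i)/(N_i @ N_i)**2
print(P_zeta, f_NL)                              # amplitude and local non-linearity
```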
Following Ref. [33] we can use the unperturbed background solution, ϕ i (t), to define the instantaneous adiabatic or "inflaton" perturbation lying along the background trajectory in multi-field inflation.
All orthogonal directions in field-space describe instantaneous isocurvature or entropy field perturbations [2,33] which we denote generically by δχ.
In the slow-roll approximation the background trajectory is given by the gradient of the potential, or equivalently of the Hubble rate. Thus the bispectrum (66) is non-zero only for long-wavelength field perturbations δQ_i that are correlated with the instantaneous inflaton direction, σ, in a form analogous to the single-field case (47), with the slow-roll parameter (29) evaluated along the inflaton direction. Isocurvature field perturbations that remain decoupled from, and hence uncorrelated with, the inflaton perturbation (i.e., C_χσ(k) = 0) retain a purely Gaussian distribution with B_χ(k, k_*) = 0, unless we consider non-gravitational interactions, which are neglected here in our slow-roll approximation.
B. Non-Gaussianity of curvature perturbation in slow-roll
Finally, we can write the bispectrum (59) for the curvature perturbation in multi-field slow-roll inflation in the squeezed triangle limit, k ≪ k_*. One can verify that this reduces to Eq. (52) in the single-field case, where N,_σ and N,_σσ are given by Eqs. (35) and (36). This single-field result for the non-Gaussianity still holds in a multi-field setting so long as the expansion history is independent of all the other fields. An alternative limit is the case where the expansion history is dominated by an isocurvature field, such that |N,_χ| ≫ |N,_σ|. This would apply, for instance, to the curvaton scenario [43]. In this case there is no non-Gaussianity due to gravitational coupling, B_χ = 0, for an isocurvature field that remains decoupled from the inflaton, so that C_χσ = 0. Hence the non-Gaussianity of the primordial curvature perturbation (59) is purely of the local form (21), and we can identify [22] f_NL = −(5/6) N,_χχ/N,_χ². Even in the general case it can be shown that the contribution of the intrinsic non-Gaussianity of the field perturbations to the bispectrum of the curvature perturbations must be small in the slow-roll approximation [24].
V. CONCLUSIONS
We have shown how the non-Gaussianity of primordial curvature perturbations is described using gauge-invariant cosmological perturbations at second order. The split between the zeroth-, first- and second-order parts of inhomogeneous fields is arbitrary; however, if we expand about an FRW cosmology, the zeroth-order part of cosmological fields can be taken to be spatially homogeneous. In this paper we have further assumed that the first-order part can be taken to be a Gaussian random field, and that the second-order part describes the non-linear evolution of inhomogeneities, and hence the non-Gaussian part of the distribution.
The arbitrariness of the choice of coordinates leaves a residual gauge freedom. Nonetheless, one can construct gauge-invariant quantities, at any order, by specifying an unambiguous physical quantity [32]. In particular we have given a manifestly gauge-invariant definition of ζ, the perturbed expansion on uniform-density hypersurfaces on large scales, used by many authors to describe the primordial metric perturbation, as it remains constant, at any order, for adiabatic perturbations [29,30,35,36]. This is related to the dimensionless density perturbation on spatially flat hypersurfaces; however, the non-linear relation between the two leads to a non-Gaussian metric perturbation for a purely Gaussian density perturbation, and vice versa.
In the case of single-field inflation, the field perturbations become adiabatic in the large-scale limit [12], and the primordial ζ long after inflation has ended can be equated with the perturbed expansion on uniform-field hypersurfaces at Hubble exit. We have shown how this is equivalent to the non-linear extension of the δN-formalism [22,27,28,30], which allows one to calculate the primordial perturbation in terms of δQ, the field perturbations on spatially flat hypersurfaces at Hubble exit, and the unperturbed FRW solution for the integrated expansion, N(ϕ). This formalism is especially compact for multiple fields, where the primordial metric perturbation (say, during primordial nucleosynthesis) may be completely different from that at Hubble exit during inflation. On the other hand, one should bear in mind that N(ϕ_i) is a non-linear function of several scalar fields which must be determined by solving the background field equations for the integrated expansion up to some fixed primordial density as a function of the field values during inflation. Nonetheless, Lyth and Rodriguez [22] have recently shown how powerful this method is for describing the non-Gaussianity of the primordial perturbation due to non-linear evolution on super-Hubble scales.
In single-or multi-field inflation one can give the leading-order expression for the bispectrum of the primordial metric perturbation in terms of the power spectrum and the intrinsic bispectrum of the fields at Hubble-exit during inflation. We have used a simple argument to estimate the intrinsic bispectrum of the field perturbations at Hubble exit due to gravity in the limit of degenerate triangles during slow-roll inflation with single or multiple fields, and hence give an expression for the non-linearity of the primordial density perturbations from slow-roll inflation in this limit. In line with previous results we find that the non-Gaussianity of field perturbations is suppressed during slow-roll, and we have shown that it vanishes at leading order for isocurvature perturbations during inflation. Our final result in Eq. (71) generalises earlier work to allow for non-scale invariance of the scalar field power spectra and correlations between the fields that can develop on super-Hubble scales. | 2019-04-14T01:43:59.273Z | 2005-09-23T00:00:00.000 | {
"year": 2005,
"sha1": "650290f40b2eca819e2ccaa78b1e1fc58084d9a6",
"oa_license": null,
"oa_url": "https://pure.port.ac.uk/ws/files/127374/0509719.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "82fb978616f970a322ede84ce70f43e0fa75e05f",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
17891829 | pes2o/s2orc | v3-fos-license | Physical activity and sedentary behavior among children and adolescents living in an area affected by the 2011 Great East Japan earthquake and tsunami for 3 years
The purpose of this study is to examine the change in physical activity levels among children and adolescents living in the area affected by the 2011 earthquake and tsunami for 3 years immediately following the disaster. Children and adolescents in grades four to nine attending school in the Pacific coastal area of northern Japan were included in a total of four serial prevalence investigations: the first at 6 months after the earthquake/tsunami (I, n = 434) and additional surveys at 1 year (II, n = 437), 2 years (III, n = 401), and 3 years (IV, n = 365) after the earthquake. Students also underwent assessment of their accelerometer-determined daily steps and of their sedentary time using a self-administered questionnaire. Accelerometer-determined median daily steps of children and adolescents were significantly different (p < 0.05) on both weekdays and weekends over the 3 years. The median daily steps of children of both genders on weekdays and those of girls on weekends at period IV were significantly lower than those at period I. In addition, the median daily steps of adolescents on weekdays among girls and on weekends among boys at period IV were significantly lower than those at period I. It appears that children and adolescents who survived the earthquake and tsunami experienced a decrease in physical activity levels. Future research should elucidate longitudinal demographic and sociocultural factors that contribute to changes in physical activity levels among children and adolescents living in the areas affected by these disasters.
Introduction
The 2011 Great East Japan Earthquake caused serious damage. Furthermore, the prefectures of Miyagi, Fukushima, and Iwate along the Pacific coast suffered devastating damage as a result of the tsunami caused by the earthquake. In association with the disaster, the daily life of many residents of these areas has been altered.
A few previous studies have reported the health status of survivors after a disaster. For example, Wu et al. (2006) showed that quality of life (QOL) scores of Chi-Chi earthquake survivors aged 16 years or older were still decreased 3 years after the event. Another study (Ardalan et al., 2010) showed that QOL scores of elderly survivors were also decreased 5 years after an earthquake disaster.
Physical activity is an important factor affecting health indicators. Worldwide, public health guidelines recommend a physically active lifestyle for children and adolescents for overall health benefits (World Health Organization, 2010). QOL is one of the health indicators, and a study (Chen et al., 2005) has indicated that a consistently positive association exists between physical activity level and health-related QOL among children. Furthermore, sedentary behavior, such as sitting, is associated with health consequences and outcomes independent from the health benefits of physical activity (Tremblay et al., 2011;Owen et al., 2010).
Although physical activity is recommended for health, little is known about physical activity levels among victims of the 2011 earthquake and tsunami. In this study, we described changes in physical activity among children and adolescents living in the area affected by the 2011 earthquake and tsunami during the 3 years after the disaster.
Methods
Participants were elementary school students in the 4th, 5th, and 6th grades and junior high school students in the 7th, 8th, and 9th grades. The four surveys of this serial prevalence study were conducted at about 6 months (I, n = 434), 1 year (II, n = 437), 2 years (III, n = 401), and 3 years (IV, n = 365) after the earthquake in Onagawa, Miyagi prefecture, in the Pacific coastal area of northern Japan. Ethics approval for the present study was obtained from the Tohoku Gakuin University Ethics Committee, the Human Informatics postgraduate course, and the Catholic Education Commission. All participating students and their parents/legal guardians provided written informed consent.
All students were asked to report their gender, age, grade, height, and weight. BMI was calculated from their height and weight, and students were categorized as overweight, obese, or within the normal range according to international guidelines (Cole et al., 2000). In addition, total minutes per day spent in sedentary behavior were assessed by a self-reported questionnaire. The students were asked to reply to the question, "During the last week, how much time did you spend sitting on a weekday and weekend? Sitting includes watching TV, playing games, having a chat, reading, and listening to music at both school and home." This question was phrased according to the international physical activity questionnaire used in a previous study. In addition, physical activity questions from the WHO health behavior in schoolchildren (HBSC) survey, which has acceptable reliability and validity among children and adolescents, were also included in our questionnaire (Booth et al., 2001).
Accelerometer-determined steps per day were evaluated using a triaxial accelerometer device (Activity Style Pro HJA-350IT; Omron Healthcare, Kyoto, Japan) consisting of Micro Electro Mechanical Systems-based accelerometers (LIS3LV02DQ; STMicroelectronics, Geneva, Switzerland) (Hikihara et al., 2014; Ohkawara et al., 2011). The accelerometers were initialized to record steps in 10-s intervals. Students were asked to wear the device for 7 or more consecutive days, including at least 2 weekend days, removing it only for water activities and before going to bed at night (Rowlands, 2007). The students were shown how to attach the accelerometer to their waists during the training session. At least 3 days with more than 10 h of daily data collection, including 2 weekdays and 1 weekend day, were required for inclusion in the analysis. Data recorded on the first and last days were removed owing to possible reactivity and incompleteness on these days.
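A minimal sketch of these inclusion criteria in code (the column names, wear-time values, and the summary statistic are our own illustrative choices, not taken from the study):

```python
import pandas as pd

# Hypothetical per-day wear summary for one participant
days = pd.DataFrame({
    "date": pd.date_range("2012-03-05", periods=9),     # Mon .. next Tue
    "wear_hours": [4, 12, 11, 12, 13, 10.5, 11, 12, 3],
    "steps": [2000, 9500, 8800, 10200, 9900, 7600, 8100, 9400, 1500],
})
days["is_weekend"] = days["date"].dt.dayofweek >= 5

trimmed = days.iloc[1:-1]                    # drop first and last days
valid = trimmed[trimmed["wear_hours"] > 10]  # require >10 h of daily wear

included = (len(valid) >= 3
            and (~valid["is_weekend"]).sum() >= 2   # at least 2 weekdays
            and valid["is_weekend"].sum() >= 1)     # at least 1 weekend day
if included:
    print(valid["steps"].median())           # per-participant daily-step summary
```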
All analyses were conducted using SPSS version 22.0 for Windows (Predictive Analytics Software, SPSS Inc., Chicago, IL, USA). The chi-square test was used to assess differences in gender, grade, and BMI categories (Cole et al., 2000). The total minutes per day spent sitting and the number of steps per day were analyzed using the non-parametric Kruskal-Wallis test. Post hoc pairwise comparisons between periods were based on Mann-Whitney U tests with a Bonferroni-corrected significance level. A p value < .05 was considered statistically significant. Table 1 shows the characteristics and physical activity levels among elementary school students. There were significant differences (p < 0.05) among children in the median time spent sitting on weekdays (Girls: χ² = 12.3, df = 3.0) alone and in accelerometer-determined median daily steps on both weekdays (Boys: χ² = 12.4, df = 3.0; Girls: χ² = 27.5, df = 3.0) and weekends (Boys: χ² = 8.0, df = 3.0; Girls: χ² = 30.0, df = 3.0). The median daily steps declined significantly for both genders on weekdays and for girls on weekends at period IV as compared with the median daily steps at period I (p < 0.05), whereas the time spent sitting decreased on weekdays. A significant difference was seen in the daily steps of the boys on weekends, although there were no differences among periods in the post hoc test. Table 2 shows the characteristics and physical activity levels among junior high school students. There were significant differences (p < 0.05) among adolescents in the time spent sitting on weekdays (Boys: χ² = 16.8, df = 3.0; Girls: χ² = 11.8, df = 3.0) and in accelerometer-determined median daily steps on both weekdays (Girls: χ² = 17.9, df = 3.0) and weekends (Boys: χ² = 15.6, df = 3.0). The median daily steps declined significantly among girls on weekdays and among boys on weekends at period IV as compared with the median daily steps at period I (p < 0.05), whereas for girls, the time spent sitting on weekdays decreased.
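The same analysis chain can be reproduced outside SPSS; a sketch with SciPy, using invented step-count samples for the four survey periods:

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical daily-step samples for survey periods I-IV
steps = {
    "I":   [11200, 9800, 12100, 10500, 9900],
    "II":  [10100, 9400, 11000, 9700, 10800],
    "III": [9300, 8800, 10200, 9100, 9600],
    "IV":  [8200, 7900, 9400, 8600, 8800],
}

h, p = kruskal(*steps.values())              # omnibus non-parametric test
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

if p < 0.05:                                 # Bonferroni-corrected post hoc tests
    pairs = list(combinations(steps, 2))
    alpha = 0.05 / len(pairs)                # 6 comparisons -> 0.0083
    for a, b in pairs:
        _, p_ab = mannwhitneyu(steps[a], steps[b], alternative="two-sided")
        print(a, "vs", b, f"p = {p_ab:.4f}", "sig" if p_ab < alpha else "n.s.")
```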
Discussion
The present study showed that accelerometer-determined steps per day among children living in an area affected by the earthquake and tsunami decreased significantly on both weekdays and weekends, whereas sedentary time decreased only among girls on weekdays over the 3 years of the study. Similarly, the daily steps of adolescents decreased, whereas the sedentary behavior of both genders on weekdays decreased over the 3 years. This decrease in daily steps may result from the neighborhood environments in which the participants reside. Ding et al. (2011) reported positive associations between neighborhood environment and physical activity among youths. The neighborhood environment in the coastal area catastrophically damaged by the tsunami continues to undergo repair, which could make the neighborhood less favorable for physical activities. In addition, some students may have had poor neighborhood environments due to living in temporary dwellings. The schools survived the tsunami because the buildings were located on a hill. Therefore, given the significant decrease in their daily steps on weekdays, it appears important to ensure that children and adolescent girls perform physical activity at an appropriate time in school.
To meet current physical activity guidelines (Ganley et al., 2011;World Health Organization, 2010) for children and adolescents to accumulate a minimum of 60 min of moderate-to-vigorous physical activity, Tudor-Locke et al. (2011) recommended 13,000; 11,000; and 10,000 steps per day for boys (aged 6-11 years), girls (aged 6-11 years), and adolescents of both genders (aged 12-19 years), respectively. Another study (Adams et al., 2013) recommended 11,500 accelerometer-determined daily steps for youths and both genders. At 3 years after the earthquake, the median steps of the children and adolescents in this study were lower than these guidelines.
There are few studies on the daily steps of Japanese children and adolescents during the pre-disaster period. For example, one cross-sectional study (Sasayama et al., 2009) reported accelerometer-determined mean daily steps of Japanese children aged 9-10 years (n = 288) on both weekdays (Boys: 18,333 ± 3869 steps; Girls: 13,957 ± 2970 steps) and weekends (Boys: 11,932 ± 4827; Girls: 9767 ± 4542). Another study (Sasayama and Adachi, 2011) reported the mean daily steps of Japanese adolescents aged 12-15 years (n = 314) on both weekdays (Boys: 13,772 ± 4764; Girls: 11,209 ± 2636) and weekends (Boys: 8311 ± 4743; Girls: 7159 ± 3338). In comparison with these previous studies, the mean daily steps of children and adolescents living in an area affected by the disaster appear to be lower. In particular, the mean daily steps of children of both genders on weekdays at period IV in the present study were approximately 6500 steps lower than those of the previous study.
In this study, the data were collected from a serial prevalence study, so the cause-and-effect relationship underlying changes in physical activity is not clear. Future research should elucidate longitudinal demographic and sociocultural factors that contribute to changes in physical activity and sedentary behavior among children and adolescents living in the damaged coastal area. Furthermore, time spent sitting was determined based on the responses to an item in the self-administered questionnaire, and it may be somewhat difficult for children and adolescents to recall their sedentary time. However, even under the methodological constraints imposed by an earthquake disaster, physical activity needs to be evaluated for health promotion, and reporting sedentary time from a self-administered questionnaire allows comparison with data from current or future disasters. | 2016-06-02T04:50:12.473Z | 2015-08-14T00:00:00.000 | {
"year": 2015,
"sha1": "8458ce84309a2246c4aad88357d2d3ed324c8464",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.pmedr.2015.08.010",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8458ce84309a2246c4aad88357d2d3ed324c8464",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
21122160 | pes2o/s2orc | v3-fos-license | A large eddy lattice Boltzmann simulation of magnetohydrodynamic turbulence
Large eddy simulations (LES) of a lattice Boltzmann magnetohydrodynamic (LB-MHD) model are performed for the unstable magnetized Kelvin-Helmholtz jet instability. This algorithm is an extension of Ansumali et al. (2004) to MHD, in which one first performs an expansion in the filter width on the kinetic equations, followed by the usual low-Knudsen-number expansion. These two perturbation operations do not commute. Closure is achieved by invoking the physical constraint that subgrid effects occur at transport time scales. The simulations are in very good agreement with direct numerical simulations.
Introduction
Recently [2] we derived a first-principles two-dimensional (2D) magnetohydrodynamic (MHD) large eddy simulation (LES) model based on first filtering the lattice Boltzmann (LB) representation of MHD [3-10], after which one applies the Chapman-Enskog limits to recover the final LES-MHD fluid equations. In essence, we extended to MHD the 2D Navier-Stokes (NS) LES-LB model of Ansumali et al. [1], who exploited the non-commutativity of these two operations. (Of course, if one first applied the Chapman-Enskog limit to LB and then filtered, one would land in the conventional quagmire of an LES closure problem.) A technical difficulty with the Ansumali et al. model is that in 2D NS there is an inverse energy cascade to large spatial scales, thereby rendering subgrid modeling non-essential. In 2D MHD, however, the energy cascades to small spatial scales as in 3D, and so it is attractive to perform LES-LB-MHD simulations in which there can be a substantial amount of excited subgrid modes. Here we present some preliminary LES-LB-MHD simulations of our model and compare the results with direct numerical simulations (DNS). As Ansumali et al. [1] did not perform any simulations of their LES-LB-NS model, these are the first such LES-LB-MHD simulations in which one first filters the underlying LB representation and then takes the conventional small-Knudsen-number expansion.
The backbone of any LES [2,11-29] is the introduction of a spatial filter function to smooth out field fluctuations on the order of the filter width Δ; one then works with filtered fields such as the mean velocity ū. In general, the filtering results in the standard closure problem. Previous LB-LES-NS modeling [27,28] first performed the Chapman-Enskog expansion followed by filtering, and thus concentrated on the Smagorinsky closure for the subgrid stresses. It has been pointed out [29] that in the conventional NS-LES closure, the subgrid stresses are assumed to be in equilibrium with the filtered strain. However, in LB-LES-NS the stresses relax towards the filtered strain at a rate dictated by the current eddy viscosity, thereby permitting some spatio-temporal memory effects that are absent when applying LES directly to the continuum equations. In essence this gives an edge to any LB approach.
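For reference, the convolution filter implicit in this construction can be written (our notation; G_Δ is any normalised kernel of width Δ, e.g. a top-hat or a Gaussian):

\[
\bar{u}_\alpha(\mathbf{x}) \;=\; \int G_\Delta(\mathbf{x}-\mathbf{x}')\,u_\alpha(\mathbf{x}')\,d\mathbf{x}' ,
\qquad \int G_\Delta(\boldsymbol{\xi})\,d\boldsymbol{\xi} = 1 .
\]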
In the Ansumali approach, however, one first performs a perturbation expansion in the filter width Δ followed by the standard LB Chapman-Enskog expansion in the Knudsen number Kn. These two perturbation expansions do not commute. Closure is now achieved by making the physically plausible assumption that eddy transport effects occur at the transport time scale, and this results in the scaling Δ ∼ O(Kn^{1/2}). One still retains the LB effects of spatio-temporal memory noted earlier [27,28].
LES-LB-MHD Model
For completeness we briefly review the essentials of our LES-LB-MHD model [2], which yields a closed set of filtered MHD equations (without further approximations) for the filtered density (ρ), momentum (ρu) and magnetic field (B) in multiple-relaxation-time (MRT) form, where s₃ ... s₈ are relaxation rates; in this isothermal model the pressure is directly related to the density, p = ρc_s² = ρ/3, in lattice units (c_s is the sound speed). The transport coefficients (shear viscosity ν, bulk viscosity ξ and resistivity η) are determined from the LB-MRT relaxation rates for the particle distribution function (the s_ij) and from the single magnetic-distribution-function relaxation rate s_m. We now summarize our computational LB-LES-MHD model that underlies Eqs. (2). For 2D MHD, we consider an LB model on a 9-bit lattice. Here the summation convention is employed on the vector nature of the fields (Greek indices), while for Roman indices, which label the lattice vectors of the kinetic velocities c_i, there is no implied summation. The lattice is just the axes and diagonals of a square (along with the rest particle i = 0). The s_ij form the MRT collisional relaxation rate tensor for the f_i, while the SRT rate s_m is the collisional relaxation rate for the g_i. These kinetic relaxation rates determine the MHD viscosity and resistivity transport coefficients. (Of course, more sophisticated LB models can be formed by applying MRT to the g_k equations, but for this first reported LB-LES-MHD simulation we restrict ourselves to the simpler SRT model.) A convenient choice of the relaxation distribution functions will, under Chapman-Enskog, yield the MHD equations, where the w_i are appropriate lattice weights. In the operator-splitting collide-stream solution method, it is most convenient to perform the collision step in moment space (because of the collisional invariants: the zeroth and first moments of f_i, and the zeroth moment of g_i), while the streaming is optimally done in distribution space, with the 1-1 constant transformation matrices T and T_m relating the two. In terms of conserved moments, we can write the equilibrium moments M^(eq) and N^(eq).
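To make the collide-stream structure concrete, here is a minimal hydrodynamic D2Q9 SRT sketch (our own toy code, not the paper's MRT-MHD scheme; the MHD model additionally evolves a vector distribution g_i for B, and its filtered version acquires the Δ²-corrections discussed in the next section):

```python
import numpy as np

# D2Q9: rest particle plus the axes and diagonals of a square
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)      # lattice weights, c_s^2 = 1/3

def equilibrium(rho, u):
    """Standard isothermal D2Q9 equilibrium distribution."""
    cu = np.einsum('ia,xya->ixy', c, u)        # c_i . u at each node
    usq = np.einsum('xya,xya->xy', u, u)
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def collide_stream(f, tau):
    """One SRT (BGK) step: collide in place, then stream periodically."""
    rho = f.sum(axis=0)
    u = np.einsum('ia,ixy->xya', c, f)/rho[..., None]
    f += -(f - equilibrium(rho, u))/tau        # single-relaxation-time collision
    for i in range(9):
        f[i] = np.roll(f[i], shift=tuple(c[i]), axis=(0, 1))
    return f
```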
Filtering LB
In applying filtering to the LB Eqs. (6) and (7), only the nonlinear terms in the relaxation distributions, Eqs. (8) and (9), require further attention. On applying perturbations in the filter width Δ, we immediately obtain expansions such as (17) for arbitrary fields X, Y, and Z. Moreover, since collisions are performed in moment space, we first need to transform from f^(eq), g^(eq) to M^(eq), N^(eq) and then apply filtering in terms of the filtered fields, where the O(Δ²) term arises from the nonlinearities; this holds in particular for the M₃^(eq) term, and similarly for the other filtered equilibrium moments.
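The O(Δ²) statement quoted here is the familiar gradient (Taylor) expansion; for a symmetric kernel with second moment Δ²/12 (e.g. a top-hat filter) one has, for any smooth fields X and Y,

\[
\overline{XY} \;=\; \bar{X}\,\bar{Y} \;+\; \frac{\Delta^2}{12}\,\partial_k\bar{X}\,\partial_k\bar{Y} \;+\; \mathcal{O}(\Delta^4),
\]

so every quadratic term in the equilibrium moments acquires a Δ²-correction built from gradients of the filtered fields.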
LES-LB-MHD Simulation
The filtered LB equations are now solved, with streaming performed in distribution space and collisions in moment space. As this is the first simulation of the LB-filtered-LES approach, we have made a significant number of simplifications. We first restrict the evolution of the filtered scalar distribution function to an SRT collision operator. In this case the relaxation rates s_i are all equal, so that the third term in Eq. (2b) is automatically zero. Moreover, since nearly all LB simulations are quasi-incompressible at the fluid level, we neglect (filtered) density gradients in the moment representation of the collision operator; the filtered moment M̄₃^(eq), for example, is approximated accordingly. Also, since the last term in Eq. (2c) depends on the filtered density gradient, its effects at the filtered MHD level will not be significant when we code the filtered LB system. It should be noted that, as in regular LB-MHD, the filtered ∇ · B = 0 is maintained to machine accuracy.
There is a subtlety in that not all of the spatial derivatives in the filtered collision moments can be determined from local perturbed moments [4,21]. This limitation is thought to arise from the low-order D2Q9 lattice. It is expected that on a D3Q27 lattice the linearly independent set of derivatives can be represented by the now larger number of local perturbed moments.
While we solve the filtered LB equations, resulting in the filtered LES MHD Eqs. (2), there is some similarity between our final MHD model and the "tensor diffusivity" model of Müller and Carati [12]. However, it must be stressed that we perform a first-principles derivation of the eddy transport coefficients from a kinetic (LB) model, while Müller and Carati propose an ad hoc scheme that minimizes the error between two filters at each time step in the determination of their model's transport coefficients.
We now evolve our filtered LB equations in time and consider the magnetized Kelvin-Helmholtz instability in a sufficiently weak magnetic field that the 2D velocity jet is not stabilized [30]. The initial jet velocity profile is U_y = U₀ sech²[(2π/L) 4x]. The corresponding vorticity is shown in Fig. 1. The initial Reynolds number is chosen to be Re = U₀L/ν = 50,000 = const., with U₀ = 4.88 × 10⁻² and B₀ = 0.005U₀. The viscosity and resistivity on a grid of 1024² are ν = η = 10⁻³ and scale with the grid to maintain a constant Re and a constant magnetic Reynolds number U₀L/η. The initial perturbations to the fields are: U_y = 0.01U₀ sin[(2π/L) 4x], B_y = 0.01B₀ sin[(2π/L) 4x], U_x = 0.01U₀ sin[(2π/L) 4y], and B_x = 0.01B₀ sin[(2π/L) 4y]. Note that initially ∇ · B = 0 = ∇ · U. In Fig. 2 we compare the evolution of vorticity in time from DNS on a 2048² grid with that determined from our LES-LB-MHD model on a 1024² grid. The DNS simulations are obtained by solving the direct unfiltered LB Eqs. (6) and (7). For constant-Reynolds-number simulations at different grid sizes, the kinematic viscosity is adjusted appropriately. Thus on halving the spatial grid, a DNS time step of 2t₀ corresponds to time step t₀ in LES-LB-MHD.
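A sketch of these initial conditions on a periodic grid (the normalisations are ours; an LB code would then initialise f_i and g_i from these fields via the equilibria):

```python
import numpy as np

L, N = 1.0, 1024
x = np.linspace(0.0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')

U0 = 4.88e-2
B0 = 0.005*U0
k4 = (2*np.pi/L)*4                       # wavenumber of the seed perturbation

Uy = U0/np.cosh(k4*X)**2 + 0.01*U0*np.sin(k4*X)   # jet profile plus 1% seed
Ux = 0.01*U0*np.sin(k4*Y)
By = 0.01*B0*np.sin(k4*X)
Bx = 0.01*B0*np.sin(k4*Y)
# div U = div B = 0 holds since Ux, Bx depend only on y and Uy, By only on x
```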
At relatively early times the jet profile widens slightly, while within the vorticity layers the Kelvin-Helmholtz instability breaks these layers into the familiar vortex street (Fig. 2). Since we have chosen a weak magnetic field, insufficient to stabilize the jet, the vortex streets break apart through like-signed vortex-vortex reconnection (Fig. 2). There is very good agreement between DNS and LES-LB-MHD with filter width Δ = 2 (in lattice units) on a grid of L/2.
Finally, we show the corresponding vorticity (Fig. 3), total energy spectrum (Fig. 4), and current, with the subgrid contributions now influencing larger scales with some accuracy. The spectral plots (Fig. 4) are somewhat similar in all simulations, with a very localized Kolmogorov energy spectrum; presumably this is because the turbulence is limited and relatively weak. There appears to be good agreement in both the vorticity and the current between DNS and LES-LB-MHD with Δ = 2 on half the grid.
Conclusion
Here we have presented some preliminary 2D filtered SRT LB-MHD simulation results based on an extension of the ideas of Ansumali et al. [1], which leads to a self-consistent LES-LB closure scheme based solely on expansions in the filter width Δ and on the constraint that any eddy transport effects can only occur on the transport time scales. We find very good agreement between DNS and our LES-LB-MHD models. This warrants further investigation of other filters used in LES, as well as of the dynamic subgridding commonly used in LES of Navier-Stokes turbulence. Finally, an exploration of the effects of MRT on this LES algorithm should be quite interesting, as a somewhat unexpected term related to the gradient of a pressure appears in the subgrid viscosity. This term reveals that higher-order moments (not stress related) can have a first-order effect on the subgrid viscosity when MRT is employed. Given that this subgrid pressure term relies on the existence of higher-order moments, it suggests that the extra parameters in lattice Boltzmann (i.e., the distribution velocities/moments) introduce new physics naturally absent from LES in computational fluid dynamics. It would be very interesting to see whether this new term enhances the LES accuracy or increases stability at even higher Reynolds number flows. Further study could examine how this term affects other, well-established LES approaches in computational fluid dynamics. These ideas are under consideration.
Acknowledgments
This work was partially supported by an AFOSR and NSF grant. The computations were performed on Department of Defense supercomputers. | 2017-12-02T15:54:28.709Z | 2017-05-27T00:00:00.000 | {
"year": 2017,
"sha1": "c05b38e4f8a143a2eb9675115873f9fdedcbef91",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1705.09807",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c05b38e4f8a143a2eb9675115873f9fdedcbef91",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
156055437 | pes2o/s2orc | v3-fos-license | Dipeptidyl peptidase-4 plays a pathogenic role in BSA-induced kidney injury in diabetic mice
Diabetic kidney disease (DKD) appears to carry a higher risk of declining kidney function than non-diabetic kidney disease with the same magnitude of albuminuria. The epithelial-mesenchymal transition (EMT) program of tubular epithelial cells (TECs) could be important for the production of extracellular matrix in the kidney. Caveolin-1 (CAV1), dipeptidyl peptidase-4 (DPP-4) and integrin β1 have been shown to be involved in the EMT program. Here, we found that the diabetic kidney is prone to albuminuria-induced TEC damage and that DPP-4 plays a vital role in such parenchymal damage in diabetic mice. Bovine serum albumin (BSA) injection induced severe TEC damage and altered the expression levels of DPP-4, integrin β1, CAV1, and EMT programs, including the relevant microRNAs, in type 1 diabetic CD-1 mice when compared with non-diabetic mice; teneligliptin (TENE) ameliorated these alterations. TENE suppressed the close proximity among DPP-4, integrin β1 and CAV1 in cultured HK-2 cells. These findings suggest that DPP-4 inhibition can be relevant for combating proteinuric DKD by targeting the EMT program induced by the crosstalk among DPP-4, integrin β1 and CAV1.
Results
The DPP-4 inhibitor teneligliptin suppressed the bovine serum albumin (BSA)-induced tubular damage and fibrosis. At sacrifice, we obtained the following 6 groups: control, control + BSA, control + BSA + teneligliptin (TENE), STZ, STZ + BSA, and STZ + BSA + TENE. The histopathological examination of the kidneys using the Masson's trichrome staining (MTS) and picrosirius red (SR) staining revealed that the BSA injection induced mild tubular atrophy (Fig. 1a-c,g-i,m-o,s) and interstitial fibrosis ( Fig. 1g-i,m-o,t) in the control mice as previously reported 24 . The treatment with the DPP-4 inhibitor TENE ameliorated these alterations (Fig. 1c,i,o,s,t). In this short interval experiment, the STZ-induced diabetic mice that did not receive the BSA injection displayed a minor phenotype (Fig. 1d,j,p,s,t). Compared to the diabetic mice without BSA injection, the BSA-injected diabetic mice showed remarkable tubular damage and interstitial fibrosis; TENE treatment ameliorated these fibrogenic alterations (Fig. 1e,f,k,l,q,r,s,t). We also analyzed heart and liver histopathology; in this short duration of experimental protocol, there were minor differences between all groups analyzed in hearrt and liver ( Supplementary Fig. 1). Compared to the control mice, the BSA-injected control mice displayed higher urine mouse-specific albumin excretion; TENE ameliorated the BSA-induced urine albumin levels (Fig. 1u). Although the diabetic mice displayed higher urine albumin excretion than the control mice, no significant difference was www.nature.com/scientificreports www.nature.com/scientificreports/ observed among all diabetic mice (Fig. 1u). No significant difference was observed in the urine BSA levels among all BSA-injected groups regardless of diabetes or TENE treatment ( Supplementary Fig. 2). No significant difference was observed in the blood pressure, body weight (BW) and blood glucose (BG) between the BSA-injected control and diabetic mice (Fig. 2a-c). The kidney weight did not significantly differ among the control groups (Fig. 2d). In the STZ-induced diabetic mice, the BSA injection increased the kidney/BW ratio and liver weight/ BW ratio. The TENE treatment decreased the kidney weight/BW ratio (Fig. 2d,e). Compared to the control mice, the heart weight was decreased trend in the diabetic groups (Fig. 2f). The biomarkers of liver and heart damage such as alanine aminotransferase (ALT), aspartate aminotransferase (AST) and N-terminal prohormone brain natriuretic peptide (NT-proBNP) were analyzed and found no significant differences between all groups ( Supplementary Fig. 3).
DPP-4 is involved in the mechanisms of the BSA-induced TGF-β/smad3 signaling pathway and EMT program of kidney fibrosis in diabetes.
To explore the pathological role of DPP-4 in kidney fibrosis in the BSA-injected mice and the molecular mechanisms underlying the renoprotective effect of the DPP-4 inhibitor, we performed a microarray analysis and compared the gene expression profiles in the kidneys from BSA-injected diabetic and control mice (Fig. 3a). According to the microarray analysis, the BSA injection into the diabetic mice led to an induction of DPP-4, integrin β1, the TGF-β/smad3 signaling pathway and the EMT program (induction of EMT-related genes and mesenchymal markers and suppression of epithelial markers). qPCR analysis was performed, and compared with the BSA-injected control mice, the BSA-injected diabetic mice exhibited an induction of the DPP-4, integrin β1 and TGF-β receptor 1 genes (Fig. 3b-d). These alterations were reversed by the TENE treatment. In the diabetic mice, the BSA injection increased the levels of EMT-associated genes, such as snai-1, snai-2, and Zeb-1, which encode snail, slug and zeb-1. These changes were reversed by the TENE treatment (Fig. 3e-g). In contrast, the expression of the proximal tubular marker protein AQP1 was decreased in the BSA-injected diabetic mice and increased by the TENE treatment (Fig. 3h). In the control mice, BSA injection increased fibronectin levels; TENE did not suppress the level of fibronectin in BSA-injected control mice. Diabetes alone did not increase fibronectin levels. BSA-injected diabetic mice exhibited elevated levels of fibronectin; TENE significantly suppressed fibronectin levels (Fig. 3i). Unexpectedly, the expression levels of the EMT markers snai-1 and snai-2 were increased in the BSA-injected control mice treated with TENE, whereas the mesenchymal markers, such as cadherin 11 and fibronectin 1, and the EMT marker ZEB-1 were unaltered (Fig. 3a,e-i). After analyzing the involvement of DPP-4 in the EMT program in the kidney, we became interested in CAV1, which is a scaffolding protein within caveolae plasma membranes. The expression pattern of CAV1 was similar to that of DPP-4 and integrin β1 (increased by the BSA injection and diabetes and decreased by the TENE treatment) (Fig. 3a,j). The gene expression of smad3 was increased in the diabetic group but not altered by either BSA injection or TENE treatment (Fig. 3k). Each group, n = 7, was analyzed with an unpaired two-tailed t-test; *P < 0.05, **P < 0.01; data are presented as mean ± s.e.m.
To analyze the molecular mechanism by which TENE suppressed the EMT program, we focused on miR-34 and miR-200, miRs relevant for the EMT program 25. Compared with the control mice, the BSA-injected diabetic mice exhibited lower expression of miR-34 and miR-200, while TENE restored these expression levels (Fig. 3l-p). Therefore, TENE suppressed the EMT program induced by the BSA injection in the diabetic kidney via the induction of anti-EMT miRs. Levels of the anti-fibrotic miR-29s 26 displayed a trend parallel to the tissue damage (Supplementary Fig. 4) 15,16,27. Western blot analysis revealed that smad3 phosphorylation and αSMA protein expression were induced by the BSA injection in both the control and diabetic groups; TENE treatment reversed these changes (Fig. 3q).
Similar to the observations in the microarray analysis, the immunohistochemistry analysis revealed that the DPP-4 and CAV1 expression levels were higher in the tubular epithelium of the BSA-injected diabetic mice than in that of the BSA-injected control mice (Fig. 4a,c,e,g,q,r, Supplementary Fig. 5). Compared with the BSA-injected control mice, the BSA-injected diabetic mice exhibited an induction of snail and suppression of AQP1 levels (Fig. 4i,k,m,o,s,t, Supplementary Fig. 5); TENE treatment restored these alterations (Fig. 4a-t, Supplementary Fig. 5). These alterations in the protein levels of DPP-4 and CAV1 in each group were confirmed by western blot analysis (Supplementary Fig. 6). The multiplex staining with E-cadherin, αSMA and CAV1 revealed that, compared with the BSA-injected control mice, the BSA-injected diabetic mice displayed E-cadherin, αSMA and CAV1 triple-positive cells in the kidney, suggesting that the EMT program is associated with the induction of CAV1. The TENE treatment inhibited the EMT programs and CAV1 levels (Fig. 5a-e). Furthermore, the damaged tubules exhibited a higher expression of DPP-4, integrin β1 and CAV1 (Fig. 5f,h-i), while the TENE treatment ameliorated these alterations (Fig. 5g,j). Thus, the EMT program induced in the damaged kidney tubular cells was associated with the crosstalk among DPP-4, integrin β1 and CAV1, and the TENE treatment suppressed the EMT program by inhibiting this crosstalk.

TGF-β treatment induced DPP-4-dependent interaction among DPP-4, integrin β1 and CAV1 in the epithelial cells.

To confirm the crosstalk among DPP-4, integrin β1 and CAV1, a Duolink In Situ proximity ligation assay was performed. Similar to our previous report in endothelial cells, TGF-β1 induced close proximity between DPP-4 and integrin β1, while TENE suppressed the TGF-β1-induced proximity in HK-2 cells (Fig. 6a-c). Furthermore, we found that CAV1 and either DPP-4 or integrin β1 displayed close proximity as a result of the TGF-β1 stimulation, while TENE inhibited the proximity of these molecules (Fig. 6d-i). The overexpression of CAV1 induced close proximity between DPP-4 and integrin β1; DPP-4 overexpression induced close proximity between integrin β1 and CAV1, while TENE suppressed both (Supplementary Fig. 7). In HK-2 cells, DPP-4 overexpression decreased E-cadherin, increased αSMA (indicating the induction of EMT) and increased Smad3 phosphorylation; SIS3, the selective inhibitor of TGF-β1-dependent smad3 phosphorylation, suppressed the EMT program (Fig. 6j). The DPP-4 overexpression-induced close proximity between integrin β1 and CAV1 was suppressed by SIS3 (Fig. 6k-n). An immunoprecipitation assay further revealed that TGF-β stimulation induced physical interaction among DPP-4, CAV1 and integrin β1 (Fig. 6o). Finally, we confirmed that neutralization of TGF-β decreased the physical interaction among DPP-4, integrin β1 and CAV1 induced by DPP-4 overexpression (Fig. 6p), supporting the significance of the TGF-β/smad3 signaling pathway in the crosstalk among these three molecules.
Discussion
Diabetic patients with macroalbuminuria have a poor kidney prognosis [28-30]. Therefore, establishing a novel therapeutic strategy for diabetic patients with advanced albuminuria or proteinuria appears to be highly significant in diabetes research. Our research group has focused on the endothelium and reported that DPP-4 plays fibrogenic roles by inducing EndMT, which is associated with the suppression of anti-fibrogenic miR crosstalk [31-33]. Furthermore, we reported that the interaction between DPP-4 and integrin β1 regulates TGF-β/smad3 signal transduction and induces EndMT 16. In this study, we focused on the proximal tubular epithelium, where the cells are exposed to diverse urine-derived molecules, including albumin. We found that (1) diabetic mice exhibited severe fibrosis after BSA injection when compared with BSA-injected control mice, associated with induction of the EMT program; (2) the TENE treatment ameliorated the proximal tubular damage and tubulointerstitial fibrosis induced by the BSA injection in the control and diabetic mice; (3) the TENE treatment suppressed the EMT program induced by the BSA injection in the diabetic mice by increasing anti-EMT miRs; and (4) the crosstalk among DPP-4, integrin β1 and CAV1 was TGF-β/smad3 signaling dependent. These data provide novel insights into the pathogenesis of DKD and the pathogenic role of DPP-4 in the progression of DKD.
In our study, the BSA-stimulated fibrogenic/EMT molecular inductions were rather prominent in the STZ-induced diabetic mice. This phenomenon is clinically relevant, since DKD with albuminuria is an independent risk factor for eGFR decline compared with non-diabetic CKDs with similar levels of albuminuria 2,29. Furthermore, a trend toward higher risk of ESRD onset with increasing urine albumin levels has been shown in a meta-analysis of large populations 34. The particular molecular mechanisms behind the differences observed in this study are unclear; the diabetes-preconditioned kidney tubular cells could be prone to BSA-induced protein-overload tubular damage. The diabetic mice exhibited a significant induction of TGF-βR1 and suppression of miR-29s. BSA-induced tubular damage has been shown to be associated with the induction of TGF-β1 35. Therefore, the higher levels of TGF-βRs in the diabetic kidney precondition it for TGF-β1-induced fibrogenic signaling and the subsequent stimulation of smad3 phosphorylation. miR-29 has been shown to play renoprotective roles, and we have shown that miR-29 plays central protective roles in EndMT and in the suppression of DPP-4 in endothelial cells. In addition, miR-29 targets diverse fibrogenic and proinflammatory genes 36. Therefore, the alterations in the gene expression profile of the diabetic kidney may precondition it, accelerating the fibrogenic/EMT programs and the subsequent parenchymal damage in our model.
In this study, we focused on the EMT program. EMT has been considered the source of myofibroblasts in the kidney fibrosis process 6 . The contribution of EMT and the presence of tubular-derived fibroblasts remain controversial. However, the EMT program in tubular cells is clearly associated with the production of extracellular matrix proteins. According to Grande et al. and Lovisa et al., the EMT program, even a partial program, could induce kidney parenchymal damage and kidney fibrosis 10,37 . In tubular cells, the EMT inducer snail plays an important role in tubular damage, tubular cell cycle arrest and damage in neighboring cells 10,38 . In our model, the inhibition of DPP-4 by TENE successfully inhibited kidney fibrosis and EMT associated with the suppression of the EMT inducer (snail, slug, twist, and zeb) and the restoration of epithelial markers (E-cadherin and AQP1) and anti-EMT miRs such as miR-34 and miR-200. Therefore, despite the controversy regarding the contribution of EMT to kidney fibrosis, the alteration of the EMT program by TENE observed in our study is relevant for TENE-mediated kidney protection.
In our analysis, the DPP-4 inhibitor blocked the crosstalk among DPP-4, CAV1, and integrin β1, suggesting that DPP-4 could be a key adaptor molecule in the interaction among these three molecules. In vitro experiments revealed that DPP-4 overexpression and TGF-β1 stimulation induced the EMT program, and inhibition of DPP-4 or of Smad3 phosphorylation suppressed the EMT program. Furthermore, by utilizing the Duolink In Situ Proximity Ligation Assay, we found that the close proximity of DPP-4, CAV1 and integrin β1 was increased by DPP-4 overexpression and TGF-β1 stimulation; such proximity was suppressed by inhibition of either DPP-4 or smad3 phosphorylation. Immunoprecipitation analysis also showed that TGF-β1 incubation increased physical interactions among these three molecules, and TGF-β neutralization suppressed such interactions induced by DPP-4 overexpression. These results indicate that crosstalk among DPP-4, CAV1 and integrin β1 plays a key role in DPP-4- and TGF-β1-induced epithelial cell signal transduction and in the induction of EMT (Fig. 7). The in vivo DPP-4 inhibition also blocked EMT and the associated alterations in fibrogenic molecules and kidney fibrogenic signals in the BSA-injected diabetic mice. Interestingly and unexpectedly, TENE significantly increased the snai1 and snai2 levels without inducing the TGF-β/smad3 signaling pathway in the BSA-injected non-diabetic mice. In this regard, Long et al. reported that a DPP-4 inhibitor improved diabetic wound healing via induction of the EMT program at the skin wound edge 39. Wang et al. reported that the inhibition of DPP-4 increased tumor metastasis associated with the expression of mesenchymal molecules 40. Our recent study also demonstrated that a DPP-4 inhibitor could induce EMT in breast cancer cells 41. Therefore, DPP-4 inhibition-associated anti-EMT and anti-fibrotic effects could depend on the model and disease conditions.

BSA injection is a well-known model of nephritis and chronic serum sickness via the immune system (BSA nephritis) 42. Although proximal tubular damage can be observed in this animal protocol, we cannot distinguish whether these damages were caused by albumin overload or by BSA nephritis, which is a limitation of this study protocol. However, the TENE treatment ameliorated renal fibrosis in the BSA-injected diabetic mice without affecting either urine murine albumin or BSA levels. Therefore, at least in this short-interval experiment, the TENE-mediated kidney protection was independent of the alteration in the urine albumins (both murine and bovine) filtered from the glomerulus.
In conclusion, we identified a novel molecular mechanism of the renoprotective effect of the DPP-4 inhibitor in BSA-injected diabetic mice, in which the crosstalk among DPP-4, integrin β1 and CAV1 and the EMT program were inhibited. Therefore, the DPP-4 inhibitor could be a relevant drug for the treatment of diabetic patients with proteinuria.

Materials and Methods

Animal experiment. Eight-week-old male CD-1 mice (Sankyo Laboratory Service, Tokyo, Japan) were used in all in vivo experiments. The mice were injected with STZ [200 mg/kg BW] intraperitoneally. The diagnosis of diabetes was confirmed by a blood glucose level >16 mmol/L 2 weeks after the STZ injection. At 4 weeks after the induction of diabetes, the diabetic mice were divided into two groups (TENE [30 mg/kg BW/day in drinking water] and untreated). TENE was diluted directly in the drinking water. Simultaneously, the mice received an FFA-bound BSA injection (0.3 g/30 g BW). BSA was purchased from Sigma-Aldrich (St. Louis, MO, USA). BSA was injected intraperitoneally for 11 of 14 days, and subsequently we observed tubular damage associated with inflammation, apoptosis and fibrosis.
All mice were sacrificed 2 weeks after the BSA injection and initiation of the TENE (Mitsubishi Tanabe Pharma, Tokyo, Japan) treatment. Before sacrificing the mice, their blood pressure was monitored via the tail cuff method using BP-98A (Softron Co., Beijing, China). The blood glucose levels were measured using a portable glucose meter (Antisense III, HORIBA, Ltd., Kyoto, Japan). All samples were collected and stored at −80 °C until use.
Measurement of urine albumin to creatinine ratio. The murine-specific urinary albumin level was measured using a murine microalbuminuria ELISA kit (albuwell M Test kit, Exocell, Inc. Philadelphia; Cosmo Bio Co., LTD). The urinary creatinine levels were measured using a QuantiChrom Creatinine Assay Kit (BioAssay Systems). We used SoftMax Pro 6.4 to analyze the urinary albumin and creatinine levels.
Measurement of urine BSA level. The urine BSA levels were measured using a specific ELISA kit for BSA (Arigobio, Hsinchu City, Taiwan). We used SoftMax Pro 6.4 to analyze the urinary BSA levels.
Measurement of serum AST, ALT and NT-proBNP levels. The serum AST, ALT and NT-proBNP levels were measured using specific ELISA kits for AST, ALT and NT-proBNP (MyBioSource, CA, USA).
Histopathology. The kidney, liver and heart were fixed in 10% formaldehyde and embedded in paraffin.
For the Masson's trichrome staining (MTS), Sirius Red (SR) staining and immunohistochemistry, all tissues were cut into 5 μm thick sections. The SR staining was performed using a Picrosirius Red Stain Kit (Philadelphia; Cosmo Bio Co., LTD). Six MTS- or SR-stained 200× visual areas from each mouse were analyzed to calculate the fibrotic area using ImageJ software. The tubular atrophy scores were calculated according to the chronic allograft damage index in 200× visual fields.
Immunohistochemistry and multiplex staining. Formaldehyde-fixed, paraffin-embedded (FFPE) kidney sections (5 μm thick) were deparaffinized and rehydrated (2 min in xylene, four times; 1 min in 100% ethanol, twice; 30 s in 95% ethanol; 45 s in 70% ethanol; and 1 min in distilled water), and the antigen was retrieved in a 10 mM citrate buffer (pH 6) at 98 °C for 60 min. To block the endogenous peroxidase, all sections were incubated in 0.3% hydrogen peroxide for 10 min. The immunohistochemistry was performed using a Vectastain ABC Kit (Vector Laboratories, Burlingame, CA). The primary antibody was diluted as mentioned above. In the negative controls, the primary antibody was omitted and replaced with the blocking solution. For the multiplex staining, an Opal in situ kit was purchased from PerkinElmer (Waltham, MA, USA). FFPE slides were deparaffinized, and the antigen was retrieved as described above. αSMA and DPP-4 were labeled with Opal 520 (TSA-FITC), E-cadherin and integrin β1 were labeled with Opal 570 (TSA-Cy3), CAV1 was labeled with Opal 670 (TSA-Cy5), and the nuclei were labeled with DAPI.
Overexpression experiment. The CAV1 DNA plasmid was purchased from OriGene. The pCMV6-DPP-4-GFP plasmid was also purchased from OriGene (Rockville, MD). To generate the pCMV-Myc-DPP-4 plasmid, we amplified the full-length DPP-4 cDNA by PCR using the pCMV6-DPP-4-GFP plasmid as a template and the specific primer pair (Fw. 5′-C CGA ATT CGG ATG AAG ACA CCG TGG AAG GTT CTT C -3′; Rev 5′-AT CTC GAG CTA AGG TAA AGA GAA ACA GTT TTT TAT G -3′). Both the amplified DPP-4 cDNA and the pCMV-Myc cloning vector (Clontech, Mountain View, CA) were digested with EcoRI and XhoI, and the resulting product was ligated (TOYOBO, Japan). The ligated products (pCMV-Myc-DPP-4) were transformed into competent cells and amplified, and the sequence was confirmed.
Proximity ligation assay. Duolink In Situ kits were used to detect the proximity of DPP-4/integrin β1, DPP-4/CAV1 and CAV1/integrin β1 following the manufacturer's protocol 14. Briefly, cells from the human renal tubular epithelial cell line (HK-2) (CRL-2190TM; ATCC) were cultured in Keratinocyte Serum-Free Medium (K-SFM; Invitrogen; Cat# 17005-042) supplemented with 0.05 mg/ml of bovine pituitary extract (BPE) and 5 ng/ml human recombinant epidermal growth factor (EGF). When the HK-2 cells on the 8-well Culture Slides (BD Falcon, New York, USA) reached 70% confluence, 10 ng/mL recombinant human TGF-β1 was added to the experimental medium for 48 h (HuMedia-MVG in serum-free RPMI at a 1:3 ratio) with or without a TENE (0.1 μM) preincubation for 2 h. Vehicle (PBS) was added to the control well. The cells were washed with PBS, fixed with 4% paraformaldehyde, permeabilized with 0.2% Triton X-100 and blocked with the blocking solution from the Duolink In Situ Kit. Subsequently, the cells were incubated with the primary antibodies (i.e., goat polyclonal anti-DPP-4 antibody/rabbit polyclonal anti-integrin β1 antibody, goat polyclonal anti-DPP-4 antibody/rabbit polyclonal anti-CAV1 antibody or goat polyclonal anti-CAV1 antibody/rabbit polyclonal anti-integrin β1 antibody) at 4 °C overnight. The slides were mounted with DAPI and analyzed by fluorescence microscopy. For each slide, pictures at ×400 original magnification were obtained from 6 different areas, and quantification was performed.
Immunoprecipitation. The immunoprecipitation assay was performed as previously described 16. HK-2 cells were treated with TGF-β (10 ng/mL) or transfected with the DPP-4 overexpression vector. At 48 h post-treatment or transfection, the cells were rinsed with ice-cold PBS, lysed with ice-cold cell lysis buffer and then collected. The samples were sonicated on ice three times for 5 s and microcentrifuged for 10 min at 14,000 g. We used 500 μL of the supernatant for immunoprecipitation and incubated it with goat anti-DPP-4 antibody overnight. Protein A was then added and incubated for 1-3 h at 4 °C, followed by microcentrifugation for 30 s at 4 °C. The pellet was washed and resuspended in SDS sample buffer, and the samples were then analyzed by western blotting.
Microarray analysis. For the microarray analysis, total RNA was extracted from the mouse kidneys (n = 2) using RNAlater-ICE (Invitrogen) and the RNeasy Lipid Tissue Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. An Agilent 2100 Bioanalyzer was used to evaluate the quality of the obtained RNA. The RNA concentration was measured using a NanoDrop 1000 Spectrophotometer. A GeneChip analysis was performed using a GeneChip WT PLUS Reagent Kit, a GeneChip Mouse Gene 2.0 ST Array and a GeneChip Hybridization, Wash, and Stain Kit (Affymetrix, California, USA). The images were acquired and quantified using a GeneChip Scanner 3000 7G and GeneChip Command Console. The statistical data mining and analysis were processed with GeneSpring GX Version 12.6 (Agilent, California, USA) and DAVID. A heat map was generated with GraphPad Prism 7.
RNA and miR isolation and quantitative PCR (qPCR). RNA was extracted from frozen kidneys using TRIzol (Life Technologies, 15596-018, Waltham, MA) according to the manufacturer's instructions. The RNA concentration was quantified using a NanoDrop 1000 Spectrophotometer. cDNA was generated using a PrimeScript RT Reagent Kit (TAKARA, RR037A, Shiga, Japan). The gene expression was quantified using a SYBR Green PCR kit with 10 ng of cDNA. The primers used for the quantification were designed by Hokkaido System Science Co., Ltd. (Sapporo, Japan). All experiments were performed in duplicate, and 18S ribosomal RNA (Qiagen) was utilized as an internal control. MiR was extracted using a miRNeasy Mini Kit (Qiagen) according to the manufacturer's instructions. The cDNA was generated using a miScript II RT Kit (Qiagen). The miScript SYBR Green PCR Kit (Qiagen) was used to quantify the miR expression with 3 ng of cDNA. The primers used to quantify Mm_miR-200b-1, Mm_miR-200b-3, Mm_miR-29-a, Mm_miR-29-b, Mm_miR-29-c, Mm_miR-34-a, Mm_miR-34-b and Mm_miR-34-c were included in the miScript primer assays (Qiagen). All experiments were performed in duplicate, and Hs_RNU6-2_1 (Qiagen) was utilized as an internal control (Supplementary Table).
Statistical analysis. All data are expressed as the mean ± SEM. Prism 7 software was used for the statistical analysis. The differences among the groups were analyzed by performing one-way analysis of variance (ANOVA) followed by Tukey's HSD test for multiple comparisons unless otherwise indicated in the legend. Comparisons with P-values < 0.05 were considered statistically significant. | 2019-05-18T14:16:19.885Z | 2019-05-17T00:00:00.000 | {
"year": 2019,
"sha1": "01f533c4e8bd55a9a722c01469cf7fdb9b2539ec",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-43730-5.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c6a87cf508202ffdaf889a2da311ee39c177a455",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263282951 | pes2o/s2orc | v3-fos-license | Construction and validation of a novel prognostic model for intrahepatic cholangiocarcinoma based on a combined scoring system of systemic immune-inflammation index and albumin-bilirubin: a multicenter study
Background The degree of inflammation and immune status is widely recognized to be associated with intrahepatic cholangiocarcinoma (ICC) and is closely linked to poor postoperative survival. The purpose of this study was to evaluate whether the systemic immune-inflammation index (SII) and the albumin-bilirubin (ALBI) grade together exhibit better predictive strength compared to SII and ALBI separately in patients with ICC undergoing curative surgical resection. Methods A retrospective analysis was performed on a cohort of 374 patients with histologically confirmed ICC who underwent curative surgical resection from January 2016 to January 2020 at three medical centers. The cohort was divided into a training set comprising 258 patients and a validation set consisting of 116 patients. Subsequently, the prognostic predictive abilities of three indicators, namely SII, ALBI, and SII+ALBI grade, were evaluated. Independent risk factors were identified through univariate and multivariate analyses. The identified independent risk factors were then utilized to construct a nomogram prediction model, and the predictive strength of the nomogram prediction model was assessed through Receiver Operating Characteristic (ROC) survival curves and calibration curves. Results Univariate analysis of the training set, consisting of 258 eligible patients with ICC, revealed that SII, ALBI, and SII+ALBI grade were significant prognostic factors for overall survival (OS) and recurrence-free survival (RFS) (p < 0.05). Multivariate analysis revealed the independent significance of SII+ALBI grade as a risk factor for postoperative OS and RFS (p < 0.05). Furthermore, we conducted an analysis of the correlation between SII, ALBI, SII+ALBI grade, and clinical features, indicating that SII+ALBI grade exhibited stronger associations with clinical and pathological characteristics compared to SII and ALBI. We constructed a predictive model for postoperative survival in ICC based on SII+ALBI grade, as determined by the results of multivariate analysis. Evaluation of the model’s predictive strength was performed through ROC survival curves and calibration curves in the training set and validation set, revealing favorable predictive performance. Conclusion The SII+ALBI grade, a novel classification based on inflammatory and immune status, serves as a reliable prognostic indicator for postoperative OS and RFS in patients with ICC.
Introduction
Intrahepatic cholangiocarcinoma (ICC) is the second most prevalent primary liver cancer, distinguished by its aggressive nature, and accounts for approximately 15-20% of all biliary malignancies (1, 2). The worldwide incidence of ICC has been consistently rising at a yearly rate of 15% over the past few decades (1). Curative surgical resection currently stands as the gold-standard treatment for ICC. However, only about 20%-40% of patients who undergo curative surgical resection survive 5 years or more (3, 4). Therefore, the identification of novel prognostic indicators for distinguishing ICC patients who would benefit from curative surgical resection is crucial for developing personalized treatment strategies.
Increasing evidence suggests that, in addition to common factors such as lymph node metastasis, tumor size, and vascular invasion, nutritional status and inflammatory levels play a significant predictive role in the prognosis of curative surgical resection for tumors (5, 6). Among them, the Systemic Immune-Inflammation Index (SII) is a novel quantitative indicator used to assess individual immune status and inflammation levels (7, 8). It is calculated based on parameters such as platelet, neutrophil, and lymphocyte counts. SII is frequently used to assess patients' preoperative nutritional status and precisely evaluate their individual surgical risks (8). Additionally, the albumin-bilirubin (ALBI) grade is a composite indicator that comprehensively evaluates patients' liver function and reserves. It was first compared with the Child-Pugh classification in hepatocellular carcinoma (HCC) patients in 2015, demonstrating superior predictive capability for survival following liver resection and for postoperative liver failure (9). A growing body of literature indicates a close association between SII, ALBI, and the prediction of prognosis and survival in patients with HCC, ICC, and other malignancies (9)(10)(11)(12)(13). However, whether the combined application of SII and ALBI can improve prognostic prediction in patients with ICC remains inconclusive. This research seeks to evaluate the combined application of SII and ALBI in predicting postoperative survival after curative resection for ICC and attempts to construct a survival prognostic model based on SII and ALBI.
Patient selection
This study included all patients who received curative surgical resection for ICC between January 2016 and January 2020 at People's Hospital of Zhengzhou University, Cancer Hospital of Zhengzhou University, and The First Affiliated Hospital of Zhengzhou University. The inclusion criteria were as follows: 1) patients with pathologically confirmed ICC who underwent curative surgical resection; 2) patients aged 18 years or older; 3) no prior anticancer treatment before surgery; 4) no concurrent occurrence of other malignant tumors. The exclusion criteria were as follows: 1) perioperative mortality; 2) patients with hematological disorders or autoimmune diseases; 3) incomplete clinical or laboratory data; 4) patients requiring a second surgery for tumor recurrence; 5) incomplete follow-up information. 258 patients from People's Hospital of Zhengzhou University and Cancer Hospital of Zhengzhou University were chosen as the training set, while 116 patients from The First Affiliated Hospital of Zhengzhou University were chosen as the validation set. The 8th edition of the American Joint Committee on Cancer (AJCC) staging system was used to stage all included patients, and all patients were monitored until January 2023.
This study received ethical approval from the Institutional Review Boards of Zhengzhou University People's Hospital (Ref No. 2023-012), Zhengzhou University Cancer Hospital (Ref No. 2023-203), and Zhengzhou University First Affiliated Hospital (Ref No. 2021-KY-1137-002). Written informed consent was obtained from all patients prior to their participation in the study.
Clinical variables
Patient clinical and pathological data included age, gender, HBV infection, obstructive jaundice, tumor differentiation, tumor number, tumor size, perineural invasion, microvascular invasion, and the AJCC 8th TNM stage. Laboratory test results were collected from one week before surgery, including carbohydrate antigen 19-9 (CA19-9), carcinoembryonic antigen (CEA), alpha-fetoprotein (AFP), alanine transaminase (ALT), aspartate transaminase (AST), albumin, bilirubin, white blood cell count (WBC), lymphocyte count (LY), neutrophil count (NEUT), platelet count (PLT), hemoglobin (HGB), prothrombin time (PT), international normalized ratio (INR) and activated partial thromboplastin time (APTT). The two immune-inflammatory markers, ALBI and SII, were calculated as follows: ALBI = log10(bilirubin [μmol/L]) × 0.66 − albumin [g/L] × 0.085, and SII = platelet count × neutrophil count / lymphocyte count. Subsequently, the X-tile software (Yale University, New Haven, CT, USA) was employed to compute the optimal cutoff values for overall survival (OS) and recurrence-free survival (RFS) with respect to SII and ALBI. Based on the results, ALBI ≥ −2.50 was defined as the high ALBI group, and ALBI < −2.50 as the low ALBI group. Similarly, SII ≥ 470 was defined as the high SII group, and SII < 470 as the low SII group. In the subsequent analysis, the combination of low SII and low ALBI was defined as SII+ALBI Grade A, the combination of high SII and high ALBI was defined as SII+ALBI Grade C, and the remaining combinations were defined as SII+ALBI Grade B.
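As a concrete illustration of the scoring and grading scheme just described, the following minimal Python sketch computes SII, ALBI, and the SII+ALBI grade; the cutoffs come from the text above, while the function names and the example values are illustrative assumptions.

```python
# Minimal sketch of the SII and ALBI calculations and the SII+ALBI grading.
# Cutoffs (SII >= 470, ALBI >= -2.50) are taken from the text above.
import math

def sii(platelets: float, neutrophils: float, lymphocytes: float) -> float:
    """Systemic Immune-Inflammation Index (counts in 10^9/L)."""
    return platelets * neutrophils / lymphocytes

def albi(bilirubin_umol_l: float, albumin_g_l: float) -> float:
    """ALBI = log10(bilirubin [umol/L]) * 0.66 - albumin [g/L] * 0.085."""
    return math.log10(bilirubin_umol_l) * 0.66 - albumin_g_l * 0.085

def sii_albi_grade(sii_value: float, albi_value: float) -> str:
    high_sii = sii_value >= 470.0
    high_albi = albi_value >= -2.50
    if not high_sii and not high_albi:
        return "A"   # low SII and low ALBI
    if high_sii and high_albi:
        return "C"   # high SII and high ALBI
    return "B"       # all remaining combinations

# Example patient: PLT=250, NEUT=4.2, LY=1.5 (10^9/L); bilirubin=12, albumin=42
s = sii(250, 4.2, 1.5)                        # 700.0 -> high SII
a = albi(12, 42)                              # about -2.86 -> low ALBI
print(s, round(a, 2), sii_albi_grade(s, a))   # grade "B"
```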
Statistical analysis
The Kolmogorov-Smirnov test was used to determine whether continuous variables were normally distributed. The mean and standard deviation (SD) were used to represent normally distributed data, whereas the median and interquartile range (IQR) were used to represent non-normally distributed variables. For group comparisons, the Mann-Whitney rank-sum test and Student's t-test were used. The baseline features of categorical variables were compared using the chi-square test and Fisher's exact test. Cox proportional hazards regression analysis was used for the univariate analysis. Cox backward stepwise regression models were employed for the multivariate analysis. GraphPad Prism (version 8.0) was used to create Kaplan-Meier survival curves for OS and RFS based on the grouping of ALBI, SII, and SII+ALBI. Additionally, ROC survival curves were drawn, and the areas under the curve (AUC) of the three groupings were compared. Statistical significance was defined as p < 0.05.
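For readers who wish to reproduce this kind of group comparison with open-source tools, the sketch below outlines a Kaplan-Meier and log-rank workflow using the lifelines Python package; it is an illustrative substitute for the GraphPad Prism workflow used in the study, and the file name and column names ("os_months", "death", "sii") are hypothetical.

```python
# Illustrative Kaplan-Meier plot and log-rank test for the SII grouping,
# using lifelines instead of GraphPad Prism. Data columns are assumed.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("icc_training_set.csv")      # hypothetical data file
low = df[df["sii"] < 470]
high = df[df["sii"] >= 470]

kmf = KaplanMeierFitter()
ax = kmf.fit(low["os_months"], low["death"],
             label="SII < 470").plot_survival_function()
kmf.fit(high["os_months"], high["death"],
        label="SII >= 470").plot_survival_function(ax=ax)

res = logrank_test(low["os_months"], high["os_months"],
                   event_observed_A=low["death"],
                   event_observed_B=high["death"])
print(f"log-rank p = {res.p_value:.4f}")      # p < 0.05 -> groups differ
```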
Follow-up
For the included patients, follow-up began after the surgical procedure. Within the first year postoperatively, monthly follow-up visits were conducted, followed by follow-up visits every three months for the next two years. The last follow-up was performed in January 2023. Overall survival was defined as the interval between the date of curative surgical resection and the last examination or the date of death from any cause. Recurrence-free survival was defined as the interval between the date of curative surgical resection and the most recent follow-up, the occurrence of tumor recurrence or progression in any form, or the patient's death from any cause.
Development and assessment of nomogram
Based on the results of the Cox backward stepwise regression model, predictive models for OS and RFS were constructed as nomogram models. The accuracy of the models was assessed by plotting ROC survival curves and calibration curves for the training and validation sets. The construction and evaluation of the models were performed using R software (version 4.2.1).
Results
A total of 374 patients (172 male and 202 female) who underwent curative surgical resection for pathologically confirmed ICC from January 2016 to January 2020 were included in this study. The median age of the patients was 59 years, ranging from 28 to 80 years. The median follow-up time was 12 months (1-91 months). The 1-year, 2-year, and 3-year OS rates were 52.1%, 23.3%, and 10.9%, respectively. The 1-year, 2-year, and 3-year RFS rates were 29.2%, 15.5%, and 5.3%, respectively. As can be seen in Table 1, the baseline data and clinicopathological traits of the training set (n = 258) and validation set (n = 116) were compared. The two cohorts' distributions were balanced (p > 0.05).
In addition, we plotted the ROC survival curves for SII, ALBI, SII+ALBI grade, Child-Pugh grade and AJCC 8th TNM stage. By comparing the areas under the ROC curves, we found that the SII+ALBI grade demonstrated a superior survival predictive effect (Figure 3).
Correlation analysis of SII, ALBI and SII+ALBI with clinical and pathological features
Through chi-square tests, we found that, compared to SII and ALBI, the SII+ALBI grade exhibited stronger correlations with age, obstructive jaundice, HBV infection, CA19-9, CEA, Child-Pugh grade, tumor size, tumor differentiation, and perineural invasion (p < 0.05, Table 3).
Development and assessment of nomogram
Based on the results of the Cox multivariate survival analysis, we established a nomogram prediction model using R software for postoperative OS and RFS in patients with ICC, incorporating various variables including the SII+ALBI grade (Figure 4). In addition, we plotted the ROC survival curves for the training and validation sets based on the predictive model. The AUC values for 1-3-year OS in the training set were 0.804, 0.820, and 0.763, respectively, while for the validation set they were 0.731, 0.793, and 0.781. The AUC values for 1-3-year RFS in the training set were 0.751, 0.742, and 0.822, respectively, and for the validation set they were 0.768, 0.738, and 0.745 (Figure 4). We also plotted the calibration curves of the training and validation sets for 1-3-year survival using both models, and the results consistently demonstrated the excellent predictive ability of the model for postoperative survival in ICC patients (Figures 5, 6).
Discussion
Curative surgical resection represents the gold standard for the treatment of ICC (14). The decision to proceed with surgical resection is often based on the patient's imaging data and the presence of accompanying symptoms. However, even among patients with similar disease stages and grades, there exists significant heterogeneity in the prognosis and clinical response to curative surgical resection (15). Therefore, the identification of a robust intraoperative and postoperative risk prediction tool holds paramount importance.
As a composite index of platelet, neutrophil, and lymphocyte counts, SII provides a direct reflection of the body's inflammatory status. Increasing evidence suggests that platelets and neutrophils can interact with tumor cells through various mechanisms, promoting tumor cell survival and metastasis, enhancing cancer cell invasion, proliferation, and immune evasion, and thereby modulating the interplay between the host and tumor (16)(17)(18)(19). On the other hand, lymphocytes play a crucial role in cell-mediated immune destruction of cancer cells by activated T cells and other lymphocytes, while tumors can also release cytokines such as IFN-γ and TNF-α to regulate various immune functions in the body (20, 21). Furthermore, numerous studies have confirmed that SII is an independent prognostic factor for postoperative survival in various digestive system malignancies, including HCC, ICC, and gallbladder cancer (8, (22)(23)(24)(25)). Similarly, in our study, a lower SII was significantly associated with improved postoperative survival and reduced recurrence rates, further validating this observation.
Albumin-bilirubin, calculated from serum albumin and bilirubin levels, provides an intuitive reflection of a patient's nutritional status and liver function; ALBI was initially proposed by Johnson et al. in 2014 as an alternative to the Child-Pugh classification for assessing liver function in HCC patients, overcoming its limitations (26). Increasing evidence suggests that ALBI is a reliable indicator of liver functional reserve. A multicenter cohort study demonstrated that the predictive performance of the Barcelona Clinic Liver Cancer (BCLC) staging system based on the ALBI score is comparable or even superior to that based on the Child-Pugh classification (27). Subsequently, the predictive ability of ALBI for the prognosis of HCC and ICC patients has been validated in multiple independent cohorts, including those from Japan, China, and other countries (28-30). Consistent with the findings of these studies, in our research the low ALBI group exhibited significantly higher OS and RFS rates compared to the high ALBI group.
In our study, we took into consideration the patients' inflammatory status, immune capacity, and liver function by combining SII and ALBI, which were categorized into three grades: A, B, and C. Through the construction of Kaplan-Meier survival curves and ROC survival curves, we found that the SII+ALBI grade had better predictive ability and discrimination than SII and ALBI taken separately. Therefore, we included the SII+ALBI grade as an independent index in our model and confirmed that the nomogram predictive model incorporating the SII+ALBI grade for OS and RFS demonstrated good predictive performance. Additionally, we analyzed the correlation between the SII+ALBI grade and clinical and pathological characteristics. Surprisingly, for indicators such as microvascular invasion and the 8th edition AJCC N stage, which showed no significant correlation with SII or ALBI individually, the SII+ALBI classification still exhibited a correlation, suggesting that the combined classification captures additional prognostic information. In addition, our study has the following limitations. Firstly, although it is a multicenter retrospective study, the sample size involved is relatively small, with a total of 374 cases. Secondly, due to the retrospective nature of this study, selection bias is unavoidable, and we only included patients who underwent surgical resection without receiving other treatments prior to surgery. Thirdly, despite our efforts to minimize the impact of confounding factors on the study results, individual differences in various laboratory parameters cannot be completely eliminated. Therefore, further large-scale prospective multicenter studies are still needed to validate our findings.
Conclusion
In conclusion, this multicenter study included a sample of 374 patients with ICC who underwent surgical resection in three tertiary hospitals. Based on univariate, multivariate, and clinical significance analyses, multiple relevant indicators, including the SII+ALBI grade, were incorporated to construct a nomogram predictive model for OS and RFS. The model demonstrated excellent accuracy in survival prediction. To our knowledge, this is the first clinical prediction model for ICC that includes the SII+ALBI grade. We believe that this model can provide better guidance for the management of ICC and has the potential for broad application.
FIGURE 2 Kaplan-Meier recurrence-free survival (RFS) curves of patients with intrahepatic cholangiocarcinoma (ICC) after radical resection according to different prognostic factors in the training set and validation set. (A-C) Kaplan-Meier RFS curves according to the Systemic Immune-Inflammation Index (SII) (A), albumin-bilirubin (ALBI) (B), and SII+ALBI grade (C) of the training set. (D-F) Kaplan-Meier RFS curves according to SII (D), ALBI (E), and SII+ALBI grade (F) of the validation set.
FIGURE 4 Construction and validation of the nomograms. Nomograms incorporating SII+ALBI grade and other clinicopathological parameters for OS (A) and RFS (B) prediction in the training cohort. ROC survival curves of the training set for OS (C) and RFS (D) based on the model. ROC survival curves of the validation set for OS (E) and RFS (F) based on the model.
TABLE 1 Comparison of clinicopathological characteristics in training and validation sets.
TABLE 2 Univariate and multivariate analyses of the prognosis for intrahepatic cholangiocarcinoma (ICC) after radical resection in the training set.
FIGURE 3 Comparison of SII, ALBI, SII+ALBI grade, Child-Pugh grade and TNM stage in predicting OS.
TABLE 3 Relationship of SII, ALBI and SII+ALBI grade with clinicopathological characteristics of intrahepatic cholangiocarcinoma (ICC) after radical resection in the training set. | 2023-10-01T15:04:27.476Z | 2023-09-28T00:00:00.000 | {
"year": 2023,
"sha1": "b95ef1d4a9fe09fd0ada928ff8d700d7ef0a4745",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2023.1239375/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0e54c1167211bad5df1cf59695d5375d25fc7431",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258564815 | pes2o/s2orc | v3-fos-license | Distributed economic predictive control of integrated energy systems for enhanced synergy and grid response: A decomposition and cooperation strategy
The close integration of an increasing number of operating units into an integrated energy system (IES) results in complex interconnections between these units. The strong dynamic interactions create barriers to designing a successful distributed coordinated controller to achieve synergy between all the units and unlock the potential for grid response. To address these challenges, we introduce a directed graph representation of IESs using an augmented Jacobian matrix to depict their underlying dynamics topology. By utilizing this representation, a generic subsystem decomposition method is proposed to partition the entire IES vertically based on the dynamic time scale and horizontally based on the closeness of interconnections between the operating units. Exploiting the decomposed subsystems, we develop a cooperative distributed economic model predictive control (DEMPC) with multiple global objectives that regulates the generated power at the grid's request and satisfies the customers' cooling and system economic requirements. In the DEMPC, multiple local decision-making agents cooperate sequentially and iteratively to leverage the potential across all the units for system-wide dynamic synergy. Furthermore, we discuss how subsystem decomposition impacts the design of distributed cooperation schemes for IESs and provide a control-oriented basic guideline on the optimal decomposition of complex energy systems. Extensive simulations demonstrate that control strategies with different levels of decomposition and collaboration lead to marked differences in the overall performance of the IES. The standard control scheme based on the proposed subsystem configuration outperforms the empirical decomposition-based control benchmark by about 20%. The DEMPC architecture further improves the overall performance of the IES by about 55% compared to the benchmark.
Introduction
With the rapid development of advanced hybrid energy grid technologies, the popularity of integration of multiple energy systems has grown to pursue higher energy efficiency and lower environmental costs [1,2]. Integrated energy systems (IESs) with tightly interconnected energy subsystems have emerged as a promising alternative to conventional single input-output energy systems due to their more flexible fashion of energy production and consumption [1,3]. Typically functioning as a prosumer, an IES comprises customers and various operating units, such as buildings, renewable generation, generators, chillers, heat supply units, energy storage, and other auxiliary units. The close integration of these units through material, information, and energy flows results in a complex process network. While IESs exhibit theoretical energy-efficient and operation-flexible properties, the intensive integration of the units with diversely dynamic characteristics into the unified network brings about complicated issues with dynamics and coordinated control [2,4,5].
With a rising influx of intermittent renewable resources into the power grid, grid response and demand response have attracted considerable attention across residential, commercial, and industrial entities [6][7][8]. These measures aim to reach electrical supply-demand balance in real time, improving the reliability and flexibility of the grid and the operators' ability to manage risks [9]. In particular, grid-responsive building energy systems have been widely investigated as potential participants. On the one hand, building energy systems can proactively interact with smart grids to facilitate power balance, thanks to their demand flexibility [10]. On the other hand, building energy systems can also provide frequency regulation and demand response services at the request of the grids [11]. Even systems with small installed capacities can play a prominent role in offering grid/demand response and ancillary services to grid operators through aggregators [12]. This also provides additional profit opportunities for these participants [13].
In this energy context, the assimilation of operating units with diverse functions into a unified system that spans supply and demand sides makes IESs an attractive option for grid response [14,15]. The existing work on IESs mainly focuses on scheduling based on steady-state optimization.
The approach of multiple time-scale rolling optimizations has been used in scheduling IESs to enhance system robustness and match multi-stage energy markets [16,17]. In recent work [18,19], the dynamic inertia of microturbines was taken into consideration to improve system response.
Given that IESs provide multiple energy products for customers, balancing responses to different energy demands, i.e., integrated demand response, has also been discussed in studies [20,21].
The studies on scheduling in IESs typically assume that the operating units can reliably track the prescribed references. However, due to the complex dynamic interactions present in IESs, this places a demanding requirement on the real-time control system [22,23]. Despite this, research has seldom reported on the detailed dynamics and real-time control of IESs at the second-to-minute time scale. Recently, Wu et al. discussed the full dynamic characteristics of IESs and proposed a multi-time-scale framework based on economic model predictive control to operate a power-cooling IES, where the system was decomposed into three layers based on dynamic response [24]. Jin et al. developed a linear distributed tracking model predictive control (MPC) to regulate an off-grid power-heat system with a microturbine and a heat pump by partitioning the system into power and heating subsystems [25]. Lei et al. separated a combined heat and power system into heat and power divisions and designed a two-layer control scheme with an upper layer for heating control and a lower layer for power control [26]. Studies by Wu et al. [24] and Jin et al. [25] showed that IESs have multiple time scales of dynamic responses, i.e., the dynamic time-scale multiplicity, which may result in an ill-conditioned control problem when performing optimization-based control schemes [27]. Furthermore, Zhang et al. [28] and Paiva et al. [29] examined the dynamic performance of controllable hybrid systems in offering grid response, focusing on systems based on fuel cells and gas turbines, and on renewable energy, respectively. Notably, similar to these hybrid systems, the operating units in IESs are typically dynamically complementary [24]. This characteristic endows IESs with significant potential for precise control over power generation in response to unscheduled power requests from the utility grid. However, since system uncertainty intensifies with the depth of providing response [30,31], the participation of IESs in utility grid response necessitates an enhanced capability to coordinate all the units to realize this potential while satisfying the system's economic requirements and local customers' energy demands. Due to the complexity of dynamics and structure with numerous variables, a conventional centralized scheme is unsuitable for real-time control of IESs [24,25,32]. Thus, it is essential to have an effective control strategy that coordinates the dynamic behavior of the units in a distributed fashion. However, few efforts have been made toward distributed coordinated control of complex IESs to unleash their potential for rapid response.
Distributed and decentralized control architectures are becoming prevalent in addressing the increasing complexity of structure and a large number of decision variables in emerging energy systems [5,33]. Solar and wind hybrid systems have been extensively studied as practical distributed energy systems. Qi et al. [34] proposed a distributed MPC approach to control solar and wind subsystems, which has been adopted in a similar approach reported in [35]. In a hierarchical strategy for a power-heat system, Hu et al. [36] designed a distributed MPC for each unit at the bottom layer. Additionally, distributed schemes have been used in heating, ventilation, and air conditioning (HVAC) systems. Rawlings et al. [37] and Bay et al. [10] developed distributed MPCs for multi-building systems to maintain respective indoor temperatures. Kuboth et al. [38] proposed a parallel distributed MPC strategy to control heating and water supply subsystems in a hybrid HVAC system. Yang et al. [39] proposed a two-layer MPC to optimize lighting loads and indoor temperature in a multi-zone building. Moreover, De Lorenzi et al. [31] presented a supervisory MPC for each unit in a power and heating system with multiple buildings for offering demand response. Tang et al. designed a supervisory MPC for chillers and cold storage in an HVAC system for demand response [40]. A similar supervisory MPC was also reported in [41] to regulate the microturbine, photovoltaics, and battery in a hybrid generation system. One critical point commonly disregarded in many studies on energy systems is that an appropriate subsystem decomposition is a prerequisite and fundamental element of a successful non-centralized optimization framework [42]. In previous research, systems were usually decomposed based on either the structural boundaries of each operating unit [31,35,36] or the types of energy supplied or consumed by the units [25,26]. Apart from these explicit inter-unit connections, systems also have implicit underlying inter- and intra-unit connections resulting from the dynamic variables, including input, disturbance, state, and output. However, most existing studies on the control of energy systems tend to overlook this fact. While such oversights may be acceptable in relatively simple energy systems, they can be problematic for IESs with intricate inter- and intra-unit connections involved in dynamics. Decomposition based on explicit inter-unit connections may lead to improper subsystem configuration, where interconnections between subsystems remain dense, or interactions within subsystems become sparse [43]. Such insufficient subsystem decomposition can cause non-robustness and deteriorated dynamic performance, as distributed controllers rely on multiple local decision-making agents to coordinate their actions [44,45]. Consequently, achieving synergy between units and the potential for IESs' response as a unified system becomes challenging with non-centralized control frameworks. Despite these issues, a systematic approach to subsystem decomposition in complex energy systems remains relatively unexplored, particularly given the twofold complexity arising from time-scale multiplicity and dynamic interconnectivity.
Model predictive control (MPC) has gained favor among the extensive set of control techniques for energy systems due to its prediction and correction mechanisms [46]. In addition to the aforementioned relevant work, MPCs have been designed for optimal operation of multi-zone building aggregators in [47], building-storage systems in [48], and HVAC systems with concentrated solar power systems in [49]. Moreover, MPC has been investigated for its applications in district heating/cooling networks and building systems for demand response in [50,51]. It is noteworthy that a new nonlinear MPC with a global economic objective, known as economic MPC (EMPC), has been considered a flexible optimal control tool for smart manufacturing [22]. Unlike conventional tracking MPC, EMPC provides much flexibility in optimizing the operation of a process for achieving system-wide coordination [23,52]. Preliminary explorations of EMPC in the energy field have been made in the management of power stations [53], microgrids [54], and IESs [24]. Despite the success of MPC in managing energy systems, it remains unclear how the decomposed subsystems will impact the design of a distributed architecture when developing a distributed EMPC for IESs that reaches dynamic synergy between all operating units.
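To make the distinction from tracking MPC concrete, the following is a generic EMPC formulation; it is a minimal sketch rather than the specific controller of any cited work, and the stage cost, horizon, and constraint sets are generic placeholders.

```latex
% A generic EMPC optimal control problem solved at each sampling instant t_k
% (illustrative form; \ell_e, N, \Delta, \mathbb{U}, \mathbb{X} are placeholders):
\begin{aligned}
\min_{u(\cdot)} \quad & \int_{t_k}^{t_k+N\Delta} \ell_e\bigl(x(\tau), u(\tau)\bigr)\, d\tau \\
\text{s.t.} \quad & \dot{x}(\tau) = f\bigl(x(\tau), u(\tau)\bigr), \qquad x(t_k) = x_k, \\
& u(\tau) \in \mathbb{U}, \quad x(\tau) \in \mathbb{X}, \qquad \tau \in [t_k,\, t_k + N\Delta].
\end{aligned}
```

Replacing the economic stage cost ℓ_e with a quadratic deviation from a reference recovers conventional tracking MPC, which is precisely the flexibility the EMPC literature cited above exploits.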
Based on these observations, it can be found that IESs require an effective real-time control strategy that coordinates all the operating units with diverse dynamics. Nevertheless, there has been limited research on this aspect. Due to the multitude of decision variables, the complexity of system structure, and the multiplicity of time scales, a straightforward application of an optimization-based control scheme to IESs will result in a significant computational burden [32], reduced robustness [45], and an optimization problem that is ill-conditioned [27]. A practical alternative is to partition an IES into several smaller subsystems and then design a distributed framework to control the IES in a modular manner. In this framework, inadequate subsystem configurations can negatively impact control performance in IESs [42]. Therefore, a generic approach to subsystem decomposition is necessary to effectively handle the time-scale multiplicity and intricate interconnectivity present in the dynamics and structure of IESs. However, IESs currently lack a reliable decomposition method for identifying the optimal subsystem configuration that facilitates the design of distributed cooperation schemes. It is also unresolved how to leverage all the decomposed subsystems to fulfill the operational requirements of electricity, cooling/heating, and profitability in tandem.
In order to tackle mentioned issues and achieve dynamically optimal modular management, this paper proposes a vertical-horizontal subsystem decomposition method and cooperative distributed economic MPC (DEMPC) for IESs. We take a grid-connected IES for electricity and cooling supply as the considered system. First, the IES is described as a directed graph that consists of nodes and unidirectional edges to reveal its dynamic variables topology. An adjacency matrix based on Jacobian matrices is then constructed to mathematically represent the directed graph. On this basis, we propose a control-oriented method for decomposing the entire IES into smaller subsystems. This method employs the time-scale separation approach to vertical decomposition of the system and the community detection technique for horizontal decomposition based on interconnections between the operating units. The decomposed subsystems exhibit consistent dynamic responses and strong interactions within each subsystem, but distinct dynamic responses and weak interactions between them. As a result, this approach simultaneously addresses the issues of time-scale multiplicity and structural complexity in IESs. Next, using the decomposed subsystems, we develop a distributed cooperation scheme based on economic MPC to precisely regulate the generated power in response to unscheduled power requests from the grid, while also meeting the local cooling requirements and economic demands of the system. The DEMPC involves multiple sequential and iterative agents with global objectives which cooperate in decision-making by exchanging their latest evaluated information. Consequently, all units in the subsystems are capable of collaborating in real time to attain the specified control objectives. Moreover, the latent influence of the subsystem decomposition on the design of distributed MPC for IESs is also discussed, whereby we provide a basic guideline on subsystem decomposition of complex energy systems. The applicability and effectiveness of the proposed decomposition and cooperation strategy are verified by simulations under varying working conditions.
The work presented in this paper has several key contributions as follows: 1. We introduce a novel directed graph representation of IESs using an augmented Jacobian matrix, which explicitly reveals the underlying interconnectivity between all dynamic variables within the structure of IESs.
2. We propose a generic method for subsystem decomposition based on the dynamic time scale and closeness of interconnections between the dynamic variables, which simultaneously addresses the complexity of the structure and dynamics of IESs, and efficiently determines an optimal subsystem configuration.
3. We illustrate how the features of the decomposed subsystems affect the design of distributed MPC for IESs, leading to a control-oriented basic guideline on the decomposition of complex energy systems for modular management. 4. We develop a scalable distributed cooperation scheme based on economic MPC for IESs, which covers all the decomposed subsystems to achieve system-wide synergy and enhance responsiveness and dynamic performance.
In summary, this work contributes novel methods for understanding the structure and dynamics of IESs, decomposing complex energy systems, and designing distributed control schemes for efficient and effective modular management.
The organization of the paper is as follows: Section 2 explains the considered IES and control problems; Section 3 accounts for the decomposition approach and relevant discussions; Section 4 develops the DEMPC; Section 5 conducts the simulations; Section 6 brings about conclusions.

The IES is allowed to participate in the grid response. In the case of offering the grid response, the system is required to rapidly regulate its generated power according to the real-time instructions from the grid or aggregator; otherwise, the system will supply electricity to the grid or aggregator according to the previously planned power baseline [13]. Since the instructions act as an external signal for the IES, we will not distinguish their origin hereafter. Moreover, it is assumed that in this system, when the generated power is insufficient for both the microgrid and grid needs, the electric demand in the microgrid will be met first for the reliability of the local microgrid. The surplus/deficient electricity is then sent to/from the grid through tie-lines [55]. This assumption is made based on the consideration that the grid has other dispatchable generation to keep electrical power flows balanced [15].
As shown in Figure 1, on the power supply side, the generated electricity is delivered first to the local microgrid and then to the grid. On the cooling supply side, the waste flue gas from the combustor of the MT is injected into the AB to produce chilled water. Meanwhile, the EC uses electricity to cool the returned water. Afterward, the chilled water from the chillers is mixed with the cold water from the cold tank (if the CS is in cooling discharge mode) or partly sent to the cold tank (if the CS is in cooling charge mode). The mixture or the rest of the chilled water will be piped to the downstream fan coil unit via the water pump to cool the indoor temperature. The supply water is heated up in the fan coil unit, becoming the return water. If the CS is in cooling discharge mode, the return water is directly piped to the chillers and the hot tank for the next cooling cycle; otherwise, it needs to mix with the water from the hot tank and then go to the chillers. The storage units, the BA and CS, also have to undertake the task of long-term load shifting.
Our previous work [24] presented very detailed nonlinear dynamic modeling of each operating unit; please refer to it for their detailed formulations and parameters. The definitions of the involved states, manipulated inputs, disturbances, and output variables in the system are listed in Table 1.
The following part will briefly introduce the system structure.
Specifically, the photovoltaic module with the maximum peak power tracking implementation can be described as a steady-state nonlinear model. The power generated by the photovoltaic panels can be expressed as $P_{pv} = n_{pp}\, n_{sp}\, V_{pv,max}\, I_{pv,max}$, where n_pp and n_sp are the numbers of parallel and series photovoltaic panels; I_pv,max and V_pv,max are the current and voltage through each photovoltaic panel under the given disturbances t_a and S_ra, which satisfy a specific set of nonlinear algebraic equations.
For the fuel cell stack, the delivered electrical power is $P_{fc} = V_{fc} I_{fc}$, where $V_{fc}$ and $I_{fc}$ denote the output voltage and outside current of the fuel cell. For the microturbine combined with the absorption chiller, four ordinary differential equations are adopted to characterize its dynamic behavior via the states P_mtf, t_abf, t_abw, and t_abt. P_mtf and t_abw rely on the inputs G_fm and G_ab, respectively. Subsequently, the electrical power P_mt and supply chilled water temperature t_ab produced by the microturbine with the absorption chiller can be evaluated from these states together with the nominal values P_mt,0 and t_ab,0 (the nominal electrical power and supply chilled water temperature, respectively). The cooling power of the absorption chiller can be calculated by $Q_{ab} = C_w G_{ab}\,(t_{rec} - t_{ab})$, where C_w stands for the specific heat capacity of water and t_rec is the return water temperature on the chillers side.
Next, a dynamic Thevenin-equivalent model of the lithium-ion battery bank is employed in this system. Three ordinary differential equations for the three states v_cap, C_soc, and I_ba depict its transient performance. The power delivered by the battery bank can then be obtained from $P_{ba} = V_{ba} I_{ba}$, where the bank voltage is computed by $V_{ba} = n_{sb}\,(E_m - v_{cap} - i_{ba} R_{0b})$. In this expression, n_sb is the number of series battery cells; E_m accounts for the electrical potential; R_0b is the parallel resistance; i_ba denotes the current in each parallel battery cell, which is associated with I_ba and the input P_bar. The battery bank is discharged when P_ba > 0, otherwise charged.
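For intuition on how such a Thevenin-equivalent model behaves, the sketch below steps a generic first-order Thevenin battery model forward in time. It is not the paper's exact parameterization; the RC-branch structure and all parameter values are assumptions made for illustration.

```python
# Generic first-order Thevenin-equivalent battery sketch (forward Euler),
# illustrating how states like v_cap and C_soc evolve with the cell current.
def thevenin_step(v_cap, soc, i_cell, dt,
                  r1=0.015, c1=2000.0, q_ah=50.0):
    """One Euler step: v_cap is the RC-branch voltage, soc in [0, 1]."""
    dv = i_cell / c1 - v_cap / (r1 * c1)   # RC polarization dynamics
    dsoc = -i_cell / (q_ah * 3600.0)       # coulomb counting
    return v_cap + dt * dv, soc + dt * dsoc

def terminal_voltage(v_cap, i_cell, e_m=3.3, r0=0.01, n_series=100):
    """Bank voltage for n_series cells: EMF minus polarization and ohmic drops."""
    return n_series * (e_m - v_cap - i_cell * r0)

v, soc = 0.0, 0.8
for _ in range(600):                        # one minute at dt = 0.1 s, 20 A draw
    v, soc = thevenin_step(v, soc, i_cell=20.0, dt=0.1)
print(terminal_voltage(v, 20.0), soc)       # discharging: delivered power > 0
```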
The electric chiller involves an evaporator, a compressor, a condenser, and an expansion valve.
Its dynamics can be formulated via a sixth-order system of differential equations composed of the six states t_e, t_es, t_ewm, t_c, t_cs, and t_cwm. These states depend on the two inputs N_ec and G_ec. Next, the electricity consumed by the compressor and the chilled water temperature t_ec supplied by the electric chiller can be evaluated from these states and inputs; in particular, the compressor power follows $P_{cp} = G_r w_i / \eta_{cp}$, where G_r, w_i, and η_cp are the mass flow rate of refrigerant, the specific power of the compressor, and the compressor efficiency, respectively, and t_rec is the same return water temperature on the chillers side as for the absorption chiller. The evolution of G_r, w_i, and η_cp depends on the above six states and two inputs. Similarly, the cooling power of the electric chiller can be attained from $Q_{ec} = C_w G_{ec}\,(t_{rec} - t_{ec})$. To proceed, the cold water storage unit consists of a cold and a hot tank. Its dynamics can be presented with three state derivative equations for C_stc, C_sth, and C_sot. The cooling power of the storage unit can be expressed in terms of the water temperature in the pipe connected to the hot tank, t_sth, and the water temperature in the pipe to the cold tank, t_cp = z_st t_stc + (1 − z_st) t_slc. In these formulations, z_st and G_stu stand for the integer and continuous inputs, respectively; t_re is the return water temperature from the building, which is a state of the fan coil unit; t_slc is the chilled water temperature supplied by the chillers; t_sth and t_stc denote the water temperatures in the hot and cold tanks, which can be evaluated from the three states above. When z_st = 1, the cold water storage is discharged and G_st, Q_st > 0; otherwise it is charged.
In the building and auxiliary pipeline networks, the thermal inertia of the building and fan coil unit is captured by two states with time evolution, t_br and t_re. The building temperature t_br, which is one of the system outputs, follows the differential equation $C_{br}\,\mathrm{d}t_{br}/\mathrm{d}t = U_{br}\,(t_a - t_{br}) + Q_o - Q_{sl}$, where U_br is the heat transfer coefficient; t_a is the same disturbance as in the photovoltaic module; Q_sl accounts for the final cooling power supplied to customers; Q_o is another disturbance; and C_br denotes the building's heat capacity. Further, Q_sl can be calculated by $Q_{sl} = C_w G_{sl}\,(t_{re} - t_{sl})$, where G_sl is the mass flow rate of the supply water sent to the building with G_sl = G_ab + G_ec + G_st, and t_sl is the final supply water temperature to the building, expressed as a flow-weighted mixture of the chiller and storage streams. In addition, as illustrated in Figure 1, the energy and mass balances of the supply and return water at different locations in the pipeline relate t_rec and t_slc, the aforementioned return water temperature on the chillers side and the supply water temperature from the chillers, with G_all the total flow rate of the circulated water in the pipeline networks. The electric power of the water pump can then be obtained from $P_{pmp} = G_{all}\, g_e H_{pmp} / \eta_{pmp}$, where g_e is the gravitational acceleration, and H_pmp and η_pmp are the hydraulic head and pump efficiency, which are associated with G_all.
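The supply-side mixing and pump relations above can be illustrated with a short numerical sketch; the flow rates, temperatures, pump head, and efficiency below are illustrative values, not the nominal IES parameters of Table 2.

```python
# Sketch of the flow-weighted supply-water mixing, delivered cooling power,
# and pump power relations described above (illustrative values only).
CW = 4.186e3   # specific heat of water, J/(kg K)
G_E = 9.81     # gravitational acceleration, m/s^2

def supply_temp(g_ab, t_ab, g_ec, t_ec, g_st, t_st):
    """Flow-weighted mixing of the two chiller streams and the storage stream."""
    g_sl = g_ab + g_ec + g_st
    return (g_ab * t_ab + g_ec * t_ec + g_st * t_st) / g_sl

def cooling_power(g_sl, t_re, t_sl):
    """Q_sl = Cw * G_sl * (t_re - t_sl), cooling delivered to the building."""
    return CW * g_sl * (t_re - t_sl)

def pump_power(g_all, head_m=25.0, eta=0.7):
    """P_pmp = G_all * g_e * H / eta (mass flow in kg/s, head in m)."""
    return g_all * G_E * head_m / eta

t_sl = supply_temp(g_ab=8.0, t_ab=7.0, g_ec=6.0, t_ec=6.5, g_st=2.0, t_st=6.0)
print(t_sl, cooling_power(16.0, 12.0, t_sl), pump_power(16.0))
```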
Consequently, taking into account the assumption that the electric loads in the local microgrid P_d are satisfied first, the electrical power delivered to the utility grid, which is the other system output, can be calculated as $P_{sl} = P_{pv} + P_{fc} + P_{mt} + P_{ba} - P_{cp} - P_{pmp} - P_d$. To sum up, the dynamic characteristics of the IES are described by 23 nonlinear ordinary differential equations. Figure 2 provides an illustration of the interconnections between the operating units within the IES, as well as the energy and material flows that occur throughout the system.
Typical variables' values in the IES under nominal conditions are given in Table 2.
Formulation of control problem
The grid-connected IES has seven continuous manipulated inputs, four external disturbances (uncontrollable inputs), and two crucial controlled outputs. The system's dynamic behavior is characterized by the state vector x, which collects the 23 states listed in Table 1 (including t_abw, t_abt, t_c, t_cs, t_cwm, t_e, t_es, t_ewm, v_cap, C_soc, I_ba, C_sot, C_stc, C_sth, t_re, and t_br), and the controlled output vector is y = [P_sl, t_br]^T. Thus the grid-connected IES can be presented by the concise nonlinear state-space model

\[
\dot{x} = f(x, u, z, \omega), \qquad y = h(x, u, z, \omega), \tag{15}
\]

where x ∈ R^{n_x}, u ∈ R^{n_u}, z ∈ Z^{n_z}, ω ∈ R^{n_d}, y ∈ R^{n_y}, and n_x = 23, n_u = 7, n_z = 4, n_d = 4, n_y = 2 are the numbers of states, continuous inputs, integer inputs, disturbances, and outputs.
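To show how the compact model of Eq. (15) can be organized in code, the following skeleton steps a placeholder nonlinear state-space system with the same dimensions; the bodies of f and h are dummies standing in for the 23 coupled ODEs and two output maps.

```python
# Skeleton of the nonlinear state-space model of Eq. (15); f and h below
# are placeholders, not the actual IES equations.
import numpy as np

NX, NU, NZ, ND, NY = 23, 7, 4, 4, 2

def f(x, u, z, w):
    """dx/dt = f(x, u, z, w); placeholder stable dynamics for illustration."""
    return -0.1 * x + 0.01 * (np.sum(u) + np.sum(w)) * np.ones(NX)

def h(x, u, z, w):
    """y = h(x, u, z, w); placeholder output map for illustration."""
    return np.array([np.sum(x[:5]), x[-1]])

def euler_step(x, u, z, w, dt=1.0):
    return x + dt * f(x, u, z, w)

x = np.zeros(NX)
u, z, w = np.ones(NU), np.zeros(NZ), np.zeros(ND)
for _ in range(60):
    x = euler_step(x, u, z, w)
print(h(x, u, z, w))   # in the IES instance this would be [P_sl, t_br]
```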
The grid-connected IES can be viewed as a highly complex process network, where the various operating units are tightly interconnected through energy, material, and information flows, as depicted in Figure 2. On the one hand, it can be difficult to manage the intricate interconnectivity between numerous decision variables in IESs with a standard predictive controller [32].
The centralized architecture can cause an undue computational burden [32]. Additionally, if such optimization-based strategies are used without considering the time-scale multiplicity, the resulting control problem can become ill-conditioned [27]. On the other hand, a non-centralized framework may lead to non-robustness and degraded control performance if the subsystems are not properly configured when dealing with the connectivity in the structure of IESs [42,45]. Furthermore, due to the volatile nature of renewable energy and local customers' demands, the IES is susceptible to unstable environmental conditions and to the electric and cooling demands in the local microgrid. In summary, the primary challenge of real-time coordinated control of IESs is to effectively tackle both the time-scale multiplicity and the strong inter- and intra-unit interactions exhibited in IESs while successfully overcoming external disturbances. The goal is to enable the units to collaborate closely to regulate their generated or consumed electricity in response to the grid's requests while maintaining the building temperature and system profitability. Three evaluation indices of system performance, J_1, J_2, and J_3, are formulated in terms of the following quantities: ξ denotes the regulation capacity factor, i.e., the ratio of the unscheduled power requested by the grid in real time to the planned baseline power y_e^b previously committed to the grid; y_{sp,t} is the desired building temperature within the acceptable range; p_mg and p_se are the electricity price in the microgrid and the wholesale electricity price in the grid; p_cm is the compensation for offering grid response; ξ_as = |ξ| is the absolute value of the regulation capacity factor; p_f is the natural gas price; and p_pn represents fines for failure to offer the promised grid response [56]. J_1 and J_2 reflect deviations of the supplied electricity from the grid's instructions and of the building temperature from the customer-specified value, respectively. J_3 evaluates the system's profitability and equals the negative profit value.

Non-centralized control architectures are a practical way to manage such large-scale interconnected systems [33]. To perform these non-centralized strategies in IESs, determining an optimal subsystem configuration is a crucial first step. Additionally, dynamic time-scale multiplicity must be taken into account when dealing with interconnections between dynamic variables. To this end, this section presents a vertical-horizontal subsystem decomposition framework based on enhanced time-scale separation and community detection techniques to resolve the above difficulties simultaneously.
Construction of adjacency matrix for directed graph
While the directed graph in Figure 3 presents the detailed relationships between input, disturbance, state, and output variables, it is difficult to use directly for analyzing the interconnectivity between variables and achieving the subsystem decomposition goals. Therefore, in this work, an adjacency matrix is constructed based on Jacobian matrices of the IES to represent the directed graph mathematically.
Before constructing the adjacency matrix, it is important to note the following points about Figure 3. First, since the focus is on interconnections and interactions between different variables (nodes) in the IES, the directed graph does not contain self-edges whose start and end nodes coincide. Second, the impact of disturbances on other variables is taken into account, as IESs are vulnerable to uncertain external conditions. Last, according to the definition of directed edges, a directed edge from a start node to an end node (i.e., between two variables) exists if and only if the partial derivative of the end variable's governing function with respect to the start variable is non-zero [57]. For instance, the directed edge from u_3 to x_23 in Figure 3 corresponds to ∂f_23(x, u, z, ω)/∂u_3 ≠ 0, where f_23(x, u, z, ω) is the 23rd element of the vector field f in Eq. (15).
Based on these considerations, an augmented Jacobian matrix of the entire IES of Eq. (15), Ã_e, is constructed first. Ordering the nodes as [states; inputs and disturbances; outputs], it takes the block form

Ã_e = [ Ã   B̃   0
        0   0   0
        C̃   D̃   0 ] ∈ R^(n_v × n_v),  (17)

where n_v = n_u + n_d + n_x + n_y is the total number of input, disturbance, state, and output variables; n_g = n_u + n_d is the number of variables in the augmented input vector u_g consisting of the original input and disturbance vectors, namely u_g = [u^T, ω^T]^T; and Ã = ∂f/∂x, B̃ = ∂f/∂u_g, C̃ = ∂h/∂x, and D̃ = ∂h/∂u_g are the Jacobian matrices of the vector fields f and h in Eq. (15), evaluated at an equilibrium point (x_e, u_e, z_e, ω_e) of the system. Ã, B̃, C̃, and D̃ reflect whether there exist the following directed edges: state-to-state, input/disturbance-to-state, state-to-output, and input/disturbance-to-output, respectively. Suppose the partial derivative indicated by a certain element in these Jacobian matrices is non-zero; then a directed edge between the corresponding two nodes (variables) is placed in the directed graph, and vice versa. Accordingly, the adjacency matrix A_e of the entire IES network is attained by (a) replacing all non-zero elements in Ã_e with one and (b) setting the diagonal elements of Ã_e to zero, since self-edges are excluded from the graph.
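A short Python sketch of this construction is given below, assuming the node ordering [states; inputs and disturbances; outputs] used in Eq. (17); the Jacobian blocks would be supplied by linearizing Eq. (15) at the equilibrium point.

import numpy as np

def adjacency_from_jacobians(A, B, C, D):
    """Build the binary adjacency matrix A_e of Eq. (17).
    A: nx x nx, B: nx x ng, C: ny x nx, D: ny x ng (Jacobians at
    equilibrium). Entry (i, j) = 1 encodes a directed edge from
    variable j to variable i (row = affected, column = affecting)."""
    nx, ng = B.shape
    ny = C.shape[0]
    nv = nx + ng + ny
    Ae = np.zeros((nv, nv))
    Ae[:nx, :nx] = A                 # state -> state
    Ae[:nx, nx:nx + ng] = B          # input/disturbance -> state
    Ae[nx + ng:, :nx] = C            # state -> output
    Ae[nx + ng:, nx:nx + ng] = D     # input/disturbance -> output
    Ae = (Ae != 0).astype(int)       # (a) binarize non-zero entries
    np.fill_diagonal(Ae, 0)          # (b) exclude self-edges
    return Ae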
In the adjacency matrix A_e, therefore, the element in the i-th row and j-th column, denoted a_e,ij, equals one when there is a directed edge from node j to node i, and a_e,ij = 0 otherwise. Consequently, the adjacency matrix of the IES under nominal conditions can be calculated and visualized as in Figure 4, and it will be employed in the vertical-horizontal decomposition of the IES later.

Remark 1. The persistence of external disturbances in IESs poses one of the major challenges to reaching the coordination control goals. Thus, the effects of disturbances are incorporated into the directed graph and adjacency matrix. While partitioning disturbance variables may be unnecessary due to their uncontrollability, considering the connections between disturbances and the system helps achieve an informative and optimal subsystem decomposition.
Vertical decomposition based on time-scale separation
In this section, the time-scale separation approach proposed in our previous work [24] is exploited to deal with the dynamic time-scale multiplicity in IESs. The approach is improved by introducing the adjacency matrix, which facilitates the efficient realization of subsystem decomposition. The decomposition vertically separates the entire IES into slow and fast subsystems along the time-scale dimension. As a result, these subsystems can align with the system's dynamic responses at multiple time scales.
Singular perturbation formulation of the IES
Grouping the states into slow and fast parts, x = [x_s^T, x_f^T]^T, the IES model of Eq. (15) can be written in the standard singularly perturbed form

ẋ_s = f_s(x_s, x_f, u, z, ω),  y_s = h_s(x_s, x_f, u, z, ω),  (19a)
ε ẋ_f = f_f(x_s, x_f, u, z, ω),  y_f = h_f(x_s, x_f, u, z, ω),  (19b)

where ε (0 < ε ≪ 1) stands for a small dimensionless parameter that is constructed to distinguish the distinct time scales in the IES and is computed from the characteristic time constants of the slow and fast dynamics [24]; y_s = y_2, the building temperature, is the slow output of the system, which depends on the slow dynamics; and y_f = y_1, the electrical power sent to the grid, is the fast output associated with the fast dynamics. Furthermore, it should be mentioned that, in developing Eq. (19), the proposed adjacency matrix is also used to investigate how the slow and fast states are interconnected and how the inputs affect the outputs.
Establishing slow and fast subsystems based on time-scale separation
Based on the singular perturbation expression of the IES, we can proceed with the time-scale separation of the system. According to singular perturbation theory [27], the following reduced-order model of the slow subsystem is obtained by setting ε = 0 in Eq. (19):

ẋ_s = f_s(x_s, x_f, u, z, ω),  y_s = h_s(x_s, x_f, u, z, ω),  (21a)
0 = f_f(x_s, x_f, u, z, ω),  (21b)

where the differential equation of the fast dynamics, Eq. (19b), becomes the algebraic equation Eq. (21b). This algebraic equation indicates that, on the slow time scale, the fast states are treated as being at quasi-steady state, so the slow subsystem does not need to resolve the fast dynamics.
To establish the reduced-order fast subsystem, we define a stretched time scale τ_f = τ/ε to match the fast dynamics, substitute τ_f into Eq. (19), and then let ε → 0. The reduced-order model of the fast subsystem is therefore

dx_f/dτ_f = f_f(x_s, x_f, u, z, ω),  y_f = h_f(x_s, x_f, u, z, ω),  (22)

in which the slow states x_s are treated as constants on the fast time scale.

Remark 2. Analyzing system connectivity can be daunting for large-scale energy systems with numerous variables. However, the adjacency matrix makes the vertical decomposition readily applicable, simplifying the process.
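The following toy example, with dynamics assumed purely for illustration, shows the mechanics of the reduction: the slow state trajectory of a full singularly perturbed model is compared against the reduced slow model obtained by the quasi-steady-state substitution of Eq. (21b).

import numpy as np

# Toy singularly perturbed system (assumed for illustration only):
#   dxs/dt = -xs + xf,   eps * dxf/dt = -xf + u
eps, u = 0.01, 1.0

def full(x):
    xs, xf = x
    return np.array([-xs + xf, (-xf + u) / eps])

def slow(xs):
    xf_qss = u                 # Eq. (21b) analogue: 0 = -xf + u
    return -xs + xf_qss        # reduced slow dynamics, Eq. (21a) analogue

# Explicit-Euler comparison over 5 time units
dt, T = 1e-4, 5.0
x = np.array([0.0, 0.0]); xs = 0.0
for _ in range(int(T / dt)):
    x = x + dt * full(x)
    xs = xs + dt * slow(xs)
print(x[0], xs)   # slow state of full model vs. reduced model: nearly equal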
Modularity-based community detection
Community detection is a powerful technique in graph theory that partitions an entire network into desirable communities, or subsystems, which exhibit strong intra-subsystem interactions but weak inter-subsystem interactions [59]. In this work, we introduce community detection to IESs, enabling us to explore and decompose the underlying interconnections within the system. By employing community detection, we can ensure dense intra-subsystem connections and sparse interconnections between different subsystems, as expected by non-centralized control architectures [43].

Among existing community detection tools, the Newman-modularity-based approach has emerged as a dominant method. The core of this approach is an index called modularity, which measures the quality of a configured subsystem candidate [60]. The typical Newman modularity of a configured subsystem partition Θ can be computed by

M(Θ) = (1/m) Σ_{i,j} (a_ij − k_i^in k_j^out / m) δ(c_i, c_j),  (23)

where i, j = 1, ..., n_v index the nodes in the directed graph and n_v is the aforementioned total number of nodes (variables); m stands for the number of edges; a_ij is the (i, j)-th element of the adjacency matrix of the considered graph and, as a reminder, a_ij = 1 if there is a directed edge from the j-th node to the i-th node, otherwise a_ij = 0; k_i^in and k_j^out denote the number of edges heading into node i and departing from node j, respectively; c_i and c_j (c_i, c_j ∈ {c_1, ..., c_{n_v}}) are community tags representing the subsystems to which nodes i and j belong in the subsystem configuration Θ; and δ(c_i, c_j) is the Kronecker delta function, which examines whether nodes i and j belong to the same subsystem:

δ(c_i, c_j) = 1 if c_i = c_j, and δ(c_i, c_j) = 0 otherwise.  (24)

In the modularity expression of Eq. (23), the term (a_ij − k_i^in k_j^out/m) indicates the difference between the actual probability of an edge between nodes i and j and the expected probability of such an edge when all the edges in the network are randomly rewired with k_i^in and k_j^out held fixed.
The modularity is the sum of these differences over all node pairs and therefore reflects the statistical significance of the subsystem partition Θ. A larger modularity represents a better subsystem division, that is, dense connections within subsystems but sparse connections between different subsystems. Thus, community detection aims to find the subsystem configuration Θ* and the corresponding community tag set of all nodes {c_1, ..., c_{n_v}} that maximize the modularity over all candidate subsystem partitions, that is,

Θ* = arg max_Θ M(Θ).  (25)

The following sections give an effective solution to this optimization problem.
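A direct Python implementation of Eqs. (23) and (24) is sketched below; it assumes the edge convention stated above, with a_ij = 1 for a directed edge from node j to node i.

import numpy as np

def directed_modularity(A, c):
    """Newman modularity of Eq. (23) for a directed graph.
    A[i, j] = 1 iff there is a directed edge from node j to node i;
    c[i] is the community tag of node i."""
    m = A.sum()                       # total number of edges
    k_in = A.sum(axis=1)              # in-degree of each node (row sums)
    k_out = A.sum(axis=0)             # out-degree of each node (column sums)
    # Kronecker delta of Eq. (24) for all node pairs at once
    delta = (np.asarray(c)[:, None] == np.asarray(c)[None, :])
    return ((A - np.outer(k_in, k_out) / m) * delta).sum() / m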
Realization of community detection for subsystem decomposition
To perform community detection, we require the adjacency matrix of the target network, i.e., the adjacency matrix A_f of the fast subsystem. Next, we implement community detection to further decompose the fast subsystem. Mathematically, Eq. (25) is a computationally difficult combinatorial optimization problem [61], so an approximation algorithm is needed to solve it. In this study, we employ and improve a heuristic method known as the fast unfolding algorithm [62], which is acknowledged as an effective and efficient approach. However, as a heuristic algorithm, it may find only a local optimum. Additionally, the resulting subsystems will be adopted to design a distributed controller, and an excessive number of subsystems would increase the communication costs between the local controllers. To enhance the fast unfolding method, we have therefore implemented the following modifications, which are reflected in Algorithm 1 below: (a) to reduce the impact of the initial node order on the decomposition results, the algorithm employs a set of random node orders (Step 2.1); (b) to avoid an excessive number of subsystems, which may increase maintenance costs in the control system, we define an upper limit on the number of subsystems (Step 2.3); (c) to prevent the algorithm from getting stuck in a local optimum, we set thresholds for the number of main program loops and for the recurrence of the maximum modularity (Step 2.4).

Algorithm 1: Fast-unfolding-based community detection for horizontal decomposition

Step 1 (Parameter setting):
  Predetermine the upper limit of the number of desired communities (subsystems), N_c^u.
  Specify the terminating conditions of the main program loop: N_l^l, the lower limit on the number of main-loop iterations N_l, and N_m^l, the lower limit on the recurrence count N_m of the maximum modularity M_max.
  Create the node vector corresponding to the fast adjacency matrix A_f: v = [x_1, ..., x_{n_x}, u_1, ..., u_{n_u}, ω_1, ..., ω_{n_d}, y_1, ..., y_{n_y}].
  Create the community tag vector of v: c = [c_x1, ..., c_xn_x, c_u1, ..., c_un_u, c_ω1, ..., c_ωn_d, c_y1, ..., c_yn_y].
  Create the initially optimal node and community tag vectors: c_opt = c and v_opt = v.
  Set N_l = 0, N_m = 0, M_max = 0.

Step 2 (Main program): while N_l < N_l^l or N_m < N_m^l do
  Step 2.1 (Initialization):
    Randomly reassign the order of the nodes in v, denoted v̂; calculate the corresponding adjacency matrix Â_f from A_f.
    Set k = 0; let c(k) = [1, 2, ..., n_v], i.e., assign the i-th node in v̂ to the i-th community; compute M(k) and the number of communities N_c(k).
  Step 2.2 (Modularity maximization): repeat
    Set k = k + 1.
    For each node i: move i into the communities of its neighboring nodes j (including i itself); find the maximum M(k) and let i stay in the corresponding community.
    Aggregate the nodes in the same community into a new node.
    Update c(k), N_c(k), and M(k).
  until M(k) = M(k − 1).
  Step 2.3 (Community reduction): while N_c(k) > N_c^u do
    Set k = k + 1.
    For each node i: move i into the communities of its neighboring nodes j (excluding i itself); find the maximum modularity M_ij(k).
    Find the overall maximum M_ij(k); let M(k) = M_ij(k) and place the corresponding node into the corresponding community.
    Aggregate the nodes in the same community into a new node.
    Update c(k), N_c(k), and M(k).
  Step 2.4 (Community configuration update):
    If M(k) > M_max, set M_max = M(k), c_opt = c(k), and v_opt = v̂, and reset N_m = 0; otherwise set N_m = N_m + 1.
    Update N_l = N_l + 1.

Step 3 (End results):
  Extract the relevant variables in each community (subsystem) from c_opt and v_opt.
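The sketch below illustrates the core local-move sweep of Step 2.2 in simplified form; it reuses the modularity function of Eq. (23) and omits the aggregation, random reordering, community-count cap, and restart logic of the full Algorithm 1.

import numpy as np

def directed_modularity(A, c):
    # Eq. (23): A[i, j] = 1 iff edge j -> i; c holds community tags
    m = A.sum()
    k_in, k_out = A.sum(axis=1), A.sum(axis=0)
    delta = (np.asarray(c)[:, None] == np.asarray(c)[None, :])
    return ((A - np.outer(k_in, k_out) / m) * delta).sum() / m

def local_move_pass(A, c):
    """Greedy local moves: each node is tentatively placed into the
    community of each of its neighbors (and its own), keeping the move
    with the largest modularity, until no move improves M."""
    c = list(c)
    improved = True
    while improved:
        improved = False
        for i in range(len(c)):
            # neighbors of i in either direction, plus i itself
            nbrs = set(np.nonzero(A[i, :] + A[:, i])[0]) | {i}
            best_c, best_M = c[i], directed_modularity(A, c)
            for j in nbrs:
                trial = c.copy(); trial[i] = c[j]
                M = directed_modularity(A, trial)
                if M > best_M:
                    best_c, best_M, improved = c[j], M, True
            c[i] = best_c
    return c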
Based on these improvements, the proposed community detection can provide a near-globally-optimal solution, which can efficiently and effectively decompose the target system according to the closeness of the interconnections between variables.

Establishing smaller fast subsystems based on community detection

Table 4 lists the results of the fast subsystem partition. Based on the decomposition results and the anticipated distributed controller with global objective functions, we can establish uniform models for the fast subsystems 1, 2, and 3 as follows:

ẋ_fj = f_fj(x_fj, x̄_fj, u_fj, ū_fj, z, ω),  j = 1, 2, 3,  (26)

where the subscript f_j, j = 1, 2, 3, represents the fast subsystem j; x_fj is the state vector of the fast subsystem j; x̄_fj denotes a state vector containing other subsystems' states that have an immediate impact on the fast subsystem j or that the local decision agent of the fast subsystem j will use; u_fj stands for the manipulated inputs optimized by the local agent of the fast subsystem j; and ū_fj is an input vector determined by the local controllers of other subsystems, which either directly affects the fast subsystem j or carries the information required by the local controller of the fast subsystem j for decision-making. It should be mentioned that x̄_fj and ū_fj of the fast subsystem j can be identified from the interconnections captured by the adjacency matrix A_f. In addition, during the horizontal decomposition, we make slight adjustments to the results by sharing y_1 and the photovoltaics with the fast subsystems 2 and 3; for further details, please refer to Remark 3.
Finally, the entire IES is decomposed into one slow subsystem, Eq. (21), and three fast subsystems, Eq. (26), by the vertical-horizontal decomposition approach.
Remark 3. The proposed decomposition method partitions the nodes of energy networks into non-overlapping subsystems, offering managers a clearer view of candidate subsystems. How the decomposition results are used will depend on the problem at hand. In this work, the primary goals of coordinated control of the IES are to accomplish system-wide synergy and improve responsiveness, which is significantly correlated with the power delivered to the grid, y_1. We note that y_1 is explicitly associated with the fast subsystem 2, via the generated electricity of the microturbine P_mt, and with the fast subsystem 3, via the power consumption of the electric chiller P_cp. In addition, y_1 is directly affected by the photovoltaic power P_pv, which depends on the external disturbances.
However, if we directly followed the horizontal decomposition results for the fast subsystem, y_1 and the photovoltaics would be placed solely in the fast subsystem 1. This configuration would force some local control agents to use local objective functions, whose performance is generally slightly inferior to that of global objective functions [32]. Therefore, we have shared y_1 and the photovoltaics with the fast subsystems 2 and 3, as given in Table 4, to facilitate the development of a distributed cooperation scheme with global objectives. Although we did not entirely follow the initial decomposition results, they provided crucial and adequate guidance for the subsystem decomposition.
Discussions on subsystem decomposition for IESs
This section discusses how subsystem decomposition affects the design of distributed control architectures in IESs and introduces a basic, generic procedure for decomposing complex energy systems. To handle time-scale multiplicity and structural complexity, the subsystem decomposition method was proposed to partition IESs vertically and horizontally. Under vertical partitioning, information flows sequentially in a top-down direction, typically leading to a sequential distributed control structure [32]. Horizontal partitioning involves multiple subsystems with mutual information exchange, resulting in an iterative distributed strategy [32]. In the proposed decomposition framework, the vertical decomposition is performed first, and the horizontal decomposition is then implemented in the fast subsystem. Note that, if needed, the horizontal decomposition can be applied to the slow subsystem as well. An alternative procedure is to decompose the IES horizontally first and then perform the vertical decomposition in each subsystem. Comparing the two procedures, if we directly followed the result of this alternative, non-uniform time scales between and within the subsystems would call for a more complicated iterative distributed controller with asynchronous sampling times [63].
Furthermore, the number of subsystems in this alternative procedure is larger than in the former one, so the communication cost will be higher. One option is to artificially merge the two slow subsystems to reduce the number of subsystems, but this can be a challenging task for a larger energy system with a considerable number of variables. Moreover, performing the vertical decomposition first helps create IES models at different time scales, which favors the participation of IESs in energy markets that operate on multiple time scales; see [13].

Figure 7: Basic procedure for the generic subsystem decomposition of complex energy systems.
Based on these considerations, for IESs or other energy systems with time-scale multiplicity and an intricate structure, vertical decomposition should be performed before horizontal decomposition to establish a consistent time scale within each subsystem. The horizontal decomposition is then implemented to capture the interactions between and within subsystems for further partitioning. Figure 7 illustrates the fundamental steps of the generic vertical-horizontal decomposition approach we propose, which can be applied to other complex energy systems, such as virtual power plants and aggregators [12,64].
Regarding Figure 7, it should be noted that not all energy systems require both vertical and horizontal decomposition. For instance, large-scale wind and solar farms [35] or building groups [10] may not require time-scale separation, since they have a uniformly fast or slow time scale, respectively. How to properly exploit subsystem decomposition should also depend on the optimization problems to be solved. For integrated energy systems across sectors, the decomposition may further need to take into account potential issues concerning the attribution of jurisdiction [65]. The following section illustrates how to use the features of the decomposed subsystems to design a distributed cooperation scheme based on economic MPC.
Cooperative distributed economic MPC
As previously discussed, a distributed cooperation scheme presents a more favorable alternative to a standard centralized controller in coordinating the operating units within an IES. Effective and reliable management of the IES requires the ability to respond rapidly to changes in the grid's power demands while also satisfying the local electricity and cooling requirements, maximizing profits, and mitigating external disturbances. This section proposes a cooperative distributed economic MPC (DEMPC) to address these challenges. Our solution is based on subsystem decomposition and economic MPC, which allows scalable application to other IESs. The proposed DEMPC framework for the IES is displayed in Figure 8.

The DEMPC also takes into account information exchange with the day-ahead/long-term scheduling (or intraday stages if required). The day-ahead stage plans the hourly power delivered to the grid and schedules the approximate operational trajectory of the IES based on forecasts of the external conditions. In this study, we employ a day-ahead optimization adapted from a long-term scheduling approach in our previous work [24], to which we refer for more information on the scheduling used. The day-ahead scheduling minimizes the control objectives described in Eq. (16) to determine: (a) the hourly supplied power, i.e., the planned baseline power y_e^b in Eq. (16); (b) the on/off switches of the units and the charging/discharging of the cold storage, i.e., the integer variable z in Eq. (15); and (c) the optimal trajectory references for the capacity states of the energy storage for long-term load shifting, i.e., the references for x_17 (C_soc) and x_19 (C_sot), denoted as a vector x_d^d.

The proposed DEMPC consists of four cooperative EMPCs based on the vertical-horizontal decomposition results: one slow EMPC for the slow subsystem and three fast EMPCs for the fast subsystems. These local decision-making agents exchange information with each other to coordinate their actions for dynamic synergy between all the subsystems. In the DEMPC, communication is characterized by one-directional information flow from the slow EMPC to the fast EMPCs at low frequency, while the fast agents exchange information with each other in a high-frequency, mutual manner. The former leads to a sequential distributed EMPC and the latter to an iterative distributed EMPC, which cooperate to leverage the operating units.
Specifically, according to the slow subsystem of Eq. (21), the slow EMPC, with a sampling time of about 1 minute and a prediction window of 10 to 15 minutes, is developed to optimize the slow dynamic behavior under the current external conditions ω and slow states x_s. The optimal actions of the slow inputs u_s and the optimal references for the fast inputs and states, u_f^s and x_f^s, are attained by the slow EMPC, which minimizes a global objective function related to J_1, J_2, and J_3. Subsequently, the slow input u_s is immediately applied to the slow subsystem to control it. The needed information, such as u_s, x_s, u_f^s, and x_f^s, is sent to the fast EMPCs for further decision-making. During the optimization, the slow EMPC also considers the information about y_e^b, z, and x_d^d from the day-ahead scheduling.

In the communication, the fast EMPCs collect the information from the day-ahead stage (y_e^b and z) and from the slow EMPC (u_s, x_s, u_f^s, and x_f^s). Additionally, a local fast agent j (j = 1, 2, 3) not only receives the other local fast EMPCs' information but also broadcasts its newly optimized state and input sequences, x_fj and u_fj, for the other local fast agents' decision-making. At each sampling time, the three fast EMPCs exchange information and update their actions u_fj iteratively a few times until an iteration limit is reached or the actions of the EMPCs have converged. The final optimal fast input u_fj of the fast EMPC j is obtained by this iterative procedure, and u_fj is then applied to the corresponding fast subsystem j to manage the units belonging to it.
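The following Python sketch outlines the iterative coordination of the three fast agents at one sampling instant. The solver callable is hypothetical (the actual fast EMPC problems are not reproduced), and a toy solver is included only to demonstrate convergence of the exchange.

import numpy as np

MAX_ITERS, TOL = 12, 1e-3

def cooperate(solve_fast_empc, shared_info, u_init):
    """Iterate agents 1..3 until their inputs converge or the limit is hit.
    solve_fast_empc(j, u, shared_info) is a hypothetical stand-in for
    agent j's EMPC, given the other agents' latest broadcast plans."""
    u = [np.array(ui, dtype=float) for ui in u_init]
    for _ in range(MAX_ITERS):
        u_prev = [ui.copy() for ui in u]
        for j in range(3):
            # agent j re-optimizes using the other agents' latest plans
            u[j] = solve_fast_empc(j, u, shared_info)
        if max(np.linalg.norm(u[j] - u_prev[j]) for j in range(3)) < TOL:
            break
    return u

# toy stand-in: each agent nudges its plan toward the others' average
def toy_solver(j, u, shared_info):
    others = np.mean([u[i] for i in range(3) if i != j], axis=0)
    return 0.5 * u[j] + 0.5 * others

u_final = cooperate(toy_solver, shared_info=None,
                    u_init=[[1.0], [2.0], [3.0]])
print(u_final)  # the three agents converge to a common plan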
Sequential distributed slow EMPC
This section develops the sequential distributed slow EMPC for the optimal operation of the slow subsystem. At the beginning of the controller design, the control objectives of the control scheme need to be established. For the slow EMPC at a time instant k, according to the slow subsystem model of Eq. (21) and the global operational objectives of Eq. (16), the control objectives of Eqs. (27a)-(27d) are taken into consideration, where α_w^s (w = 1, ..., 3) and R_s stand for weighting factors and a weighting matrix, respectively. The objectives of Eqs. (27a)-(27c) evaluate the system's performance in tracking the grid's real-time instructions for the supplied electricity, fulfilling the customers' cooling demand, and raising the operational revenue, respectively.
These objectives are derived from the discretization of the global objectives of Eq. (16). Eq. (27d) is a general tracking term for long-term load shifting, in which x_d^s = [x_s,1, x_s,2]^T is a vector representing the capacity states of the battery and the cold storage, and x_d^d contains their optimal references from the day-ahead scheduling.
Taking advantage of these objectives and the slow subsystem model, the sequential distributed slow EMPC is formulated as a dynamic optimization problem that minimizes the sum of Eqs. (27a)-(27d) over the prediction horizon, subject to the slow subsystem model of Eq. (21) and the operational constraints.
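As a hedged illustration of the discretized objectives, the sketch below assembles stage-cost terms in the spirit of Eqs. (27a)-(27d); the exact functional forms and weights of the paper are not reproduced, so quadratic tracking terms are assumed.

import numpy as np

def slow_stage_cost(y_e, y_ref_e, y_t, y_sp_t, profit, x_d_s, x_d_d,
                    alpha=(1.0, 1.0, 1.0), R=np.eye(2)):
    """Assumed quadratic stage cost mirroring the roles of (27a)-(27d)."""
    a1, a2, a3 = alpha
    J_27a = a1 * (y_e - y_ref_e) ** 2          # track grid's power request
    J_27b = a2 * (y_t - y_sp_t) ** 2           # track building temperature
    J_27c = -a3 * profit                       # raise revenue (negative profit)
    e = np.asarray(x_d_s) - np.asarray(x_d_d)  # storage-capacity tracking
    J_27d = float(e @ R @ e)                   # general tracking term (27d)
    return J_27a + J_27b + J_27c + J_27d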
Parameter and scenario settings
For the proposed DEMPC, the sampling times and prediction horizons are listed in Table 5. For the following simulations, the time evolution of the external conditions over 24 hours is illustrated in Figure 10 for the ambient temperature t_a and the solar radiation S_ra, and in Figure 11 for the electric and cooling demands. The electricity price in the microgrid, p_mg, is set to 80 CAD/MWh, and the natural gas price p_f is 0.2 CAD/kg.
The above prices refer to the relevant prices in Ontario, Canada, in the summer of 2021 [68,69].
We assume that the compensation and fine set by the grid, p_cm and p_pn, are tied to the wholesale electricity price p_se, with p_cm = 1.5 p_se and p_pn = 1.5 p_se.

Case 1: Offering 25% available regulation capacity

To investigate the performance of Problems 1-4 (P1, the proposed DEMPC; P2, a supervisory MPC based on the developed subsystem configuration; P3 and P4, supervisory MPCs based on empirical decompositions), we apply them to the IES, allowing it to provide about ±25% available regulation capacity to the grid. In this instance, the IES sends electricity to the grid according to the grid's real-time instructions: if the grid needs more power than the planned baseline power y_e^b, the IES increases the amount of power it sends, and if the grid needs less power than y_e^b, the IES decreases the power it delivers. The adjustable range of the supplied electricity is about 25% above and below the baseline power level y_e^b, i.e., the regulation factor ξ basically satisfies ξ ∈ [−25%, 25%]. The evolution of ξ is depicted in Figure 12.

P2, the supervisory MPC based on the developed subsystem configuration, is inferior to P1 but still outperforms P3 and P4 in terms of precise control of the supplied electricity. The supervisory MPCs based on the empirical decompositions, P3 and P4, display comparatively poor performance: they cannot precisely track the changing power instructions and, at times, cannot even reach the currently prescribed power point before the next one arrives. Regarding the building temperature, Problems 1-4 basically satisfy the customers' demand for keeping the indoor temperature within the acceptable range. Furthermore, all these control schemes drive the building temperature toward its upper bound for energy and cost savings, similar to [70]. From the zoomed-in plots, we can find that the units under P1 are leveraged with more precise adjustments during the transient processes. In particular, the units exhibit complementary roles in their dynamic behavior, i.e., synergy between the units, such as the fuel cell, battery, and compressor on the electricity side, and the absorption chiller, electric chiller, and cold storage on the cooling side. Moreover, from Figure 15, it can be seen that all these control strategies tightly track the prescribed references of the energy storage for long-term load shifting. This also proves that, whichever controller is applied to the system, the energy storage is neither overused nor underused in real-time control. The above observations demonstrate two key points: (a) the supervisory MPCs and the proposed DEMPC are all capable of effectively managing the IES, but the proposed DEMPC and subsystem partition further enhance the system's dynamic performance; (b) the improvements are not attributed to a certain operating unit but result from collaboration between all the units, which distinguishes the proposed method from the approaches developed in existing work [71].
To quantify the differences between Problems 1-4, we establish performance evaluation criteria based on the control objectives in Eq. (16): E_p accounts for the summation of the tracking deviations in the supplied electricity over the simulation duration N_sd; E_t stands for the total tracking deviation in the building temperature, wherein ∆y_2(k) is the distance from the real indoor temperature to the acceptable range, i.e., ∆y_2(k) = 0 if y_2(k) ∈ [y_sp,t^min(k), y_sp,t^max(k)], and ∆y_2(k) = min{|y_2(k) − y_sp,t^min(k)|, |y_2(k) − y_sp,t^max(k)|} otherwise; E_e denotes the total profit, in which ∆C_es represents the cost owing to the unused energy stored in the storage units at the end of the operating day; and E_glb reflects the global performance of the system operation, where β_1 = 0.05, β_2 = 3.5, and β_3 = 10 are normalization factors. For these indices, smaller E_p and E_t indicate better performance in rapidly responding to the grid's requests and in meeting the customers' cooling demand; larger E_e means higher economic gain; and smaller E_glb represents superior overall operational performance.

From the resulting indices, we can see that P1 is capable of significantly reducing the power deviation from the grid's real-time instructions while slightly improving the system's profitability. P2 exhibits suboptimal performance in terms of system economics and precise control of the supplied power. Additionally, P1 is comparable to P2 and P3 in fulfilling the customers' cooling demand. As a result, P1's overall performance surpasses that of the other control schemes.
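The ∆y_2 logic is fully specified in the text and is implemented verbatim below; the absolute-deviation form of E_p and the signed combination used for E_glb are assumptions for illustration.

import numpy as np

def delta_y2(y2, lo, hi):
    """Distance from the indoor temperature to the acceptable range,
    exactly as defined for E_t in the text."""
    if lo <= y2 <= hi:
        return 0.0
    return min(abs(y2 - lo), abs(y2 - hi))

def indices(y1, y1_req, y2, y2_lo, y2_hi, profit, dC_es,
            betas=(0.05, 3.5, 10.0)):
    """E_p, E_t, E_e, E_glb over N_sd samples. The absolute deviation in
    E_p and the aggregation of E_glb are assumed forms, not the paper's."""
    E_p = float(np.sum(np.abs(np.asarray(y1) - np.asarray(y1_req))))
    E_t = float(sum(delta_y2(v, lo, hi)
                    for v, lo, hi in zip(y2, y2_lo, y2_hi)))
    E_e = float(np.sum(profit)) - dC_es     # total profit less unused storage
    b1, b2, b3 = betas
    E_glb = b1 * E_p + b2 * E_t - b3 * E_e  # assumed normalized combination
    return E_p, E_t, E_e, E_glb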
Case 2: Not participating in grid response
In practice, the managers of IESs may also decide to send power to the grid according to the previously planned power baseline instead of the grid's real-time instructions. The regulation factor ξ is then always set to 0 during the operating day. In this case, the simulation results of Problems 1-4 are shown in Figures 16 and 17.
From Figure 16, we can observe that P1 basically keeps the power sent to the grid at the baseline, while P2 is inferior to P1 but marginally preferable to P3 and P4. Even though the power baseline holds constant within each hour, the IES's baseline-tracking performance is still degraded under P3 and P4, since they must struggle against the frequent variations of the external conditions and local customers' demands without system-wide synergy. Meanwhile, the building temperature is largely held within the acceptable range under all these control frameworks. As shown in Figure 17, the operating units under P1 still operate in a dynamically complementary and collaborative manner during the transient processes, as in Case 1. It is worth noting that, since the IES is no longer required to respond to the grid's requests, which diminishes the system uncertainty, the electrical and cooling power fluctuations of the units are smaller than those in Case 1.

P1 maintains outstanding performance in the electricity-relevant index, which indicates that P1 accurately follows the power baseline while overcoming the changing external conditions. The satisfaction of the indoor temperature requirement and the system profitability under P1 are slightly better than under P3 and P4, and the performance of P2 also slightly surpasses P3 and P4 in all aspects. Consequently, the overall performance of the IES is boosted by about 55% under P1 and about 20% under P2. These results reveal that the developed subsystem partition and DEMPC remain effective and robust, enhancing the system performance even when the IES does not participate in the grid response. Furthermore, we also note that all the indices are smaller than those in Case 1. The electricity and cooling indices decline because the control systems do not need to consider time-varying power instructions from the grid. However, losing the compensation for grid response, the system earnings also decline. It is important to note that the significance of the grid response lies not only in raising the IES's economic gain but also in contributing to the grid's reliability and flexibility.
Case 3: Offering varying available regulation capacity
To explore the system performance with different working conditions, we allow the IES to provide the grid with available regulation capacities from 0% to 30% to participate in the grid response.
The instances with available capacities of 25% and 0% have been discussed in Cases 1 and 2. Figures 18 and 19 depict the simulation results with available capacities of 15% and 30%, respectively.
The system behavior with available capacities of 15% and 30% under Problems 1-4, as shown in Figures 18 and 19, is akin to the performance in Case 1. Due to the close collaboration between the operating units, P1 outperforms the other controllers in immediately responding to the grid's requests. P2 remains superior to P3 and P4 in tracking the grid's real-time instructions, owing to the proposed subsystem configuration. At the same time, Problems 1-4 perform alike in fulfilling the demand for maintaining the indoor temperature within the customers' desired range. In the case of an available regulation capacity of 30%, we detect that the IES occasionally cannot meet the grid's demands for the supplied power, such as at 42,700 s, since under the given external conditions the grid's instructions for increasing/decreasing the supplied power may already exceed the limits of the system operation. In this particular case, the IES has to relax the building temperature requirement to prioritize regulating the generated electricity as far as possible. Therefore, the indoor temperature crosses its upper bound at about 42,700 s. Similar phenomena are also observed in other existing research on grid response; see [72].

Figure 20 describes the changes of the evaluation indices over the available regulation capacity. From the figure, we first find that, as the available regulation capacity grows, the power deviation index E_p under all the controllers displays an uptrend. This trend is expected, since offering a larger regulation capacity requires the controllers to have a greater capability to coordinate the entire system to precisely track the grid's dramatically changing instructions. In this regard, the proposed DEMPC substantially outperforms the other controllers. The control framework based on the proposed subsystem configuration, P2, consistently surpasses P3 and P4, although they share the same control architecture as P2. Moreover, under P3 or P4, there is an apparent rise in E_p when the available capacity goes from 0% to 5%, indicating that when the IES turns to participate in the grid response, P3 and P4, based on the empirical decomposition, are not robust regarding precise control of the generated power. Second, all the control strategies are similar in the cooling-demand index E_t, and a sharp rise in E_t appears beyond the available capacity of 25%, because ξ sometimes lies beyond the reach of the system. Regarding the system profitability E_e, as illustrated in the figure, the larger the available regulation capacity, the more earnings can be achieved thanks to the compensation for the grid response. The system revenue under P3 or P4 is slightly less than under P2, which in turn is lower than under P1. Additionally, the uptrend of E_e slows down beyond the available capacity of 25%, as the IES sometimes cannot respond to the grid's requests and is therefore fined more. Last, the IES's overall performance index E_glb increases with the available capacity, which indicates that an IES providing a higher available capacity requires an outstanding control strategy to ensure steady performance. As shown in the figure, the performance of P3 and P4 is inferior to P2; this gap demonstrates the effectiveness of the proposed subsystem decomposition in achieving optimal modular management of the IES. The proposed DEMPC surpasses P2, which exhibits the superiority of the DEMPC in exploiting the potential synergy between the operating units.
Accordingly, the DEMPC has a superior capability to precisely control the supplied power at the grid's request while raising the system's earnings and maintaining the building temperature.
Regarding the iteration counts of the fast EMPCs during the simulations, the minimum and maximum numbers of iterations reached are 2 and 12, respectively. The mean iteration counts of the fast EMPCs are listed in Table 8; the mean is about 2.3 under multiple working conditions. This stable convergence of the DEMPC shows its applicability in practice.
Conclusions
IESs show great potential for improving the reliability and flexibility of the grid. However, the tight interconnections and interactions between the various operating units in IESs complicate the design of a proper real-time control scheme. To address this dynamic and structural complexity, we propose a systematic subsystem decomposition method based on a directed graph representation of IESs. With the proposed approach, the entire IES is decomposed vertically based on the dynamic time scales and horizontally based on the closeness of the interconnections between the units. In addition, the qualitative analysis of the decomposition reveals that, in the case of IESs, vertical decomposition should be carried out first, to establish a consistent time scale within each subsystem, followed by horizontal decomposition. This order of decomposition is more conducive to designing distributed cooperation schemes for IESs. Based on this conclusion, we draw a control-oriented basic guideline for decomposing complex energy systems into optimal subsystems. Utilizing the decomposed subsystems, we develop a scalable cooperative DEMPC with global objectives for enhanced responsiveness while meeting the cooling and economic requirements. In the DEMPC, multiple local agents cooperate sequentially and iteratively in leveraging the units for system-wide synergy.
Extensive simulations demonstrate the applicability and effectiveness of the proposed subsystem decomposition and distributed cooperation framework. Whether or not the IES participates in the grid response, the control strategy based on the proposed subsystem configuration outperforms the same control architectures based on empirical decompositions. Thanks to the collaboration between all the operating units, the developed DEMPC further significantly improves the system's dynamic performance, particularly the precise control of the generated power at the grid's requests. The investigations of the IES under changing working conditions show that, compared with the empirical-decomposition-based control, the proposed decomposition and cooperation scheme is more robust in coping with the IES providing multiple regulation capacities for the grid. Furthermore, we find that with increasing available regulation capacity and deepening grid response, the requirements on coordinated control systems become increasingly demanding; effective control strategies will be essential for a precise, deep grid response. An overlarge regulation capacity beyond the reach of the system leads to an apparent decline in the overall performance of the IES, which also implies deteriorated supplied-power quality for the grid and stagnant economic returns for the IES. | 2023-05-10T01:16:10.262Z | 2023-05-09T00:00:00.000 | {
"year": 2023,
"sha1": "33b56268f0cd37b3820549b531973d8008706ae3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "33b56268f0cd37b3820549b531973d8008706ae3",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering",
"Mathematics"
]
} |
257015326 | pes2o/s2orc | v3-fos-license | Carbon Sequestration in Resin-Tapped Slash Pine (Pinus elliottii Engelm.) Subtropical Plantations
Simple Summary Pine forests represent a major source of biomass, including timber and resin. Pine resin constitutes a sustainable source of a myriad of products used in several industrial sectors, such as chemicals, pharmaceuticals, food additives, and biofuels. Every year more than 150,000 tons of resin are tapped from Brazilian plantations. A pine tree can be tapped for resin over several years. Resin is a complex mixture of terpenes, which are carbon-rich molecules. Carbon sequestration in plant biomass is an important tool to remove the greenhouse gas CO2 from the atmosphere. Resin extraction from pine plantations has been missing as a component in their carbon budget analyses. This detailed study investigated carbon retention in different tree fractions, including extracted resin, of subtropical coastal slash pine plantations. Significantly higher carbon stock values were recorded in subtropical pine biomass compared to those reported for temperate zones. Resin tapping afforded a considerable annual increment in carbon stocks and should be accounted for as a relevant component in sequestration assessments of this element in planted pine forests. Abstract Every year more than 150,000 tons of resin used in a myriad of industrial applications are produced by Brazilian plantations of Pinus elliottii Engelm. (slash pine), which are also used for timber. A pine tree can be tapped for resin over a period of several years. Resin is a complex mixture of terpenes, which are carbon-rich molecules, presumably influencing pine plantation carbon budgets. A total of 270 trees (overall mean DBH of 22.93 ± 0.11 cm) of 14-, 24-, and 26-year-old stands had their C content measured. Three different treatments (intact, wounded panels, and wounded + chemically stimulated panels, 30 trees each) were applied per site. Above- and belowground biomass, as well as resin yield, were quantified for two consecutive years. Data were statistically evaluated using normality distribution tests, analyses of variance, and mean comparison tests (p ≤ 0.05). The highest resin production per tree was recorded in the chemically stimulated 14-year-old stand. Tree dry wood biomass, a major stock of carbon retained in cell wall polysaccharides, ranged from 245.69 ± 11.73 to 349.99 ± 16.73 kg among the plantations. Variations in carbon concentration ranged from 43% to 50%, with the lowest percentages in underground biomass. There was no significant difference in lignin concentrations. Soils were acidic (pH 4.3 ± 0.10–5.83 ± 0.06) with low C (from 0.05% to 1.4%). Significantly higher C stock values were recorded in pine biomass compared to those reported for temperate zones. Resin-tapping biomass yielded considerable annual increments in C stocks and should be included as a relevant component in C sequestration assessments of planted pine forests.
Introduction
The greenhouse effect is a natural phenomenon on Earth, generated and primarily controlled by plants as a function of their regular primary metabolism processes. The three main greenhouse gases of concern are carbon dioxide (CO2), nitrous oxide (N2O), and methane (CH4) [1]. The increase in global atmospheric CO2 concentration is currently regarded as one of the major factors accelerating the greenhouse effect. According to established climate change models, it is estimated that increased CO2 levels cause faster ozone (O3) layer depletion and rising temperatures, with relevant consequences on a global scale [2,3]. The imbalanced progress of this natural process, in part attributed to anthropogenic activities, may be mitigated by increasing afforestation [4][5][6][7].
Forests function as carbon sinks [6,8] by fixing atmospheric carbon into both timber and nonwood-derived subproducts, as well as in soils [3,9,10]. Particularly in coniferous (Division Pinophyta, e.g., Pinus spp.) forests, carbon storage might be additionally increased by resin (gum resin) production and accumulation in plant tissues. Pine resin is a nonwoody terpene-based biomass that has a high value to the chemical industry [9,11,12]. Resin is also considered a great renewable energy source due to its high calorific (or heating) value, which surpasses that of forest tree woods and its components (e.g., bleached, and unbleached wood pulp) [13]. Despite being constitutively produced in high amounts by some Pinus species, its biosynthesis can also be induced by mechanical and chemical treatments [11,[14][15][16][17][18][19].
In southern Brazil, roughly 10 million pine trees are currently utilized for producing and exporting gum rosin and turpentine, the two main subproducts of pine resin [20,21]. According to the Brazilian Resin Producers Association (www.aresb.com.br/portal/estatisticas/, accessed 8 November 2022), the Brazilian 2017/2018 crude resin yield was 185,692 tons, most of it (circa 80%) collected from Pinus elliottii Engelm. (slash pine) and the remaining 20% was obtained from tropical pines. The nonwood biomass extracted from cultivated pine forests through resin tapping operations might represent an important contribution to the overall carbon fixation budget by these plantations.
Slash pine can reach up to 30 m in height, being characterized by long dark green needles (approximately 15 cm long), scaly reddish-brown bark, dense branching, trunks of 90 to 120 cm in diameter, and cones of approximately 12 cm in length, producing seeds dispersed by the wind. It is native to the coastal and southern U.S.A. In southern Brazil, plantations cover marginal areas of sandy and low-fertility soils along the coast, being explored for both wood and resin. In this habitat, slash pine became an invasive species, requiring some degree of mechanical control to avoid excessive spreading [9]. These pine trees are well known for their profuse resin production, yielding high-quality resin for industrial uses. Their bark and wood are rich in resin ducts that are lined with secretory cells and form a network of canals synthesizing mono, sesqui, and diterpenes [19].
To address the knowledge gap regarding resin extraction as a missing component in carbon budget analyses, this work aimed to evaluate the carbon content and its distribution among different plant organs, as well as the contribution of resin biomass to total C, in slash pine plantations growing in a subtropical climate. To the best of our knowledge, this is the first report on the destructive and direct assessment of biomass and carbon in pine forests tapped for resin.
Trees, Sites, and Treatments
The study was carried out at the research installations of two Brazilian forest companies (including Irani Celulose S.A.). The climate in these locations is humid subtropical of the Cfa type (Köppen classification). At sites A and B, thinning was performed 10 and 15 years after seedling establishment, respectively, whereas site C had never been thinned at the time of the experiments. Tree densities per hectare were 900 (site A), 600 (site B), and 900 (site C). For tree selection, the first 5 rows of individuals at the margins of the plantations were disregarded to avoid border effects (e.g., potential differences in wind, moisture, and irradiance). Ninety trees randomly distributed within the inner part of the stand were selected at each site based on a previously established DBH (diameter at breast height, i.e., 1.30 m from the soil level) interval (ranging from 22.77 ± 0.09 to 23.48 ± 0.12 cm), according to technical recommendations [57]. The chosen DBH range is considered well-suited for resin tapping, and the number of trees provides statistical robustness for sampling seed-derived plantations. The use of a defined DBH range also eliminated the effect of this parameter on resin yield among trees of the different sites. The trees were distributed into three groups as follows: (IT) intact trees (control treatment), 30 untreated trees; (BS) bark streak, 30 mechanically wounded, resin-tapped trees; and (P) paste, 30 mechanically wounded and chemically stimulated resin-tapped trees. The paste used was a commercial resin-stimulant formulation composed of CEPA (2-chloroethylphosphonic acid, an ethylene-releasing compound), sulfuric acid (H2SO4), and inert components, which was applied to the trunk after bark streak removal, as previously described [14].
Resin Tapping
Once the treatments were randomly distributed among the trees within each site, the resin tapping operation started at biweekly intervals (BS and P treatments) [14] and continued throughout the following two years (from spring 2009 to winter 2011). Resin collection was carried out seasonally as previously described [15], and each harvest year was named a 'crop', since winter 2009 had passed before resin tapping started. Briefly, plastic bags were belted to the trees under the wound panel to harvest resin exuded from periodically inflicted bark streaks (every 2 weeks). At the end of every season, the resin-collecting plastic bags attached to the trunks were removed, rainwater was carefully drained, and the resin layer was weighed on a field digital scale (Balmak ELC-25, Santa Bárbara d'Oeste, Brazil).
Destructive Analysis and Carbon Quantification
In November 2010, the first set of 15 trees displaying the same initial DBH range (five from each treatment) and randomly distributed in each site (see item 2.1) was felled and entirely weighed (fresh weight) in the field. Tree heights were recorded using a tape measure. All trees were dissected into their different sections, as shown in Figure 1.

Aboveground biomass section boundaries were established once trees were felled. The tip section was the uppermost part with a thin and flexible stem. The other aboveground biomass sections were defined by dividing the remaining tree height by three so that equal lengths were allocated to upper, medium, and basal sections. To obtain underground biomass, the whole root system was extracted from the soil with a backhoe and washed with a pressurized water hose. Once the excess water was drained (circa 20 min), belowground biomass was sampled similarly to what was done for shoots.
Every tree section was individually subsampled as described in Figure 1, weighed, and dried in an oven at 105 °C to constant dry weight (DW). After complete drying, the subsamples were ground in a mill to fully pulverize the plant tissues. The resulting powder was passed through a 0.15 mm sieve and subsequently evaluated for total C content through dry combustion at 900 °C on a TOC VCSH analyzer (Shimadzu, Kyoto, Japan). In November 2011, the same procedure was carried out with the 15 remaining trees of each treatment, except that only the biomass distribution of the trees was measured; no direct carbon quantification was done on this occasion. Due to the flammability and highly adhesive character of resin, as well as for safety reasons and technical limitations of the equipment, it was not possible to directly quantify the carbon content of the resin samples. Therefore, carbon content was estimated based on the general empirical formulas of gum rosin (C20H30O2) and gum turpentine (C10H16) ([20]; PubChem, https://pubchem.ncbi.nlm.nih.gov/compound/Gum-rosin, accessed 8 November 2022). The calculations considered mean values of 66% rosin, 22% turpentine, and 12% other components. The corresponding proportions of C in the mixture (m/m) were 52.42% and 19.40%, for a total of 71.81%. Hence, the estimates yielded 718 g of carbon per kg of resin.
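The arithmetic behind the 718 g/kg estimate can be reproduced directly from the cited empirical formulas and mixture proportions, as the short Python check below shows.

# Reproduces the resin carbon estimate from the empirical formulas cited
# in the text (gum rosin C20H30O2, gum turpentine C10H16) and the stated
# mixture of 66% rosin, 22% turpentine, and 12% other components.
M_C, M_H, M_O = 12.011, 1.008, 15.999          # atomic masses (g/mol)

def carbon_fraction(n_c, n_h, n_o=0):
    """Mass fraction of carbon in a compound C(n_c)H(n_h)O(n_o)."""
    total = n_c * M_C + n_h * M_H + n_o * M_O
    return n_c * M_C / total

rosin = 0.66 * carbon_fraction(20, 30, 2)       # ~0.5242
turpentine = 0.22 * carbon_fraction(10, 16)     # ~0.1940
total_c = rosin + turpentine                    # ~0.7182
print(round(1000 * total_c))                    # ~718 g C per kg of resin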
Physicochemical Characterization of Soil from Pine Stands
Soil samples from 10 random spots were collected with a Dutch auger (TF 10 model, Sondaterra®, Piracicaba, Brazil) at each site. The materials were collected from four different soil depths (20 cm, 30 cm, 60 cm, and 90 cm) and individually homogenized, and equal volumes of the samples within each depth were combined in a single flask. Aliquots of this material were then analyzed in triplicate.
The physicochemical characterization of combined soil samples and C content assessments were performed at the Laboratory of Soils, Faculty of Agronomy, Federal University of Rio Grande do Sul (UFRGS), using conventional methods [58,59].
Lignin Quantification
Lignin was quantified using the acetyl bromide method [60]. Briefly, 0.3 g of dry powdered sample from four replicates, randomly selected out of the 14- and 24-year-old trees under the three different treatments (IT, BS, and P), was homogenized in a centrifuge tube containing 7 mL of 50 mM potassium phosphate buffer and stirred vigorously. The pellet was centrifuged at 1400× g for 5 min and washed by successive stirring and centrifugation. The pellet was then dried for 24 h at 60 °C ("protein-free cell wall fraction"). Next, a 20 mg sample was hydrolyzed in 25% acetyl bromide (v/v in glacial acetic acid) and incubated at 70 °C for 30 min for digestion. After lignin solubilization and centrifugation, the absorbance was measured at 280 nm and compared to a serial-concentration standard curve of alkali lignin. Data were expressed as percent lignin in the cell wall. Due to the similar ages of the stands at sites B and C, only the lignin content in plant tissues from trees of sites A and B was analyzed.
Statistical Analyses
Initially, the data were evaluated for distributional assumptions (Levene test, p ≤ 0.05). Data sets meeting these requirements were submitted to a one-tailed t-test (comparisons involving only two treatments) or one-way ANOVA followed by the Tukey test. For data sets without variance homogeneity (two-sample comparisons, Figure S3), the Wilcoxon test was applied. In every case, p ≤ 0.05 was used. Tests were performed using GraphPad Prism software version 7.00 (Dotmatics, Boston, MA, USA). Resin yield was measured with 30 biological replicates. Biomass and carbon data were obtained
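A hedged sketch of the stated testing pipeline using SciPy and statsmodels is shown below; the study itself used GraphPad Prism, and the data arrays here are placeholders.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
it, bs, p = (rng.normal(50, 2, 30) for _ in range(3))  # placeholder C% data

# Homogeneity of variances (Levene) as the gatekeeper test
_, p_lev = stats.levene(it, bs, p)

if p_lev > 0.05:
    # Parametric route: one-way ANOVA followed by Tukey's HSD
    _, p_anova = stats.f_oneway(it, bs, p)
    groups = ["IT"] * 30 + ["BS"] * 30 + ["P"] * 30
    print(pairwise_tukeyhsd(np.concatenate([it, bs, p]), groups, alpha=0.05))
else:
    # Non-parametric two-sample comparison (Wilcoxon rank-sum)
    print(stats.ranksums(it, bs))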
Tree Height
Tree density was 900 trees per hectare (ha) at sites A and C, and 600 trees per ha at site B. In the first year, on average, the tallest trees were found at site C, the oldest pine plantation (22.38 ± 0.34 m) (Table 1). Statistical differences in tree height among treatments were only noticed at site B during the first year and at site C in the second year of evaluation (Figure S1a). In the first case, BS trees of 24 years were taller than those of P and IT. In the second case, P trees of 26 years were shorter than those of BS and IT.
Tree Biomass
Not surprisingly, among the three pine plantations, total dry tree biomass was higher at sites B and C (the ones with older trees) than at site A in both years (Figure S1b). In addition, considering the plant parts separately, significant differences were only recorded for shoot biomass. In both evaluated years, shoots from sites C (26 years old) and B (24 years old) showed higher dry biomass than those from site A. The average shoot dry biomass at site A (14 years old) varied from 204.43 ± 9.85 kg (first year) to 213.41 ± 10.43 kg (second year) (Table 1). At the same site, tree dry root biomass was 43.40 ± 3.14 kg in the second year (Table 1). Regarding the effects of the treatments on biomass accumulation, a statistical difference was only observed in the second year of evaluation at the 26-year-old site (Figure S1c). At this site, pine trees from the bark streak and intact treatments exhibited total biomasses of 358.42 ± 20.75 kg and 363.89 ± 10.74 kg, respectively, significantly higher than that of paste-treated trees, which had 261.39 ± 28.62 kg.
In the first year, the root:shoot biomass ratio (R:S) was 0.203 ± 0.012, 0.132 ± 0.006, and 0.154 ± 0.016 for sites A, B, and C, respectively; site A differed from B and C, which were equivalent. In the second year, these values increased slightly for all three areas, reaching 0.205 ± 0.014, 0.151 ± 0.024, and 0.160 ± 0.010 for sites A, B, and C, respectively, becoming statistically equivalent. The total wood biomass partitioning between the belowground and aboveground compartments (disregarding the effect of the treatments) was similar for the three evaluated sites (Figure S2).
Tree Diameter at Breast Height
Overall, IT trees from all evaluated sites showed the highest final DBH values (Figure S3). Since part of the bark was removed from the BS and P trees to apply the treatments, this is not surprising; it also explains the final DBH being lower than the initial one for the P trees at sites B and C in the first and second year, respectively. No differences were found among treatments in the wood lignin content of plants from sites A and B (Figure S4).
Resin Yield
Overall, pine trees under the P treatment yielded higher amounts of resin than BS trees throughout the seasons and crop years evaluated (Figure 2A-C), except for site C in the winter of 2011 (Figure 2C). The overall superior induction of resin by P versus BS was also conspicuous when total resin production was considered (Table 2).
The most productive seasons for resin yield were spring and summer in the first crop year at the sites analyzed (Figure 2A-C). In contrast, in the second crop year, these seasons were not as productive (Figure 2A-C). The highest amount of chemically induced resin was found at site A in the summer of 2010 (1.997 kg per paste-treated tree) (Figure 2A). In the second crop year (from winter 2010 to winter 2011), the induced resin yield was similar throughout the seasons at site A. Conversely, the 2010 spring yield at site B was sharply lower than that recorded for all other seasons (Figure 2B).
Despite its lower plantation age and lower values for height and shoot wood biomass (Table 1), the youngest pine plantation (site A) recorded the overall highest total resin yield in the two years examined (Figure 2A; Table 2). This was particularly evident in the trees that did not receive paste application.
One of the main physical edaphic differences among the soil samples collected from the three study localities was the clay percentage, which was higher at site A for all analyzed soil layers (Table S1). In addition, only site A was subject to an intermittent flooding period.
Although none of the sites in the present study were fertilized and the recorded nutrient levels indicated mostly poor substrates, some differences in soil physicochemical properties and composition were apparent. As expected from the higher clay content at site A, the cation exchange capacity (CEC) was higher at this site (Table S1). The availability of Mg was significantly higher at site A starting at approximately 60 cm of soil depth (Figure 3b), whereas Fe was more available throughout the soil profile, particularly in the upper strata (Figure 3c).
Carbon Content in Plant Tissues

Examining each site separately and considering the same plant compartment, the main differences in C percentage among treatments were observed in shoots of 14-year-old trees (Table 3). Overall, higher C values were found in the BS treatment at site A. The highest C percentage was found in needles under the BS treatment (52.23 ± 0.89), followed by wood collected from the trunk basal section (51.56 ± 0.59) (Table 3). At sites A and B, the needles of trees undergoing BS showed higher C percentage values than those found in the respective aboveground bark samples (Table 3). At site A, levels of C in plant sections were the same for IT and P treated trees, except for the median section, in which the latter had a higher C percentage (Table 3). For the 24-year-old stand (B), considering the same plant compartment, differences were observed only for needles between the IT and BS treatments. In IT trees, the C percentage was lower for needles than that estimated for the taproot and secondary and tertiary roots, as well as for the wood from the median trunk section (Table 3). The lowest C percentage among all sites was found at site C in the taproot sample (41.14 ± 0.87) of BS trees. No statistical differences were observed in the 26-year-old pine plantation within the BS and P treatments (Table 3).
The average percentages of total aboveground C content were 50%, 48%, and 44% for sites A, B, and C, respectively. For belowground biomass, the total C content values were approximately 48% (sites A and B) and 43% (site C) (Table 3). The total belowground biomass C percentage was not affected by the treatments at any of the sites (Figure 4a). Differences among treatments within each site were observed only for total aboveground biomass: at sites A and B, trees submitted to the BS treatment showed a higher average C percentage than trees under the IT treatment, whereas at site C, trees under the P treatment had a higher C percentage than their IT counterparts (Figure 4b).
Overall, treatments had no major impact on C stocks in the biomass of trees from the different sites (Table 4). Considering the average of the three treatments per site, despite showing the lowest tree density (600 trees/ha), the highest total C stock was recorded for the plantation at site B (24 years old; 167.254 MgC·ha−1) in the second year of evaluation. This is consistent with the combined weight of trees growing at that site (Figure S1a) and their total C percentage (Table 3). On the other hand, the lowest C content was found at site A (14 years old; 123.339 MgC·ha−1) in the first assessed year (Figure S5). These data are compatible with the lowest weight displayed by the trees growing at site A, although their mean total C percentage was the highest (circa 50%, Table 3) of the pine stands. Analysis of C stocks in the different plant sections showed higher C stocks in stems (basal and median sections) than in branches and leaves (often referred to as living crowns) (Figure 5). Values of C sequestered by trees at sites A and C were not statistically different (Figure S5). Trees growing at site A had relatively low weight (Figure S1b). Trees at sites B and C were similar in age and mass; despite having 300 trees/ha fewer than site C, an equivalent total C stock was seen for site B.
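The text does not spell out how these per-hectare stocks were computed; a plausible minimal sketch, assuming the stock is simply the mean per-tree biomass scaled by the carbon fraction and the stand density (the 580 kg per-tree biomass below is a hypothetical value, back-calculated only to reproduce the reported site B figure):

```python
# Sketch of a per-hectare carbon stock estimate from per-tree data.
# The study's exact procedure is not given here; this assumes
#   stock (MgC/ha) = mean tree biomass (kg) * C fraction * density (trees/ha) / 1000
def carbon_stock_mg_per_ha(tree_biomass_kg: float, c_fraction: float,
                           trees_per_ha: int) -> float:
    """Carbon stock in MgC per hectare (1 Mg = 1,000 kg)."""
    return tree_biomass_kg * c_fraction * trees_per_ha / 1000.0

# Hypothetical per-tree biomass of 580 kg (back-calculated); with ~48% C
# and the site B density of 600 trees/ha this reproduces a stock close to
# the reported 167.254 MgC/ha.
print(carbon_stock_mg_per_ha(580.0, 0.48, 600))  # ~167.0
```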
Estimates of Carbon Stock in Resin Biomass
The estimated carbon content of resin was 718.1 g of C per kg of resin. Therefore, considering the different site densities, as well as the annual average resin production per chemically stimulated tapped tree, the estimated C stocks in resin biomass in the first year were approximately 3.362, 2.095, and 3.464 MgC·ha−1 for sites A (14 years old), B (24 years old), and C (26 years old), respectively (Table 5). Given the reduced resin yield per individual in nonchemically induced trees, this treatment had lower C stocks in resin per planted area during the same year (1.660, 0.905, and 1.636 MgC·ha−1 for sites A, B, and C, respectively). Overall, site A was the most productive and site B the least. Similar profiles were recorded during the second year, which registered lower C stocks in resin biomass, as expected from the diminished resin yield per tree across the different sites and treatments within the period (Table 5).
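These estimates follow from the per-tree resin yield, the stand density, and the 718.1 g·kg−1 carbon content; a minimal sketch in Python (the site A per-tree yield below is back-calculated from the reported stock and is therefore approximate):

```python
# Carbon stock in resin biomass per hectare, using the study's resin
# carbon content of 718.1 g of C per kg of resin.
C_FRACTION_RESIN = 0.7181  # kg of C per kg of resin

def resin_c_stock_mg_per_ha(resin_kg_per_tree: float, trees_per_ha: int) -> float:
    """Resin carbon stock in MgC per hectare."""
    return resin_kg_per_tree * C_FRACTION_RESIN * trees_per_ha / 1000.0

# Site A has 900 trees/ha; a yield of ~5.2 kg of resin per paste-treated
# tree per year (back-calculated, approximate) reproduces the reported
# ~3.362 MgC/ha.
print(resin_c_stock_mg_per_ha(5.2, 900))  # ~3.36
```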
Soil Physicochemical Characterization and Its Carbon Content
As previously mentioned, site A had an intermittent flooding period. Sites B and C exhibited well-drained soils throughout the year.
Overall, soil samples from all tested sites displayed acidic pH values (from 4.30 ± 0.10 to 5.83 ± 0.06) (Table S1). Soil physicochemical characterization showed some heterogeneity among sites (Figure 3a-f). The main differences were observed in the concentrations of phosphorus (P) (higher at site C; Figure 3a), magnesium (Mg) (Figure 3b), and iron (Fe) (higher at site A; Figure 3c). As expected for acidic soils, very low concentrations of calcium (Ca) were found at all analyzed depths at all three sites (Figure 3d). Site A showed the highest concentration of Fe at all four evaluated depths (Figure 3c), as well as higher absolute K levels, although these differences were not statistically significant in most cases. Although still acidic, the soil at site C presented the highest pH values, as well as the highest P (Figure 3a) and Cu (Table S1) concentrations at all monitored depths. The concentration of Mg increased with depth at site A, particularly at 90 cm (Figure 3b).
Differences in CEC (concentration in cmol·dm−3 and saturation percentage) and clay content were also observed among the three analyzed sites (Table S1). The presence of clay can directly affect water availability in soil layers. Regardless of the evaluated depth, the highest clay percentage and CEC concentrations were found in soil samples collected from site A (Table S1). Both the saturation percentage of CEC and the Al levels were high at site C (Table S1 and Figure 3f). Nevertheless, higher levels of bases were found only at the three most superficial layers evaluated at this site (Figure 3f).
Regarding soil organic carbon (SOC) content, although significantly higher values of soil organic matter (SOM) were found at 60 and 90 cm depths at site C (Table S1), the highest available soil carbon percentage was found at site A (which hosts the youngest trees) in all the analyzed layers (Figure 6). No statistical differences in carbon percentage were found between sites B and C at any of the tested depths (Figure 6). Despite these differences, the overall SOM percentage and soil carbon content were very low at all locations and depths (Table S1 and Figure 6).
General Considerations
Although several studies have been carried out on carbon sequestration in native pine forests in temperate zones, little information is available regarding the carbon stock of pines growing outside their original habitat. Even less is known about the effect of resin tapping on carbon levels and their distribution in trees. In the present work, the profile of pine carbon sequestration was determined under a subtropical climate, specifically in a coastal area. In addition, the increment in overall carbon sequestration represented by the carbon stocked in resin biomass, a copious and valuable nonwood pine product, was examined.
Carbon storage can be influenced by different factors such as climate, soil type and dynamics, physiological status of vegetation related to age [6], functional group [26], and fertilization [8,61,62], among others. Therefore, considering the different densities and ages of the three tested sites, legitimate comparisons of C content among treatments can only be made based on data acquired within the same pine stand.
This work provides a comprehensive description of carbon concentrations within the different plant compartments of pines tapped for resin production, using destructive analysis. Except for the aboveground biomass observed at the youngest analyzed site, the total carbon concentration above- and belowground was lower than the 50% predicted in the pertinent literature. On the other hand, intratree differences were seen in at least one treatment at each of the three evaluated sites. Comparing the plant sections, the only consistently observed allocation pattern was a lower carbon concentration in roots than in shoots at all three analyzed sites.
Biomass Aspects
At equivalent ages, the biomass values reported for pine species in temperate zones are low compared to those recorded in the present work (circa 213 and 43 kg per tree for shoot and root, respectively, in the 14-year-old plantation). A 15-year-old native forest of Pinus strobus L. (eastern white pine) displayed mean above- and belowground dry biomass of 54 and 13 kg per tree, respectively [42]. In a 17-year-old native slash pine plantation, the biomass allocations for stems, branches, and needles were 75.6, 5.7, and 4.2 Mg·ha−1, respectively [27], roughly equivalent to 51.3 kg of shoots per tree, considering the plantation spacing (see the sketch below). As expected, dry tree biomass values higher than the ones found here are registered only in much older pine forests from temperate zones: for example, a 65-year-old eastern white pine stand had a dry biomass of 529 and 99 kg per tree for above- and belowground parts, respectively [42]. In the present study, the highest total biomass was recorded at sites B and C in both assessed years. This is not in agreement with the prediction for low-density tree-stand biomass, considering that site C displayed 300 additional trees per hectare in relation to site B. On the other hand, site C featured the highest tree average height in our study (Table 1).
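The per-tree equivalence cited above is a simple unit conversion; a minimal sketch, assuming a stand density of roughly 1,667 trees·ha−1 (a hypothetical value back-calculated from the cited figures, since the spacing itself is not given here):

```python
# Converting stand-level biomass (Mg/ha) to a per-tree value (kg/tree).
# The density below is back-calculated from the cited figures, since the
# plantation spacing itself is not given in the text.
biomass_mg_per_ha = 75.6 + 5.7 + 4.2  # stems + branches + needles [27]
trees_per_ha = 1667                    # assumed (hypothetical) density

kg_per_tree = biomass_mg_per_ha * 1000 / trees_per_ha
print(f"{kg_per_tree:.1f} kg of shoot biomass per tree")  # ~51.3
```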
In agreement with their higher total average shoot and root biomass production, sites B and C showed similar carbon stock values in the two consecutive years (Figure S5), despite showing different average carbon percentages. Among the three tested sites, trees at site A invested the most in height (more than 1.0 m·tree−1·year−1) as well as in resin production (more than 4.8 kg·tree−1·year−1). Equivalent investment in wood biomass was seen for sites B and A (4.73% and 4.33%, respectively) in the second year. On the other hand, site C exhibited a 6.73% decrease in biomass in the second year. Work on the resin responses of Scots pine to artificially inoculated Ophiostoma brunneo-ciliatum led to the proposal that, at young ages, pines share photosynthates from current photosynthesis between wood biomass acquisition and induced-resin biosynthesis, whereas mature trees mainly rely on stored carbohydrates for the latter [63]. This agrees with the results found in the current study for sites A and B, which showed reduced oleoresin production in the second year compared to the first and invested the most in wood biomass production compared to site C. The same is not valid for site C, which showed minimal investment in both oleoresin and wood biomass; in fact, it does not seem to follow the typical growth-differentiation balance hypothesis profile regarding resin biosynthesis, at least for the second year of the experiment [64].
Carbon Ratio
Carbon percentage values observed for sites A and B were consistent with a destructive carbon analysis performed in maritime pine plantations with plants ranging from 1 to 47 years old. In those areas, the average carbon content was 48.1% and 50.5% for root and shoot biomass, respectively [65]. For plants of the same species growing in a 50-year-old native pine forest, mean carbon concentrations of 53.6% in shoots and 51.7% in roots were recorded [32]. Studies with Pinus spp. plantations in southern Brazil (mainly loblolly and slash pines not tapped for resin production) used different estimated average carbon contents per tree compartment, including needles (41%), branches (45%), roots (44%), and trunks (45%) [66]. These values were generally lower than those of the present study (Table 3).
The higher carbon stocks observed in stems (basal and median sections) than in branches and leaves (often referred to as living crowns) agree with the findings for loblolly pine [43]. Overall, the carbon stocks recorded at all three sites (Table 4) were higher than values reported for other pine stands, even when superior tree densities are considered. For instance, lower carbon storage was found in an exotic 21-year-old slash pine plantation with a higher site density (1,439 trees·ha−1) in a subtropical climate (116.77 ± 7.49 MgC·ha−1) [67]. Similar results were observed for a 15-year-old jack pine (Pinus banksiana Lamb.) stand with a density of 2,600 trees·ha−1 and a carbon stock of 103 MgC·ha−1. In the same study, 24- and 26-year-old Pinus resinosa Ait. (red pine) stands featuring 1,360 and 1,800 trees·ha−1 stored 106.13 and 152.60 MgC·ha−1, respectively [44]. In a native 50-year-old maritime pine stand with a density of 223 trees·ha−1, the carbon content was 74 MgC·ha−1 [32]. Studies on the development of allometric equations for Pinus spp. (growing on plantations in southern Brazil not tapped for resin production) also found lower carbon stocks for 15-year-old pine plantations, roughly 114 MgC·ha−1 [68] and 102 MgC·ha−1 [66]. An investigation of loblolly pine in southern Brazil reported carbon stocks in trunk biomass of 41.8, 91.4, and 91.9 MgC·ha−1 in 14-, 25-, and 26-year-old exotic stands, respectively [69].
Water Availability
Usually, the most productive seasons for stimulated resin yields in southern Brazil are spring and summer [11,15], which was the case in the first year of the present study but not in the second. This may be explained by differences in rainfall: the average seasonal rainfall at sites A and B was 29% higher in the first year than in the second. A similar pattern was observed at site C, which showed an average seasonal rainfall of 367.8 mm in the first year, 35% higher than that registered for the second year (INMET, 2022, https://tempo.inmet.gov.br/TabelaEstacoes/A001, accessed on 3 December 2022).
Water availability seems to be a crucial factor affecting pine resin biosynthesis [11]. Both high water availability and moderate water stress have been shown to increase resin yields in different pines and other Pinaceae species. Under moderate water stress, sufficient to limit plant growth, constitutive resin flow was enhanced in full-grown Pinus taeda L. (loblolly pine) trees. On the other hand, inducible resin exudation in this species was higher during the season of greatest growth, in the fastest-growing trees [70]. A similar constitutive response was observed in Pinus sylvestris L. (Scots pine). In this species, changes in the terpenoid profile and concentration were only detected when plants experienced moderate to severe water stress, after photosynthesis limitation due to stomatal closure [71]. In Scots pine, a suitable water supply in dry sites indirectly affected resin biosynthesis by means of radial growth promotion [72]. In Abies grandis (Douglas ex D. Don) Lindley, a species belonging to the Pinaceae, water and light stress acted as negative modulators of constitutive-monoterpene cyclase activity in both saplings and adult trees [73].
Edaphic Factors
The presence of more clay and intermittent flooding in site A may have interfered with water availability at the rhizosphere, potentially stimulating resin biosynthesis in the shoots. Hypoxia conditions in flooded roots may induce the accumulation of ethylene precursors which move to the shoots and subsequently promote ethylene production, thereby stimulating resin biosynthesis and flow [74,75]. Thus, high water availability at this site might have promoted resin yield.
Most studies have reported negative or no effects of fertilization on resin flow [76]. In 6- and 12-year-old stands of loblolly pine trees, constitutive resin flow was increased by fertilization; however, only the younger trees were able to maintain the resin flow after wounding and fungal inoculation treatments [77]. Terpene chemical profiles and emissions could also be altered by fertilization in 50-year-old Scots pine trees, and the profile of resin acids from sapwood was more responsive to nitrogen (N) treatment than that of monoterpenes from heartwood [78]. In Scots pine growing at polluted sites in Finland, fertilizer treatments containing N decreased resin flow in treated plants [79]. Eleven-year-old loblolly pine plants that were N-, P-, K-, Mg-, Ca-, and B-fertilized yielded 30 to 100% less resin compared to untreated trees [80].
The higher CEC at site A (which had more clay than the other sites) may have contributed to nutrient retention in the soil, as well as acted as a buffer against excessive acidification. The higher availability of Mg and Fe at the same site may also have modulated resin yield. Aside from being essential for numerous cellular functions that support growth, Mg and Fe are required for the activity of one or more classes of pine terpene synthases, and their use as adjuvants in resin stimulant pastes has improved yields in slash pine [16]. Therefore, in addition to the impacts of DBH and water availability on the resin yield of site A trees, the higher soil availability of these two cations might also have contributed to resin biosynthesis in the 14-year-old plantation.
Of all elements assessed (Figure 3a-f and Table S1), potassium (K), copper (Cu), manganese (Mn), and iron (Fe) are known to be key cofactors of terpenoid biosynthetic enzymes involved in resin biosynthesis that can impact yield [16,18]. Soil mineral availability depends on different factors such as pH, mineral soil-plant mobility, and mineral complexation with soil particles or other chemical elements. Mycorrhizal associations with pines are also relevant, particularly for P, but also for N and K acquisition in poor soils [81,82].
Soil acidification promotes the formation of toxic Al species, which reduces mineral availability in soils, including that of P. Regarding fertilization, the growth response in loblolly pine (an Al-sensitive species) was more correlated with extractable Al indices than with N or P availability [83]. Root injury preceded by the inhibition of mycorrhizal activity is a common indicator of Al toxicity. The uptake and distribution of Mg, Fe, and Mn in shoot and root tissues of Pinus massoniana Lamb. (masson pine) were altered by Al solution treatment. The typical root growth inhibition, related to the mitotic imbalance caused by chromosome aberrations, was also seen in masson pine seedlings as a result of Al accumulation in roots [84].
The negative effect of Al on P availability has been described for various forest stands, and it may partly explain the low P concentration in site A soil. In maritime pine, depletion of soil P was observed to be more limiting for growth than for leaf terpene biosynthesis [85]. Lime application on a 20-year-old exotic plantation of slash pine in China was more effective at improving resin yields than NPK fertilization [86]. However, liming might be a counterproductive practice in terms of the maintenance of soil carbon stocks, since it represents a direct source of CO2 emissions to the atmosphere [1].
Acidic soil conditions are often not favorable to pine species cultivation. As previously noted, the overall highest exchangeable and available concentrations of Al were found at site A, along with the lowest mean pH values at all soil depths. An overview of the soil features of the experimental sites in the present study points to site A as the most stressful one, substrate-wise. Despite this condition, the site also yielded the highest total resin average per tree in both years of evaluation. As previously pointed out, this profile may also have been affected by the local intermittent flooding events.
Along with carbon flows in forest vegetation, soil organic carbon (SOC) dynamics can be variable and depend on different factors such as density, management practices, site conditions, and the preceding use of the land [53]. The SOC values recorded here (ranging from 30 to 115 Mg·ha−1) were relatively low compared to those found in soil samples (down to 100 cm depth) from a five-year-old loblolly pine plantation not tapped for resin production (227.8 Mg·ha−1), growing in a physiographic region of southern Brazil named "Campos de Cima da Serra" [87], located approximately 1,240 m above sea level. In the current work, however, all locations had typical coastal sandy soils without recent prior plant cover, poor in organic matter.
In 22-year-old plantations of masson pine (a native species) and slash pine (an alien species) grown in subtropical China, a similar contribution of both species to SOC was recorded [88]. Data from more than 400 sites in Poland showed that soil from pine stands contained less stored carbon than that of other coniferous species, such as fir (Abies spp.) and spruce (Picea spp.). Stored soil carbon was also higher in deciduous tree stands, such as beech (Fagus spp.) and oak (Quercus spp.), than in pine areas. Moreover, the lowest carbon stocks were found in the low pH range (4.5-5.5) [89]. Lower SOC values were also recorded in forests of Pinus koraiensis Siebold and Zucc. (Korean pine) compared to birch (Betula platyphylla Sukaczev) and dahurian larch (Larix gmelinii (Rupr.) Rupr.) stands [90]. Clay content may also affect the soil carbon pool.
In temperate zones, the climate found in elevated areas, characterized by higher precipitation and lower temperature, is an important factor affecting the carbon stock in forest soils [89]. In China, it has been shown that the soil carbon stock increases with altitude in secondary coniferous forests such as Larix principis-rupprechtii Mayr, Picea meyerii Rehder & E.H. Wilson and Pinus tabulaeformis Carr. [51]. In the present study, all assessed sites were located at sea level, and close to a coastal region; hence, lower values in carbon estimates are expected compared to other landscapes.
Resin Yield
The fact that the youngest plantation (site A) yielded the overall highest resin in the two years monitored may be partly explained by the larger mean initial DBH found at this site [14] and possibly by higher numbers of radial resin ducts present in the wound panel [91]. Particularly in slash pine, resin ducts are more numerous, larger, and usually more active in young trees, with correspondingly higher resin biosynthesis. The number of ducts can decrease with age up to 20 years, whereas resin canal size may decrease at least up to 30 years of age [92,93]. Resin duct area and size have been shown to correlate strongly with resin yield in slash pines from three different locations in China [94]. Moreover, it is well known that pine resin biosynthesis responds to a multitude of intrinsic and environmental factors, such as plant genetics, age, water, temperature, and mineral nutrient availability, among others [13,18,74,76,78], as discussed above in Sections 4.3 and 4.4. As expected, the application of a sulfuric acid plus ethylene stimulant paste significantly increased resin production. These adjuvants act by triggering and intensifying defense responses to wounding, which are mostly related to the exudation of this complex mixture of terpenes. Overall, the most productive resin yield seasons were the warmer ones, whereas winter yields were often reduced, in agreement with the usual profile [9].
Pine resin is made up of a volatile fraction (turpentine), mainly composed of monoterpenes and a few sesquiterpenes, and a nonvolatile fraction formed by diterpenic acids (rosin) [21]. The crude resin composition, in terms of turpentine-rosin proportion, is variable and site- and species-dependent. For instance, the analysis of 22 Chinese pine species from the subgenus Strobus showed that diterpenes comprise 59.5 to 80.9% of the produced resin [95]. In European black pine (Pinus nigra subsp. laricio J.F. Arnold), this resin fraction is between 46 and 66% [96]. In maritime pine, more than 70% of the crude resin is made up of diterpene acids [97]. Turpentine yields in natural populations of Pinus merkusii Jungh. & de Vriese are in the range of 28.5 to 32.8% (v/w) [98]. In slash pine, turpentine represents 22 to 25% of the resin weight [20,99] and is mostly composed of α- and β-pinenes [16,20].
Pine resin subproducts have several applications in the chemical industry. For instance, turpentine components are usually employed in the production of solvents or cleaning agents for paints and varnishes, pine oils [21,57], insecticides, and essential oils for flavorings and fragrances [100]. Rosin constituents, in turn, are used as feedstock for longer-lasting products such as adhesives, synthetic rubber, coatings [100], waterproof materials, inks, paper sizing, and rubber emulsifiers [21]. Thus, regarding residence time [101], besides enhancing carbon sequestration in pine plantations, resin utilization also contributes to carbon fixation and permanence, mostly due to the long lifetime of the derivatives of its nonvolatile fraction.
Tree Development
Considering the high density and the highest carbon percentage observed at site A, higher carbon stocks were expected in this younger stand compared with the similarly spaced trees of the oldest site, C. The observed absence of difference in sequestered carbon was probably due to the low weight of the trees growing at site A. Sites B and C had similar ages and trees with comparable average weights; although site B was less dense than site C, their carbon stocks were equivalent. This discrepancy between carbon stocks and plant age may be partly explained by the management status of the pine stands: sites A (14 years old) and B (24 years old) were thinned at the ages of 10 and 15 years, respectively, whereas site C (26 years old) was kept undisturbed since stand establishment. Moreover, local climate, site density, distinct soil traits, and the impact of the resin tapping activity previously performed in the tested areas (during two consecutive years) should also be considered potential factors influencing carbon stock capacity.
Age influence on carbon storage in our study was comparable to that found in forests of Pinus ponderosa Douglas ex C. Lawson (ponderosa pine) at different developmental stages. In ponderosa pine stands, total carbon stocks were higher in the older area (never logged) when compared to the younger one (previously clearcut) [26]. Similar results were observed in red pine stands. Carbon stocks increased with plant age in thinned stands. However, such an increase was only observed until the middle of the observed chronosequence in unmanaged stands [5]. Indeed, younger stands are expected to sequester larger carbon amounts compared to older ones, since their larger carbon uptake is associated with active growth. Older forests, in turn, generally display a limited growth rate and higher carbon stocks [29].
An average increase in wood biomass of 5.53% and 4.73% for sites A and B, respectively, and a reduction of 6.73% at site C, were observed within the monitored period (Figure S1b). On the other hand, plants at all three sites reduced resin exudation from the first to the second year (crop), especially the chemically induced pine trees (Table 2). Considering the average amount of resin produced by chemically stimulated and nonchemically stimulated trees within the same site in the two assessed years, the youngest plantation (14 years old) produced the highest total resin biomass in both crops, while the 24-year-old site produced the lowest (Table 2). The greater resin values observed in the first year (crop) might be explained by the constitutive storage of this biomass in tree trunks before resin tapping operations started. In addition, rainfall varied across the evaluated years.
Allometric equations are very useful to predict increases in the biomass of pine plantations; however, they are accurate only if developed for site- and species-specific traits [30] (see the generic sketch below). For example, data collected from 77 Scots pine stands aged 3 to 20 years showed that tree stand biomass increases with tree height and volume as well as with tree age. On the other hand, tree biomass decreases with higher stand density in the evaluated chronosequence [37].
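Allometric models of this kind are commonly fitted as power laws on DBH (and sometimes height); a generic sketch follows, not the equations used in the cited studies, with purely hypothetical coefficients:

```python
# Generic allometric biomass model, B = a * DBH**b, commonly fitted by
# linear regression on log-transformed data: ln(B) = ln(a) + b * ln(DBH).
# The coefficients below are hypothetical, for illustration only; real
# applications require site- and species-specific calibration [30].
def allometric_biomass(dbh_cm: float, a: float = 0.05, b: float = 2.5) -> float:
    """Aboveground dry biomass (kg) predicted from DBH (cm)."""
    return a * dbh_cm ** b

for dbh in (15, 25, 35):
    print(f"DBH {dbh} cm -> {allometric_biomass(dbh):.0f} kg")
```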
Despite its lower tree density, site B did not show a higher R:S biomass ratio. In Pinus pinaster Ait. (maritime pine) growing in southwestern Australia, the R:S was higher at sites featuring open-spaced trees than at those with close-spaced trees of the same size [65]. In agreement with the present study for pines growing in the same geographic area (sites A and B), an overall decrease in R:S with increasing age was reported in eastern white pine stands [42].
Silviculture, Landscape Management, and Policy
It is well established that different silvicultural practices [48,102] and plant ages [43,45] can influence carbon sequestration in pine stands. The date and intensity of thinning can also impact allometric relationships and carbon intake in pine stands [49,87]. It was shown that intensive management (fertilization and/or understory elimination) can increase carbon sequestration in 17-year-old slash pine plantations growing in sandy flatwoods soils [27]. In postfire regenerated forests of Pinus halepensis Mill. (Aleppo pine), early thinning increased the productivity of pine saplings; on the other hand, the total quantity of carbon sequestration and partitioning decreased following intensive thinning [103], and strong early thinning in preburned sites of maritime pine negatively affected the carbon biomass of saplings [104]. A similar result was observed in even-aged pure stands of maritime pine and radiata pine (Pinus radiata D. Don), where lower thinning intensity and higher rotation age increased the aboveground biomass and carbon pools [48]. A comparison of carbon stocks between stands of two pine species under different managements also showed that thinning reduced carbon sequestration: similar carbon sequestration values were observed between a thinned 75-year rotation of Pinus palustris Mill. (longleaf pine) and an unthinned 25-year rotation of slash pine [36].
As mentioned above, Pinus is an exotic genus in Brazilian territory. Due to its invasiveness potential, the State Environmental Authority (SEMA) Normative Instruction No. 14 of 10 December 2014 (www.legisweb.com.br/legislacao/?id=278555, accessed 8 November 2022) established that pine plantations in southern Brazil must be restricted to areas previously occupied by species of this genus. As a result, the search for alternative commercial activities has increased to ensure the optimization of land use before tree logging, in addition to postponing the time-demanding regeneration process of the pine stands. Unlike timber extraction, pine resin represents a short-term, abundant, sustainable, and renewable source of carbon biomass. Therefore, in southern Brazil, resin tapping operations have recently been intensified as a profitable activity that indirectly contributes to the local mitigation of greenhouse gas effects. Overall, despite the unsurprisingly low soil carbon stock, the total carbon stock in plant biomass was higher for all three analyzed ages than the values obtained by both destructive analyses and allometric predictions recorded in the literature for the same or other pine species of similar ages. Annual slash pine resin production in Brazil and the C estimates described herein for this nonwood product indicate relevant yearly carbon sequestration increments in tapped pine plantations. In addition, individual pine trees can provide resin for several years prior to felling, which promotes further atmospheric carbon removal and storage.
The present study provided valuable primary results regarding carbon capture and sequestration in Pinus plantations subjected to the influence of soil types, ages, and management practices (resin collection and chemically stimulated resin collection). Future studies can draw on these results to analyze the bioeconomic impacts of Pinus plantations with respect to carbon credit trading. Comparative analysis of the total carbon sequestered annually per hectare may be appropriate for the certification and auditing of carbon credits, supporting producers' decisions on which forest stand management system to adopt in view of the highest possible additional returns from the marketing of carbon credits. It is hoped that this novel information on the carbon stocks of exotic slash pine plantations tapped for resin will provide a framework to value the contribution of the resin industry to carbon credits, as well as represent an additional tool to guide decision-making in forestry policies.
Conclusions
The biomass of coastal slash pine plantations in subtropical climates is relatively high compared to that of related species of comparable age in temperate zones. This profile is seen despite limited soil fertility and variations in tree age, water availability, and site, highlighting the environmental resilience and plasticity of this forest species. Although all tree fractions contribute to carbon content, most carbon is associated with shoots, particularly trunks, an aspect to consider in genetic selection programs aimed at carbon sink activity. Resin yield constitutes a relevant component of carbon allocation and retention, notably under paste-stimulated resinosis. The inclusion of resin extraction in stand carbon credit computation is recommended, especially considering its continuous exploitation over several years and the significant carbon residence time of many of its derivatives.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biology12020324/s1, Table S1: Physicochemical analysis of samples collected at different soil depths in three slash pine plantations; Figure S1: Height (a), total dry biomass (shoot plus root fractions) (b), and dry biomass separated by treatments (c) of slash pine trees growing at three different plantations. Site A: 14-, Site B: 24-, Site C: 26-year-old slash pine stands (age at the installation of the experiments). Pine trees were felled in 2010 and 2011 (Years I and II, respectively). Lowercase letters compare tree height (a) and weight (c) in different treatments within sites and crop years. In (b), lowercase letters compare total dry weight among sites and crop years. Bars sharing a letter are not significantly different by Tukey test (p ≤ 0.05). Bars not showing letters indicate no statistical differences among treatments within the same site; Figure S2: Biomass partitioning of slash pine trees growing at plantations of three different ages. Site A: 14-year-old, site B: 24-year-old, site C: 26-year-old. The percentage was calculated based on the biomass weight (kg) of 45 trees per assessed pine stand (the panel on the tree trunk is not related to the treatment but is merely illustrative); Figure S3
"year": 2023,
"sha1": "937027041d7732c017c924b0318b18cd806cfbdf",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3696e6a4b259241f7a69ea1feb54c63bc906491e",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.